The IEEE 754 Format

The Problem

It's really easy to write integers as binary numbers in two's complement form. It's much harder to express floating point numbers in a form that a computer can understand. The biggest problem, of course, is keeping track of the decimal point. There are many possible ways to write floating point numbers as strings of binary digits, and many things to consider when picking a standard method to do this.

A Solution

The method that the developers of the IEEE 754 standard finally hit upon uses the idea of scientific notation. Scientific notation is a standard way to express numbers; it makes them easy to read and compare. You're probably familiar with scientific notation with base-10 numbers. You just factor your number into two parts: a value whose magnitude is in the range of $1 \le n < 10$, and a power of 10. For example: $$3498523 \quad \textrm{ is written as } \quad 3.498523 \times 10^6$$ $$-0.0432 \quad \textrm{ is written as } \quad -4.32 \times 10^{-2}$$

The same idea applies here, except that you need to use powers of 2 because the computer works efficiently with binary numbers. Just factor your number into a value whose magnitude is in the range $1 \le n < 2$, and a power of 2. (Note, there should be only one way to do this -- do you see why?) $$-6.84 \quad \textrm{ is written as } \quad -1.71 \times 2^2$$ $$0.05 \quad \textrm{ is written as } \quad 1.6 \times 2^{-5}$$

To create the bitstring, we need to massage this product so that it takes the following form: $$(-1)^{\color{purple}{\textrm{sign bit}}} (1 + \color{red}{\textrm{fraction}}) \times 2^{\textrm{$\color{green}{\textrm{exponent}}$ - bias}}$$ Once this is done, we will have three key pieces of information (shown in color above) that, taken together, identify the number: the sign bit, the exponent (stored with a bias added), and the fraction. When you have calculated these binary values, you can put them into a 32- or 64-bit field. The digits are arranged like this:

    sign (1) | exponent (8)  | fraction (23)     single precision
    sign (1) | exponent (11) | fraction (52)     double precision

(The numbers in parentheses show how many bits are required in each field.)
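These field boundaries can be inspected directly in Java, whose float type uses the single-precision layout; a minimal sketch (the class name is just for illustration) using the standard library method Float.floatToIntBits:

```java
public class FieldLayout {
    public static void main(String[] args) {
        // The 32-bit single-precision pattern: 1 sign bit, 8 exponent
        // bits, and 23 fraction bits, from most to least significant.
        int bits = Float.floatToIntBits(0.085f);
        int sign     = (bits >>> 31) & 0x1;   // 1 bit
        int exponent = (bits >>> 23) & 0xFF;  // 8 bits, stored with bias 127
        int fraction = bits & 0x7FFFFF;       // 23 bits
        System.out.println(sign);             // 0 (positive)
        System.out.println(exponent);         // 123 (the biased exponent)
        System.out.println(Integer.toBinaryString(fraction));
    }
}
```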

By arranging the fields in this way -- the sign bit in the most significant bit position, the biased exponent in the middle, and the fraction in the least significant bits -- the resulting value will actually be ordered properly for comparisons, whether it's interpreted as a floating point or an integer value. (For negative numbers, which are stored in sign-magnitude rather than two's complement form, the integer comparison must be adjusted.) This allows high-speed comparisons of floating point numbers using fixed point hardware.
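A quick sketch of that comparison trick in Java (class and variable names are mine): for non-negative floats, comparing the raw bit patterns as integers agrees with comparing the numbers themselves.

```java
public class OrderedBits {
    public static void main(String[] args) {
        // For non-negative floats, the bit patterns sort in the same
        // order as the values, so integer hardware can compare them.
        // (Negative floats are sign-magnitude and need an adjustment.)
        float small = 0.085f, big = 6.8f;
        boolean byValue = small < big;
        boolean byBits  = Float.floatToIntBits(small) < Float.floatToIntBits(big);
        System.out.println(byValue == byBits);   // true
    }
}
```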

There are some special cases: an exponent field of all zeros encodes zero (fraction all zeros) and the very small "denormalized" numbers (fraction nonzero), while an exponent field of all ones encodes infinity (fraction all zeros) and NaN, "Not a Number" (fraction nonzero).
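These special bit patterns -- all-ones exponents marking infinity and NaN -- can be checked in Java (the class name is mine; the constants are standard library):

```java
public class SpecialCases {
    public static void main(String[] args) {
        // Positive infinity: sign 0, exponent 11111111, fraction 0.
        int inf = Float.floatToIntBits(Float.POSITIVE_INFINITY);
        System.out.println(Integer.toBinaryString(inf));
        // NaN: all-ones exponent with a nonzero fraction.
        System.out.println(Float.intBitsToFloat(0x7FC00000));  // NaN
    }
}
```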

Example: Converting to IEEE 754 Form

Suppose we wish to put 0.085 in single-precision format. Here's what has to happen:
  1. The first step is to look at the sign of the number.
    Because 0.085 is positive, the sign bit = 0.

  2. Next, we write 0.085 in base-2 scientific notation
    This means that we must factor it into a number in the range $(1 \le n < 2)$ and a power of 2.

    $$\begin{array}{rcl} 0.085 &=& (-1)^0 (1 + \color{red}{\textrm{fraction}}) \times 2^{\textrm{power}}, \quad \textrm{ or, equivalently: }\\ 0.085 \quad / \quad 2^{\textrm{power}} &=& 1 + \color{red}{\textrm{fraction}}\\ \end{array}$$ As such, we divide 0.085 by a power of 2 to get the $(1 + \color{red}{\textrm{fraction}})$: $$\begin{array}{rcl} 0.085 \quad / \quad 2^{-1} &=& 0.17\\ 0.085 \quad / \quad 2^{-2} &=& 0.34\\ 0.085 \quad / \quad 2^{-3} &=& 0.68\\ 0.085 \quad / \quad 2^{-4} &=& 1.36\\ \end{array}$$ Therefore, $0.085 = 1.36 \times 2^{-4}$
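    The search for the right power of 2 can be mechanized; a minimal Java sketch (class name mine) that scales the number into the range $[1, 2)$ while tracking the power:

```java
public class Normalize {
    public static void main(String[] args) {
        // Repeatedly double or halve x until it lies in [1, 2),
        // adjusting the power of 2 to compensate.
        double x = 0.085;
        int power = 0;
        while (x < 1.0)  { x *= 2.0; power--; }
        while (x >= 2.0) { x /= 2.0; power++; }
        System.out.println(x);       // 1.36  (the 1 + fraction part)
        System.out.println(power);   // -4
    }
}
```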
  3. Now, we find the exponent
    The power of 2 used above was -4, and the bias for the single-precision format is 127. Thus, $$\color{green}{\textrm{exponent}} = -4+127 = 123 = \color{green}{01111011}_{\textrm{binary}}$$
  4. Then, we write the fraction in binary form

    Successive multiplications by 2 (while temporarily ignoring the units digit) quickly yield the binary form:

    0.36 x 2 = 0.72
    0.72 x 2 = 1.44
    0.44 x 2 = 0.88
    0.88 x 2 = 1.76
    0.76 x 2 = 1.52
    0.52 x 2 = 1.04
    0.04 x 2 = 0.08
    0.08 x 2 = 0.16
    0.16 x 2 = 0.32
    0.32 x 2 = 0.64
    0.64 x 2 = 1.28
    0.28 x 2 = 0.56
    0.56 x 2 = 1.12
    0.12 x 2 = 0.24
    0.24 x 2 = 0.48
    0.48 x 2 = 0.96
    0.96 x 2 = 1.92
    0.92 x 2 = 1.84
    0.84 x 2 = 1.68
    0.68 x 2 = 1.36
    0.36 x 2 =  ...  (at this point the list starts repeating)

    Once this process terminates or starts repeating, we read the units digits from top to bottom to reveal the binary form of 0.36:

    0.01011100001010001111010111000...
    

    As you can see, 0.36 has a non-terminating, repeating binary form. This is very similar to how a fraction like 5/27 has a non-terminating, repeating decimal form (i.e., 0.185185185...).
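    The doubling procedure above is easy to automate; this sketch (which truncates rather than rounds, class name mine) reproduces the first 23 bits of 0.36:

```java
public class DoublingBits {
    public static void main(String[] args) {
        // Each doubling's units digit is the next binary digit of
        // the fraction; subtract it off and continue.
        double f = 0.36;
        StringBuilder bits = new StringBuilder("0.");
        for (int i = 0; i < 23; i++) {
            f *= 2.0;
            if (f >= 1.0) { bits.append('1'); f -= 1.0; }
            else          { bits.append('0'); }
        }
        System.out.println(bits);   // 0.01011100001010001111010 (unrounded)
    }
}
```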

    However, the single-precision format affords us only 23 bits to represent the fraction part of our number. We will have to settle for an approximation, rounding to the 23rd digit. One should be careful here -- while it doesn't happen in this example, rounding can affect more than just the last digit. This shouldn't be surprising -- consider what happens in base 10 when one rounds the value 123999.5 to the nearest integer and gets 124000. Rounding the infinite string of digits found above to just 23 digits results in the bits 0.01011100001010001111011.

    (Note, we round "up" because the discarded bits begin with a 1 -- the discarded tail amounts to more than half the value of the last retained bit.)

    This rounding that we have to perform to get our value to fit into the number of bits afforded to us is why floating-point numbers frequently have some small degree of error when you put them in IEEE 754 format. It is very important to remember the presence of this error when using the standard Java types (float and double) for representing floating-point numbers!
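    A short Java illustration of that error (class name mine; the printed digits are what the rounded float works out to):

```java
public class StoredError {
    public static void main(String[] args) {
        // The float nearest to 0.085 is slightly too large -- exactly
        // the rounding performed in step 4 above.
        float f = 0.085f;
        System.out.println((double) f);   // 0.08500000089406967
        System.out.println(f == 0.085);   // false: float and double round
                                          // 0.085 to different values
    }
}
```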

  5. Finally, we put the binary strings in the correct order.
    Recall, we use 1 bit for the sign, followed by 8 bits for the exponent, and 23 bits for the fraction.

    So 0.085 in IEEE 754 format is:

    0 01111011 01011100001010001111011
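The hand-built bit string can be checked against what Java actually stores (class name mine; the padding helper is standard library formatting):

```java
public class Encode085 {
    public static void main(String[] args) {
        // Float.floatToIntBits exposes the stored 32-bit pattern;
        // pad with leading zeros to see all 32 digits.
        int bits = Float.floatToIntBits(0.085f);
        String s = String.format("%32s", Integer.toBinaryString(bits))
                         .replace(' ', '0');
        System.out.println(s);   // 00111101101011100001010001111011
    }
}
```

Grouping the output as 0 | 01111011 | 01011100001010001111011 recovers the three fields computed above.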

Example: Converting from IEEE 754 Form

Suppose we wish to convert the following single-precision IEEE 754 number into a floating-point decimal value:
11000000110110011001100110011010
  1. First, we divide the bits into three groups:
    1   10000001   10110011001100110011010
    The first bit shows us the sign of the number. The next 8 bits give us the exponent. The last 23 bits give us the fraction.

  2. Now we look at the sign bit
    If this bit is a 1, the number is negative; if it is 0, the number is positive. Here, the bit is a 1, so the number is negative.

  3. Next, we get the exponent and the correct bias
    To get the exponent, we simply convert the binary number 10000001 back to base-10 form, yielding 129.

    Remember that we will have to subtract an appropriate bias from this exponent to find the power of 2 we need. Since this is a single-precision number, the bias is 127.

  4. Then we must convert the fraction bits back into base 10
    To do this, we multiply each digit by the corresponding power of 2 and sum the results: $$\begin{array}{rcl} {0.\color{red}{10110011001100110011010}}_{\textrm{binary}} &=& 1 \cdot 2^{-1} + 0 \cdot 2^{-2} + 1 \cdot 2^{-3} + 1 \cdot 2^{-4} + 0 \cdot 2^{-5} + \cdots\\ &=& 1/2 + 1/8 + 1/16 + \cdots\\ &=&\color{red}{0.7000000476837158}\\ \end{array}$$ Remember, this number is most likely just an approximation of some other number. There will most likely be some error.

  5. We have all the information we need. Now we just calculate the following expression: $$\begin{array}{rcl} (-1)^{\color{purple}{\textrm{sign bit}}} (1 + \color{red}{\textrm{fraction}}) \times 2^{\textrm{$\color{green}{\textrm{exponent}}$ - bias}} &=& (-1)^{\color{purple}{1}} (1.\color{red}{7000000476837158}) \times 2^{\color{green}{129}-127}\\ &=& -6.800000190734863\\ \end{array}$$ Thus, the IEEE 754 number 11000000110110011001100110011010 gives the floating-point decimal value -6.800000190734863. It is reasonable to suspect that the original number stored was probably -6.8, although this would be hard to prove... (One can verify that -6.8 does result in the exact same bit string, however.)
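The whole decoding can be replayed in a few lines of Java (class name mine); the standard library method Float.intBitsToFloat confirms the -6.8 suspicion:

```java
public class Decode {
    public static void main(String[] args) {
        // Decode the example bit string with the formula
        // (-1)^sign * (1 + fraction) * 2^(exponent - bias).
        int bits = (int) Long.parseLong("11000000110110011001100110011010", 2);
        int sign        = (bits >>> 31) & 1;               // 1: negative
        int exponent    = (bits >>> 23) & 0xFF;            // 129
        double fraction = (bits & 0x7FFFFF) / (double) (1 << 23);
        double value = Math.pow(-1, sign) * (1 + fraction)
                     * Math.pow(2, exponent - 127);
        System.out.println(value);                       // -6.800000190734863
        System.out.println(Float.intBitsToFloat(bits));  // -6.8
    }
}
```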

Original text by S. Orley and J. Mathews of Iowa State University; adapted by P. Oser