# Is floating point math broken?

Consider the following code:

```
0.1 + 0.2 == 0.3 -> false
```

`0.1 + 0.2 -> 0.30000000000000004`

Why do these inaccuracies happen?

1 Answer

Gorden Linoff

Binary floating point math is like this. In most programming languages, it is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's `double`. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as `0.1`, which is `1/10`) whose denominator is not a power of two cannot be exactly represented.
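You can see the "whole number times a power of two" form directly. As one illustration (in Python, whose `float` is the same IEEE 754 `binary64` format as a JavaScript number), the standard `fractions` module recovers the exact rational value that actually gets stored:

```python
from fractions import Fraction

# Fraction(0.1) converts the stored binary64 value exactly, with no rounding.
exact = Fraction(0.1)
print(exact)  # 3602879701896397/36028797018963968

# The denominator is a power of two, as the format requires...
print(exact.denominator == 2**55)  # True

# ...so the stored value cannot equal 1/10, whose denominator is not.
print(exact == Fraction(1, 10))  # False
```

So what your program stores for `0.1` is really the integer 3602879701896397 scaled by 2^-55, the closest such value to `1/10`.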

For `0.1` in the standard `binary64` format, the representation can be written exactly as `0.1000000000000000055511151231257827021181583404541015625` in decimal, or `0x1.999999999999ap-4` in C99 hexfloat notation.
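Both of these can be checked directly. Again using Python's `binary64` floats for illustration: `decimal.Decimal` of a float prints the exact decimal expansion of the stored value, and `float.hex` prints its hexfloat form:

```python
from decimal import Decimal

# The exact decimal value of the double nearest to 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The same value in hexfloat notation:
print((0.1).hex())
# 0x1.999999999999ap-4
```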

In contrast, the rational number `0.1`, which is `1/10`, can be written exactly as `0.1` in decimal, or `0x1.99999999999999...p-4` in an analogue of C99 hexfloat notation, where the `...` represents an unending sequence of 9's.

The constants `0.2` and `0.3` in your program will also be approximations to their true values. It happens that the closest `double` to `0.2` is larger than the rational number `0.2`, but that the closest `double` to `0.3` is smaller than the rational number `0.3`. The sum of `0.1` and `0.2` winds up being larger than the rational number `0.3` and hence disagreeing with the constant in your code.
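A quick sketch of those three claims, once more using Python's `binary64` floats as a stand-in for JavaScript numbers:

```python
from fractions import Fraction
import math

# The exact stored values versus the true rational values:
assert Fraction(0.2) > Fraction(2, 10)  # nearest double to 0.2 is high
assert Fraction(0.3) < Fraction(3, 10)  # nearest double to 0.3 is low
assert Fraction(0.1) + Fraction(0.2) > Fraction(3, 10)  # the sum lands high

# Hence the surprising comparison:
print(0.1 + 0.2 == 0.3)  # False

# A common workaround (not from the original answer) is to compare
# within a tolerance instead of testing exact equality:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```

The tolerance-based comparison at the end is the usual practical remedy: since the stored values are only approximations, code should generally ask whether two floats are close, not whether they are identical.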

A fairly comprehensive treatment of floating-point arithmetic issues is *What Every Computer Scientist Should Know About Floating-Point Arithmetic*. For an easier-to-digest explanation, see floating-point-gui.de.