The Floating Point Precision Error
The floating-point precision error is an interesting issue caused by how computers do math. Let's learn what it is, and how it could be affecting your code.
console.log(0.1 * 0.2) // Returns 0.020000000000000004 // Should return 0.02
How Binary Handles Numbers Differently
In base 10, which is the normal base we use in day-to-day life, we can represent a number such as 0.1 as a finite decimal; that is to say, it can be written exactly with one decimal place. If we try to write the same number in binary, however, we get something weird:
0.000110011001100110011... (the 0011 pattern repeats to infinity)
So, 0.1 in binary is a repeating fraction. That means if we cut this number off at, say, 4 binary places, we get 0.0001, which when we convert back to base 10 is 1/16; a little bit different from our original value of 1/10.
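You can see the repeating pattern for yourself in JavaScript (a quick check, not part of the original article): calling toString with a radix of 2 prints a number's binary expansion, cut off where the stored bits run out.

```javascript
// Print the binary expansion of 0.1; note the repeating 0011 pattern,
// which stops where the stored bits run out.
console.log((0.1).toString(2));
// "0.0001100110011001100110011001100110011001100110011001101"
```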
What Is a Floating Point?
In computing, there is a balance between accuracy and performance. To strike that balance, a working group came up with a standard type called the floating point, which is defined by the IEEE 754 standard.
This takes a number, like 0.1, and stores it in a fixed-width binary representation. As you might've guessed, since 0.1 is a repeating fraction in binary, the floating-point representation has to limit 0.1 to a certain number of binary places. Its representation is shown below:
SIGN | EXPONENT | MANTISSA
  0  | 01111011 | 10011001100110011001101
The 32-bit floating-point type represents our number in scientific notation, which you may have seen written as 1 x 10^-4 in the past. The difference for binary is that we use powers of 2, since it is base 2, so an example would be 1 x 2^-4. Accordingly, the floating-point binary representation is split into a few pieces:
- The first bit is the sign, so it is 1 if the number is negative.
- The next 8 bits are the exponent, i.e. the power that the 2 is raised to (stored with a bias of 127, so 01111011 = 123 means an exponent of 123 - 127 = -4).
- The final 23 bits are the mantissa, the digits that get multiplied by that power of 2 (with an implied leading 1 in front of them).
This gives us a good balance: a huge range of numbers we can represent, while still keeping good performance.
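If you want to see those three fields for yourself, here is a minimal sketch (not from the original article). JavaScript numbers are 64-bit doubles, but a DataView lets us store a value as a 32-bit float and read the raw bits back:

```javascript
// A minimal sketch: store 0.1 as a 32-bit float and slice out the
// sign, exponent, and mantissa fields from the raw bits.
const view = new DataView(new ArrayBuffer(4));
view.setFloat32(0, 0.1); // 0.1 rounded to the nearest 32-bit float

const bits = view.getUint32(0).toString(2).padStart(32, "0");
console.log(bits.slice(0, 1)); // sign:     0
console.log(bits.slice(1, 9)); // exponent: 01111011
console.log(bits.slice(9));    // mantissa: 10011001100110011001101
```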
Back to 0.1
In our floating-point notation, we limit 0.1 in binary to a certain number of binary places. The representation, and its equivalent in mathematical notation, is shown below:
SIGN | EXPONENT | MANTISSA
  0  | 01111011 | 10011001100110011001101

# is the equivalent in mathematical notation of
1.600000023841858 * 2^-4
If you try to put 1.600000023841858 x 2^-4 into Google, you'll get a value of about 0.10000000149. As we've said, binary represents numbers differently, so certain numbers become repeating fractions that wouldn't be repeating in base 10, and when we store them in binary we have to round them off at a certain number of places.
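You can reproduce the same arithmetic in JavaScript (a quick sketch, not part of the original article): Math.fround rounds a number to the nearest 32-bit float, and multiplying the decoded mantissa value out by hand gives the same result.

```javascript
// 0.1 rounded to the nearest 32-bit float
console.log(Math.fround(0.1));            // 0.10000000149011612

// the same value, computed from the decoded sign/exponent/mantissa
console.log(1.600000023841858 * 2 ** -4); // 0.10000000149011612
```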
That rounding then shows up when we convert back to base 10: our once exact base 10 number is now 0.10000000149. If we use values like this in a calculation, the rounding error comes along with them, which is why
console.log(0.1 * 0.2) returns 0.020000000000000004.
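JavaScript actually stores its numbers as 64-bit doubles rather than 32-bit floats, so the rounding happens much further out, but the mechanism is the same. Printing a few extra digits (a quick illustration, not from the original article) shows that 0.1 and 0.2 are already slightly off before we even multiply them:

```javascript
// 0.1 and 0.2 are not stored exactly as 64-bit doubles either
console.log((0.1).toPrecision(20)); // 0.10000000000000000555
console.log((0.2).toPrecision(20)); // 0.20000000000000001110

// Those tiny errors survive the multiplication:
console.log(0.1 * 0.2); // 0.020000000000000004
```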
Can I Avoid It?
Some languages have a decimal data type, which solves this issue outright. However, you don't necessarily need it. For simple errors like this, you can round the result to the number of decimal places you actually need (see the sketch below). Overall, this is an interesting example of how computers balance accuracy with performance.
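Here is a minimal sketch of that rounding approach; round2 is a hypothetical helper name, not part of any library. toFixed returns a string rounded to a fixed number of decimal places, and wrapping it in Number turns it back into a number:

```javascript
// round2 is a hypothetical helper: round a result to 2 decimal places
const round2 = (x) => Number(x.toFixed(2));

console.log((0.1 * 0.2).toFixed(2)); // "0.02" (a string)
console.log(round2(0.1 * 0.2));      // 0.02
```

Just be aware that toFixed on its own returns a string, so convert back to a number if you need to keep calculating with the result.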