In JavaScript, everyone knows the famous calculation: `0.1 + 0.2 = 0.30000000000000004`. But why does JavaScript print this value instead of printing the more accurate and precise `0.300000000000000044408920985006`?

The default rule for JavaScript when converting a `Number` value to a decimal numeral is to use just enough digits to distinguish the `Number` value. (You can request more or fewer digits by using the `toPrecision` method.)
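For example (a minimal sketch; the commented outputs are what a conforming engine prints):

```javascript
const sum = 0.1 + 0.2;

console.log(String(sum));         // "0.30000000000000004" (just enough digits)
console.log(sum.toPrecision(21)); // "0.300000000000000044409" (more digits on request)
console.log(sum.toPrecision(5));  // "0.30000" (fewer digits on request)
```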

JavaScript uses IEEE-754 basic 64-bit binary floating-point for its `Number` type. Using IEEE-754, the result of `.1 + .2` is exactly 0.3000000000000000444089209850062616169452667236328125. This results from the following steps, illustrated in the sketch below:

- Converting “.1” to the nearest value representable in the `Number` type.
- Converting “.2” to the nearest value representable in the `Number` type.
- Adding the above two values and rounding the result to the nearest value representable in the `Number` type.
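To make those three steps concrete, here is a small sketch that prints the exact values “.1” and “.2” become, and the exact rounded sum. It assumes an engine that accepts large precision arguments (current editions of the specification allow up to 100 digits; ECMAScript 2017 only guaranteed 21):

```javascript
// The precisions 55, 54, and 52 match the number of significant digits in the
// exact binary values, so nothing is rounded away or padded here.
console.log((0.1).toPrecision(55));
// 0.1000000000000000055511151231257827021181583404541015625
console.log((0.2).toPrecision(54));
// 0.200000000000000011102230246251565404236316680908203125
console.log((0.1 + 0.2).toPrecision(52));
// 0.3000000000000000444089209850062616169452667236328125
```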

When formatting this `Number` value for display, “0.30000000000000004” has just enough significant digits to uniquely distinguish the value. To see this, observe that the neighboring values are:

- `0.299999999999999988897769753748434595763683319091796875`,
- `0.3000000000000000444089209850062616169452667236328125`, and
- `0.300000000000000099920072216264088638126850128173828125`.
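These are three adjacent `Number` values, and you can check the claim directly; note in particular that the plain literal `0.3` converts to the lowest of them (a small sketch):

```javascript
// The three neighboring Number values, written out in full so that each
// literal parses to exactly that double.
const lower  = 0.299999999999999988897769753748434595763683319091796875;
const middle = 0.3000000000000000444089209850062616169452667236328125;
const upper  = 0.300000000000000099920072216264088638126850128173828125;

console.log(lower < middle && middle < upper); // true (distinct, increasing values)
console.log(0.3 === lower);                    // true ("0.3" rounds to the lower neighbor)
console.log(0.1 + 0.2 === middle);             // true (the sum is the middle value)
```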

If the conversion to a decimal numeral produced only “0.3000000000000000”, it would be nearer to 0.299999999999999988897769753748434595763683319091796875 than to 0.3000000000000000444089209850062616169452667236328125. Therefore, another digit is needed. With that digit, “0.30000000000000004”, the result is closer to 0.3000000000000000444089209850062616169452667236328125 than to either of its neighbors. Therefore, “0.30000000000000004” is the shortest decimal numeral (neglecting the leading “0”, which is there for aesthetic purposes) that uniquely distinguishes which possible `Number` value the original value was.
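You can watch this round-trip requirement in action (a small sketch):

```javascript
const sum = 0.1 + 0.2;

// 16 significant digits do not round-trip: the numeral parses back to the
// lower neighbor, which is the same Number that "0.3" converts to.
console.log(Number("0.3000000000000000") === sum); // false
console.log(Number("0.3000000000000000") === 0.3); // true

// 17 digits do round-trip: this numeral converts back to exactly the sum.
console.log(Number("0.30000000000000004") === sum); // true
```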

This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, which is one of the steps in converting a `Number` value *m* to a decimal numeral for the `ToString` operation:

> Otherwise, let *n*, *k*, and *s* be integers such that *k* ≥ 1, 10^{k−1} ≤ *s* < 10^{k}, the Number value for *s* × 10^{n−k} is *m*, and *k* is as small as possible.

The phrasing here is a bit imprecise. It took me a while to figure out that by “the Number value for *s* × 10^{n−k}”, the standard means the `Number` value that is the result of converting the mathematical value *s* × 10^{n−k} to the `Number` type (with the usual rounding). In this description, *k* is the number of significant digits that will be used, and this step is telling us to minimize *k*, so it says to use the smallest number of digits such that the numeral we produce will, when converted back to the `Number` type, produce the original number *m*.
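A brute-force illustration of that minimization (here `shortestRoundTrip` is a hypothetical helper written for this answer; engines use specialized shortest-digit algorithms rather than a loop like this):

```javascript
// Try 1, 2, 3, ... significant digits until the numeral converts back to
// exactly the original Number value; that is the smallest k the spec asks for.
function shortestRoundTrip(m) {
  for (let k = 1; k <= 21; k++) {
    const numeral = m.toPrecision(k);   // k significant digits, i.e. s × 10^{n−k}
    if (Number(numeral) === m) {        // converts back to the same Number value?
      return { k, numeral };
    }
  }
  return { k: 21, numeral: m.toPrecision(21) }; // 21 digits always suffice for a double
}

console.log(shortestRoundTrip(0.1 + 0.2)); // { k: 17, numeral: '0.30000000000000004' }
console.log(shortestRoundTrip(0.3));       // { k: 1, numeral: '0.3' }
```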
