Why 0.1 + 0.2 == 0.3 Is False in JS? Mystery Solved, With Solution
JS stores every number in the 64-bit IEEE 754 double-precision floating-point format.
Out of 64 bits :
1 is the sign bit (the number is negative if the value is 1),
11 bits hold the exponent value (e), and the other
52 bits represent the fraction value.
With these 64 bits, the value is computed by this serious-looking formula:

value = (-1)^sign × 1.fraction × 2^(e − 1023)
To really understand why 0.1 cannot be represented exactly as a binary floating-point number, you must understand binary. Representing many decimal fractions in binary requires an infinite number of digits. This is because a binary fraction is a sum of powers of two (2^-1, 2^-2, 2^-3, …), and 0.1 cannot be written as a finite sum of those. While trying to convert 0.1 to binary, the long division goes on forever.
Converting 0.1 (1/10) produces 0.000110011001100…, with the pattern 0011 repeating indefinitely.
You can explore the error introduced when converting decimal numbers to binary in more depth at this link.
Now the question is: can we solve this problem?
— Yes. We can, and we will.
We would use Number.EPSILON for this.
Number.EPSILON is the difference between 1 and the smallest floating-point number greater than 1 (2^-52, about 2.22 × 10^-16). That makes it a handy tolerance for comparing floating-point approximations.
STEPS INSIDE FUNCTION after the numberEquals(0.1 + 0.2, 0.3) call:
= Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON
= Math.abs(0.30000000000000004 - 0.3) < Number.EPSILON
= Math.abs(5.551115123125783e-17) < Number.EPSILON
= 5.551115123125783e-17 < 2.220446049250313e-16
Hence it returned “true”.
This function works by checking whether the difference between the two numbers is smaller than Number.EPSILON. The rounding error in 0.1 + 0.2 is tiny (about 5.6 × 10^-17), so the difference between 0.1 + 0.2 and 0.3 is indeed smaller than Number.EPSILON, and the two values are treated as equal.
❤️ Thank You For Making It To The End. Kudos to you. Give a clap and share it to show your support and love 👏🎯