You don’t understand floating points.

Some days back, I was scrolling through my feeds when I came across a strange problem with floating-point numbers. Someone had performed a simple arithmetic operation between two floating-point values and received a very unexpected answer.

For example, what would you expect from the subtraction “1.2 − 1.0”? 0.2, no? Most of us would. But surprisingly, when we perform the same calculation in Python we get “0.19999999999999996” as the answer. Weird.
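You can reproduce the surprise in any Python interpreter:

```python
# The subtraction that "should" give 0.2
a = 1.2 - 1.0
print(a)         # 0.19999999999999996
print(a == 0.2)  # False
```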

Floating-point numbers are inherently imprecise. A floating-point number is stored internally with a fixed number of binary digits — typically 32 or 64 bits, depending on the type and the language implementation. Just as 1/3 has no finite decimal expansion, a decimal fraction like 0.1 has no finite binary expansion, so its digits are cut off at some point, resulting in small round-off errors. Some programming languages do provide tools, such as arbitrary-precision decimal or rational number types, that avoid these problems to a certain degree.
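Python, for instance, ships the decimal and fractions modules in its standard library, which sidestep binary representation error entirely (a quick sketch):

```python
from decimal import Decimal
from fractions import Fraction

# Exact decimal arithmetic: construct Decimals from strings,
# not floats, so no binary rounding sneaks in.
print(Decimal("1.2") - Decimal("1.0"))  # 0.2

# Exact rational arithmetic: 6/5 - 1 = 1/5, with no rounding at all.
print(Fraction("6/5") - Fraction("1"))  # 1/5
```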

Round-off Error

A round-off error, also called rounding error, is the difference between an approximation of a number used in computation and its exact (correct) value.

When people talk about round-off error, they mean the error between a number and its representation. For example, 200/3 would be represented as 66.6667 on a computer that keeps six significant digits and rounds off the last one. The last digit has been rounded up from 6 to 7. The difference between 200/3 and 66.6667, that is, (200/3) − 66.6667, is the round-off error.
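You can inspect this representation error directly in Python: constructing a Decimal from a float reveals the exact binary value the machine actually stores for 0.1.

```python
from decimal import Decimal

# Decimal(float) converts the float losslessly, exposing the
# exact value of the nearest 64-bit binary approximation to 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```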

Catastrophic Cancellation (Loss of significance)

Loss of significance, also known as catastrophic cancellation, is the error that arises when subtracting two nearly equal numbers. The effect is that the number of significant digits in the result — the digits of a number, starting from the first non-zero digit, used to express it to the required degree of accuracy — is reduced.

Consider the decimal number “0.1234567891234567890”. A floating-point representation of this number on a machine that keeps 10 floating-point digits would be “0.1234567891”. The first is accurate to about 10⁻¹⁹, while the second is only accurate to about 10⁻¹⁰. Now perform the calculation,

0.1234567891234567890 − 0.1234567890000000000

Performing the calculation with 20 significant digits gives “0.0000000001234567890”, whereas the 10-digit floating-point machine yields “0.0000000001”, i.e. 1.000000000×10⁻¹⁰. Nine of the ten significant digits have been lost in the computation, and the loss is irreversible.
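The same effect shows up in binary floating point. A minimal Python sketch: add a tiny value to 1.0, subtract 1.0 back, and compare the result to the original tiny value.

```python
x = 1e-15
result = (1.0 + x) - 1.0  # mathematically this equals x exactly

print(result)  # roughly 1.11e-15, not 1e-15
rel_err = abs(result - x) / x
print(rel_err)  # roughly 0.11 — about an 11% relative error
```

Adding x to 1.0 forces x's digits below the precision of numbers near 1.0, so most of them are discarded; the subtraction then exposes what was lost.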

One consequence of this is that it is dangerous to compare the result of a computation to a floating-point number with “==”. Tiny inaccuracies may make “==” fail. Instead, you have to check that the difference between the two numbers is less than a certain threshold:

epsilon = 1e-13                     # tiny allowed error
expected_result = 0.2

result = 1.2 - 1.0                  # 0.19999999999999996, not 0.2
if (expected_result - epsilon) <= result <= (expected_result + epsilon):
    print("Close enough")
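For this kind of comparison, Python’s standard library already provides math.isclose:

```python
import math

result = 0.1 + 0.2
print(result == 0.3)              # False — exact comparison fails
print(math.isclose(result, 0.3))  # True — within default rel_tol=1e-09
```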

Moreover, these little errors can cause disasters when they go unaccounted for. A famous example is the fate of the Ariane 5 rocket launched on June 4, 1996 (European Space Agency 1996). In the 37th second of flight, the inertial reference system attempted to convert a 64-bit floating-point number to a 16-bit signed integer, which triggered an overflow error. The resulting error value was interpreted by the guidance system as flight data, causing the rocket to veer off course and be destroyed.

That’s it. Thanks for reading. Feel free to correct me or share your reading experience with me in the comments below.
