Can someone give me an example of a floating-point number (double precision) that needs more than 16 significant decimal digits to represent it?
I have found in this thread that sometimes you need up to 17 digits, but I am not able to find an example of such a number (16 seems enough to me).
Can somebody clarify this?
Best Answer
My other answer was dead wrong.
Compile and run to see:
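Something like this minimal C program demonstrates it (a sketch of the idea; the variable names a and b match the description below, but the exact original snippet may have differed):

    #include <stdio.h>

    int main(void)
    {
        /* 2^53 is exactly representable as a double. */
        double two53 = 9007199254740992.0;

        /* a = 2*(2^53 - 1) = 18014398509481982
           b = 2*(2^53 - 2) = 18014398509481980
           Both need only 53 significant bits, so both are exact doubles. */
        double a = 2.0 * (two53 - 1.0);
        double b = 2.0 * (two53 - 2.0);

        /* At 16 significant digits the two distinct doubles print identically... */
        printf("%.16g\n%.16g\n", a, b);

        /* ...but at 17 significant digits they differ. */
        printf("%.17g\n%.17g\n", a, b);

        return 0;
    }

With %.16g both values print as 1.801439850948198e+16; only at 17 significant digits (%.17g) do 18014398509481982 and 18014398509481980 become distinguishable.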
a and b are just 2*(2^53 - 1) and 2*(2^53 - 2).
Those are 17-digit base-10 numbers: 18014398509481982 and 18014398509481980. Rounded to 16 significant digits they are identical, yet a and b clearly need only 53 bits of precision to represent in base 2, so both are exactly representable as doubles. Take a and b, cast them to double, and you have your counter-example: two distinct doubles that print the same with 16 digits, so 17 digits are needed to tell them apart.