I have a simple routine which calculates the aspect ratio from a floating point value. So for the value 1.77777779, the routine returns the string "16:9". I have tested this on my machine and it works fine.
The routine is given as:
public string AspectRatioAsString(float f)
{
    bool carryon = true;
    int index = 0;
    double roundedUpValue = 0;
    while (carryon)
    {
        index++;
        float upper = index * f;
        roundedUpValue = Math.Ceiling(upper);
        if (roundedUpValue - upper <= (double)0.1 || index > 20)
        {
            carryon = false;
        }
    }
    return roundedUpValue + ":" + index;
}
On another machine, however, I get a completely different result: the same value 1.77777779 gives "38:21" instead of "16:9".
Best Answer
Here's an interesting bit of the C# specification, from section 4.1.6:

"Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an 'extended' or 'long double' floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects."
It is possible that this is one of those "measurable effects", thanks to that call to Ceiling. Taking the ceiling of a floating-point number, as others have noted, magnifies a difference of 0.000000002 by nine orders of magnitude, because it turns 15.999999999 into 16 and 16.000000001 into 17. Two numbers that differ slightly before the operation differ massively afterwards; the tiny difference might be accounted for by the fact that different machines can have more or less "extra precision" in their floating-point operations.
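For instance (the two literals below are hypothetical stand-ins for the product index * f as it might come out with and without extra precision):

using System;

class CeilingDemo
{
    static void Main()
    {
        // Two values that differ by only 0.000000002 before the operation...
        double justUnderSixteen = 15.999999999;
        double justOverSixteen  = 16.000000001;

        // ...differ by a whole unit after it.
        Console.WriteLine(Math.Ceiling(justUnderSixteen)); // 16
        Console.WriteLine(Math.Ceiling(justOverSixteen));  // 17
    }
}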
Some related issues:
C# XNA Visual Studio: Difference between "release" and "debug" modes?
CLR JIT optimizations violates causality?
To address your specific problem of how to compute an aspect ratio from a float: I'd solve this in a completely different way, with a small table of common ratios.
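A sketch of what that table might look like (the Ratio struct and the particular entries here are illustrative; extend the list with whatever ratios your application cares about):

struct Ratio
{
    public string AsString { get; private set; }
    public double AsDouble { get; private set; }

    public Ratio(string asString, double asDouble) : this()
    {
        AsString = asString;
        AsDouble = asDouble;
    }
}

static readonly Ratio[] commonRatios =
{
    new Ratio("4:3",   4.0 / 3.0),
    new Ratio("16:10", 16.0 / 10.0),
    new Ratio("16:9",  16.0 / 9.0),
    new Ratio("21:9",  21.0 / 9.0),
};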
and now your implementation follows directly from it.
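A minimal LINQ-based sketch, assuming the commonRatios table above (it needs a using System.Linq; directive):

public string AspectRatioAsString(double ratio)
{
    // From the list of common ratios, pick the entry whose
    // numeric value is closest to the given ratio.
    return commonRatios
        .OrderBy(r => Math.Abs(ratio - r.AsDouble))
        .First()
        .AsString;
}

With the input 1.77777779 this returns "16:9" on every machine; the distances to the other table entries are so large that a few billionths of extra precision can no longer flip the answer.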
Notice how the code now reads very much like the operation you are trying to perform: from this list of common ratios, choose the one where the difference between the given ratio and the common ratio is minimized.