A little test program:
#include <iostream>

const float TEST_FLOAT = 1/60;
const float TEST_A = 1;
const float TEST_B = 60;
const float TEST_C = TEST_A / TEST_B;

int main()
{
    std::cout << TEST_FLOAT << std::endl;
    std::cout << TEST_C << std::endl;
    std::cin.ignore();
    return 0;
}
Result:
0
0.0166667
Tested on Visual Studio 2008 & 2010.
- I have worked with other compilers that, if I remember correctly, produced the second result in both cases. My memory could be wrong, but shouldn't TEST_FLOAT have the same value as TEST_C? If not, why not?
- Is TEST_C's value resolved at compile time or at runtime? I always assumed the former, but now that I see these results I have some doubts…
Best Answer
In

    const float TEST_FLOAT = 1/60;

both of the operands are integers, so integer arithmetic is performed. To perform floating-point arithmetic, at least one of the operands needs to have a floating-point type. For example, any of the following would perform floating-point division:

    1.0 / 60
    1 / 60.0
    1.0 / 60.0

(You might choose to use 1.0f instead, to avoid any precision-reduction warnings; 1.0 has type double, while 1.0f has type float.)

In the TEST_FLOAT case, integer division is performed and then the result of the integer division is converted to float in the assignment.

In the TEST_C case, the integer literals 1 and 60 are converted to float when they are assigned to TEST_A and TEST_B; then floating-point division is performed on those floats and the result is assigned to TEST_C.

As for whether TEST_C is computed at compile time or at runtime: it depends on the compiler; either method would be standards-conforming.