C++ Performance – Efficient Integer Floor Function

c++ · floor · performance · processing-efficiency · x86-64

I want to define an efficient integer floor function, i.e. a conversion from float or double that performs truncation towards minus infinity.

We can assume that the values are such that no integer overflow occurs. So far I have a few options:

  • casting to int; this requires special handling of the negative values, as the cast truncates toward zero;

    I = int(F); if (F < 0 && I != F) I--;
    
  • casting the result of floor to int;

    int(floor(F));
    
  • casting to int with a large shift to get positives (this can return wrong results for large values);

    int(F + double(0x7fffffff)) - 0x7fffffff;
    

Casting to int is notoriously slow. So are if tests. I have not timed the floor function, but seen posts claiming it is also slow.

Can you think of better alternatives in terms of speed, accuracy or allowed range? It doesn't need to be portable. The targets are recent x86/x64 architectures.

Best Answer

Casting to int is notoriously slow.

Maybe you've been living under a rock since x86-64, or otherwise missed that this hasn't been true for a while on x86. :)

SSE/SSE2 have an instruction to convert with truncation (instead of the default rounding mode). The ISA supports this operation efficiently precisely because conversion with C semantics is not rare in actual codebases. x86-64 code uses SSE/SSE2 XMM registers for scalar FP math, not x87, because of this and other things that make it more efficient. Even modern 32-bit code uses XMM registers for scalar math.

When compiling for x87 (without SSE3 fisttp), compilers used to have to change the x87 rounding mode to truncation, FP store to memory, then change the rounding mode back again. (And then reload the integer from memory, typically from a local on the stack, if doing further stuff with it.) x87 was terrible for this.

Yes that was horribly slow, e.g. in 2006 when the link in @Kirjain's answer was written, if you still had a 32-bit CPU or were using an x86-64 CPU to run 32-bit code.


Converting with a rounding mode other than truncation or default (nearest) isn't directly supported, and until SSE4.1 roundps/roundpd your best bet was magic-number tricks like in the 2006 link from @Kirjain's answer.

Some nice tricks there, but only for double -> 32-bit integer. Unlikely to be worth expanding to double if you have float.

Or more usually, simply add a large-magnitude number to trigger rounding, then subtract it again to get back to the original range. This can work for float without expanding to double, but I'm not sure how easy it is to make floor work.


Anyway, the obvious solution here is _mm256_floor_ps() and _mm256_cvtps_epi32 (vroundps and vcvtps2dq). A non-AVX version of this can work with SSE4.1.

I'm not sure if we can do even better; if you had a huge array to process (and couldn't manage to interleave this work with other work), you could set the MXCSR rounding mode to "towards -Inf" (floor) and simply use vcvtps2dq (which uses the current rounding mode). Then set it back. But it's probably better to cache-block your conversion or do it on the fly as you generate the data, presumably from other FP calculations that need the FP rounding mode set to the default Nearest.

roundps/pd/ss/sd is 2 uops on Intel CPUs, but only 1 uop (per 128-bit lane) on AMD Ryzen. cvtps2dq is also 1 uop. packed double->int conversion also includes a shuffle. Scalar FP->int conversion (that copies to an integer register) usually also costs an extra uop for that.

So there's room for the possibility of magic-number tricks being a win in some cases; it may be worth investigating if _mm256_floor_ps() + cvt are part of a critical bottleneck (or more likely if you have double and want int32).


@Cássio Renan's int foo = floorf(f) will actually auto-vectorize if compiled with gcc -O3 -fno-trapping-math (or -ffast-math), with -march=something that has SSE4.1 or AVX. https://godbolt.org/z/ae_KPv

That's maybe useful if you're using this with other scalar code that's not manually vectorized. Especially if you're hoping the compiler will auto-vectorize the whole thing.
