In the C99 Standard, 7.18.1.3 Fastest minimum-width integer types:
(7.18.1.3p1) "Each of the following types designates an integer type that is usually fastest225) to operate with among all integer types that have at least the specified width."
225) "The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."
and
(7.18.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
The types int_fastN_t and uint_fastN_t are counterparts to the exact-width integer types intN_t and uintN_t. The implementation guarantees only that they are at least N bits wide; it is free to use a wider type when that lets it generate faster code.
For example, on a 32-bit machine, uint_fast16_t could be defined as an unsigned int rather than as an unsigned short, because working with types of machine word size is more efficient.
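As a quick illustration (a minimal sketch; it assumes uint16_t exists on your platform, and the sizes printed depend entirely on the compiler), you can ask the implementation what it actually chose:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a typical 32-bit platform, uint16_t is 2 bytes, while
         * uint_fast16_t is often 4 bytes because the implementation
         * picks the machine-word-sized type it considers fastest. */
        printf("uint16_t:       %zu bytes\n", sizeof(uint16_t));
        printf("uint_least16_t: %zu bytes\n", sizeof(uint_least16_t));
        printf("uint_fast16_t:  %zu bytes\n", sizeof(uint_fast16_t));
        return 0;
    }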
Another reason for their existence is that the exact-width integer types are optional in C, whereas the fastest minimum-width integer types and the minimum-width integer types (int_leastN_t and uint_leastN_t) are required.
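Because of this, code that needs an exact-width type should test for it, while the least- and fast-width types can be used unconditionally. A sketch of such a test (the exact-width limit macros such as INT64_MAX are defined if and only if the corresponding type exists):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef INT64_MAX
        /* The exact-width type is optional, so guard its use. */
        printf("int64_t is available (%zu bytes)\n", sizeof(int64_t));
    #else
        printf("int64_t is not available on this implementation\n");
    #endif
        /* The least- and fast-width 8/16/32/64-bit types are mandatory. */
        printf("int_least64_t: %zu bytes\n", sizeof(int_least64_t));
        printf("int_fast64_t:  %zu bytes\n", sizeof(int_fast64_t));
        return 0;
    }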
On your platform, they're all names for the same underlying data type. On other platforms, they aren't.
int64_t is required to be EXACTLY 64 bits. On architectures with (for example) a 9-bit byte, it won't be available at all.
int_least64_t is the smallest data type with at least 64 bits. If int64_t is available, it will be used. But (for example) on a machine with 9-bit bytes, this could be 72 bits.
int_fast64_t is the data type with at least 64 bits and the best arithmetic performance. It's there mainly for consistency with int_fast8_t and int_fast16_t, which on many machines will be 32 bits, not 8 or 16. In a few more years, there might be an architecture where 128-bit math is faster than 64-bit, but I don't think any exists today.
If you're porting an algorithm, you probably want to be using int_fast32_t, since it will hold any value your old 32-bit code can handle, but will be 64-bit if that's faster. If you're converting pointers to integers (why?), then use intptr_t.
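A small sketch of both suggestions (the helper name sum_first_n is made up for illustration, and intptr_t itself is technically optional in C99, though nearly every implementation provides it):

    #include <inttypes.h>
    #include <stdio.h>

    /* A counter that was a plain 32-bit int in the old code: int_fast32_t
     * holds the same range but may be a wider, faster type. */
    static int_fast32_t sum_first_n(int_fast32_t n)
    {
        int_fast32_t total = 0;
        for (int_fast32_t i = 1; i <= n; i++)
            total += i;
        return total;
    }

    int main(void)
    {
        printf("sum = %" PRIdFAST32 "\n", sum_first_n(1000));

        int x = 42;
        void *vp = &x;
        intptr_t p = (intptr_t)vp;       /* any void * fits in intptr_t         */
        int *q = (int *)(void *)p;       /* converting back yields an equal ptr */
        printf("*q = %d\n", *q);
        return 0;
    }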
Best Answer
The difference is defined in the sections of the C99 standard that Carl Norum quoted. But it may be useful to have an example.
Suppose you have a C compiler for a 36-bit system, with char = 9 bits, short = 18 bits, int = 36 bits, and long = 72 bits. Then:

- int8_t does not exist, because there is no way to satisfy the constraint of having exactly 8 value bits with no padding.
- int_least8_t is a typedef of char, NOT of short or int, because the standard requires the smallest type with at least 8 bits.
- int_fast8_t can be anything. It's likely to be a typedef of int if the "native" size is considered to be "fast".