Reputation: 19918
I've seen some code where they don't use the primitive types int, float, double, etc. directly. They usually typedef them, or use types like uint8_t, etc.
Is it really necessary even these days? Or are C/C++ standardized enough that it is preferable to use int, float, etc. directly?
Upvotes: 9
Views: 40309
Reputation: 215261
uint8_t is rather useless, because due to other requirements in the standard, it exists if and only if unsigned char is 8-bit, in which case you could just use unsigned char. The others, however, are extremely useful. int is (and will probably always be) 32-bit on most modern platforms, but on some ancient stuff it's 16-bit, and on a few rare early 64-bit systems, int is 64-bit. It could also of course be various odd sizes on DSPs.
If you want a 32-bit type, use int32_t or uint32_t, and so on. It's a lot cleaner and easier than all the nasty legacy hacks of detecting the sizes of types and trying to use the right one yourself...
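To make the contrast concrete, here is a minimal sketch of the kind of pre-C99 size-detection hack referred to above, next to the <stdint.h> equivalent; the name my_uint32 and the variable are invented for illustration.

    /* Legacy hack: guess a 32-bit unsigned type from <limits.h>. */
    #include <limits.h>

    #if UINT_MAX == 0xFFFFFFFF
    typedef unsigned int  my_uint32;   /* unsigned int happens to be 32-bit here */
    #elif ULONG_MAX == 0xFFFFFFFF
    typedef unsigned long my_uint32;   /* fall back to unsigned long */
    #else
    #error "could not find a 32-bit unsigned type"
    #endif

    /* Modern approach: let the implementation do the work. */
    #include <stdint.h>
    uint32_t packets_seen = 0;   /* exactly 32 bits wherever uint32_t exists */

The second version is both shorter and harder to get wrong, which is the whole point of the standard typedefs.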
Upvotes: 5
Reputation: 8467
Most code I read, and write, uses the fixed-size typedefs only when the size is an important assumption in the code.
For example if you're parsing a binary protocol that has two 32-bit fields, you should use a typedef guaranteed to be 32-bit, if only as documentation.
I'd only use int16_t or int64_t when the size must be exactly that, say for a binary protocol, to avoid overflow, or to keep a struct small. Otherwise just use int.
If you're just writing "int i" to use i in a for loop, I would not write int32_t for that. I would never expect any "typical" (meaning "not weird embedded firmware") C/C++ code to see a 16-bit int, and the vast majority of C/C++ code out there would implode if faced with 16-bit ints. So if you start to care about int being 16-bit, either you're writing code that cares about weird embedded firmware, or you're something of a language pedant. Just assume int is the best int for the platform at hand and don't type extra noise in your code.
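As a concrete illustration of the binary-protocol case above, here is a minimal sketch; the wire format, struct, and function names are hypothetical.

    #include <stdint.h>
    #include <string.h>

    struct wire_header {
        uint32_t message_id;   /* protocol says: exactly 32 bits */
        uint32_t payload_len;  /* protocol says: exactly 32 bits */
    };

    /* Exact-width types document the size assumption; a plain loop index
       elsewhere can stay an ordinary int. */
    void parse_header(const unsigned char *buf, struct wire_header *out)
    {
        /* memcpy avoids alignment problems; byte-order handling is omitted */
        memcpy(out, buf, sizeof *out);
    }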
Upvotes: 3
Reputation: 70909
C and C++ purposefully don't define the exact size of an int. They do this for a number of reasons, but those aren't important for this question.
Since int isn't set to a standard size, anyone who wants a specific size must do a bit of work to guarantee a certain number of bits. The code that defines uint8_t does that work, and without it (or a technique like it) you wouldn't have a means of defining an unsigned 8-bit number.
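A minimal sketch of the kind of work meant here (real <stdint.h> headers are more elaborate and compiler-specific); the name my_u8 is made up and stands in for the real typedef:

    #include <limits.h>

    #if CHAR_BIT == 8
    /* An exact 8-bit unsigned type can only exist where bytes are 8 bits. */
    typedef unsigned char my_u8;
    #endif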
Upvotes: 2
Reputation: 112366
Because the types like char, short, int, long, and so forth, are ambiguous: they depend on the underlying hardware. Back in the days when C was basically considered an assembler language for people in a hurry, this was okay. Now, in order to write programs that are portable -- which means "programs that mean the same thing on any machine" -- people have built special libraries of typedefs and #defines that allow them to make machine-independent definitions.
The secret code is really quite straightforward. Here, you have uint8_t, which is interpreted as: u for unsigned, int to say it's treated as a number, 8 for the size in bits, and _t to mark it as a standard typedef. In other words, this is an unsigned integer with exactly 8 bits, or what we used to call, in the mists of C history, an "unsigned char".
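The same decoding applies across the rest of the <stdint.h> family; a minimal sketch (the variable names are invented):

    #include <stdint.h>

    uint8_t  flags;    /* u + int + 8  : unsigned, exactly 8 bits  */
    int16_t  delta;    /*     int + 16 : signed, exactly 16 bits   */
    uint32_t crc;      /* u + int + 32 : unsigned, exactly 32 bits */
    int64_t  offset;   /*     int + 64 : signed, exactly 64 bits   */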
Upvotes: 26
Reputation: 3697
C and C++ don't restrict the exact size of the numeric types; the standards only specify a minimum range of values that has to be represented. This means that int can be larger than you expect.
The reason for this is that a particular architecture will often have a size for which arithmetic works faster than other sizes. Allowing the implementor to use this size for int, rather than forcing a narrower type, may make arithmetic with ints faster.
This isn't going to go away any time soon. Even once servers and desktops are all fully transitioned to 64-bit platforms, mobile and embedded platforms may well be operating with a different integer size. Apart from anything else, you don't know what architectures might be released in the future. If you want your code to be portable, you have to use a fixed-size typedef anywhere that the type size is important to you.
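A minimal sketch of why that matters in practice, assuming nothing beyond the standard headers:

    #include <stdint.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* The standard only guarantees that int can hold -32767..32767. */
        printf("int is %zu bits on this platform\n", sizeof(int) * CHAR_BIT);

        int32_t samples = 100000;   /* always fits: int32_t is exactly 32 bits */
        /* a plain int could overflow with this value on a 16-bit platform */
        printf("%ld samples\n", (long)samples);
        return 0;
    }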
Upvotes: 0
Reputation: 30969
The sizes of types in C are not particularly well standardized. 64-bit integers are one example: a 64-bit integer could be long long, __int64, or even int on some systems. To get better portability, C99 introduced the <stdint.h> header, which has types like int32_t to get a signed type that is exactly 32 bits; many programs had their own, similar sets of typedefs before that.
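For instance, a minimal sketch of using a 64-bit integer portably via <stdint.h> and <inttypes.h>, without caring which underlying type it maps to:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t big = INT64_C(9000000000);   /* 64-bit constant, whatever the base type */
        printf("big = %" PRId64 "\n", big);  /* portable format specifier */
        return 0;
    }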
Upvotes: 2
Reputation: 15172
The width of primitive types often depends on the system, not just on the C++ standard or compiler. If you want true consistency across platforms, when you're doing scientific computing for example, you should use the specific uint8_t or whatever, so that the same overflow errors (or precision errors for floats) appear on different machines, the memory overhead is the same, and so on.
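A minimal sketch of that reproducibility point, assuming only standard headers:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t accumulator = 250;
        accumulator += 10;   /* wraps modulo 256 to 4 on every conforming platform */
        printf("%u\n", (unsigned)accumulator);
        return 0;
    }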
Upvotes: 1