Reputation: 1692
Currently I am working with a code base (mixed C and C++) targeted at a 32-bit MIPS platform. The processor is a fairly modern one [just to mention that we have a good amount of processing power and memory].
The code base uses data types like uint8 [1-byte-wide unsigned integer], uint16 [2-byte-wide unsigned integer], uint32 [4-byte-wide unsigned integer], etc.
I know how the usage of these constructs is helpful while porting the code to different platforms.
My questions are:
What is the use of/benefit in using a uint16 where a uint32 will also suffice (if there is any)?
Will there be any savings in memory usage from using shorter data types (considering data alignment)?
If it is to save a few bytes of memory, is it something sensible to do on modern hardware?
Upvotes: 23
Views: 63757
Reputation: 93476
First of all, if you have types such as uint16 defined, where are they defined? They are not standard types, so they will be defined in some proprietary header - maybe yours, or maybe supplied by some third-party library; in which case you have to ask yourself how portable that code is, and whether you are creating a dependency that might not make sense in some other application.
Another problem is that so many libraries (ill-advisedly, IMO) define such types under various names - UINT16, uint16, U16, UI16, etc. - that it becomes somewhat of a nightmare to ensure type agreement and avoid name clashes. If such names are defined, they should ideally be placed in a namespace or given a library-specific prefix to indicate which library they belong to, for example rtos::uint16 or rtos_uint16.
Since the ISO C99 standard library provides standard bit-length-specific types in stdint.h, you should prefer their use over any defined in a proprietary or third-party header. These types have a _t suffix, e.g. uint16_t. In C++ they may be placed in the std:: namespace (though that is not a given, since the header was introduced in C99).
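For illustration, a minimal sketch of their use (the variable names are mine, not from any particular code base):

    #include <cstdint>  // <stdint.h> in C

    std::uint16_t sequence;    // exactly 16 bits wide
    std::uint32_t address;     // exactly 32 bits wide
    std::uint_fast16_t count;  // at least 16 bits, whatever width is fastest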
1] What is the use of/benefit in using a uint16 where a uint32 will also suffice (if there is any)?
Apart from my earlier advice to prefer stdint.h's uint16_t, there are at least two legitimate reasons to use length-specific types:
2] Will there be any savings in memory usage from using shorter data types (considering data alignment)?
Possibly, but if memory is not your problem, that is not a good reason to use them. Worth considering perhaps for large data objects or arrays, but applying them globally is seldom worth the effort.
3] If it is to save a few bytes of memory, is it something sensible to do on modern hardware?
See [2]. "Modern hardware", however, does not necessarily imply large resources; there are plenty of 32-bit ARM Cortex-M devices with only a few KB of RAM, for example. That is more about die space, cost and power consumption than it is about age of design or architecture.
Upvotes: 9
Reputation: 64223
What is the use of/benefit in using a uint16 where a uint32 will also suffice (if there is any)?
There are CPUs where unsigned char is a 16-bit value. Unit testing such code would be difficult without the use of typedefs (uint16 is just a typedef for an appropriate type).
Also, with the use of these typedefs, it is easier to build on different platforms without many problems.
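A hypothetical project header of the kind being described, built on top of the standard stdint.h types (the typedef names mirror those in the question):

    /* types.h - hypothetical portability header */
    #include <stdint.h>

    typedef uint8_t  uint8;   /* 1-byte-wide unsigned integer */
    typedef uint16_t uint16;  /* 2-byte-wide unsigned integer */
    typedef uint32_t uint32;  /* 4-byte-wide unsigned integer */

On an exotic platform, only this one header has to change, not every file that uses the names.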
Will there be any savings in memory usage from using shorter data types (considering data alignment)?
No, that is not the point. If uint16 is a typedef for unsigned short, then you could use unsigned short everywhere, but you might get a type of a different width on different platforms.
Of course, using a smaller type will reduce memory consumption - for example, using uint16 instead of uint32 - but mostly when you use arrays.
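For example, the array case (sizes assume an 8-bit byte, as on MIPS):

    #include <cstdio>
    #include <cstdint>

    int main() {
        static std::uint16_t narrow[1000];  // 2000 bytes
        static std::uint32_t wide[1000];    // 4000 bytes
        std::printf("%zu %zu\n", sizeof narrow, sizeof wide);  // prints "2000 4000"
        return 0;
    }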
If it is to save a few bytes of memory, is it something sensible to do on modern hardware?
That depends on the platform:
Upvotes: 1
Reputation: 20027
One has to check the produced machine code/assembly to verify whether there are any savings. In RISC-type architectures the typical immediate is 16-bit, but using uint16_t will consume a full 32-bit register anyway - thus, using plain int types while committing to keep values near zero will produce the same results while being more portable.
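One way to see this on MIPS (illustrative functions; the exact output depends on the compiler and flags):

    #include <cstdint>

    // Compile to assembly (e.g. -O2 -S with a MIPS cross-compiler) and
    // compare: the 16-bit version typically needs an extra
    // "andi $2, $2, 0xffff" to truncate the sum back to 16 bits,
    // while the plain unsigned version does not.
    std::uint16_t inc16(std::uint16_t x) { return x + 1; }
    unsigned      inc32(unsigned x)      { return x + 1; }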
IMO saving memory is worthwhile on modern platforms too. Tighter code leads to e.g. better battery life and a more fluent UX. However, I'd suggest micro-managing the size only when working with (large) arrays, or when the variable maps to some real HW resource.
P.S. Compilers are smart, but the folks writing them are working at this very moment to make them even better.
Upvotes: 2
Reputation: 62048
What is the use of/benefit in using a uint16 where an uint32 will also suffice(if, there is any)?
If those uint16s are parts of arrays or structures, you can save memory and perhaps be able to handle larger data sets than with uint32s in those same arrays or structures. It really depends on your code.
Data protocols and file formats may use uint16s, and it may not be correct to use uint32s instead. This depends on the format and semantics (e.g. if you need values to wrap around from 65535 to 0, uint16 will do that automatically while uint32 won't).
OTOH, if those uint16s are just single local or global variables, replacing them with 32-bit ones might make no significant difference, because they are likely to occupy the same space due to alignment, and they are passed as 32-bit parameters (on the stack or in registers) on MIPS anyway.
Will there be any savings in memory usage from using shorter data types (considering data alignment)?
There may be savings, especially when uint16s are parts of many structures or elements of big arrays.
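A sketch of the structure case (sizes are typical for a 32-bit ABI such as MIPS o32; padding is implementation-defined):

    #include <cstdint>

    struct SampleWide {    // typically 12 bytes
        std::uint32_t id;
        std::uint32_t x, y;
    };

    struct SampleNarrow {  // typically 8 bytes: 4 + 2 + 2, no padding
        std::uint32_t id;
        std::uint16_t x, y;
    };
    // A million-element array of SampleNarrow saves roughly 4 MB.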
If it is to save a few bytes of memory, is it something sensible to do on modern hardware?
Yes: you lower the memory bandwidth (which is always a good thing), and you often get fewer cache misses (in the data caches and the TLB) when you operate on less data.
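For instance, with 32-byte cache lines (a common size on MIPS32 cores), a sequential scan touches half as many lines with the narrower type:

    #include <cstdint>

    // One 32-byte line holds 16 uint16_t elements but only 8 uint32_t
    // elements, so this loop pulls roughly half as many lines into the
    // cache as the same loop over a uint32_t array would.
    std::uint32_t sum(const std::uint16_t *a, int n) {
        std::uint32_t s = 0;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }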
Upvotes: 24
Reputation: 4207
cstdint has loads of typedefs for different purposes:
- intN_t for a specific width
- int_fastN_t for the fastest integer which has at least N bits
- int_leastN_t for the smallest integer which has at least N bits
- unsigned equivalents of all of the above (uintN_t and friends)
You should choose depending on your circumstances. Storing thousands in a std::vector and not doing loads of computation? intN_t is probably your man. Need fast computation on a small number of integers? int_fastN_t is probably your guy.
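A sketch of both situations (an assumed workload, not a benchmark):

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    // Thousands of stored values, little computation: the exact width
    // keeps the container small.
    std::vector<std::int16_t> samples(100000);

    // A hot loop over a handful of variables: let the platform pick the
    // fastest type that is at least 32 bits wide.
    std::int_fast32_t total(const std::vector<std::int16_t>& v) {
        std::int_fast32_t sum = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            sum += v[i];
        return sum;
    }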
Upvotes: 3
Reputation: 51840
Using exact-width integer types like int32_t and friends is useful in avoiding sign-extension bugs between platforms that have different sizes for int and long. These can occur when applying bit masks or when bit shifting, for example. If you do these operations on a long, and your code works for a 32-bit long, it might break for a 64-bit long. If on the other hand you use a uint32_t, you know exactly what results you'll get regardless of platform.
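A sketch of the sign-extension trap (the constant is just an illustration):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::int32_t x = INT32_MIN;          // bit pattern 0x80000000
        unsigned long u = (unsigned long)x;  // sign-extends to long's width first

        // 32-bit long: u == 0x80000000
        // 64-bit long: u == 0xffffffff80000000 - the extra set bits are
        // exactly the cross-platform surprise described above.
        std::printf("%lx\n", u);

        std::uint32_t fixed = (std::uint32_t)x;  // 0x80000000 everywhere
        std::printf("%lx\n", (unsigned long)fixed);
        return 0;
    }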
They're also useful for exchanging binary data between platforms, where you only have to worry about the endianness of the stored data and not the bit width; if you write an int64_t
to a file, you know that on another platform you can read it and store that into an int64_t
. If you were writing out a long
instead that's 64 bits on one platform, you might end up needing a long long
on another platform because long
there is only 32 bits.
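A minimal sketch of that exchange (error handling omitted; byte order is still your problem, as noted):

    #include <cstdint>
    #include <cstdio>

    // The value on disk is always exactly 8 bytes, so the reader can
    // declare the same int64_t regardless of its platform's long size.
    void save(const char *path, std::int64_t v) {
        std::FILE *f = std::fopen(path, "wb");
        std::fwrite(&v, sizeof v, 1, f);
        std::fclose(f);
    }

    std::int64_t load(const char *path) {
        std::int64_t v = 0;
        std::FILE *f = std::fopen(path, "rb");
        std::fread(&v, sizeof v, 1, f);
        std::fclose(f);
        return v;
    }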
Saving memory is usually not the reason, unless you're talking about very limited environments (embedded stuff) or large data sets (like an array with 50 million elements or such.)
Upvotes: 1
Reputation: 1197
Ans. 1. Software has certain requirements and specifications which strictly say to take only 8/16 bits of a parameter while encoding/decoding, or for some other specific use. So even if you assign a value bigger than 255 into a u8, say, it trims the data automatically for you.
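For example (assuming u8 is an 8-bit unsigned type like uint8_t):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint8_t u8 = 300;              // trimmed on assignment: 300 mod 256 == 44
        std::printf("%u\n", (unsigned)u8);  // prints 44
        return 0;
    }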
Ans. 2. We should not forget that our compilers are more than intelligent enough to handle the optimization, be it for memory or speed. So it is always recommended to use the smaller type when possible.
Ans. 3. Of course saving memory makes sense on modern h/w.
Upvotes: 2
Reputation: 12635
The answers to your questions boil down to one key concept: how big is the data? If you are crunching a lot of it, then the benefit of using smaller data types is obvious. Think of it this way: simply calculating the newly-discovered largest known prime could run you out of memory on a typical workstation. The number itself takes megabytes just to store, and that doesn't include the working memory needed to actually compute it. If you were to use a data type twice as wide as necessary, you would double that. A simplistic example, but a good one nonetheless.
Upvotes: 1
Reputation: 1
Using uint16_t instead of uint32_t saves memory. It might also be a hardware constraint (e.g. some peripheral controller is really sending 16 bits!). However, it may not be worth using, because of cache and alignment considerations (you really have to benchmark).
Upvotes: 1