C data type sizes intmax_t vs any other integer, void * vs any other pointer

Is it true that:

I know that intmax_t can store any value that any other signed integer type can store, but is it also required that its size be the largest of any integer type? Or is it possible that some other integer type uses so many more padding bits than intmax_t that its size becomes larger than that of intmax_t, even though it can't hold any value that intmax_t can't hold?

Similarly for pointers: I know that any data pointer can be converted to char * or void * and back again without losing information, but does that mean the sizes of char * and void * have to be the largest of all data pointer sizes?

I ask this question because I have a function in a library that converts a string to a different type, indicated by an integer. To test whether this conversion can be done for a given string (i.e. the format is correct), there is a function that tests this for different data types; it should reserve enough memory for all possible types, do the conversion, check for errors, and free the memory again. Is it enough to make the buffer large enough for intmax_t to cover all integer types?
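For the integer case, one common approach is to parse into an intmax_t and only check whether the parse succeeds, since intmax_t covers the value range of every standard signed integer type. A minimal sketch (the helper name `fits_signed_integer` is hypothetical, not part of any library):

```c
#include <inttypes.h>  /* strtoimax, intmax_t */
#include <errno.h>

/* Hypothetical helper: returns 1 if `s` parses cleanly as a base-10
 * signed integer, 0 otherwise.  Parsing into an intmax_t is enough to
 * detect overflow for every standard signed integer type, because
 * intmax_t can represent any value those types can. */
static int fits_signed_integer(const char *s)
{
    char *end;
    errno = 0;
    intmax_t v = strtoimax(s, &end, 10);
    (void)v;  /* only the success/failure state matters here */
    return errno == 0 && end != s && *end == '\0';
}
```

Note this checks value range, not object size; as discussed below, size and value range are separate questions.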

Upvotes: 0

Views: 223

Answers (1)

Joshua

Reputation: 43317

There's an intptr_t now that's guaranteed to work; however, the platforms of years past where intptr_t would need to be larger than ptrdiff_t don't have intptr_t, because they're frozen in time and the native compilers for those platforms simply don't have it. If you have a modern compiler such as OpenWatcom targeting the old architectures, it will work.

While they did fix this stuff for modern compilers cross-compiling to embedded CPUs, you end up with other aberrations instead that are just as bad and equally hard to test for. The compilers I've had to deal with in the embedded world had some real nasties; if these are typical, trying to make a platform-neutral library that doesn't assume a flat architecture is fraught with peril:

  • NULL is not 0. This means that memset() doesn't initialize pointers in structures to NULL and neither does calloc(); nor are they NULL when declared as static unless explicitly initialized to NULL.
  • sizeof(char *) < sizeof(const char *) and sizeof(void *) < sizeof(const void *); this particular compiler didn't give an intptr_t and ptrdiff_t could not contain the result of arbitrary subtraction of two pointers in the same character array (but was avoidable by making the character array no bigger than PTRDIFF_T_MAX).
  • The compiler didn't implement C99, had no intptr_t, and ptrdiff_t was too small to hold a pointer.
  • free() didn't do anything.
  • The compiler statically removed null pointer tests where it could prove the pointer pointed to a struct, but NULL was a possible address of a global variable.
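The first bullet has a practical consequence: on a platform where the null pointer is not all-bits-zero, zero-filling memory with memset() or calloc() does not give you null pointers, so pointer members must be assigned NULL explicitly. A minimal sketch of the portable pattern (the `node`/`make_node` names are illustrative, not from the question):

```c
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Portable initialization: assign NULL explicitly rather than relying
 * on memset()/calloc() zero-fill, which only produces a null pointer
 * on platforms where NULL happens to be all-bits-zero. */
static struct node *make_node(int value)
{
    struct node *n = malloc(sizeof *n);
    if (n != NULL) {
        n->value = value;
        n->next = NULL;   /* explicit, not memset(n, 0, sizeof *n) */
    }
    return n;
}
```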

I've never tried to make a nontrivial library work in the embedded world and certainly never one that messed with pointers like this.

TL;DR: All four of your assertions ought to be true, but when put to the trial by fire where it matters, you end up dealing with ugliness.

It occurs to me that if you're converting between pointer and string, sprintf and sscanf have a format specifier (%p) that can do this.

Upvotes: 1
