Ryan Dignard

Reputation: 669

Why does the C standard provide unsized types (int, long long, char vs. int32_t, int64_t, uint8_t etc.)?

Why weren't the contents of stdint.h made the standard types when it was added to the standard (no int, no short, no float, but int32_t, int16_t, float32_t, etc.)? What advantage did, or does, leaving type sizes ambiguous provide?

In Objective-C, why was it decided that CGFloat, NSInteger, and NSUInteger have different sizes on different platforms?

Upvotes: 1

Views: 594

Answers (2)

Jens Gustedt

Reputation: 78903

C is meant to be portable from embedded devices, through your phone, to desktops, mainframes, and beyond. These don't all have the same base types; e.g., the latter may have uint128_t where others don't. Writing code with fixed-width types would severely restrict portability in some cases.

This is why, by preference, you should use neither uintX_t nor int, long, etc., but the semantic typedefs such as size_t and ptrdiff_t. These are really the ones that make your code portable.
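As a sketch of what that looks like in practice (the function and names here are just invented examples, not from any particular codebase):

```c
#include <stddef.h>
#include <stdio.h>

/* size_t is guaranteed to be able to hold the size of any object,
 * so it is the portable type for sizes and array indices.
 * ptrdiff_t is the portable type for the result of subtracting
 * two pointers. */
static size_t count_nonzero(const int *a, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (a[i] != 0)
            count++;
    return count;
}

int main(void)
{
    int data[] = {1, 0, 2, 0, 3};
    ptrdiff_t span = &data[4] - &data[0];   /* pointer difference */
    printf("nonzero: %zu, span: %td\n",
           count_nonzero(data, sizeof data / sizeof data[0]), span);
    return 0;
}
```

This compiles and runs the same way regardless of whether the platform's int is 16, 32, or 64 bits wide.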

Upvotes: 2

Dietrich Epp

Reputation: 213338

When C was designed, there were computers with different word sizes. Not just multiples of 8, but other sizes like the 18-bit word size on the PDP-7. So sometimes an int was 16 bits, but maybe it was 18 bits, or 32 bits, or some other size entirely. On a Cray-1 an int was 64 bits. As a result, int meant "whatever is convenient for this computer, but at least 16 bits".
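To see this on a given machine, a small program like the following (my own illustration) prints what the platform actually chose; the standard itself only pins down minimum ranges:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only guarantees minimums: CHAR_BIT >= 8 and
     * INT_MAX >= 32767 (i.e. int has at least 16 bits). The actual
     * values are whatever the platform found convenient. */
    printf("CHAR_BIT    = %d\n", CHAR_BIT);
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("INT_MAX     = %d\n", INT_MAX);
    return 0;
}
```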

That was about forty years ago. Computers have changed, so it certainly looks odd now.

NSInteger is used to denote the computer's word size, since it makes no sense to ask for the 5 billionth element of an array on a 32-bit system, but it makes perfect sense on a 64-bit system.
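This is, in condensed form, how Apple's NSObjCRuntime.h header defines these types (the real header has a few extra platform conditions):

```c
/* Condensed from Apple's NSObjCRuntime.h: NSInteger tracks the
 * platform word size instead of being a fixed-width type. */
#if __LP64__
typedef long NSInteger;              /* 64 bits on 64-bit platforms */
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;               /* 32 bits on 32-bit platforms */
typedef unsigned int NSUInteger;
#endif
```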

I can't speak for why CGFloat is a double on 64-bit systems. That baffles me.

Upvotes: 3
