Angus

Reputation: 12631

data type ranges differing with operating systems

8-bit, 16-bit, 32-bit, and 64-bit operating systems have different ranges for integer, float, and double values.

Is it the compiler or the processor that makes the difference (8-bit, 16-bit, 32-bit, 64-bit)?

If a 16-bit integer is transferred over a network from one system to a 32-bit system, or vice versa, will the data be represented correctly in memory? Please help me understand.

Upvotes: 5

Views: 403

Answers (5)

Oliver Charlesworth

Reputation: 272802

Ultimately, it is up to the compiler. The compiler is free to choose any data types it likes*, even if it has to emulate their behaviour with software routines. Of course, typically, for efficiency it will try to replicate the native types of the underlying hardware.

As to your second question, yes, of course, if you transfer the raw representation from one architecture to another, it may be interpreted incorrectly (endianness is another issue). That is why functions like ntohs() are used.

* Well, not literally anything it likes. The C standard places some constraints, such as that an int must be at least as large as a short.

Upvotes: 7

unkulunkulu

Reputation: 11922

It depends not just on the compiler and operating system; it is also dictated by the architecture (the processor, at least).

When passing data between possibly different architectures, programs use fixed-size data types, e.g. uint64_t and uint32_t instead of int, short, etc.

But the size of integers is not the only concern when communicating between computers with different architectures; there is a byte-order issue too (try googling big-endian and little-endian).

Upvotes: 2

glglgl

Reputation: 91159

In a network, the protocol has to define which data sizes are used. For endianness, it is highly recommended to use big-endian values.

If it weren't for the APIs, a compiler would be free to set its short, int, and long sizes as it wants. But often, API calls are tied to these types; e.g. the open() function returns an int, whose size must match what the caller expects.

But the types might as well be part of the ABI definition.

Upvotes: 1

Abimaran Kugathasan

Reputation: 32498

The compiler (more properly the "implementation") is free to choose the sizes, subject to the limits in the C standard. The set of sizes offered by C for its various types depends in part on the hardware it runs on; i.e., the compiler makes the choice, but (except in cases like Java, where data types are explicitly independent of the underlying hardware) it is strongly influenced by what the hardware offers.

Upvotes: 3

Arnaud Le Blanc

Reputation: 99919

The size of a given type depends on the CPU and on the conventions of the operating system.

If you want an int of a specific size, use the stdint.h header [wikipedia]. It defines int8_t, int16_t, int32_t, int64_t, and some others, along with their unsigned equivalents.

For communications between different computers, the protocol should define the sizes and byte order to use.

Upvotes: 1
