Reputation:
A question was asked, and I am not sure whether I gave an accurate answer. The question was: why use int, why not char? Why are they separate types at all? Everything is just bits reserved in memory, so why do data types have categories?
Can anyone shed some light on this?
Upvotes: 6
Views: 1293
Reputation: 170539
char is the smallest addressable chunk of memory – well suited to manipulating data buffers, but it can't hold more than 256 distinct values (if char is 8 bits, which is usual) and is therefore not very good for numeric calculations. int is usually bigger than char – more suitable for calculations, but not as suitable for byte-level manipulation.
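A minimal sketch of the difference (assuming the usual 8-bit char): char works fine for holding raw bytes in a buffer, but summing them quickly exceeds what a single 8-bit char can hold, which is where int comes in.

    #include <stdio.h>

    int main(void)
    {
        /* char: one byte each, fine for a raw data buffer */
        char buffer[4] = { 0x12, 0x34, 0x56, 0x78 };

        /* int: wide enough for arithmetic on those bytes */
        int sum = 0;
        for (int i = 0; i < 4; i++)
            sum += (unsigned char)buffer[i];

        /* 0x12 + 0x34 + 0x56 + 0x78 = 276: does not fit in 8 bits */
        printf("sizeof(char) = %zu, sizeof(int) = %zu\n",
               sizeof(char), sizeof(int));
        printf("sum = %d\n", sum);
        return 0;
    }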
Upvotes: 10
Reputation:
The standard mandates very few limitations on char and int:
A char must be able to hold an ASCII value (7 bits), though CHAR_BIT is at least 8 according to the C standard. It is also the smallest addressable unit of memory.
An int is at least 16 bits wide and is the "recommended" default integer type; the exact width is left to the implementation (your C compiler).
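You can check what your own implementation provides via <limits.h>; a small sketch (output varies by platform):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_BIT >= 8 and INT_MAX >= 32767 are guaranteed by the standard */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
        printf("INT_MIN  = %d, INT_MAX  = %d\n", INT_MIN, INT_MAX);
        return 0;
    }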
Upvotes: 1
Reputation: 86492
Remember that C is sometimes used as a higher-level assembly language – to interact with low-level hardware. You need data types that match machine-level features, such as byte-wide I/O registers.
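For example, here is a minimal sketch of touching a byte-wide, memory-mapped status register; the address 0x40001000 and the bit layout are invented for illustration, not a real device:

    #include <stdint.h>

    /* Hypothetical byte-wide status register at a made-up address */
    #define STATUS_REG (*(volatile uint8_t *)0x40001000u)

    void wait_until_ready(void)
    {
        /* An 8-bit type matches the register's width exactly, so the
           compiler emits byte-sized loads rather than wider accesses
           that could disturb neighbouring registers. */
        while ((STATUS_REG & 0x01u) == 0)
            ;  /* poll the "ready" bit */
    }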
From Wikipedia, C (programming language):
C's primary use is for "system programming", including implementing operating systems and embedded system applications, due to a combination of desirable characteristics such as code portability and efficiency, ability to access specific hardware addresses, ability to "pun" types to match externally imposed data access requirements, and low runtime demand on system resources.
Upvotes: 3
Reputation: 3156
In the past, computers had little memory. That was the prime reason for having different data types. If you needed a variable to hold only small numbers, you could use an 8-bit char instead of a 32-bit long. Memory is cheap today, so this reason is less applicable now, but the distinction has stuck anyway.
However, bear in mind that every processor has a natural data width in the sense that it operates on values of a certain size (usually 32 bits). So, if you used an 8-bit char, the value might need to be extended to 32 bits and back again for computation, which can actually slow down your algorithm slightly.
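A quick sketch of the memory argument: for a thousand values that all fit in 8 bits, the element type makes a real difference (exact figures depend on the platform):

    #include <stdio.h>

    int main(void)
    {
        char small[1000];  /* 1000 bytes */
        long big[1000];    /* typically 4000 or 8000 bytes */

        printf("char array: %zu bytes\n", sizeof small);
        printf("long array: %zu bytes\n", sizeof big);
        return 0;
    }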
Upvotes: 1
Reputation: 61803
int is the "natural" integer type; you should use it for most computations.
char is essentially a byte; it's the smallest addressable memory unit. char is not 8 bits wide on all platforms, although that is the case most of the time.
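A small sketch of that "smallest addressable unit" property (assuming a 32-bit int here): a char pointer can legally step through the individual bytes of any object, which is exactly what buffer-handling code relies on. The byte order printed depends on the machine's endianness.

    #include <stdio.h>

    int main(void)
    {
        int value = 0x01020304;  /* assumes int is 32 bits */
        unsigned char *bytes = (unsigned char *)&value;

        /* Walk the object one byte at a time */
        for (size_t i = 0; i < sizeof value; i++)
            printf("byte %zu: 0x%02x\n", i, bytes[i]);
        return 0;
    }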
Upvotes: 0
Reputation: 30442
In general, there are algorithms and designs which are abstractions, and data types help in implementing those abstractions. For example, there is a good chance that a weight is represented as a rational number, which is best stored as a float/double, i.e. a number with a fractional (precision) part.
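As a tiny sketch of that point: storing a weight in an int silently drops the fractional part, while a double keeps it.

    #include <stdio.h>

    int main(void)
    {
        int    weight_int    = 72.6;  /* truncates to 72 */
        double weight_double = 72.6;  /* keeps the fraction */

        printf("as int:    %d\n", weight_int);
        printf("as double: %.1f\n", weight_double);
        return 0;
    }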
I hope this helps.
Upvotes: 0