For small integer values, is memory space wasted?

I know that an integer data type takes 2 or 4 bytes of memory. I want to know: if the value stored in an int variable is small, is the remaining space wasted?

#include <stdio.h>

int main(void)
{
  int a = 1;          /* small value stored in a full int */
  printf("%d\n", a);
  return 0;
}

The binary value of a is 00000001, which fits in one byte, yet the int data type allocates at least two bytes for it. Is the remaining byte wasted?

Upvotes: 5

Views: 3275

Answers (6)

LucaG

Reputation: 84

Usually char, short, int, long, long long, float and double (along with their unsigned counterparts) have a specific number of bytes, as explained in the following [link][1].

For example, a compiler might use 2 bytes for a char (which is usually 1 byte). The ARM architecture, for instance, has specific assembly instructions for manipulating 16-bit memory locations, so the compiler may choose to use 2 bytes as a trade-off between speed and space. However, the programmer need not be concerned with making these conversions, because the compiler makes them. In these cases the extra bytes are not used by your code.

Upvotes: 0

Clifford

Reputation: 93476

"wasted" is the wrong word - all binary-digits of a type are significant in its value. However for values of a limited range of possible values, you can choose to use a smaller type. For example char is an integer type too, and typically (though not universally) 8 bit.

If you want to be explicit about storage size requirements, use the stdint.h types such as uint8_t, int8_t, uint16_t, int16_t etc.
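
For example, a minimal sketch of the fixed-width types (assuming the platform provides them; the exact-width types in stdint.h are optional):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  uint8_t small = 1;       /* exactly 8 bits, where available */
  int16_t medium = -30000; /* exactly 16 bits, where available */

  printf("sizeof(uint8_t) = %zu\n", sizeof small);  /* 1 */
  printf("sizeof(int16_t) = %zu\n", sizeof medium); /* 2 */
  return 0;
}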

That said, on many platforms there is often limited benefit in using the smallest possible type, since processor data alignment and register storage requirements may "waste" space in any case, due to architectural restrictions or performance efficiency.

On the other hand, if you are writing a file record or implementing a communications packet, for example, where alignment may not be an architectural issue, then using the smaller data type can bring significant savings in space and I/O performance.

Further, you could use bit-fields to specify the minimum number of bits necessary to represent a value. But what you save in storage may be offset by the additional code generated to access the bit-fields, and alignment and packing remain compiler/architecture dependent, so it is not a given that there will be any saving whatsoever.
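
A minimal sketch of the bit-field approach (the struct and member names are illustrative; the resulting size is compiler- and architecture-dependent):

#include <stdio.h>

struct flags {
  unsigned int ready : 1; /* 1 bit suffices for a boolean */
  unsigned int mode  : 3; /* 3 bits cover values 0..7 */
  unsigned int count : 4; /* 4 bits cover values 0..15 */
};

int main(void)
{
  struct flags f = { 1, 5, 12 };
  /* All three members typically share one storage unit, but
     padding and packing are implementation-defined. */
  printf("sizeof(struct flags) = %zu\n", sizeof f);
  return 0;
}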

Upvotes: 0

Sergey Kalinichenko

Reputation: 726639

To determine how much space is wasted, if at all, you need to consider the range of values that you want to store in your int variable, not just the current value.

If your int is 32-bit in size, and you want to store positive and negative values in it in the range between -2,000,000,000 and 2,000,000,000, then you need all 32 bits, so none of the bits in your int are wasted. If, on the other hand, the range is from -30,000 to 30,000, then you could have used a 16-bit data type, so two bytes are wasted.
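
The saving matters most in bulk. A hypothetical sketch (the array names and element count are illustrative):

#include <stdio.h>
#include <stdint.h>

#define COUNT 1000000u

int main(void)
{
  /* Values known to stay within -30,000 to 30,000 fit in int16_t. */
  static int16_t readings16[COUNT];
  static int32_t readings32[COUNT];

  printf("16-bit array: %zu bytes\n", sizeof readings16); /* 2,000,000 */
  printf("32-bit array: %zu bytes\n", sizeof readings32); /* 4,000,000 */
  return 0;
}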

Note that sometimes "wasting" a few bytes comes with an improvement in speed, because a larger size happens to be the "native" size for the CPU's registers. In this case a "waste" becomes a "trade-off", because you get extra speed for using additional memory space.

Upvotes: 1

Lundin

Reputation: 213960

In theory, yes, the space is wasted. Although on a 32-bit CPU, allocating 32 bits of data might mean faster access, since it suits the alignment. So using a 32-bit variable just to store the value 1 can be an optimization of speed over memory consumption.

On microcontroller systems, programmers have far less memory and are therefore more picky with variable declarations, using the types from stdint.h instead, to allocate just as much memory as needed. They would use uint8_t rather than int.

If you want the best of both worlds - fastest access and then low memory consumption if possible - use the uint_fast8_t type. Then the compiler will pick the fastest possible type that can store values up to 255.
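
A small sketch comparing the two (both results are implementation-defined; the fast type may or may not be wider than one byte on your platform):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  uint8_t      exact = 1; /* exactly 8 bits */
  uint_fast8_t fast  = 1; /* at least 8 bits, whatever is fastest */

  printf("sizeof(uint8_t)      = %zu\n", sizeof exact);
  printf("sizeof(uint_fast8_t) = %zu\n", sizeof fast);
  return 0;
}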

Upvotes: 5

Bathsheba

Reputation: 234715

I know that an integer data type takes 2 or 4 bytes of memory

Do you? All the C standard states is that an int must be capable of storing a number in the inclusive range -32767 to +32767, and must be no smaller than a short or a char.

An exotic system might even have unused padding bits at the end of an int. Over the coming years, we may well see the "normal" int being 64 bit.

If you want to minimise wasted space, use a signed char type. That must have a range of at least -127 to +127, and sizeof(char) is 1 by the standard. The number of bits used is given by CHAR_BIT, which is normally 8.
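
For example, the macros in limits.h report what your platform actually provides:

#include <stdio.h>
#include <limits.h>

int main(void)
{
  signed char c = 127;

  printf("CHAR_BIT  = %d\n", CHAR_BIT);                   /* normally 8 */
  printf("sizeof(c) = %zu\n", sizeof c);                  /* 1 by definition */
  printf("range     = %d to %d\n", SCHAR_MIN, SCHAR_MAX); /* at least -127 to +127 */
  return 0;
}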

Finally, note that minimising space may well have little bearing on execution speed, particularly in C, where int is normally the CPU's native type, and types narrower than int are widened to int anyway in the majority of expressions.
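
A short sketch of that widening in action:

#include <stdio.h>

int main(void)
{
  unsigned char a = 200;
  unsigned char b = 100;

  /* Both operands are promoted to int before the addition,
     so the result is 300, not 300 % 256 = 44. */
  int sum = a + b;
  printf("%d\n", sum); /* prints 300 */
  return 0;
}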

Upvotes: 2

gsamaras

Reputation: 73376

Practically yes, since the value you want to store could be represented with less memory.

I mean, if you just wanted to represent binary values, 0 and 1, then one bit would suffice. Anything that uses more than one bit to represent these values consumes extra memory.

That's why some people store small values in chars.
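
For example, a sketch storing eight boolean flags in chars rather than ints (the array names are illustrative):

#include <stdio.h>

int main(void)
{
  char flags_small[8] = { 0, 1, 1, 0, 1, 0, 0, 1 };
  int  flags_large[8] = { 0, 1, 1, 0, 1, 0, 0, 1 };

  /* With a typical 4-byte int, the char array holds the same
     information in a quarter of the memory. */
  printf("char array: %zu bytes\n", sizeof flags_small); /* 8 */
  printf("int array:  %zu bytes\n", sizeof flags_large); /* typically 32 */
  return 0;
}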

Upvotes: 0
