sjna

Reputation: 21

Why am I getting two different outputs in c when I use int or double?

So here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double blocksize = 32;
double indexSize, tagSize, offsetSize;

int main(int argc, char** argv) {
    double index;
    double cachesize = 1;

    offsetSize = log(blocksize) / log(2.0);
    index = cachesize / blocksize * 1024;
    indexSize = log(index) / log(2.0);
    tagSize = 32 - indexSize - offsetSize;

    printf("Offset : %f\n", offsetSize);
    printf("Index: %f\n", index);
    printf("Index  : %f\n", indexSize);
    printf("Tag    :  %f\n", tagSize);

    return (EXIT_SUCCESS);
}

So the problem is that when I declare all of these variables as int, I get this output:

Offset : 5
Index: 0
Index : -2147483648
Tag : -2147483621

However, when I declare everything as double, I get this output:

Offset : 5.000000
Index: 32.000000
Index : 5.000000
Tag : 22.000000

Why am I getting two different outputs? I thought the only difference between a double and an int is that an int holds whole numbers while a double can also hold fractional values, so I expected the int version to produce output similar to what I got with double (5.000000, 32.000000, etc.). So why are the two sets of values so different?

Upvotes: 0

Views: 87

Answers (1)

Serdalis

Reputation: 10489

There are quite a few differences between integers and doubles, but the place where your example goes wrong is this line:

index = cachesize/blocksize * 1024;

When cachesize and blocksize are doubles, cachesize / blocksize is floating-point division, which keeps the fractional part: 1.0 / 32.0 is 0.03125, and multiplying by 1024 gives 32.

When cachesize and blocksize are ints, cachesize / blocksize is integer division, which discards the fractional part: 1 / 32 is 0, so index becomes 0. log(0) then returns negative infinity, and converting that out-of-range value to an int is undefined behavior; on your platform it shows up as -2147483648 (INT_MIN), which in turn corrupts the tag calculation.
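
A minimal standalone snippet (not your original code, just an illustration with hypothetical variable names) shows the two kinds of division side by side:

#include <stdio.h>

int main(void) {
    int    i_cache = 1,   i_block = 32;
    double d_cache = 1.0, d_block = 32.0;

    /* Integer division truncates: 1 / 32 == 0, so the whole expression is 0. */
    printf("int    : %d\n", i_cache / i_block * 1024);

    /* Floating-point division keeps the fraction: 1.0 / 32.0 == 0.03125, times 1024 is 32. */
    printf("double : %f\n", d_cache / d_block * 1024);

    return 0;
}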

To fix your line of code, you could convert one of the operands to a double to force floating-point division, like so:

index = ((double)cachesize / blocksize) * 1024;

This will give you the results:

Offset : 5
Index  : 32
Index  : 5
Tag    : 22
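
For completeness, here is a sketch of the all-int version of your program with that cast applied (and %d instead of %f in the printf calls); on a typical platform it reproduces the values above:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int blocksize = 32;
int indexSize, tagSize, offsetSize;

int main(int argc, char** argv) {
    int index;
    int cachesize = 1;

    offsetSize = log(blocksize) / log(2.0);
    /* The cast forces floating-point division before the truncating assignment. */
    index = ((double)cachesize / blocksize) * 1024;
    indexSize = log(index) / log(2.0);
    tagSize = 32 - indexSize - offsetSize;

    printf("Offset : %d\n", offsetSize);
    printf("Index  : %d\n", index);
    printf("Index  : %d\n", indexSize);
    printf("Tag    : %d\n", tagSize);

    return EXIT_SUCCESS;
}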

Upvotes: 1
