Reputation: 592
I used sizeof to check the sizes of longs and floats on my 64-bit AMD Opteron machine. Both show up as 4.
When I check limits.h
and float.h
for maximum float and long values these are the values I get:
Max value of Float:340282346638528859811704183484516925440.000000
Max value of long:9223372036854775807
Since they both are of the same size, how can a float store such a huge value when compared to the long?
I assume that they have a different storage representation for float. If so, does this impact performance: i.e., is using longs faster than using floats?
Upvotes: 2
Views: 412
Reputation: 263537
It's unlikely to matter whether operations on long are faster than operations on float, or vice versa.
If you only need to represent whole number values, use an integer type. Which type you should use depends on what you're using it for (signed vs. unsigned, short vs. int vs. long vs. long long, or one of the exact-width types in <stdint.h>).
If you need to represent real numbers, use one of the floating-point types: float, double, or long double. (float is actually not used much unless memory space is at a premium; double has better precision and is often no slower than float.)
In short, choose a type whose semantics match what you need, and worry about performance later. There's no great advantage in getting wrong answers quickly.
As for storage representation, the other answers have pretty much covered that. Typically, unsigned integers use all their bits to represent the value, signed integers devote one bit to representing the sign (though usually not directly), and floating-point types devote one bit to the sign, a few bits to an exponent, and the rest to the value. (That's a gross oversimplification.)
Upvotes: 1
Reputation: 16315
If so, does this impact performance: i.e., is using longs faster than using floats?
Yes, arithmetic with longs will generally be faster than with floats.
I assume that they have a different storage representation for float.
Yes. The float type is stored in IEEE 754 (single precision) format.
Since they both are of the same size, how can a float store such a huge value when compared to the long?
The format is optimized to store numbers densely in some ranges (near 0, for example), but it's not optimized to be accurate everywhere. For example, you could add 1 to 1000000000. With the float, there probably won't be any difference in the sum (1000000000 instead of 1000000001), but with the long there will be.
Upvotes: 0
Reputation: 4495
It is a tradeoff.
A 32 bit signed integer can express every integer between -2^31 and +2^31-1.
A 32 bit float uses exponential notation and can express a much wider range of numbers, but would be unable to express all of the numbers in the range -- not even all of the integers. It uses some of the bits to represent a fraction, and the rest to represent an exponent. It is effectively the binary equivalent of a notation like 6.023*10^23 or what have you, with the distance between representable numbers quite large at the ends of the range.
For more information, I would read this article, "What Every Computer Scientist Should Know About Floating Point Arithmetic" by David Goldberg: http://web.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf
By the way, on your platform, I would expect a float to be a 32 bit quantity and a long to be a 64 bit quantity, but that isn't really germane to the overall point.
Performance is kind of hard to define here. Floating point operations may or may not take significantly longer than integer operations depending on the nature of the operations and whether hardware acceleration is used for them. Typically, operations like addition and subtraction are much faster in integer -- multiplication and division less so. At one point, people trying to bum every cycle out when doing computation would represent real numbers as "fixed point" arithmetic and use integers to represent them, but that sort of trick is much rarer now. (On an Opteron, such as you are using, floating point arithmetic is indeed hardware accelerated.)
Almost all platforms that C runs on have distinct "float" and "double" representations, with "double" floats being double precision, that is, a representation that occupies twice as many bits. In addition to the space tradeoff, operations on these are often somewhat slower, and again, people highly concerned about performance will try to use floats if the precision of their calculation does not demand doubles.
Upvotes: 7
Reputation: 29266
Floating point maths is a subject all to itself, but yes: int types are typically faster than float types.
One trick to remember is that not all values can be expressed exactly as a float: e.g., the closest float to 1.9 is about 1.8999999762. This leads to fun bugs where code like if (v == 1.9) behaves unexpectedly!
Upvotes: 0