Reputation: 51
I have to code C for my college exams, and I am in the habit of declaring double variables instead of float variables. Is it a bad habit? Can they deduct marks for it? (We never exceed the float limit.)
I think using double is better than float because 0.71 is a double literal, and if we declare float one = 1.1; we are converting a double to a float.
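A minimal sketch of what the question describes (the variable names are just for illustration): an unsuffixed literal such as 1.1 has type double, so initializing a float from it narrows the value.

```c
#include <stdio.h>

int main(void)
{
    float one = 1.1;    /* 1.1 is a double literal; it is narrowed to float here */
    double two = 1.1;   /* no conversion: the literal already has type double */

    /* Printing with enough digits shows the precision lost in the float. */
    printf("float:  %.17g\n", (double)one);
    printf("double: %.17g\n", two);
    return 0;
}
```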
Upvotes: 4
Views: 465
Reputation: 144949
Your question calls for opinion-based answers... Let's try to stick to the facts:
I have to code C for my college exams, and I am in the habit of declaring double variables instead of float variables. Is it a bad habit?
Not at all. In most situations on modern systems, double computations are not noticeably slower than float computations, and they provide better precision and a wider domain. In special cases, such as interfacing with specialized hardware for video games or neural networks, float can be a better or even the only option, but for general problems, double seems preferable.
I think using double is better than float because 0.71 is a double literal, and if we declare float one = 1.1; we are converting a double to a float.
You are absolutely correct about this. The declaration should be written float one = 1.1F; to avoid the conversion, although the conversion should be performed at compile time. Note also that one == 1.1 would probably evaluate to false.
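A minimal sketch of that comparison pitfall (the variable name is chosen just for this example): in the comparison, one is promoted to double, but the float rounding of 1.1 is not the same value as the double rounding of 1.1.

```c
#include <stdio.h>

int main(void)
{
    float one = 1.1F;   /* float literal, so no double-to-float conversion */

    /* one is promoted to double, but its value differs from the double 1.1,
       so the comparison is false. */
    if (one == 1.1)
        printf("equal\n");
    else
        printf("not equal\n");   /* this branch is taken */

    return 0;
}
```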
Can they deduct marks for it?
This depends on local conventions and the corrector's habits or opinions. You should ask them what they expect and defend your point orally. It is usually a bad idea for students to go against their teachers' preconceptions, even if those preconceptions are invalid, especially in public.
Upvotes: 1
Reputation: 750
In much discussion on this website, participants prefer double over float and recommend replacing float with double if they see it in a posted question.
Generally speaking, just use type double when you need a floating-point value or variable. Literal floating-point values used in expressions are treated as double by default.
People expect floating-point calculations on Windows desktops and servers to perform identically with either floating-point type.
On some architectures, there is no dedicated hardware for doubles, ... giving you worse throughput and twice the latency. On others (the x86 FPU, for example), both types are converted to the same internal format (80-bit floating point, in the case of x86), so performance is identical.
Some smartphone and embedded architectures may get a performance benefit from using float rather than double.
With IEEE 754, float is 32 bits (24-bit mantissa, 8-bit exponent) and double is 64 bits (53-bit mantissa, 11-bit exponent). The exponent size determines the maximum (and minimum) values for each type. The mantissa size determines the maximum level of precision for each type.
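One quick way to see the resulting range and precision on a particular implementation is to print the constants from float.h; this is just a small diagnostic sketch:

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Decimal digits of precision and largest finite value for each type. */
    printf("float:  %d significant decimal digits, max %g\n", FLT_DIG, FLT_MAX);
    printf("double: %d significant decimal digits, max %g\n", DBL_DIG, DBL_MAX);
    return 0;
}
```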
Unless you have a macro for "narrow" floating-point calculations, most mathematical functions (the kind you include math.h for) return double rather than float. A C++ declaration like const auto cosine = cos(theta) will declare a double called cosine. Trigonometric, rounding and exponentiation functions take double as parameters and return double.
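As a small illustration of that default (assuming a C99 or later standard library), cos takes and returns double, while the float variant has a separate name:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double theta = 0.5;

    double a = cos(theta);   /* cos() takes and returns double */
    float  b = cosf(0.5F);   /* cosf() is the float variant (C99) */
    float  c = cos(0.5F);    /* argument promoted to double, result narrowed back */

    printf("%.17g %.9g %.9g\n", a, (double)b, (double)c);
    return 0;
}
```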
OpenGL is an exception where float is often preferred because the API handles floats. With graphics calculations that require precision, you might do your own calculation with double and cast down to float.
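A rough sketch of that approach (the function and buffer layout here are hypothetical, not part of OpenGL): compute in double for precision, then narrow the results into the float buffer a float-only API would consume.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical helper: compute vertex positions in double, then narrow
   them to the interleaved float buffer a float-only graphics API expects. */
static void fill_circle_vertices(float *out, int count, double radius)
{
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < count; i++) {
        double angle = 2.0 * pi * i / count;
        out[2 * i]     = (float)(radius * cos(angle));  /* x, narrowed to float */
        out[2 * i + 1] = (float)(radius * sin(angle));  /* y, narrowed to float */
    }
}

int main(void)
{
    float vertices[8];  /* 4 vertices, x/y interleaved */
    fill_circle_vertices(vertices, 4, 1.0);
    for (int i = 0; i < 4; i++)
        printf("%g %g\n", vertices[2 * i], vertices[2 * i + 1]);
    return 0;
}
```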
Should I use double or float? What is the difference between float and double?
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
Upvotes: 3