Kevin

Reputation: 56089

Why bother using a float / double literal when not needed?

Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway? And when a fractional value is needed, why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?

For example, I often see code similar to the following

float foo = 3.0f;
double bar = 5.0;
// And, unfortunately, even
double baz = 7.0f;

and

void quux(float foo) {
     ...
}

...

quux(7.0f);

But as far as I can tell those are equivalent to

float foo = 3;
// or
// float foo = 3.0;
double bar = 5;
double baz = 7;
quux(7);

I can understand the method call if you are in a language with overloading (C++, Java), where it can actually make a functional difference if the function is overloaded (or will be in the future), but I'm more concerned with C (and to a lesser extent Objective-C), which doesn't have overloading.

So is there any reason to bother with the extra decimal and/or f? Especially in the initialization case, where the declared type is right there?

Upvotes: 4

Views: 281

Answers (4)

this

Reputation: 5290

C doesn't have overloading, but it has something called variadic functions. This is where the .0 matters.

#include <stdarg.h>
#include <stdio.h>

void Test( int n , ... )
{
    va_list list ;
    va_start( list , n ) ;
    double d = va_arg( list , double ) ;  /* reads the next argument as a double */
    printf( "%f\n" , d ) ;
    va_end( list ) ;
}

Calling the function with an integer where a double is expected causes undefined behaviour: the va_arg macro will interpret the argument's memory as a double, when in reality it is an int.

Test( 1 , 3 ) ; has to be Test( 1 , 3.0 ) ;


But you might say: "I will never write variadic functions, so why bother?"

printf (and family) are variadic functions.

The following call should generate a warning:

printf("%lf" , 3 ) ;   //will cause undefined behavior

But depending on the warning level, the compiler, and whether the correct header was included, you might get no warning at all.

The problem is also present if the types are switched:

printf("%d" , 3.0 ) ;    //undefined behaviour

Upvotes: 5

Keith Thompson

Reputation: 263307

In most cases, it's simply a matter of saying what you mean.

For example, you can certainly write:

#include <math.h>
...
const double sqrt_2 = sqrt(2);

and the compiler will generate an implicit conversion (note: not a cast) of the int value 2 to double before passing it to the sqrt function. So the call sqrt(2) is equivalent to sqrt(2.0), and will very likely generate exactly the same machine code.
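As a minimal sketch (using only standard C), you can confirm that both spellings produce the same value:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The int 2 is implicitly converted to double before the call,
       so both calls receive exactly the same argument. */
    double a = sqrt(2);
    double b = sqrt(2.0);
    printf("%d\n", a == b);  /* prints 1 */
    return 0;
}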

But sqrt(2.0) is more explicit. It's (slightly) more immediately obvious to the reader that the argument is a floating-point value. For a non-standard function that takes a double argument, writing 2.0 rather than 2 could be much clearer.

And you're able to use an integer literal here only because the argument happens to be a whole number; sqrt(2.5) has to use a floating-point literal, and mixing sqrt(2) with sqrt(2.5) in the same code is needlessly inconsistent.

My question would be this: Why would you use an integer literal in a context requiring a floating-point value? Doing so is mostly harmless, since the compiler will generate an implicit conversion, but what do you gain by writing 2 rather than 2.0? (I don't consider saving two keystrokes to be a significant benefit.)

Upvotes: 1

Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway?

First off, "implicit cast" is an oxymoron (casts are explicit by definition). The expression you're looking for is "implicit [type] conversion".

As to why: because it's more explicit (no pun intended). It's better for the eye and the brain if you have some visual indication about the type of the literal.

why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?

For example, because double and float have different precision. Floating-point is often unintuitive: the conversion from double to float is lossy, so it can produce a value different from the one you actually want if you don't write the float literal explicitly.
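A minimal sketch of that pitfall, assuming IEEE 754 single- and double-precision types (the printed digits depend on that assumption):

#include <stdio.h>

int main(void)
{
    float f = 0.1f;  /* nearest float to 0.1 */

    /* 0.1 below is a double literal; f is promoted to double for the
       comparison, and the two values differ. */
    if (f == 0.1)
        printf("equal\n");
    else
        printf("not equal\n");  /* this branch is taken */

    printf("%.17g\n", (double) f);  /* 0.10000000149011612 */
    printf("%.17g\n", 0.1);         /* 0.10000000000000001 */
    return 0;
}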

Upvotes: 3

Tavian Barnes

Reputation: 12922

Many people learned the hard way that

double x = 1 / 3;

doesn't work as expected. So they (myself included) program defensively by using floating-point literals instead of relying on the implicit conversion.
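A minimal sketch of the pitfall and the defensive spelling:

#include <stdio.h>

int main(void)
{
    double x = 1 / 3;    /* integer division happens first; x is 0.0 */
    double y = 1.0 / 3;  /* a floating-point operand forces floating-point
                            division; y is approximately 0.333333 */
    printf("%f\n", x);   /* prints 0.000000 */
    printf("%f\n", y);   /* prints 0.333333 */
    return 0;
}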

Upvotes: 4
