Iancovici

Reputation: 5731

C/C++: how to decrease a float variable's number of significant decimals

Generating a random number between [0, 1.0]

i = (float) rand() / RAND_MAX / 1;

I don't just want to print one decimal place; I want the variable itself to hold only one decimal place. How do we do that?

(i.e. 0.1, not 0.1224828)

Upvotes: 1

Views: 1540

Answers (3)

When you store a value 'R' in a floating-point variable, you don't often get to store 'R'. Instead, you have to store the legitimate floating-point value that's closest to 'R'. — comp.lang.c FAQ

Even if you're not trying to produce random integers within a range, you should read How can I get random integers in a certain range? This expression

(int)((double)rand() / ((double)RAND_MAX + 1) * N)

is better than the obvious rand() % N, and its lessons apply in other contexts too.

Upvotes: 3

David Heffernan

Reputation: 613461

What you actually describe is a random integer with possible values in the range 0 to 10. So you would write:

int i = rand() / (RAND_MAX / 11 + 1);

or indeed some other way to generate an integer i such that 0 <= i <= 10.

Now, i is your value scaled by 10.

If you then want to divide by 10, do just that:

float f = i / 10.0;

Do be aware that a binary floating point variable cannot exactly represent 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8 and 0.9. The only values in your list that can be exactly expressed are 0.0, 0.5 and 1.0. This is why it is better to store the information in an integer variable.

Note: As has been pointed out by others (thank you), the obvious rand() % 11 has poor randomness properties. I updated the answer to avoid using that.

Upvotes: 6

Alex

Reputation: 10136

Try:

i = (float)(rand() % 11) / 10;

Upvotes: 0
