MagB

Reputation: 2221

Which numeric type conversion is better for a simple math operation?

I want to know which conversion is better (regarding performance/speed and precision, i.e. the least loss) for a simple math operation, and what makes them different?

Example:

double double1 = integer1 / (5 * integer2);
var double2 = integer1 / (5.0 * integer2);
var double3 = integer1 / (5D * integer2);
var double4 = (double) integer1 / (5 * integer2);
var double5 = integer1 / (double) (5 * integer2);
var double6 = integer1 / ((double) 5 * integer2);
var double7 = integer1 / (5 * (double) integer2);
var double8 = Convert.ToDouble(integer1 / (5 * integer2));
var double9 = integer1 / Convert.ToDouble(5 * integer2);

Actually, my question is about the conversion, not the type itself.

Upvotes: 8

Views: 495

Answers (3)

Matthew Watson

Reputation: 109732

EDIT

In response to your totally changed question:

The first line double double1 = integer1 / (5 * integer2); performs an integer division (the fractional part is discarded before the result is converted to double), so don't do that.

Also the line var double8 = Convert.ToDouble(integer1 / (5 * integer2)); is doing integer division before converting the result to a double, so don't do that either.

Other than that, all the different approaches you list will end up executing the IL conversion instruction conv.r8 once for each line in your sample code.

The only real difference is that Convert.ToDouble() will make a method call to do so, so you should avoid that.

The results for every line other than double1 and double8 will be identical.

So you should probably go for the simplest: var double2 = integer1 / (5.0 * integer2);

In a more complicated situation, time your code to see whether there are any differences.
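A minimal sketch of the difference, assuming integer1 = 1 and integer2 = 2 (sample values the question doesn't give):

int integer1 = 1;   // assumed sample values
int integer2 = 2;

// Integer division runs first, so the fraction is lost before the conversion.
double double1 = integer1 / (5 * integer2);                  // 0
var double8 = Convert.ToDouble(integer1 / (5 * integer2));   // 0

// Promoting an operand to double before dividing keeps the fraction.
var double2 = integer1 / (5.0 * integer2);                   // 0.1
var double4 = (double)integer1 / (5 * integer2);             // 0.1

Console.WriteLine($"{double1} {double8} {double2} {double4}");   // 0 0 0.1 0.1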

Upvotes: 5

Cheng Chen

Reputation: 43523

The differences between your lines of code are not only about conversions; some of them do entirely different things and produce different values.

1.    float float1 = integer1 / (5 * integer2);

5 * integer2 gives an int, an int divided by an int gives an int, and you assign that int value to a float variable via an implicit conversion (int has a smaller range than float). With float float1 = 1 / (5 * 2), you get a System.Single 0 as the result.

2.    var float2 = integer1 / (5.0 * integer2);

5.0 is essentially 5.0d, so the type of float2 is System.Double. With var float2 = 1 / (5.0 * 2), you get a System.Double 0.1 as the result.

3.   var float3 = integer1 / (5F * integer2);

Using the same values as above, you get a System.Single 0.1, which is probably what you want.

4.   var float4 = (float)integer1 / (5 * integer2);

You will get the same result as item 3. The difference is that item 3 is an int divided by a float, while item 4 is a float divided by an int.

5.   var float5 = integer1 / (float) (5 * integer2);
6.   var float6 = integer1 / ((float) 5 * integer2);
7.   var float7 = integer1 / (5 * (float) integer2);

These three are essentially the same as item 3: each divides an int by a float; they just build the divisor in different ways.

8.   var float8 = Convert.ToDecimal(integer1 / (5 * integer2));
9.   var float9 = integer1 / Convert.ToDecimal(5 * integer2);

These two give you System.Decimal values, which have higher precision. Item 8 has the same issue as item 1: you get 0 because the argument passed to Convert.ToDecimal is an int with the value 0.
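A small sketch putting the items side by side, again assuming integer1 = 1 and integer2 = 2 (illustrative values only):

int integer1 = 1, integer2 = 2;   // assumed sample values

float float1 = integer1 / (5 * integer2);                   // 0   (int division, then implicit int-to-float conversion)
var float2 = integer1 / (5.0 * integer2);                   // 0.1 (System.Double)
var float3 = integer1 / (5F * integer2);                    // 0.1 (System.Single)
var float8 = Convert.ToDecimal(integer1 / (5 * integer2));  // 0   (System.Decimal, but the int division already happened)
var float9 = integer1 / Convert.ToDecimal(5 * integer2);    // 0.1 (System.Decimal)

Console.WriteLine(float2.GetType());   // System.Double
Console.WriteLine(float3.GetType());   // System.Single
Console.WriteLine(float9.GetType());   // System.Decimal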

Upvotes: 2

sumngh

Reputation: 566

Instead of float/decimal, use double:

A more general answer to the generic question "Decimal vs Double": use Decimal for monetary calculations to preserve precision, and Double for scientific calculations that are not affected by small rounding differences. Since Double is a type native to the CPU (its internal representation is stored in base 2), calculations made with Double perform better than with Decimal (which is represented in base 10 internally).
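For example, a quick sketch of the base-2 vs base-10 difference (values are just illustrative):

double d = 0.1 + 0.2;        // not exactly 0.3, because 0.1 and 0.2 have no exact base-2 representation
decimal m = 0.1m + 0.2m;     // exactly 0.3, since decimal stores base-10 digits

Console.WriteLine(d == 0.3);    // False
Console.WriteLine(m == 0.3m);   // True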

Upvotes: 1
