Reputation: 1
I'm using the decimal data type throughout my project in the belief that it would give me the most accurate results. However, I've come across a situation where rounding errors are creeping in and it appears I'd be better off using doubles.
I have this calculation:
decimal tempDecimal = 15M / 78M;
decimal resultDecimal = tempDecimal * 13M;
Here resultDecimal is 2.4999999999999999 when the correct answer for 13*15/78 is 2.5. It seems this is because tempDecimal (the result of 15/78) is a recurring decimal value.
I subsequently round this result to zero decimal places (away from zero), which I expected to be 3 in this case, but it actually becomes 2.
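To make the rounding step concrete, this is roughly what I'm doing (a minimal sketch; the allocated variable name is just for illustration, and MidpointRounding.AwayFromZero is what I mean by "away from zero"):

decimal tempDecimal = 15M / 78M;           // recurring value, stored truncated as 0.19230769...
decimal resultDecimal = tempDecimal * 13M; // comes out just under 2.5 (a long run of 9s)
// Round to zero decimal places, away from zero. The value is not exactly 2.5,
// so it isn't a midpoint and rounds down to 2 instead of up to 3.
decimal allocated = Math.Round(resultDecimal, 0, MidpointRounding.AwayFromZero);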
If I use doubles instead:
double tempDouble = 15D / 78D;
double resultDouble = tempDouble * 13D;
Then I get 2.5 in resultDouble, which is the answer I'm looking for.
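Rounding the double result the same way gives me the 3 I expect (again just a sketch; it works because resultDouble happens to come out as exactly 2.5 here):

double tempDouble = 15D / 78D;
double resultDouble = tempDouble * 13D;    // 2.5 (the representation errors cancel out)
// 2.5 is a true midpoint, so away-from-zero rounding gives 3.
double allocated = Math.Round(resultDouble, 0, MidpointRounding.AwayFromZero);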
From this example it feels like I'm better off using doubles or floats even though they have lower precision. I'm assuming I get the incorrect result of 2.4999999999999999 simply because a decimal can store a result to that many decimal places, whereas the double rounds it off.
Should I use doubles instead?
EDIT: This calculation is being used in financial software to decide how many contracts are allocated to different portfolios, so deciding between 2 and 3 is important. I am more concerned with the correct calculation than with speed.
Upvotes: 0
Views: 1573
Reputation: 130
Strange thing is, if you write it all on one line it results in 2.5.
If precision is crucial (financial calculations) you should definitely use decimal. You can print a rounded decimal using Math.Round(myDecimalValue, digitsAfterDecimalPoint) or String.Format("{0:0.00}", myDecimalValue), but do the calculations with the exact number. Otherwise double will be just fine.
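For example (just a sketch: here I also put the multiplication before the division so the intermediate value stays exact, and rounding/formatting only happens when the value is displayed):

decimal exact = 15M * 13M / 78M;                      // 195 / 78 = 2.5 exactly, no recurring intermediate
Console.WriteLine(Math.Round(exact, 2));              // 2.5
Console.WriteLine(String.Format("{0:0.00}", exact));  // 2.50
// Keep 'exact' for any further calculations; only the displayed value is rounded.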
Upvotes: 1