Tom

Difference between decimal, float and double in .NET?

What is the difference between decimal, float and double in .NET?

When would someone use one of these?

Upvotes: 2479

Views: 1299609

Answers (19)

JacquesB

Reputation: 42639

Use:

  • int for whole numbers
  • decimal anywhere numbers with decimals are displayed to end users
  • double anywhere else you need to support fractions

Float, double and decimal are all floating-point types, which means they support fractions and can represent both very large and very small numbers. Decimal is a decimal (base-10) format, while float and double are binary (base-2) floating-point formats, just with different precision.

The most significant distinction is between the decimal and binary floating-points, so here is a comparison:

+--------------------------+--------------------------------------------------+---------------------------------------+
|                          | float / double                                   | decimal                               |
+--------------------------+--------------------------------------------------+---------------------------------------+
| rounding behavior        | weird and confusing, sometimes looks like a bug  | intuitive and looks correct to humans |
| performance              | fast                                             | slow                                  |
| exponent base            | base-2 (binary)                                  | base-10 (decimal)                     |
| size                     | 32 / 64 bit                                      | 128 bit                               |
| precision                | ~6-9 / ~15-17 significant digits                 | 28-29 significant digits              |
| standardization          | universal standard (IEEE 754)                    | .NET-specific type                    |
| result of divide by zero | magic NaN value                                  | throws an exception                   |
| result of overflow       | magic Infinity value                             | throws an exception                   |
| normalization            | normalizes away trailing zeros                   | keeps trailing zeros                  |
+--------------------------+--------------------------------------------------+---------------------------------------+
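
Three of the rows above are easy to demonstrate. Here is a quick sketch of mine (not part of the original comparison) showing the divide-by-zero, overflow, and trailing-zero rows in action:

double zero = 0d;
Console.WriteLine(1d / zero);           // positive infinity - no exception
Console.WriteLine(zero / zero);         // NaN - no exception
Console.WriteLine(double.MaxValue * 2); // infinity again - overflow does not throw

decimal mZero = 0m;
try { Console.WriteLine(1m / mZero); }
catch (DivideByZeroException) { Console.WriteLine("decimal division by zero throws"); }

Console.WriteLine(1.10m + 2.10m);       // 3.20 - decimal keeps the trailing zero
Console.WriteLine(1.10d + 2.10d);       // 3.2  - double normalizes it away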

Perhaps the most notable difference is the rounding behavior:

Console.WriteLine("0.1 + 0.2 using decimal: " + (0.1m + 0.2m));
Console.WriteLine("0.1 + 0.2 using double: " + (0.1d + 0.2d));

results in:

0.1 + 0.2 using decimal: 0.3
0.1 + 0.2 using double: 0.30000000000000004

As you can see, the rounding behavior of doubles can seem surprising to someone without a computer-science background. For this reason, the decimal type is preferred when numbers are presented to end users. The classic examples are monetary amounts in accounting and bank transactions, but the same applies to amounts in a recipe or measurements on a blueprint: basically, anywhere numbers are displayed in decimal format to an end user.

The above could give the impression that only doubles have problems due to rounding, but this is not the case. See this example:

Console.WriteLine("(1 / 3) * 3 using decimal: " + ((1m / 3m) * 3m));
Console.WriteLine("(1 / 3) * 3 using double: " + ((1d / 3d) * 3d));

which results in:

(1 / 3) * 3 using decimal: 0.9999999999999999999999999999
(1 / 3) * 3 using double: 1

The fact is that any numeric type will have rounding issues. This is unavoidable, because there are infinitely many real numbers but a numeric format can only express a finite set of distinct values. It is just that we humans are used to decimal numbers, so the rounding behavior of decimals is easier to understand and seems "less wrong". We understand that 1/3 is rounded to 0.3333 (since we don't have infinitely many decimal places) and that 0.3333 multiplied by 3 is then 0.9999. The rounding behavior of the base-2 double is harder to explain without going into binary arithmetic and the algorithms for converting between base-2 and base-10.

But as you probably know, computers prefer to think in binary, and therefore doubles are far more efficient. All modern processors have native support for doubles, while decimal arithmetic is partially implemented in software, which is slower. Float/double are standardized across platforms and languages, while decimal is a .NET-specific type.
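
If you want to see the speed difference on your own machine, here is a rough, unscientific sketch of mine using Stopwatch (the loop size and the exact ratio are illustrative; expect decimal to be roughly an order of magnitude slower):

using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();
double dSum = 0;
for (int i = 1; i <= 10_000_000; i++) dSum += 1.0 / i;   // hardware floating point
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms, sum = {dSum}");

sw.Restart();
decimal mSum = 0;
for (int i = 1; i <= 10_000_000; i++) mSum += 1.0m / i;  // largely software-implemented
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms, sum = {mSum}");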

Upvotes: 0

Abbas Aryanpour

Reputation: 583

Float:

It is a floating binary point type, which means it represents a number in binary form. Float is a single-precision, 32-bit data type (6-9 significant figures). It is mostly used in graphics libraries, where the demand for processing power is very high, and in situations where rounding errors are not very important.

Double:

It is also a floating binary point type, with double precision and a 64-bit size (15-17 significant figures). Double is probably the most commonly used data type for real values, except in financial applications and other places where high accuracy is desired.

Decimal:

It is a floating decimal point type, which means it represents a number using decimal digits (0-9). It uses 128 bits for storing and representing data (28-29 significant figures), so it has more precision than float and double. Decimals are mostly used in financial applications because of their high precision and because rounding errors are easier to avoid.

Example:

using System;

public class GFG
{
    static public void Main()
    {
        double d = 0.42e2;     // double literal in scientific notation
        Console.WriteLine(d);  // output: 42

        float f = 134.45E-2f;  // float literal (note the 'f' suffix)
        Console.WriteLine(f);  // output: 1.3445

        decimal m = 1.5E6m;    // decimal literal (note the 'm' suffix)
        Console.WriteLine(m);  // output: 1500000
    }
}

Comparison between Float, Double and Decimal on the Basis of:

No. of Bits used:

  • Float uses 32 bits to represent data.
  • Double uses 64 bits to represent data.
  • Decimal uses 128 bits to represent data.

Range of values:

  • The float value ranges from approximately ±1.5e-45 to ±3.4e38.

  • The double value ranges from approximately ±5.0e-324 to ±1.7e308.

  • The Decimal value ranges from approximately ±1.0e-28 to ±7.9e28.

Precision:

  • Float represents data with single precision.
  • Double represents data with double precision.
  • Decimal has higher precision than both float and double.

Accuracy:

  • Float is less accurate than double and decimal.
  • Double is more accurate than float but less accurate than decimal.
  • Decimal is the most accurate of the three (see the sketch below).
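
A small sketch of mine (not from the original answer) makes these precision limits visible; the exact printed digits can vary slightly by runtime:

float f = 1.234567890123456789f;              // only ~7 significant digits survive
double d = 1.234567890123456789;              // ~15-17 significant digits survive
decimal m = 1.2345678901234567890123456789m;  // 28-29 significant digits survive

Console.WriteLine(f);  // e.g. 1.2345679
Console.WriteLine(d);  // e.g. 1.2345678901234568
Console.WriteLine(m);  // 1.2345678901234567890123456789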

Upvotes: 9

Jon Skeet

Reputation: 1499660

float (the C# alias for System.Single) and double (the C# alias for System.Double) are floating binary point types. float is 32-bit; double is 64-bit. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal (the C# alias for System.Decimal) is a floating decimal point type. In other words, it represents a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
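
To make the 0.1 point concrete, here is a short illustration of mine: adding 0.1 ten times does not give exactly 1 in binary floating point, but it does in decimal:

double dSum = 0;
decimal mSum = 0;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // each 0.1 is only an approximation in binary
    mSum += 0.1m;  // 0.1 is exact in decimal
}
Console.WriteLine(dSum == 1.0);   // False
Console.WriteLine(mSum == 1.0m);  // True
Console.WriteLine(dSum);          // 0.9999999999999999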

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.

Upvotes: 2607

user3776645

Reputation: 407

The main difference between them is the precision.

  • float is a 32-bit number
  • double is a 64-bit number
  • decimal is a 128-bit number (see the sketch below)
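
You can confirm these sizes with the built-in sizeof operator (a tiny addition of mine, not part of the original answer):

Console.WriteLine(sizeof(float));    // 4 bytes  = 32 bits
Console.WriteLine(sizeof(double));   // 8 bytes  = 64 bits
Console.WriteLine(sizeof(decimal));  // 16 bytes = 128 bits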

Upvotes: 3

user2389722

+---------+----------------+---------+----------+---------------------------------------------------------+
| C#      | .NET Framework | Signed? | Bytes    | Possible Values                                         |
| Type    | (System) type  |         | Occupied |                                                         |
+---------+----------------+---------+----------+---------------------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                             |
| short   | System.Int16   | Yes     | 2        | -32,768 to 32,767                                       |
| int     | System.Int32   | Yes     | 4        | -2,147,483,648 to 2,147,483,647                         |
| long    | System.Int64   | Yes     | 8        | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| byte    | System.Byte    | No      | 1        | 0 to 255                                                |
| ushort  | System.UInt16  | No      | 2        | 0 to 65,535                                             |
| uint    | System.UInt32  | No      | 4        | 0 to 4,294,967,295                                      |
| ulong   | System.UInt64  | No      | 8        | 0 to 18,446,744,073,709,551,615                         |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5e-45 to ±3.4e38                       |
|         |                |         |          |  with ~6-9 significant figures                          |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0e-324 to ±1.7e308                     |
|         |                |         |          |  with ~15-17 significant figures                        |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0e-28 to ±7.9e28                       |
|         |                |         |          |  with 28-29 significant figures                         |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)                          |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                                           |
+---------+----------------+---------+----------+---------------------------------------------------------+

Upvotes: 136

Purnima Bhatia

Reputation: 67

To define decimal, float and double values in .NET (C#), you must specify them like this:

Decimal dec = 12M/6;   // = 2 (the M suffix makes the literal a decimal)
Double dbl = 11D/6;    // ≈ 1.8333333333333333 (D suffix: double)
float fl = 15F/6;      // = 2.5 (F suffix: float)

and check the results.

And the bytes occupied by each are:

Float - 4
Double - 8
Decimal - 16

Upvotes: -3

Mukesh Kumar

Reputation: 2376

  • float: ±1.5 x 10^-45 to ±3.4 x 10^38 (~7 significant figures)
  • double: ±5.0 x 10^-324 to ±1.7 x 10^308 (15-16 significant figures)
  • decimal: ±1.0 x 10^-28 to ±7.9 x 10^28 (28-29 significant figures)

Upvotes: 19

cgreeno

Reputation: 32371

Precision is the main difference.

Float - 7 digits (32 bit)

Double - 15-16 digits (64 bit)

Decimal -28-29 significant digits (128 bit)

Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimals are much slower (up to 20 times in some tests) than a double/float.

Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can be (see the sketch after the results below). Decimals also allow the encoding of trailing zeros.

float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);

Result :

float: 0.3333333  
double: 0.333333333333333  
decimal: 0.3333333333333333333333333333
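
To illustrate the comparison rule mentioned above, here is a sketch of mine (not part of the original answer):

double d = 0.5;
float f = 0.5f;
decimal m = 0.5m;

Console.WriteLine(d == f);          // fine: float converts implicitly to double
// Console.WriteLine(d == m);       // compile-time error: no implicit conversion exists
Console.WriteLine(d == (double)m);  // fine with an explicit cast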

Upvotes: 1291

daniel

Reputation: 405

Integers, as mentioned, are whole numbers: they can't store the point-something, like .7, .42, or .007. If you need to store numbers that are not whole, you need a different type of variable. You can use the double type or the float type. You set these up in exactly the same way: instead of the word int, you type double or float, like this:

float myFloat;
double myDouble;

(float is short for "floating point", and just means a number with a point something on the end.)

The difference between the two is in the size of the numbers they can hold. For float, you can have up to about 7 digits in your number. For doubles, you can have up to about 15-16 digits. To be more precise, here are the official ranges:

float:  1.5 × 10^-45  to 3.4 × 10^38  
double: 5.0 × 10^-324 to 1.7 × 10^308

float is a 32-bit number, and double is a 64-bit number.

Double click your new button to get at the code. Add the following three lines to your button code:

double myDouble;
myDouble = 0.007;
MessageBox.Show(myDouble.ToString());

Halt your program and return to the coding window. Change this line:

myDouble = 0.007;

to this:

myDouble = 12345678.1234567;

Run your program and click your double button. The message box correctly displays the number. Add another digit to the end, though, and C# will again round the value up or down. The moral is: if you want accuracy, be careful with rounding!

Upvotes: 31

Mike Gledhill

Reputation: 29151

This has been an interesting thread for me, as today we've just had a nasty little bug concerning decimal having less precision than a float.

In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service to save into a SQL Server database.

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    decimal value = 0;
    Decimal.TryParse(cellValue.ToString(), out value);
}

Now, for almost all of our Excel values, this worked beautifully. But for some very small Excel values, decimal.TryParse lost the value completely. One such example is:

  • cellValue = 0.00006317592

  • Decimal.TryParse(cellValue.ToString(), out value); // would return 0

The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal:

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    double valueDouble = 0;
    double.TryParse(cellValue.ToString(), out valueDouble);
    decimal value = (decimal) valueDouble;
    …
}

Even though double has less precision than decimal, this ensured the small numbers would still be recognised. The likely reason: calling ToString() on a very small double produces scientific notation ("6.317592E-05"), and Decimal.TryParse(string) parses with NumberStyles.Number, which does not allow an exponent, so the parse fails and value is left at 0. double.TryParse parses with NumberStyles.Float, which does allow exponents (see the sketch below).
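
If that explanation is right, an alternative fix (a sketch of mine, untested against the original spreadsheet) is to tell Decimal.TryParse to accept exponents explicitly:

using System.Globalization;

decimal value;
// NumberStyles.Float includes AllowExponent, so scientific notation parses
bool ok = decimal.TryParse("6.317592E-05", NumberStyles.Float,
                           CultureInfo.InvariantCulture, out value);
Console.WriteLine(ok);     // True
Console.WriteLine(value);  // 0.00006317592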

Upvotes: 16

IndustProg

Reputation: 647

In simple words:

  1. The Decimal, Double, and Float variable types are different in the way that they store the values.
  2. Precision is the main difference (though not the only one): float is a single-precision (32-bit) floating point type, double is a double-precision (64-bit) floating point type, and decimal is a 128-bit floating point type.
  3. The summary table:

+---------+------+--------------------------+---------------------------------+
| Type    | Bits | Precision (up to)        | Approximate Range               |
+---------+------+--------------------------+---------------------------------+
| float   | 32   | 7 digits                 | ±1.5 × 10^-45 to ±3.4 × 10^38   |
| double  | 64   | 15-16 digits             | ±5.0 × 10^-324 to ±1.7 × 10^308 |
| decimal | 128  | 28-29 significant digits | ±1.0 × 10^-28 to ±7.9 × 10^28   |
+---------+------+--------------------------+---------------------------------+

You can read more here: Float, Double, and Decimal.

Upvotes: 4

schlebe

Reputation: 3716

The problem with all these types is that a certain imprecision persists, and this problem can occur even with small decimal numbers, as in the following example:

Dim fMean As Double = 1.18
Dim fDelta As Double = 0.08
Dim fLimit As Double = 1.1
Dim bLower As Boolean

If fMean - fDelta < fLimit Then
    bLower = True
Else
    bLower = False
End If

Question: Which value does the bLower variable contain?

Answer: With Double, bLower contains True!

If I replace Double with Decimal, bLower contains False, which is the correct answer.

With Double, the problem is that fMean - fDelta = 1.0999999999999999..., which is lower than 1.1.

Caution: the same problem can exist for other numbers, because Decimal is still a floating-point type with finite precision, and any finite precision has its limits.

In fact, Double, Float and Decimal all correspond to the BINARY numeric type in COBOL!

It is regrettable that the other numeric types implemented in COBOL don't exist in .NET. For those who don't know COBOL, the following numeric types exist there:

BINARY or COMP (like float, double or decimal)
PACKED-DECIMAL or COMP-3 (2 digits in 1 byte)
ZONED-DECIMAL (1 digit in 1 byte)

Upvotes: 5

tomosius

Reputation: 1429

I won't reiterate tons of good (and some bad) information already answered in other answers and comments, but I will answer your followup question with a tip:

When would someone use one of these?

Use decimal for counted values

Use float/double for measured values

Some examples:

  • money (do we count money or measure money?)

  • distance (do we count distance or measure distance? *)

  • scores (do we count scores or measure scores?)

We always count money and should never measure it. We usually measure distance. We often count scores.

* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).

Upvotes: 97

yoyo

Reputation: 8708

For applications such as games and embedded systems where memory and performance are both critical, float is usually the numeric type of choice as it is faster and half the size of a double. Integers used to be the weapon of choice, but floating point performance has overtaken integer in modern processors. Decimal is right out!

Upvotes: 9

GorkemHalulu

Reputation: 3065

No one has mentioned that, with default settings, floats (System.Single) and doubles (System.Double) never use overflow checking, while decimal (System.Decimal) always does.

For example:

decimal myNumber = decimal.MaxValue;
myNumber += 1;

throws OverflowException.

But these do not:

float myNumber = float.MaxValue;
myNumber += 1;   // no exception; the value is unchanged (1 is far below float's precision at this magnitude)

and

double myNumber = double.MaxValue;
myNumber += 1;   // no exception either
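
Note that binary floating-point overflow does produce a result, it just isn't an exception. A small addition of mine for completeness: even a checked block doesn't change this, because checked only affects integral and decimal arithmetic:

double big = double.MaxValue;
checked
{
    big *= 2;  // no exception; the value saturates to positive infinity
}
Console.WriteLine(double.IsPositiveInfinity(big));  // True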

Upvotes: 49

warnerl

Reputation: 159

The decimal, double, and float variable types differ in the way they store their values. Precision is the main difference: float is a single-precision (32-bit) floating point type, double is a double-precision (64-bit) floating point type, and decimal is a 128-bit floating point type.

Float - 32 bit (7 digits)

Double - 64 bit (15-16 digits)

Decimal - 128 bit (28-29 significant digits)

Upvotes: 15

CharithJ

Reputation: 47490

float has about 7 digits of precision

double has about 15 digits of precision

decimal has about 28 digits of precision

If you need better accuracy, use double instead of float. On modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which matters in practice only if you have many of them.

I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic

Upvotes: 59

Mark Jones

Reputation: 2064

The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:

  • A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
  • Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary in FPU/SIMD hardware (such as SSE), whereas Decimal arithmetic is done in base 10 and calculated in software.
  • Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values (see the sketch below).
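
That last point is easy to trip over. A sketch of mine (not part of the original answer): a value that fits comfortably in a double can overflow a decimal:

double big = 1e30;              // well within double's range
// decimal alsoBig = 1e30m;     // compile-time error: constant is outside decimal's range
try
{
    decimal converted = (decimal)big;
    Console.WriteLine(converted);
}
catch (OverflowException)
{
    Console.WriteLine("1e30 does not fit in a decimal");  // this branch runs
}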

Upvotes: 108

Display Name

Reputation: 15071

  1. Double and float can be divided by (integer) zero without an exception, at both compile time and run time; the result is Infinity or NaN.
  2. Decimal cannot be divided by zero: division by a constant zero fails at compile time, and division by a zero value at run time throws a DivideByZeroException (see the sketch below).
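
A minimal sketch of mine showing both behaviors:

double d = 1.0 / 0;        // compiles and runs: d is positive infinity
Console.WriteLine(d);

// decimal bad = 1m / 0;   // compile-time error CS0020: division by constant zero

decimal zero = 0m;
try
{
    Console.WriteLine(1m / zero);  // compiles, but...
}
catch (DivideByZeroException)
{
    Console.WriteLine("decimal division by zero throws at run time");
}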

Upvotes: 29
