Noah

Reputation: 1966

Testing differences between SqlDecimal and C# Decimal

I'm trying to validate that a decimal in C# will fit into a database decimal column. The SqlDecimal constructor lets you pass in a precision and scale as well as the decimal's bits. I know the size of the column, so before we write the data I check each input so that I can produce the business-required output.

In this case we are storing a percentage, so the precision is 13 and the scale is 10. I have a testing harness that I've condensed into the unit test below for SO. This sample throws an arithmetic overflow error on the SqlDecimal constructor line:

    [TestMethod]
    public void TestDecimalFits()
    {
        decimal d = 10.3m;
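        // This line throws the arithmetic overflow described above: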
        SqlDecimal sqlDecimal = new SqlDecimal(13, 10, d >= 0, Decimal.GetBits(d));
        Assert.AreEqual(d, sqlDecimal.Value);
    }

Does anyone know why this blows up?

Thanks!

Upvotes: 1

Views: 1240

Answers (2)

D Stanley

Reputation: 152614

The return value of Decimal.GetBits is not compatible with that SqlDecimal constructor's bits parameter.

Decimal.GetBits returns an array that represents the exact internal structure of the decimal, which is the 96-bit scaled integer value plus 8 bits for the scale factor (along with 1 sign bit and 23 unused bits).

The SqlDecimal constructor that you are using takes an int array that represents "the 128-bit unsigned integer that provides the value of the new SqlDecimal" - NOT the internal representation of a .NET decimal. The scale parameter then determines what the actual value of the SqlDecimal will be.

So you are effectively passing in a different value than the constructor expects. The internal representation of 10.3m, as returned by Decimal.GetBits, is

    bits[0] = 0x00000067   // low 32 bits of the 96-bit integer: binary 1100111 = 103
    bits[1] = 0x00000000
    bits[2] = 0x00000000
    bits[3] = 0x00010000   // the 1 in bits 16-23 is the scale
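
You can verify that layout yourself. A minimal sketch (assumes a console app with using System;):

    int[] bits = Decimal.GetBits(10.3m);
    foreach (int b in bits)
        Console.WriteLine("0x{0:X8}", b);   // 0x00000067, 0x00000000, 0x00000000, 0x00010000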

Interpreted as a raw 128-bit unsigned integer, that bit pattern is 65536 * 2^96 + 103 - far more than 13 digits - which is why you get an overflow when you pass it to the SqlDecimal constructor.
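
For contrast, here is roughly what that constructor would need in order to produce 10.3 at precision 13, scale 10: the unsigned integer must be 10.3 * 10^10 = 103,000,000,000. A sketch (the word split is computed by hand, so treat it as illustrative):

    // 103,000,000,000 = 23 * 2^32 + 4,215,752,192 (low word first)
    int[] raw = { unchecked((int)4215752192u), 23, 0, 0 };
    SqlDecimal manual = new SqlDecimal(13, 10, true, raw);
    // manual.Value == 10.3m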

I would not play around with bit-fiddling here; instead, just pass the raw decimal value and let SQL convert it to the right precision and scale automatically.
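
A minimal sketch of that approach (assumes using System.Data.SqlTypes; ConvertToPrecScale throws if the value cannot be represented in the requested precision and scale, so it can double as a client-side fit check):

    decimal d = 10.3m;
    SqlDecimal sqlDecimal = new SqlDecimal(d);   // precision and scale inferred from d
    SqlDecimal fitted = SqlDecimal.ConvertToPrecScale(sqlDecimal, 13, 10);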

> I'm trying to validate that a decimal in C# will fit into a database decimal column.

Well, the largest value that will fit into a DECIMAL(13,10) is 999.9999999999, which is well below the maximum value of a C# decimal (roughly 7.9 x 10^28). So no, you cannot store every C# decimal value in a DECIMAL(13,10) SQL column.

(Technically I think you could store 9999999999999 by dropping the scale to zero - DECIMAL(13,0) - but even that is well below the maximum value of a decimal.)
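
If all you need is a yes/no fit check in C#, you can skip SqlDecimal entirely. A hypothetical helper (the name FitsDecimal13x10 is mine, not from the original post):

    static bool FitsDecimal13x10(decimal d) =>
        Math.Abs(d) <= 999.9999999999m   // magnitude allowed by DECIMAL(13,10)
        && decimal.Round(d, 10) == d;    // and no more than 10 decimal places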

Upvotes: 1

Andy McLaughlin

Reputation: 11

My guess is that Decimal.GetBits(d) is overflowing its return type. Try doing just int[] bits = Decimal.GetBits(d); on its own line and see if it throws the same error.

Upvotes: 0
