Simon

Reputation: 1207

Full precision output of floating point types in SQL Server Management Studio

I have the value 1555.4899999999998 stored in a float column with the default precision (53). When I do a simple select, SSMS rounds the output instead of printing it with full precision. That has caused some gotchas for me recently, because the printed value doesn't round-trip: used as a float literal, it doesn't match the actual stored value.

For example (note that these two literals map to distinct stored float values):

declare @f1 float;
declare @f2 float;
set @f1 = 1555.49;
set @f2 = 1555.4899999999998;
select @f1, @f2;                        -- SSMS displays both as 1555.49
select STR(@f1,30,15), STR(@f2,30,15); -- STR shows that they differ

Outputs:

1555.49 1555.49
1555.490000000000000    1555.489999999999800

In Query Analyzer, that first select outputs:

1555.49 1555.4899999999998

That's the behavior I want to get from Management Studio. Is there a way to prevent SSMS from rounding in its result display?

Upvotes: 11

Views: 14776

Answers (3)

gonsalu

Reputation: 3194

No.

SQL Server Management Studio rounds floating-point values for display purposes. There was a Connect suggestion to change this behavior, but it was closed as "By Design". (Microsoft Connect, a public issue tracker for Microsoft software, has since been retired.)

However, SQLCMD, osql, and Query Analyzer do not. For example:

SQLCMD -E -S server -Q"SELECT CONVERT(FLOAT, 1555.4899999999998)"
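
If you need the full value inside SSMS itself, a workaround (a sketch, assuming SQL Server 2016 or later) is to convert the float to a string on the server; style 3 of CONVERT for float emits 17 significant digits, which is enough to round-trip any float value:

declare @f float = 1555.4899999999998;
-- style 3 (SQL Server 2016+) prints 17 significant digits, in scientific notation
select CONVERT(varchar(30), @f, 3);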

Upvotes: 9

user600410

Reputation: 17

Edit to my original post. I was trying to solve a slightly different problem when I ran across this page. I just wanted a float type to return a "regular" number, not scientific notation.

I originally suggested two CONVERTs, but wound up with this in the end. The 7 below can be changed to show as many decimal places as you would like. The REPLACEs and TRIMs get rid of leading and trailing zeroes.

REPLACE(RTRIM(LTRIM(REPLACE(STR(x, 20, 7),'0',' '))),' ','0')
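
For the value in the question, the 7 would need to be raised so that all stored digits survive STR's rounding; a quick sketch:

declare @f float = 1555.4899999999998;
-- 13 decimal places are enough here; the REPLACE/TRIM chain strips STR's leading padding
select REPLACE(RTRIM(LTRIM(REPLACE(STR(@f, 30, 13),'0',' '))),' ','0');
-- should return 1555.4899999999998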

Upvotes: -1

Brandon Moore

Reputation: 8780

Is there a reason you would rather use a float type than a decimal type? Floats are stored as binary fractions, so most decimal values can't be represented exactly, which often makes operations on them slightly inaccurate. That's okay in a graphics application, where the inaccuracy is much less significant than the size of a pixel, but it's a huge issue in something like an accounting application where you're dealing with money.

I would venture to say that the accuracy of a decimal is more important to most applications than any benefit in speed or size they would get from using a float.
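
As a rough illustration (a sketch, not specific to the question's value): adding 0.1 ten times with a float drifts slightly away from 1, while a decimal lands on it exactly:

declare @f float = 0, @d decimal(18,6) = 0, @i int = 0;
while @i < 10
begin
    set @f = @f + 0.1;   -- 0.1 has no exact binary representation, so error accumulates
    set @d = @d + 0.1;   -- decimal stores 0.1 exactly
    set @i = @i + 1;
end
-- STR with enough decimal places exposes the drift (e.g. 0.99999999999999989 vs 1.000000)
select STR(@f, 25, 17) as float_sum, @d as decimal_sum;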

Upvotes: 0
