Reputation: 2015
I am working with a SQL Server database where I need to get data from a column of type real into another column in another table of type float. From my understanding, real is essentially just float with less precision (a 24-bit mantissa) compared to float, which by default has a precision of 53 bits. Therefore, when casting from real to float I would expect to gain precision, or at least not lose the precision of the source value. However, some precision actually seems to be lost when doing this:
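For illustration (a minimal sketch; @Valuef stands in for the real source column, and 2.1 is just a sample value):

declare @Valuef real = 2.1;
select @Valuef                 as real_value,   -- displayed as 2.1
       cast(@Valuef as float)  as float_value;  -- displayed as something like 2.09999990463257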
Why does this happen and is there a way to at least keep the precision of the source values when doing this?
Upvotes: 0
Views: 1298
Reputation: 10570
Just as an addition to David Browne's answer:
It looks like direct casting doesn't help you. You can get 'correct' (better) results by casting through a character type, like this:
select cast(cast(Valuef as nvarchar(20)) as float)
As an example, select cast(cast(cast(2.1 as real) as nvarchar(20)) as float) displays just 2.1.
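Applied to the table-to-table copy from the question, the round trip through nvarchar could look something like this (TargetTable, SourceTable, FloatValue, and Id are placeholder names):

update t
set t.FloatValue = cast(cast(s.Valuef as nvarchar(20)) as float)
from dbo.TargetTable as t
join dbo.SourceTable as s
    on s.Id = t.Id;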
Upvotes: 1
Reputation: 89091
I am confused why SSMS rounds real values when displaying them but does not do the same for float values
The nearest single-precision floating point number (real) to 2.1 is something like 2.0999999. So it makes sense to display it as 2.1.
The nearest double-precision floating point number (float) to 2.1 is quite a long way from 2.09999990000000, which is approximately the value you get when you convert the real 2.0999999 to float, so that converted value is displayed with its full digits rather than as 2.1.
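One way to see the full digits of both values (a sketch using FORMAT with the .NET 'G17' round-trip format string, available in SQL Server 2012 and later):

select format(cast(2.1 as float), 'G17')               as nearest_double,   -- 2.1000000000000001
       format(cast(cast(2.1 as real) as float), 'G17') as real_to_float;    -- 2.0999999046325684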
SSMS will display floats closer to 2.1 as 2.1, e.g.
select cast(2.1 as float), cast(2.1 as float) - 0.000000000000001
is displayed as
---------------------- ----------------------
                   2.1                    2.1
Here's a paper that reviews algorithms for this conversion and presents a new one: Printing Floating-Point Numbers Quickly and Accurately with Integers
Upvotes: 1