Reputation: 1455
Why is this query pulling 99.73 when I have specified cast(measure_rate as decimal) = 100?
;WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY zip_code ORDER BY cast(measure_rate as decimal) DESC) AS rn
FROM AmbulatoryMeasures
where measure_rate != 'n/a'
)
SELECT *
FROM cte
WHERE rn = 1
and cast(measure_rate as decimal) = 100
The result of the measure_rate column:
100
99.73
100
100
The data type of the measure_rate column is varchar(50).
I would not expect the 99.73 value in the result set because of the WHERE clause.
Upvotes: 0
Views: 1244
Reputation: 6227
The default scale (number of decimal places) is 0. If you specify the decimal with an explicit precision and scale, you get the result you are after:
SELECT CAST('99.73' AS DECIMAL(18,2))
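For comparison, a minimal check of the default behavior (in SQL Server a bare DECIMAL defaults to DECIMAL(18,0)):
SELECT CAST('99.73' AS DECIMAL)       AS default_cast,   -- 100: scale 0 rounds away the fraction
       CAST('99.73' AS DECIMAL(18,2)) AS explicit_cast;  -- 99.73: scale 2 keeps it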
Upvotes: 2
Reputation: 52088
If you cast 99.73 to DECIMAL you get 100, so the result is expected. To keep the decimal part, do something like this for two decimal places:
CAST(measure_rate AS DECIMAL(38, 2))
(38 for precision is just an example; select a number that suits your data better.)
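To see exactly where the scale-0 cast flips to 100, a quick check of the rounding boundary (SQL Server rounds half away from zero when converting strings to decimal):
SELECT CAST('99.49' AS DECIMAL) AS below_half,  -- 99
       CAST('99.50' AS DECIMAL) AS at_half;     -- 100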
Upvotes: 1
Reputation:
You should specify the precision and scale when casting to DECIMAL. Consider the following two snippets.
-- Your situation: no precision or scale specified
DECLARE @test AS TABLE(MYVALUE VARCHAR(50));
INSERT INTO @test
SELECT '99.73';
SELECT myvalue,
CAST(myvalue AS DECIMAL)
FROM @test
WHERE CAST(myvalue AS DECIMAL) = 100;
-- Casting with explicit precision and scale
DECLARE @test2 AS TABLE(MYVALUE VARCHAR(50));
INSERT INTO @test2
SELECT '99.73';
SELECT myvalue,
CAST(myvalue AS DECIMAL(10,2))
FROM @test2
The first query still returns the row, with '99.73' cast to 100 (so the WHERE clause matches); the second returns 99.73 as expected.
Upvotes: 0
Reputation: 21
You need to define the decimal's precision (total number of digits) and scale (number of decimal places). You can use:
;WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY zip_code ORDER BY cast(measure_rate as decimal(16,2)) DESC) AS rn
FROM AmbulatoryMeasures
where measure_rate != 'n/a'
)
SELECT *
FROM cte
WHERE rn = 1
and cast(measure_rate as decimal(16,2)) = 100
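A variant of the same fix, as a sketch: since measure_rate is varchar(50) and holds 'n/a' strings, TRY_CAST (SQL Server 2012+) returns NULL for non-numeric values instead of raising an error, so the query no longer depends on the string filter:
;WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY zip_code ORDER BY TRY_CAST(measure_rate AS decimal(16,2)) DESC) AS rn
FROM AmbulatoryMeasures
)
SELECT *
FROM cte
WHERE rn = 1
and TRY_CAST(measure_rate AS decimal(16,2)) = 100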
Upvotes: 0
Reputation: 96016
You don't have a scale or precision for your decimal, so the defaults are used: in SQL Server a bare decimal is decimal(18,0). With a scale of 0, '99.73' is cast to 100 due to rounding.
If you change the data type of the column in your table from varchar to decimal with an appropriate precision and scale, this won't happen. This is just one reason why choosing the wrong data type is a bad decision.
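A minimal sketch of that column change, assuming the only non-numeric values are the 'n/a' strings (they must be cleared first or the ALTER will fail on conversion):
-- Replace the placeholder strings with NULL, then retype the column.
UPDATE AmbulatoryMeasures SET measure_rate = NULL WHERE measure_rate = 'n/a';
ALTER TABLE AmbulatoryMeasures ALTER COLUMN measure_rate decimal(18,2) NULL;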
Upvotes: 0