Reputation: 42957
I have the following situation:
I have created a C# method that performs an insert query into a SQL Server database table. Inside this method I have something like this:
// [Severity] insertion on the DB:
if (v.Severity != null)
{
    _strSQL += ",[Severity] ";
    strSQLParametri += ", @SEVERITY ";
    _addParameter(command, "@SEVERITY", v.Severity);
}
where Severity is the table column and v.Severity is the value that I have to put into this column for the new row. On the table the Severity column is defined as float.
My problem is that if the value of v.Severity is something like 3.7, it puts 3.7 into the Severity column, but if it is something like 3.0, it puts 3 and not 3.0. What do I have to do to get 3.0 instead of 3 into my Severity column?
Upvotes: 0
Views: 1876
Reputation: 704
This is one way of inserting an integer that will be automatically converted to float:
create table #tablefloat
(
Severity_amount float
)
insert into #tablefloat select 212
select * from #tablefloat
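The same implicit integer-to-float conversion can be reproduced in a quick, self-contained sketch. This uses SQLite (via Python's built-in sqlite3 module) as a stand-in for SQL Server, since SQLite's REAL column affinity converts an inserted integer the same way a SQL Server float column does; the table name mirrors the answer above.

```python
import sqlite3

# Hypothetical in-memory database standing in for the SQL Server table;
# SQLite's REAL affinity converts the integer on insert, just as a SQL
# Server float column would.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablefloat (Severity_amount REAL)")
conn.execute("INSERT INTO tablefloat SELECT 212")
value = conn.execute("SELECT Severity_amount FROM tablefloat").fetchone()[0]
print(value)  # the integer 212 comes back as the float 212.0
```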
Upvotes: 1
Reputation: 100547
Float numbers don't have a concept of "important trailing zeros". As a result, 3.0, 3, and 3.0000 are all the same value when represented in binary form.
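You can verify this directly: the three spellings parse to the exact same double and even have byte-for-byte identical IEEE 754 representations (shown here in Python for brevity, but the same holds for a C# double or a SQL Server float).

```python
import struct

# 3, 3.0 and 3.0000 all parse to the exact same double, so they compare
# equal and pack to identical 8-byte IEEE 754 representations.
a, b, c = float("3"), float("3.0"), float("3.0000")
print(a == b == c)                                  # True
print(struct.pack("d", a) == struct.pack("d", c))   # True
```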
If you must preserve the value as formatted somewhere, store it as a string in the database (or any other storage). If you know exactly how many digits you want to be present, consider a custom type that stores the value pre-multiplied (like 3.0 stored as 3.0*10, or even (int)30) and carefully shift the decimal point during math operations.
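A minimal sketch of that pre-multiplied (fixed-point) idea, assuming a scale factor of 10 so exactly one decimal digit is kept; the function names are illustrative, not part of any library:

```python
SCALE = 10  # assumed: keep exactly one digit after the decimal point

def to_fixed(value: float) -> int:
    # Store the value pre-multiplied as an integer: 3.0 -> 30, 3.7 -> 37.
    return round(value * SCALE)

def from_fixed(stored: int) -> str:
    # Shift the decimal point back when reading, formatting one digit.
    return f"{stored / SCALE:.1f}"

print(to_fixed(3.0))               # 30
print(from_fixed(to_fixed(3.0)))   # 3.0
```

Because the stored value is an integer, the trailing zero can no longer be lost; the trade-off is that every arithmetic operation must account for the scale factor.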
If you just want to display the value with at least one digit after the decimal point, use an appropriate display format.
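For the display-only route, a format string at output time is enough; the stored float is still 3, but the format guarantees one digit after the decimal point (shown in Python; the C# equivalent would be a "F1" or "0.0" format string):

```python
severity = 3.0  # comes back from the database indistinguishable from 3

# Format at display time with one digit after the decimal point.
print(f"{severity:.1f}")  # 3.0
print(f"{3.7:.1f}")       # 3.7
```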
Upvotes: 3