Reputation: 2687
I have a few tables that store quite large numbers, so I've chosen decimal(17, 2) as the data type for all the columns.
Example of schema:
Table1 (value1, value2, value3, value4, value5)
Table2 (value1, value2, value3, value4, value5)
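Spelled out as DDL, that is roughly (a sketch, assuming SQL Server syntax):

CREATE TABLE Table1 (
    value1 decimal(17, 2),
    value2 decimal(17, 2),
    value3 decimal(17, 2),
    value4 decimal(17, 2),
    value5 decimal(17, 2)
);
-- Table2 has the same five decimal(17, 2) columns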
For argument's sake, if I have 100,000 rows in each table and each value is 100.00, how will the query performance compare to using decimal(5, 2)? Will the difference be negligible? Will the main difference be the storage space taken up?
Upvotes: 1
Views: 2714
Reputation: 9927
Precision   Storage bytes
-------------------------
1 - 9       5
10 - 19     9
So decimal(5, 2) takes about 45% less storage than decimal(17, 2) (5 bytes vs. 9 bytes per value).
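You can confirm those sizes on SQL Server with DATALENGTH, which returns the number of bytes a value occupies:

-- Same value, two precisions: 5 bytes vs. 9 bytes
SELECT DATALENGTH(CAST(100.00 AS decimal(5, 2)))  AS bytes_5_2,   -- 5
       DATALENGTH(CAST(100.00 AS decimal(17, 2))) AS bytes_17_2;  -- 9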
The difference in execution time depends on the workload and the configuration of your server. If the bottleneck is the load on the I/O subsystem, then the gain will likely be close to that ~40% as well.
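To see how much the I/O actually differs, a quick sketch using built-in SQL Server tools (Table1 as in the question):

-- How much space the table occupies on disk
EXEC sp_spaceused 'Table1';

-- Logical reads (8 KB pages touched) for a full scan
SET STATISTICS IO ON;
SELECT SUM(value1) FROM Table1;
SET STATISTICS IO OFF;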
Upvotes: 0
Reputation: 48482
If you are using the decimal type strictly because you are working with 'quite large numbers', then you are using it for the wrong reason. The decimal type is for when rounding accuracy is the major concern, not for when the numbers are large. In fact, for really large numbers a float would be the better choice, since it can hold larger values.

As far as performance is concerned, your best bet is simply to test it yourself: fill a table with 100,000 rows of decimals, floats, reals, etc. and measure query performance.
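A minimal benchmark along those lines might look like this (T-SQL sketch; the test table names are made up):

-- Two test tables that differ only in column precision
CREATE TABLE NarrowTest (value1 decimal(5, 2));
CREATE TABLE WideTest   (value1 decimal(17, 2));

-- Fill each with 100,000 identical rows
INSERT INTO NarrowTest (value1)
SELECT TOP (100000) CAST(100.00 AS decimal(5, 2))
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

INSERT INTO WideTest (value1)
SELECT value1 FROM NarrowTest;

-- Compare elapsed time for the same aggregate on both tables
SET STATISTICS TIME ON;
SELECT SUM(value1) FROM NarrowTest;
SELECT SUM(value1) FROM WideTest;
SET STATISTICS TIME OFF;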
Upvotes: 1