Reputation: 64207
For a scientific project of mine I am developing (in C# 4 and T-SQL) an application meant to handle very large quantities of very simple records and perform simple operations on them (a scientific simulation engine, not a linear time-series cruncher). I'd like to use 64-bit integers as primary keys for greater capacity.
In practice I am going to use Entity Framework with POCO collections, array processing, and T-SQL stored procedures.
The database will be stored on SQL Server 2008 and accessed from multiple application instances simultaneously for distributed processing.
Both SQL Server and the application instances will run on 32-bit Windows XP systems, sometimes on hardware that is not 64-bit capable at all.
What penalties am I going to face for using 64-bit integer types as primary keys?
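To make it concrete, this is the kind of POCO entity and key type I have in mind (names are illustrative only):

```csharp
// Illustrative only: a simple record type with a 64-bit surrogate key.
// "long" on the C# side maps to BIGINT on the SQL Server side.
public class SimulationRecord
{
    public long Id { get; set; }       // BIGINT IDENTITY PRIMARY KEY
    public long ParentId { get; set; } // BIGINT foreign key to another record
    public double Value { get; set; }  // the simple payload
}
```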
Upvotes: 3
Views: 863
Reputation: 67362
As long as you stick to reading and writing those numbers (i.e. no arithmetic, just database queries), the performance hit will be negligible. It will be like passing 2 ints as parameters instead of 1.
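For example, on the database side a 64-bit key is just a BIGINT parameter, and nothing about reading or writing it involves arithmetic (the table and column names below are made up):

```csharp
using System.Data;
using System.Data.SqlClient;

// Inserting a row keyed by 64-bit integers: the long values travel to
// the server as BIGINT parameters and come back the same way.
static long InsertRecord(SqlConnection conn, long parentId)
{
    using (var cmd = new SqlCommand(
        "INSERT INTO dbo.SimulationRecord (ParentId) VALUES (@parentId); " +
        "SELECT CAST(SCOPE_IDENTITY() AS BIGINT);", conn))
    {
        cmd.Parameters.Add("@parentId", SqlDbType.BigInt).Value = parentId;
        return (long)cmd.ExecuteScalar(); // the new 64-bit key
    }
}
```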
Once you start doing arithmetic on them, however, it gets messy. Addition and subtraction are roughly 3 times as slow as for normal ints. Multiplication and division are a LOT slower, over an order of magnitude. I posted the code for multiplying two 64-bit numbers on a 32-bit CPU somewhere on this site; I could look it up if you want, but it's over 3 pages long.
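To give you an idea of why, here's a rough sketch (not the code I posted, just the idea) of what has to happen under the hood when the CPU can only operate on 32-bit words:

```csharp
// Sketch: 64-bit operations built from 32-bit halves, the way a 32-bit
// CPU has to do it. Results wrap around at 64 bits, like normal
// unchecked integer arithmetic.
static ulong Add64(ulong a, ulong b)
{
    uint aLo = (uint)a, aHi = (uint)(a >> 32);
    uint bLo = (uint)b, bHi = (uint)(b >> 32);

    ulong lo   = (ulong)aLo + bLo;        // add the low halves
    uint carry = (uint)(lo >> 32);        // carry into the high half
    uint hi    = aHi + bHi + carry;       // add high halves plus carry

    return ((ulong)hi << 32) | (uint)lo;  // several ops instead of one
}

static ulong Mul64(ulong a, ulong b)
{
    uint aLo = (uint)a, aHi = (uint)(a >> 32);
    uint bLo = (uint)b, bHi = (uint)(b >> 32);

    // Schoolbook decomposition: (aHi*2^32 + aLo) * (bHi*2^32 + bLo).
    // The aHi*bHi term only affects bits above 64, so it is dropped.
    ulong lo    = (ulong)aLo * bLo;
    ulong cross = (ulong)aLo * bHi + (ulong)aHi * bLo;

    return lo + (cross << 32);            // multiple multiplies and adds
}
```

Division is the worst of the lot, since there is no short decomposition like this for it at all.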
Seeing how you're talking about ID fields, however, you shouldn't be doing any arithmetic on them, right? So you should be fine.
Upvotes: 3