Reputation: 2413
The biggest database I have experience with is a SQL Server database in which one of the tables contains 200,000 rows. I used a `GUID` (and NOT a sequential GUID) as the primary key in that database. I experienced no performance issues in that system, which had about 30 concurrent users.
Recently I designed and developed an enterprise application development framework. To take advantage of the "Unit of Work" pattern, I used a sequential GUID as the primary key so that the records would be ordered physically. Since my experience with big databases is limited to what I just mentioned, I have a serious concern: if I use this framework to develop an enterprise application for a large organization with 1,000 concurrent users keeping millions of records of data, will the sequential GUID primary key lead to performance issues?
If yes, to what extent? And if so, could it be resolved by improving the database server hardware (processor and RAM) - and again, to what extent?
Thanks in advance for sharing your experience and knowledge.
Upvotes: 1
Views: 2023
Reputation: 1269513
The issue of sequential GUID versus "regular" GUIDs comes up in the following circumstance:
For a clustered index, SQL Server adds new records to the table "in order". So, larger values go at the "end" of the table -- in this case, on the last data page. This is handy for identity columns because they are guaranteed to be larger than any previous value. And the last data page is -- by definition -- not fragmented.
GUIDs do not have this property. They end up being inserted "in the middle", causing fragmentation.
Why do you not see this as a problem? There could be various reasons:
- With only 30 concurrent users, transactions rarely overlap.
- With 200,000 small rows, the table occupies relatively few data pages.
As for the latter point: if the records are small enough, over a thousand can fit on each page. With only around 200 pages of data, fragmentation might not be a significant issue.
As for the former: with 30 concurrent users, you might simply have no overlap of transactions. If each user modifies the database once per minute, that still leaves 2 seconds to complete each transaction -- typically quite enough time.
Nevertheless, I recommend using a sequential GUID or identity column. This will keep the database cleaner. However, regularly defragging the database is another option that might work.
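If you stay with GUIDs, the sequential variant can be declared right in the table definition. A minimal sketch - the table, column, and constraint names here are invented for illustration:

```sql
-- Hypothetical table: NEWSEQUENTIALID() may only appear as a column DEFAULT,
-- so the key values are generated server-side in (mostly) ascending order.
CREATE TABLE dbo.Orders
(
    OrderID uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
    OrderDate datetime2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- The "regularly defragging" alternative: reorganize (or rebuild) the index periodically.
ALTER INDEX PK_Orders ON dbo.Orders REORGANIZE;
```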
Upvotes: 1
Reputation: 754268
A `GUID` may seem to be a natural choice for your primary key - and if you really must, you could probably argue to use it for the primary key of the table. What I'd strongly recommend not to do is use the `GUID` column as the clustering key, which SQL Server does by default, unless you specifically tell it not to.
You really need to keep two issues apart:
The primary key is a logical construct - one of the candidate keys that uniquely and reliably identifies every row in your table. This can be anything, really - an `INT`, a `GUID`, a string - pick what makes most sense for your scenario.
The clustering key (the column or columns that define the "clustered index" on the table) is a physical, storage-related thing, and here a small, stable, ever-increasing data type is your best pick - `INT` or `BIGINT` as your default option.
By default, the primary key on a SQL Server table is also used as the clustering key - but that doesn't need to be that way! I've personally seen massive performance gains when breaking up a previous `GUID`-based primary/clustered key into two separate keys: the primary (logical) key on the `GUID`, and the clustering (ordering) key on a separate `INT IDENTITY(1,1)` column.
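A sketch of that two-key layout (table and constraint names are made up for illustration): the GUID stays the logical primary key, while a separate identity column takes over the physical ordering:

```sql
CREATE TABLE dbo.Customers
(
    CustomerGuid uniqueidentifier NOT NULL
        CONSTRAINT DF_Customers_Guid DEFAULT NEWID(),
    CustomerID   int IDENTITY(1,1) NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    -- still the primary key, but explicitly NONCLUSTERED
    CONSTRAINT PK_Customers PRIMARY KEY NONCLUSTERED (CustomerGuid)
);

-- the clustering key lives on the narrow, ever-increasing identity column
CREATE UNIQUE CLUSTERED INDEX CIX_Customers_CustomerID
    ON dbo.Customers (CustomerID);
```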
As Kimberly Tripp - the Queen of Indexing - and others have stated a great many times, a `GUID` as the clustering key isn't optimal: due to its randomness, it will lead to massive page and index fragmentation and to generally bad performance.
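You can measure that fragmentation yourself with the `sys.dm_db_index_physical_stats` DMV - the table name below is just a placeholder for one of your own:

```sql
SELECT  i.name AS index_name,
        ps.index_type_desc,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.YourTable'),
                                       NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```

On a GUID-clustered table under concurrent inserts, `avg_fragmentation_in_percent` typically climbs quickly; on an identity-clustered table it stays low.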
Yes, I know - there's `NEWSEQUENTIALID()` in SQL Server 2005 and up - but even that is not truly and fully sequential and thus also suffers from the same problems as the `GUID` - just a bit less prominently so.
Then there's another issue to consider: the clustering key on a table will be added to each and every entry in each and every non-clustered index on your table as well - thus you really want to make sure it's as small as possible. Typically, an `INT` with room for 2+ billion rows should be sufficient for the vast majority of tables - and compared to a `GUID` as the clustering key, you can save yourself hundreds of megabytes of storage on disk and in server memory.
Quick calculation - using `INT` (4 bytes) vs. `GUID` (16 bytes) as primary and clustering key, for a table with 1,000,000 rows and 6 non-clustered indexes (each of which carries a copy of the clustering key):
- base table: roughly 4 MB vs. 15 MB of key storage
- 6 non-clustered indexes: roughly 6 times that again
TOTAL: 25 MB vs. 106 MB - and that's just on a single table!
And yes - a larger size of a table or of an index automatically means more data pages that need to be loaded from disk, held in memory, transferred to the client - all negatively impacting your performance. How much impact that has really depends on lots of factors of your database design and your data distribution, so any generalized predictions are next to impossible...
Some more food for thought - excellent stuff by Kimberly Tripp - read it, read it again, digest it! It's the SQL Server indexing gospel, really.
Upvotes: 2