Rik

Reputation: 29243

Problematic data patterns, performance-wise

Assertion: the performance of SQL databases degrades when the volume of data becomes very large (say, tens or hundreds of terabytes). This means certain patterns in database design which are reasonable for most small-to-medium sized databases break down when the database grows. For (a rather general) example, there is a trend away from designing fully normalized (say, BCNF) data models because the joins required would hurt performance too much. See also this question.

My question is this: do you know of any database patterns which, although reasonable in a typical database, break down (performance-wise) for huge databases, particularly for SELECT queries? Are there alternative strategies that accomplish the same thing (data-wise) without these performance issues? A sketch of the kind of trade-off I mean follows below.
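For illustration, here is a minimal sketch of the join-vs-denormalization trade-off I have in mind (all table and column names are hypothetical):

    -- Fully normalized: a reporting query needs several joins.
    SELECT c.customer_name, SUM(oi.quantity * p.unit_price) AS total_spent
    FROM orders o
    JOIN customers   c  ON c.customer_id = o.customer_id
    JOIN order_items oi ON oi.order_id   = o.order_id
    JOIN products    p  ON p.product_id  = oi.product_id
    GROUP BY c.customer_name;

    -- Denormalized alternative: the same facts pre-flattened into one wide table,
    -- trading storage and update anomalies for join-free reads.
    SELECT customer_name, SUM(line_total) AS total_spent
    FROM order_lines_flat
    GROUP BY customer_name;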

Upvotes: 0

Views: 294

Answers (2)

Roy Ashbrook

Reputation: 854

The first thing that comes to mind is storing files as BLOBs in the database. I have seen numerous systems that started reasonably small (say, below 10 GB in a single table of blob data) and then started to hit ceilings as they grew. You can mitigate some of the damage by structuring your solution correctly, but generally speaking I think that pattern of storing files in the database breaks down as the size goes up.
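A minimal sketch of the alternative, assuming a relational DBMS and hypothetical table/column names (exact type names vary by vendor): keep only a reference to the file in the database and store the bytes elsewhere.

    -- Pattern that tends to break down at scale: the file lives in the table.
    CREATE TABLE document_blob (
        document_id  BIGINT PRIMARY KEY,
        file_name    VARCHAR(255) NOT NULL,
        content      BLOB
    );

    -- Common alternative: keep only metadata plus a pointer to external storage
    -- (filesystem, object store, etc.); the database stays small and backups stay fast.
    CREATE TABLE document_ref (
        document_id   BIGINT PRIMARY KEY,
        file_name     VARCHAR(255)  NOT NULL,
        storage_path  VARCHAR(1024) NOT NULL,  -- path or object-store key
        content_hash  CHAR(64),                -- optional integrity check
        byte_size     BIGINT
    );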

Upvotes: 1

Kirtan

Reputation: 21695

Identity columns?!

They can become a bottleneck in a HUGE table with heavy insert/delete traffic.

EDIT: OK, I re-read your question. Indexes can also be a big performance bottleneck for inserts into tables containing a very large number of rows.
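A minimal sketch of the insert-cost issue, with hypothetical names (identity syntax varies between DBMSs): every secondary index has to be maintained on each INSERT/DELETE, so write cost grows with the number of indexes and the size of the table.

    CREATE TABLE event_log (
        event_id    BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        user_id     BIGINT      NOT NULL,
        event_type  VARCHAR(50) NOT NULL,
        created_at  TIMESTAMP   NOT NULL
    );

    CREATE INDEX ix_event_log_user    ON event_log (user_id);
    CREATE INDEX ix_event_log_type    ON event_log (event_type);
    CREATE INDEX ix_event_log_created ON event_log (created_at);

    -- One inserted row = one write to the table plus one to each of the three indexes.
    INSERT INTO event_log (user_id, event_type, created_at)
    VALUES (42, 'login', CURRENT_TIMESTAMP);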

Upvotes: 1
