Tim

Reputation: 2403

Storing graphs in fully-normalized relational databases

Goal

Find a perfect, flexible schema for storing many different types of objects with a wide variety of links between them in a relational database.


Problem

EAV is a workaround for the normal constraints of an RDBMS.

If you were to normalize an EAV schema, it would be ugly.


Idea

If EAV were normalized, it would be ugly.

Does the fact that we traditionally maintain these schemas by hand limit their complexity and power?

But if they were maintained and queried programmatically, what would it matter?


Graphs

If you have n different entities in n different tables, why not let your code generate n(n+1)/2 link tables and the queries between them? Would this not result in a true graph in a normalized schema?

In a highly interlinked database, there will always be far more edges than vertices. Why not focus on creating proper, normalized vertices (n entity tables) and let our code maintain the edges (the n(n+1)/2 link tables)?
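As a rough illustration of what "letting the code maintain the edges" could look like, here is a minimal Python sketch that emits the n(n+1)/2 link tables for a handful of entity tables. The entity names are hypothetical (borrowed from the bibliographic example below); real tooling would presumably read them from the database catalogue.

    from itertools import combinations_with_replacement

    ENTITIES = ["work", "person", "publisher", "subject"]  # hypothetical entity tables

    def link_table_ddl(a: str, b: str) -> str:
        # Self-links (a == b) need two distinct column names.
        col_a = f"{a}_id" if a != b else f"{a}_id_1"
        col_b = f"{b}_id" if a != b else f"{b}_id_2"
        return (
            f"CREATE TABLE {a}_{b} (\n"
            f"    {col_a} INTEGER NOT NULL REFERENCES {a}(id),\n"
            f"    {col_b} INTEGER NOT NULL REFERENCES {b}(id),\n"
            f"    PRIMARY KEY ({col_a}, {col_b})\n"
            f");"
        )

    pairs = list(combinations_with_replacement(ENTITIES, 2))
    assert len(pairs) == len(ENTITIES) * (len(ENTITIES) + 1) // 2  # n(n+1)/2
    for a, b in pairs:
        print(link_table_ddl(a, b))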


Conclusion

Can a system normalize EAV and maintain the resulting complex schema?

Can complex graphs be stored in (and remain true to) relational databases?

I'm sure this has been done before, but I've never seen it. What am I missing?


Example problem

Storing printed works and their bibliographic data


Questions

"What problem are you trying to solve?"
-Piet

I'm looking for a normalized solution to EAV, graphs, and polymorphic relationships in a relational database system.

"I would hate to be the guy who has to understand or maintain it after it's gone into production."
-Andrew

This "traditional maintenance" is the exact thing I'm saying we should be automating. Isn't it largely grunt work?

Upvotes: 14

Views: 3796

Answers (4)

PerformanceDBA

Reputation: 33708

Since you are editing the question, it must be active.

Yes, there are much better ways of designing this, for the purpose and use you describe.

The first issue is EAV, which is usually implemented very badly. More precisely, the EAV crowd, and therefore the literature, is not of high quality and standards are not maintained; as a result, the basic integrity and quality of a Relational Database is lost, which leads to the many well-documented problems.

You should consider the proper, academically derived alternative, which retains full Relational integrity and capability: Sixth Normal Form. EAV is in fact a subset of 6NF implemented without the full understanding; it is simply the more commonly known rendition of 6NF.

6NF implemented correctly is particularly fast, in that it stores columns, not rows. You can therefore map your data (graph series, data points) in such a way as to gain flat, high speed regardless of the vectors you use to access the graphs. (You can also eliminate duplication to a higher order than 5NF, but that is advanced use.)
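For readers who have not met 6NF, here is a minimal sketch of the "one non-key attribute per table" shape being described, using made-up table names from the question's bibliographic example; it is illustrative only, not the answer's actual design.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE work (work_id INTEGER PRIMARY KEY);

    -- Each non-key attribute lives in its own table, keyed by the entity key.
    -- A missing attribute is a missing row, not a NULL column.
    CREATE TABLE work_title (
        work_id INTEGER PRIMARY KEY REFERENCES work(work_id),
        title   TEXT NOT NULL
    );
    CREATE TABLE work_year (
        work_id INTEGER PRIMARY KEY REFERENCES work(work_id),
        year    INTEGER NOT NULL
    );
    """)
    conn.execute("INSERT INTO work VALUES (1)")
    conn.execute("INSERT INTO work_title VALUES (1, 'The Art of Computer Programming')")
    # No work_year row inserted: that attribute is simply absent for this work.
    print(conn.execute(
        "SELECT w.work_id, t.title FROM work w JOIN work_title t USING (work_id)"
    ).fetchall())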

"Highly-interlinked" is not a problem at all. That is the nature of a Relational Database. The caveat here is, it must be truly Normalised, not a inlerlinked bunch of flat files.

The automation or code generation is not a problem. Of course, you need to extend the SQL catalogue, and ensure it is table-driven, if you want quality and maintainability.

My answers to these questions provide a full treatment of the subject. The last one is particularly long, due to the context and arguments raised.
EAV-6NF Answer One
EAV-6NF Answer Two
EAV-6NF Answer Three

And this one is worthwhile as well:
Schema-Related Problem

Upvotes: 5

Pi Delport

Reputation: 10598

This depends wholly on the definition of your graph.

The only "true" way to store a graph, in a relation database or otherwise, is a simple adjacency list (or one of its variants). Everything else is a derivative, specialization, or optimization of this technique, and depends on knowledge of the problem domain.

The method you describe in your question is essentially de- or re-normalizing this universal adjacency list into a number of "typed" adjacency lists (or link tables), which may or may not be more appropriate, depending on your problem.

I'm sure this has been done before, but I've never seen it. What am I missing?

You're probably not missing anything: it's actually extremely rare to need to store a general graph like this. What problem are you trying to solve?

Addendum

In a highly interlinked database, there will always be far more edges than vertices. Why not focus on creating proper, normalized vertices (tables) and let our code maintain the edges?

I think this is much more common than you might think. I'm mainly familiar with Python, but all the major ORMs / RDBMS toolkits available for it (SQLAlchemy, Django, SQLObject, ...) support automatic maintenance of many-to-many link tables as a standard feature.
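For example, here is a minimal SQLAlchemy sketch of that standard feature, using hypothetical Book/Author entities from the bibliographic example; the ORM maintains the book_author link table rows itself.

    from sqlalchemy import Column, ForeignKey, Integer, String, Table
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    # The link table is declared once; the ORM writes its rows on flush.
    book_author = Table(
        "book_author", Base.metadata,
        Column("book_id", ForeignKey("book.id"), primary_key=True),
        Column("author_id", ForeignKey("author.id"), primary_key=True),
    )

    class Book(Base):
        __tablename__ = "book"
        id = Column(Integer, primary_key=True)
        title = Column(String)
        authors = relationship("Author", secondary=book_author, back_populates="books")

    class Author(Base):
        __tablename__ = "author"
        id = Column(Integer, primary_key=True)
        name = Column(String)
        books = relationship("Book", secondary=book_author, back_populates="authors")

    # Usage: book.authors.append(author) is enough; the corresponding
    # book_author row is inserted automatically when the session is flushed.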

Upvotes: 2

Andrew Shepherd

Reputation: 45222

Your idea would certainly create a completely flexible schema that can represent any kind of object graph. I would hate to be the guy who has to understand or maintain it after it's gone into production.

One benefit of a well-designed data schema is the constraints. I'm not just referring to the physical column constraints you can define, but to the constraints imposed by the overall structure. There is a fixed set of explicit relationships, and this provides well-defined paths to follow.

In your scenario, there would always be a large number of paths from one entity to another. How would somebody know which path was the "right" path? The "right" path will simply be "the set of relationships the developer chose to populate".

Imagine a database that has these relationships.

Customer <===> Invoice <===> InvoiceLineItem <===> Product

If I'm looking at this and somebody asks me, "Give me a list of customers and, for each customer, a list of the products they've bought", I would know how to write the query.
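Something like the following, say (hypothetical table and column names; the point is that the fixed schema forces one obvious join path):

    # The "obvious" query against the fixed schema above.
    CUSTOMER_PRODUCTS = """
    SELECT DISTINCT c.customer_name, p.product_name
    FROM   Customer        c
    JOIN   Invoice         i   ON i.customer_id  = c.customer_id
    JOIN   InvoiceLineItem ili ON ili.invoice_id = i.invoice_id
    JOIN   Product         p   ON p.product_id   = ili.product_id;
    """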

But if this were a graph where everything pointed to everything else, how would I know which path is the "right" path? Will it be the "Customer_Product" relationship, the "Customer_Invoice_Line_Item" to "Customer_Product", or "Customer_Invoice" to "Invoice_Product", or "Customer" to "Invoice" to "Invoice_Line_Item" to "SomeOtherTableIHaven'tEvenLookedAtYet" to "Product"? The answer can be "It should be obvious", but it is very common for something to be obvious to only one developer.

Upvotes: 4

Dave Markle

Reputation: 97671

why not let your code generate n(n+1)/2 "link" tables and the queries between them?

Any time I see anything in Computer Science where the answer comes out to be "about n-squared", I immediately think that the answer is wrong. :-)

But more realistically, when "n" gets to be a moderate size, the number of link tables gets to be enormous really, really quickly. So much so that you can't say that this methodology could represent a general-purpose solution, IMO.
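A back-of-the-envelope check of how quickly n(n+1)/2 grows:

    for n in (10, 50, 200, 1000):
        print(f"{n} entity tables -> {n * (n + 1) // 2} link tables")
    # 10 -> 55, 50 -> 1275, 200 -> 20100, 1000 -> 500500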

But here's my real objection -- your proposed methodology isn't a viable engineering solution. Engineering is all about making tradeoffs, and this method trades a LOT for generality's sake. For example, here's what you lose by using your method over a tried-and-true "traditional" database design:

  • You lose the ability to have a discoverable schema -- the number of tables gets out of hand so quickly that anyone looking at your table design can't tell what the relationships are.
  • Almost no kind of data integrity can be enforced by the database other than the most basic referential kind -- all code which uses the database must be careful not to break the rules, or you have data corruption.
  • You end up potentially having a very large number of tables which model relationships that don't really exist in your business domain. When you use a "link" table, you are essentially modeling a many-to-many relationship, which may or may not exist in the real world.
  • You potentially lose enormous amounts of speed, and incur a very large penalty in terms of storage used. It's far more efficient to model 1:N relationships by referring to the "parent" entity in the "child" entity directly.
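To make that last point concrete, here is a small sketch (hypothetical names) contrasting the direct foreign key with a generated link table for what is really a 1:N relationship:

    # Direct 1:N: the child row names its one parent, and the database enforces it.
    ONE_TO_MANY_DIRECT = """
    CREATE TABLE invoice_line_item (
        line_item_id INTEGER PRIMARY KEY,
        invoice_id   INTEGER NOT NULL REFERENCES invoice(invoice_id),
        product_id   INTEGER NOT NULL REFERENCES product(product_id)
    );
    """

    # Via a generated link table: an extra table, an extra join, and nothing
    # stops the same line item from appearing under two different invoices.
    ONE_TO_MANY_VIA_LINK_TABLE = """
    CREATE TABLE invoice_invoice_line_item (
        invoice_id   INTEGER NOT NULL REFERENCES invoice(invoice_id),
        line_item_id INTEGER NOT NULL REFERENCES invoice_line_item(line_item_id),
        PRIMARY KEY (invoice_id, line_item_id)
    );
    """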

Upvotes: 3
