user3900102

Reputation: 69

C# Entity Framework using RowBeginDate in transaction table

I'm a developer who started working with a database architect. He is using a design I've never seen before, one that I think will have negative performance implications.

In transaction tables he is using two fields in every table: RowBeginDate and RowEndDate. There is a parent table with just a few fields that never change, called a PersonHeader table. Its key is used as an FK to the child Person table. The Person table's PK is the FK from the PersonHeader table AND the RowBeginDate for that row. To retrieve the current row, I always have to check for the row whose RowEndDate is NULL.
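For what it's worth, retrieving the current row would look something like this in EF (just a sketch; the context and entity names are my placeholders, not the real model):

    // context is an assumed DbContext with a Persons DbSet; names are placeholders
    var currentPerson = context.Persons
        .SingleOrDefault(p => p.PersonKey == personKey && p.RowEndDate == null);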

I haven't gotten into the details yet of how this will affect performance of Entity Framework, but I suspect that it will not be efficient.

I've worked on a number of projects and have never seen this approach. What are the performance implications of having this many dead records in a transaction table? I don't think there will be many updates, but I would estimate that the Person table could end up with 500,000 rows or more, not to mention the detail tables.

Upvotes: 1

Views: 42

Answers (2)

user3900102

Reputation: 69

(I added this as an answer, since a comment wouldn't take a new line...)

PersonHeader: PersonKey (PK)

Person: PersonKey (PK, FK), RowBeginDate (PK), RowEndDate, (other columns)

PersonLocation: LocationTypeKey (PK, FK), PersonKey (PK, FK), LocationKey (FK), RowBeginDate (not a key), RowEndDate

Location: LocationKey (PK), StateKey (FK), RowBeginDate (not a key), RowEndDate
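In Entity Framework I expect the composite key on Person to be mapped roughly like this (a sketch using data annotations; the property types are my guesses, not the actual schema):

    using System;
    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    public class PersonHeader
    {
        [Key]
        public int PersonKey { get; set; }
    }

    public class Person
    {
        // Composite primary key: PersonKey + RowBeginDate
        [Key, Column(Order = 0)]
        public int PersonKey { get; set; }

        [Key, Column(Order = 1)]
        public DateTime RowBeginDate { get; set; }

        // NULL means this is the current row
        public DateTime? RowEndDate { get; set; }

        [ForeignKey(nameof(PersonKey))]
        public PersonHeader PersonHeader { get; set; }
    }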

Upvotes: 0

DVK

Reputation: 2792

When working with applications that have auditing requirements, it is not uncommon to have to maintain historical versions of records. I have done something similar myself: for example, when storing a history of changes to employee records, I used an EmployeeID and an UpdatedOn field as the key so I could get the latest version of a record.
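Getting the latest version in that scheme is just an ORDER BY on the date column, e.g. (a rough sketch, entity names made up):

    // context is an assumed DbContext; Employees is a hypothetical DbSet
    var latest = context.Employees
        .Where(e => e.EmployeeId == employeeId)
        .OrderByDescending(e => e.UpdatedOn)
        .FirstOrDefault();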

Provided that the tables are properly indexed and that the indexes don't end up being too large because of the composite key, I wouldn't worry about performance or the number of records. I've worked with tables that contained half a billion records and performance was still fine (rebuilding the indexes took a while though).

You could even create an Entity Framework interceptor that filters out "dead" records when performing your queries (see Entity Framework soft delete implementation using database interceptor not working).
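If you happen to be on EF Core rather than EF6, a global query filter gets the same effect with less machinery than an interceptor (a sketch only; the entity and property names are assumed from your question):

    // In your DbContext: hide historical ("dead") rows from all queries by default
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Person>()
            .HasQueryFilter(p => p.RowEndDate == null);
    }

    // When you do need the full history, bypass the filter explicitly:
    // var history = context.Persons.IgnoreQueryFilters().ToList();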

Upvotes: 1
