Reputation: 1542
There seems to be a lot of "noise" going on around DDD, repositories, data mappers, etc... and very little "real world" implementation code to show newbies like myself what is "good", and what is "bad".
I've just finished reading the book Architecting Applications for the Enterprise, and it's been a huge eye-opener. I currently use the Active Record pattern on a project at work, and have been working on changing it to use Domain Model. I've used a lot of the examples of architecture in the book, as well as the Northwind Starter Kit code download that is a companion to the book.
Everything has been going great, but now I've hit my first "real world" data mapper problem: instead of my mapper just being responsible for taking one Entity object and persisting it to the database, I now have an Entity object with an IList<> collection that also needs to be mapped.
The main Entity object is Expert, here is the code:
public class Expert
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public virtual IList<Case> Cases { get; protected set; }
}
Here is the implementation of the collection from Expert, the Case object:
public class Case
{
    public int ID { get; set; }
    public string Name { get; set; }
}
Can't get much simpler than that.
Now, I have a DataMapper for each Entity, but my question is: when I go to map the Case collection in my ExpertDataMapper code, what is considered the "right" way to do that?
In the book, actual SQL is embedded in the ExpertDataMapper, which calls out to ADO.NET helper code once per Case item in the IList collection. Here is some pseudo-code:
public virtual void Create(Expert expert)
{
    // Insert the expert into the Expert table
    SqlHelper.ExecuteNonQuery(ProviderHelper.ConnectionString, CommandType.Text,
        "SQL TO INSERT EXPERT", this.BuildParamsFromEntity(expert));

    // Insert each Case item into the Case table, one call per item
    foreach (Case c in expert.Cases)
    {
        SqlHelper.ExecuteNonQuery(ProviderHelper.ConnectionString, CommandType.Text,
            "SQL TO INSERT CASE", this.BuildParamsFromEntity(c));
    }
}
So two things wrong with this jump out at me immediately:
I would assume the ExpertDataMapper should DELEGATE the task of inserting a Case to the CaseDataMapper... but I don't know if it's "right" or "wrong" for one DataMapper to instantiate another. Is that considered a regular problem with DataMappers? I can find no guidance on this problem, which I assume is fairly common. (A sketch of the delegation I have in mind follows these two points.)
Second, when it comes to DataMappers, I have not seen any concrete implementation of a simple create, update, etc. on an entity object's DataMapper with a simple collection association. The book I'm reading has no code to support it (and the code that is there looks suspect to me).
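To be clear about the first point, here is a rough, hypothetical sketch of the delegation I have in mind, in the same pseudo-code style as above. The CaseDataMapper class and its extra expertId parameter are my own inventions, not anything from the book:

public class ExpertDataMapper
{
    private readonly CaseDataMapper _caseMapper = new CaseDataMapper();

    public virtual void Create(Expert expert)
    {
        SqlHelper.ExecuteNonQuery(ProviderHelper.ConnectionString, CommandType.Text,
            "SQL TO INSERT EXPERT", this.BuildParamsFromEntity(expert));

        // Delegate persistence of each child Case to its own mapper,
        // passing the parent key so the mapper can set the foreign key.
        foreach (Case c in expert.Cases)
        {
            _caseMapper.Create(c, expert.ID);
        }
    }

    private SqlParameter[] BuildParamsFromEntity(Expert expert) { /* build params */ return null; }
}

public class CaseDataMapper
{
    public virtual void Create(Case c, int expertId)
    {
        SqlHelper.ExecuteNonQuery(ProviderHelper.ConnectionString, CommandType.Text,
            "SQL TO INSERT CASE", this.BuildParamsFromEntity(c, expertId));
    }

    private SqlParameter[] BuildParamsFromEntity(Case c, int expertId) { /* build params */ return null; }
}

Is instantiating one mapper inside another like this acceptable, or does it need to be injected or shared somehow (e.g. to share a transaction)?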
I have read, and own Martin Fowler's P of EAA book, so please don't tell me to look there. I am also aware of the ORM tools that are available to me, so I don't need to take care of implementing a DAL myself. I've played with EF 4.0, and I like it. But for this project, I don't have the option of using an ORM tool.
Most books/examples seem to stop at the same basic premise that as long as I have a one-to-one association between my Entity object and the table it gets persisted to via the DataMapper, the world is just rose-colored using a Domain Model approach...
If all my entities were one-to-one, I might as well just use Active Record and be done with it.
Any guidance, suggestions, insight? Sorry, it's a long post, but I've read other posts here with similar problems and found no really concrete answers or suggestions for how to deal with the problem presented here.
Upvotes: 3
Views: 3517
Reputation: 8276
This really depends on personal preference and opinion. Some developers would say there's nothing wrong with the code you've shown; others would say each class should be responsible only for its own state and properties; others would say something else again. You should use what's right for you, i.e. what you feel comfortable working with.

Personally, I prefer EF Code First, which lets me easily create my model classes and my database context class. I then abstract that with the repository pattern and a UnitOfWork class that takes care of committing transactions. Moreover, dependency injection is a great way to loosely couple your classes; I had major issues with tightly coupled classes that I couldn't resolve without an IoC container.
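To sketch what I mean, here is a minimal illustration of the repository-plus-UnitOfWork shape over EF Code First. All of the type names are mine, not a prescribed API:

using System;
using System.Data.Entity; // EF Code First

public interface IRepository<T> where T : class
{
    void Add(T entity);
    T Find(int id);
}

public interface IUnitOfWork : IDisposable
{
    IRepository<Expert> Experts { get; }
    IRepository<Case> Cases { get; }
    void Commit(); // one SaveChanges per business transaction
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;
    public EfRepository(DbContext context) { _context = context; }
    public void Add(T entity) { _context.Set<T>().Add(entity); }
    public T Find(int id) { return _context.Set<T>().Find(id); }
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public EfUnitOfWork(DbContext context)
    {
        _context = context;
        Experts = new EfRepository<Expert>(context);
        Cases = new EfRepository<Case>(context);
    }

    public IRepository<Expert> Experts { get; private set; }
    public IRepository<Case> Cases { get; private set; }

    // EF tracks the whole object graph, so saving an Expert
    // also inserts its Cases in a single transaction.
    public void Commit() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}

With that in place, saving an Expert and its Cases is one Add plus one Commit, and the collection-mapping question disappears into the ORM.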
This article is by far the best resource on the subject that I've seen. The author does a great job of building a fully working and, most importantly, scalable framework to work with.
Good luck :)
Upvotes: 3
Reputation: 71573
On the first point, you are correct from a "SOLID" perspective: instead of handling the persistence itself, it would be more maintainable and less redundant for the ExpertDataMapper to use a separate CaseDataMapper. That was probably not done for several reasons, mostly tied to simplicity. If you have a separate class that should do work within the same transaction, you have to pass the transaction around. This in itself is not terrible, but it introduces more questions about how to keep the implementation architecture-independent: if you just pass a Transaction around, you're coupled to vanilla ADO.NET and can't upgrade later to an ORM like MSEF, NHibernate, Linq2SQL, etc. So you need a UnitOfWork pattern, which lets you hide the actual Transaction, Session, DataContext, or whatever inside the Repository (see the sketch below). By that point, the relatively simple code snippet has become two full class definitions with plenty of supporting plumbing.
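To illustrate, here is a minimal sketch of what I mean by hiding the transaction behind a unit of work in plain ADO.NET. All names are mine and error handling is omitted:

using System;
using System.Data.SqlClient;

public class AdoUnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public AdoUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Mappers ask the unit of work for commands; they never touch
    // the connection or transaction directly, so swapping the
    // persistence technology later only changes this class.
    public SqlCommand CreateCommand(string sql)
    {
        return new SqlCommand(sql, _connection, _transaction);
    }

    public void Commit() { _transaction.Commit(); }

    public void Dispose()
    {
        _transaction.Dispose();
        _connection.Dispose();
    }
}

ExpertDataMapper and CaseDataMapper would both take the same AdoUnitOfWork in their constructors, so the Expert and its Cases commit or roll back together.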
To avoid all this for the purposes of illustrating one Expert, they just put the code to save Cases inside the ExpertDataMapper. That basically assumes a Case will always be referenced as a child of an Expert, so it wouldn't be worth splitting the functionality out into something reusable. KISS. This assumption may be true; if not, it is left as an exercise for the reader to refactor that logic out into a helper.
On the second point, you are also correct, and there really is no way around it: every row built from data that SQL Server doesn't yet know about must be inserted one row at a time. You MIGHT be able to set up a bulk insert (a sketch follows below), but I can guarantee it's more trouble than it's worth until you get into the thousands of records. Any ORM you'd use would issue the same SQL.
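For completeness, a bulk insert with SqlBulkCopy would look roughly like this. The table and column names here are assumptions about your schema, and notice that it bypasses the mappers entirely, which is exactly the extra trouble I mean:

using System.Data;
using System.Data.SqlClient;

public static class CaseBulkWriter
{
    public static void BulkInsert(Expert expert, string connectionString)
    {
        // Stage the collection in an in-memory DataTable first
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("ExpertID", typeof(int));
        foreach (Case c in expert.Cases)
        {
            table.Rows.Add(c.Name, expert.ID);
        }

        // Stream all rows to the server in one round trip
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "[Case]"; // CASE is a reserved word in T-SQL
            bulk.ColumnMappings.Add("Name", "Name");
            bulk.ColumnMappings.Add("ExpertID", "ExpertID");
            bulk.WriteToServer(table);
        }
    }
}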
On not using an ORM: With a sufficiently complex domain/data model, if you do not use a prefab ORM, you will end up rolling your own if you want it to conform to design methodologies like SOLID.
Upvotes: 4