Reputation: 37587
We currently practice Test-Driven Development with tests running against a database. This database runs locally on each developer's machine, but they are all synchronized to a master database when the schema or data changes.
This has been going on for a couple of years and now we are finding that the data is becoming very stale as new features are added to the product.
Adding data to the test database has become "impossible" through the GUI, as simple changes can break hundreds of tests - we've gotten better at writing less fragile tests but the horse has now bolted.
What sort of strategy can we use for managing this issue?
We thought about copying a production database and just starting to write any new tests against that. I can see the problem recurring over time, though, and it would add confusion to our code base.
Upvotes: 2
Views: 319
Reputation: 2054
I personally do not consider tests that require external resources, such as databases, JMS queues, other services, etc., to be unit tests. I refer to them as "integration tests".
That being said, there is sometimes a need to build a suite of unit tests around one's OR (object-relational) layer. You are likely to want to test how the code behaves when the database contains data representing different scenarios, some of which may not be readily available in your development database.

What I found most useful is to have my unit tests build an in-memory database (e.g. using H2) and load it with the different datasets required by the different tests. This is fairly easy with Hibernate, where you can have your schema created automatically from your relational mapping files; you then only have to insert the data needed for your test cases. This is great because:
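The answer describes the pattern but doesn't show code. Here is a minimal sketch of the same idea using Python's built-in sqlite3 as a stand-in for H2: each test builds its own throwaway in-memory database and loads only the rows its scenario needs. The `customer` table and the helper names are hypothetical, chosen just for illustration.

```python
import sqlite3

def make_test_db(rows):
    """Build a fresh in-memory database (the sqlite3 analogue of H2's
    in-memory mode) and load it with one test scenario's data.
    Each call returns a private, isolated database."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)"
    )
    conn.executemany("INSERT INTO customer VALUES (?, ?, ?)", rows)
    return conn

def count_active(conn):
    """Stand-in for the production code under test."""
    return conn.execute(
        "SELECT COUNT(*) FROM customer WHERE active = 1"
    ).fetchone()[0]

# Each test fabricates exactly the dataset it needs -- no shared,
# slowly-rotting fixture database:
scenario = make_test_db([(1, "Ada", 1), (2, "Bob", 0), (3, "Cy", 1)])
assert count_active(scenario) == 2

edge_case = make_test_db([])  # the empty-database scenario
assert count_active(edge_case) == 0
```

Because every test owns its data, a change to one scenario cannot break hundreds of unrelated tests the way edits to a shared master database can.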
Upvotes: 1
Reputation: 15690
I'm not sure if this will work for you... when I ran into this, I developed a library of "ensure..." methods that check the state of a particular object in the database and force it into that state if it's not. Each test is responsible for a setup that "ensures" everything it needs is there.
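The answer doesn't show what an "ensure..." method looks like, so here is a hedged sketch in Python with sqlite3 (the original was presumably in another stack; the table, column names, and `ensure_customer` helper are all hypothetical). The key property is idempotence: the helper inserts the row if it's missing, corrects it if it has drifted, and leaves it alone if it's already right.

```python
import sqlite3

def ensure_customer(conn, customer_id, name, active=1):
    """Hypothetical 'ensure...' helper: check the state of one row and
    force it into the expected state if it is missing or different.
    Safe to call from every test's setup, any number of times."""
    row = conn.execute(
        "SELECT name, active FROM customer WHERE id = ?", (customer_id,)
    ).fetchone()
    if row is None:
        # Row is absent: create it in the required state.
        conn.execute(
            "INSERT INTO customer VALUES (?, ?, ?)",
            (customer_id, name, active),
        )
    elif row != (name, active):
        # Row exists but has drifted: force it back.
        conn.execute(
            "UPDATE customer SET name = ?, active = ? WHERE id = ?",
            (name, active, customer_id),
        )
```

A test that needs customer 1 to be an active "Ada" simply calls `ensure_customer(conn, 1, "Ada")` in its setup, regardless of what previous tests or stale fixture data left behind.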
In addition, there are some global aspects that aren't checked by every test, because they take a long time and aren't modified by the tests - they're typically set up at the beginning of an automated suite run, and not normally used for individual test runs. If individual tests break because of an issue that looks like it might be part of this setup, I just run that setup by hand.
This wasn't something that I just decided to do one day and then did all at once. I slowly fixed tests as I had to work on them for other reasons - often because they broke due to the kind of issue you've described. The "ensure..." library was developed incrementally too - nothing went in until it was needed.
Hope this helps!
Upvotes: 0