Reputation: 447
I'm implementing unit tests in a finance system that involves several calculations. One of the methods receives an object as a parameter with more than 100 properties and, based on those properties, calculates the return.
In order to implement unit tests for this method, I need to have all of this object filled with valid values.
So... question: today this object is populated from the database. In my unit tests (I'm using NUnit), I need to avoid the database and create a mock of that object, to test only the method's return. How can I efficiently test this method with this huge object? Do I really need to fill in all 100 of its properties manually?
Is there a way to automate this object's creation using Moq (for example)?
Note: I'm writing unit tests for a system that already exists. It's not feasible to rewrite the whole architecture at this moment.
Thanks a million!
Upvotes: 22
Views: 7900
Reputation: 1401
I'd take this approach:
1 - Write unit tests for every combination of the 100-property input object, leveraging a tool to do this for you (e.g. Pex, IntelliTest), and ensure they all run green. At this point, refer to these tests as integration tests rather than unit tests, for reasons that will become obvious later.
2 - Refactor the code under test into SOLID chunks - methods that do not call other methods can be considered genuinely unit-testable, as they have no dependencies on other code. The remaining methods are still only integration-testable.
3 - Ensure ALL the integration tests are still running green.
4 - Create new unit tests for the newly unit testable code.
5 - With everything running green you can delete all/some of the superfluous original integration tests - up to you, only if you feel comfortable doing so.
6 - With everything running green, you can start reducing the 100 properties needed in the unit tests to only those strictly needed by each individual method. This will likely highlight areas for additional refactoring, and in any case will simplify the parameter object. That in turn will make future maintainers' efforts less bug-prone, and I'd wager that a historic failure to address the size of the parameter object when it had 50 properties is why it now has 100. Failing to address the issue now means it'll eventually grow to 150 parameters, which, let's face it, nobody wants.
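Step 2 could look something like this in practice. A hypothetical sketch (class and property names are invented) where a calculation that only touches a couple of values is pulled out into a pure, dependency-free method:

```csharp
// Hypothetical: the core math extracted into a pure static method with
// explicit inputs, so a unit test no longer needs the 100-property object.
public static class ReturnCalculator
{
    // No dependencies on other methods or the database: genuinely unit-testable.
    public static decimal NetReturn(decimal grossReturn, decimal feeRate)
    {
        return grossReturn * (1 - feeRate);
    }
}

[Test]
public void NetReturn_AppliesFeeRate()
{
    Assert.AreEqual(90m, ReturnCalculator.NetReturn(100m, 0.1m));
}
```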
Upvotes: 1
Reputation: 35106
For cases where I had to get a large amount of real, correct data for testing, I've serialised the data into JSON and put it directly into my test classes. The original data can be taken from your database and then serialised. Something like this:
[Test]
public void MyTest()
{
    // Arrange
    var data = GetData();

    // Act
    // ... exercise the method under test with `data`

    // Assert
    // ... verify your results
}

public MyBigViewModel GetData()
{
    return JsonConvert.DeserializeObject<MyBigViewModel>(Data);
}
public const String Data = @"
{
    'SelectedOcc': [29, 26, 27, 2, 1, 28],
    'PossibleOcc': null,
    'SelectedCat': [6, 2, 5, 7, 4, 1, 3, 8],
    'PossibleCat': null,
    'ModelName': 'c',
    'ColumnsHeader': 'Header',
    'RowsHeader': 'Rows'
    // etc. etc.
}";
This may not be optimal when you have a lot of tests like this, as it takes quite a bit of time to get the data into this format. But it gives you base data that you can modify for different tests once the serialisation is done.
To get this JSON you'll have to separately query the database for this big object, serialise it into JSON via JsonConvert.SerializeObject, and record this string into your source code - this bit is relatively easy, but takes some time because you need to do it manually... only once, though.
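The one-off capture step might look like this (a sketch; the repository call is a stand-in for however your code currently loads the object from the database):

```csharp
// One-off helper: load the real object from the DB, then dump it as
// indented JSON so it can be pasted into the test class as a constant.
var bigObject = repository.LoadBigViewModel(42); // hypothetical data-access call
string json = JsonConvert.SerializeObject(bigObject, Formatting.Indented);
File.WriteAllText("MyBigViewModel.json", json);  // or copy from the debugger
```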
I've successfully used this technique when I had to test report rendering and getting data from the DB was not a concern for the current test.
P.S. You'll need the Newtonsoft.Json package to use JsonConvert.DeserializeObject.
Upvotes: 7
Reputation: 1002
So... this is technically not an answer, since you asked about unit testing, and using an in-memory database makes it integration testing, not unit testing. However, I find that sometimes, when faced with impossible constraints, you need to give somewhere, and this may be one of those times.
My suggestion is to use SQLite (or similar) in your tests. There are tools to extract and duplicate your actual database into a SQLite database; you can then generate the scripts and load them into an in-memory version of the database. You can use dependency injection and the repository pattern to configure a different database provider in your "unit" tests than in the real code.
In this way, you can use your existing data and modify it when needed as preconditions for your tests. You will need to acknowledge that this is not true unit testing... meaning you are limited to what the database can actually generate (i.e. table constraints will prevent testing certain scenarios), so you cannot do a full unit test in that sense. These tests will also run more slowly because they are doing real database work, so plan for the extra time needed to run them. (Though they're still usually pretty quick.) Note that you can still mock out any other dependencies (e.g. if there's a service call in addition to the database, that's still a candidate for mocking).
If this approach seems useful to you, here are some links to get you going.
SQL Server to SQLite converter:
https://www.codeproject.com/Articles/26932/Convert-SQL-Server-DB-to-SQLite-DB
SQLite studio: https://sqlitestudio.pl/index.rvt
(Use that to generate your scripts for in memory use)
To use in memory, do this:
TestConnection = new SQLiteConnection("FullUri=file::memory:?cache=shared");
I have a separate script for the database structure from the data load, but, that's a personal preference.
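Putting those pieces together, the test setup might be sketched like this (using System.Data.SQLite; the script file names are assumptions):

```csharp
// Sketch: open a shared in-memory SQLite database, run the structure and
// data scripts, then hand the connection to the code under test via DI.
using (var connection = new SQLiteConnection("FullUri=file::memory:?cache=shared"))
{
    connection.Open();

    // "schema.sql" and "data.sql" are hypothetical names for the scripts
    // generated from the converted database.
    foreach (var script in new[] { "schema.sql", "data.sql" })
    {
        using (var cmd = new SQLiteCommand(File.ReadAllText(script), connection))
        {
            cmd.ExecuteNonQuery();
        }
    }

    // ... inject `connection` into the repository under test and run the test
}
```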
Hope that helps, and good luck.
Upvotes: 0
Reputation: 247143
Given the restrictions (bad code design and technical debt... I kid), a unit test will be very cumbersome to populate manually. A hybrid integration test would be needed, where you hit an actual data source (not the one in production).
Potential options:
Make a copy of the database and populate only the tables/data needed to build the dependent complex class. Hopefully the code is modularized enough that the data access layer can fetch and populate the complex class.
Mock the data access and have it import the necessary data from an alternate source (a flat file, maybe? CSV?).
All other code could be focused on mocking any other dependencies needed to perform the unit test.
Barring that the only other option left is to populate the class manually.
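The second option could be sketched with Moq roughly like this (the interface, its method, the helper, and the file name are all hypothetical):

```csharp
// Hypothetical: the data access sits behind an interface, so Moq can return
// a complex object deserialized from a flat file instead of hitting the DB.
var mockRepo = new Mock<IInvoiceRepository>();
mockRepo.Setup(r => r.GetCalculationInput(It.IsAny<int>()))
        .Returns(LoadInputFromCsv("calculation-input.csv")); // hypothetical helper

// The class under test receives the mocked dependency.
var calculator = new ReturnCalculator(mockRepo.Object);
var result = calculator.Calculate(42);
```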
As an aside, this has bad code smell all over it, but that is outside the scope of the question, given that it cannot be changed at the moment. I would suggest you mention this to the decision makers.
Upvotes: 5
Reputation: 170
First things first - you should acquire this object via an interface if the code currently pulls it from the DB. Then you can mock that interface to return whatever you want in your unit tests.
If I were in your shoes, I would extract the actual calculation logic and write tests against that new "calculator" class(es). Break everything down as much as you can. If the input has 100 properties but not all of them are relevant to each calculation - use interfaces to split them up. This will make the expected input visible, improving the code as well.
So, if your class is named, let us say, BigClass, you can create an interface to be used in a certain calculation. This way you are not changing the existing class or the way the rest of the code works with it. The extracted calculator logic would be independent and testable, and the code a lot simpler.
public class BigClass : ISet1
{
    public string Prop1 { get; set; }
    public string Prop2 { get; set; }
    public string Prop3 { get; set; }
}

public interface ISet1
{
    string Prop1 { get; set; }
    string Prop2 { get; set; }
}

public interface ICalculator
{
    CalculationResult Calculate(ISet1 input);
}
Upvotes: 1
Reputation: 236228
If those 100 values are not all relevant and you need only some of them, then you have several options.
You can create a new object (properties will be initialized with default values, like null for strings and 0 for integers) and assign only the required properties:
var obj = new HugeObject();
obj.Foo = 42;
obj.Bar = "banana";
You can also use some library like AutoFixture which will assign dummy values for all properties in your object:
var fixture = new Fixture();
var obj = fixture.Create<HugeObject>();
You can assign the required properties manually, or you can use the fixture builder:
var obj = fixture.Build<HugeObject>()
.With(o => o.Foo, 42)
.With(o => o.Bar, "banana")
.Create();
Another useful library for the same purpose is NBuilder.
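An NBuilder equivalent might look like this (a sketch; NBuilder fills the remaining properties with deterministic sequential values by default):

```csharp
// NBuilder assigns sequential default values to every property and lets
// you override only the ones your test cares about.
var obj = Builder<HugeObject>.CreateNew()
    .With(o => o.Foo = 42)
    .With(o => o.Bar = "banana")
    .Build();
```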
NOTE: If all the properties are relevant to the feature you are testing and they must have specific values, then no library can guess the values required for your test. The only way is to specify the test values manually. You can, however, eliminate a lot of work if you set up some default values before each test and change only what you need for a particular test. I.e. create helper method(s) which create an object with a predefined set of values:
private HugeObject CreateValidInvoice()
{
    return new HugeObject
    {
        Foo = 42,
        Bar = "banana",
        //...
    };
}
And then in your test just override some fields:
var obj = CreateValidInvoice();
obj.Bar = "apple";
// ...
Upvotes: 21