ediblecode

Reputation: 11971

Unit Testing - dependent tests

I am creating a list of unit tests that are dependent on each other. For example, my first test creates a record in the database and checks that the return value is greater than 0.

The second test then checks the data of the record created in the first test. However, it needs the ID of the record produced in the first test.

Originally I called the second test from within the first test so that I could pass the ID as a parameter. That worked fine, except it meant there was essentially only one test.

I then created an ordered test list with the ID declared outside the scope of the test methods, but after the first unit test this value resets to 0, so the second unit test obviously fails.

Is there any way to create the tests so that they share the value produced in the first test?

The code is below:

[TestMethod]
public void TestNewLandlord_InsertIntoImportFiles_ReturnFileID()
{
    try
    {
        DataSet ds = EngineBllUtility.InsertIntoImportFiles(connString,
            @"C:\Documents and Settings\dTrunley\My Documents", "HFISNewLandlordTest.csv",
            "TestNewLandlord()", WindowsIdentity.GetCurrent().Name, "HFIS Landlords",
            "NA", 30247531, false);

        importFileId = long.Parse(ds.Tables[0].Rows[0]["ImportFileID"].ToString());
        Assert.IsTrue(importFileId > 0);
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}

[TestMethod]
public void TestNewLandlord_InsertIntoImportFiles_CorrectData()
{
    try
    {
        using (SqlConnection connection = new SqlConnection(connString))
        {
            using (SqlCommand sqlCommand = new SqlCommand(
                String.Format("SELECT * FROM [mydeposits].[import].[ImportFiles] WHERE [ImportFileID] = {0}", importFileId), connection))
            {
                connection.Open();
                using (SqlDataReader dr = sqlCommand.ExecuteReader())
                {
                    if (dr.HasRows)
                    {
                        bool correctData = true;
                        dr.Read();
                        if (!dr["ImportFileStatusID"].ToString().Equals("1"))
                            correctData = false;
                        if (!dr["HeadOfficeMemberID"].ToString().Equals("247531"))
                            correctData = false;
                        Assert.IsTrue(correctData);
                        TestCleanup();
                    }
                    else
                        throw new Exception("Import does not exist in database");
                }
            }
        }
    }
    catch (Exception ex)
    {
        // Clean up before failing: Assert.Fail throws, so any code
        // placed after it would never run.
        TestCleanup();
        Assert.Fail(ex.Message);
    }
}

Upvotes: 4

Views: 13218

Answers (2)

oleksii

Reputation: 35895

I am creating a list of unit tests that are dependent on each other. For example, my first test creates a record in the database and checks that the return value is greater than 0.

In my opinion, such an approach is incorrect. You risk creating evil code that will come back to bite you. Such code:

  • breaks the unit test principles
  • is hard to maintain
  • is very rigid, cumbersome and error-prone

Unit tests must be independent, or else don't write them at all. The reason is that as the complexity of your software grows, so does the complexity of your tests. If one test depends on others, maintaining the tests becomes a burden: the cost of the software increases, but the quality of the code doesn't. If your tests have no dependencies between them, the complexity of the software doesn't matter, because you can test each individual piece of functionality separately.

Another advantage is that you can run tests in parallel. For large systems, it is important that the Continuous Integration (and Deployment) cycle is fast, and by running tests in parallel you can significantly speed up your release cycle.
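A minimal sketch of why independence matters here, assuming NUnit 3 is an option (its Parallelizable attribute lets independent fixtures opt in to parallel runs); this is only safe because no test relies on another test's side effects:

using NUnit.Framework;

// Allow up to four NUnit worker threads in this test assembly.
[assembly: LevelOfParallelism(4)]

// Safe to run alongside any other parallelizable fixture, because
// every test here owns all of its own state.
[TestFixture]
[Parallelizable(ParallelScope.All)]
public class IndependentLandlordTests
{
    [Test]
    public void NewImportFileDefaultsToStatusOne()
    {
        // Purely local data: no shared IDs, so execution order never matters.
        var importFile = new { ImportFileStatusId = 1 };
        Assert.That(importFile.ImportFileStatusId, Is.EqualTo(1));
    }
}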

Suggested solution

What you are trying to write are probably integration tests. One way to handle them is to create a separate project for such tests. Each test will still be independent of the others, but each will probably require some SetUp and TearDown, in NUnit terms. The SetUp prepares everything the integration test needs in order to pass, and the TearDown performs a clean-up after each test, as sketched below.
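A rough sketch of that shape, with a hypothetical TestDataHelper standing in for the real insert and delete calls from the question:

using NUnit.Framework;

[TestFixture]
public class ImportFilesIntegrationTests
{
    private long importFileId;

    [SetUp]
    public void SetUp()
    {
        // Every test gets its own fresh record, so no test depends on another.
        importFileId = TestDataHelper.InsertImportFile();
    }

    [TearDown]
    public void TearDown()
    {
        // Remove the record so the next test starts from a clean database.
        TestDataHelper.DeleteImportFile(importFileId);
    }

    [Test]
    public void InsertedImportFileHasStatusOne()
    {
        // The ID comes from this test's own SetUp, never from a previous test.
        Assert.That(TestDataHelper.GetImportFileStatusId(importFileId), Is.EqualTo(1));
    }
}

// Hypothetical helper standing in for the real database layer.
internal static class TestDataHelper
{
    internal static long InsertImportFile() { return 1; }
    internal static void DeleteImportFile(long importFileId) { }
    internal static int GetImportFileStatusId(long importFileId) { return 1; }
}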

Upvotes: 13

Tony Hopkinson

Reputation: 20320

That's very naughty. However, you can uncomment (if the test class was created by the test wizard) or add the following:

//You can use the following additional attributes as you write your tests:

//Use ClassInitialize to run code before running the first test in the class
[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
}

//Use ClassCleanup to run code after all tests in a class have run
[ClassCleanup()]
public static void MyClassCleanup()
{
}

//Use TestInitialize to run code before running each test
[TestInitialize()]
public void MyTestInitialize()
{
}

//Use TestCleanup to run code after each test has run
[TestCleanup()]
public void MyTestCleanup()
{
}

No different as such from mocking up some common test data, though just as dubious.
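Applied to the question, a sketch would do the insert once in ClassInitialize and keep the ID in a static field; MSTest creates a new instance of the test class for each test method, which is why an instance field resets between tests. CreateImportFile and DeleteImportFile here are hypothetical stand-ins for the real EngineBllUtility/database calls:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LandlordImportTests
{
    // Static so it survives between tests: MSTest builds a fresh
    // instance of the test class for every test method.
    private static long importFileId;

    [ClassInitialize()]
    public static void MyClassInitialize(TestContext testContext)
    {
        // Insert the record once, before the first test in the class runs.
        importFileId = CreateImportFile();
    }

    [ClassCleanup()]
    public static void MyClassCleanup()
    {
        // Delete the shared record after every test in the class has run.
        DeleteImportFile(importFileId);
    }

    [TestMethod]
    public void InsertIntoImportFiles_ReturnsFileId()
    {
        Assert.IsTrue(importFileId > 0);
    }

    // Hypothetical stand-ins for the real insert and delete calls.
    private static long CreateImportFile() { return 1; }
    private static void DeleteImportFile(long importFileId) { }
}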

Upvotes: 3
