Reputation: 145
I'm currently working on several Node.js APIs backed by a Neo4j database. I want to write automated tests for each endpoint and make sure everything works (e.g. POSTing a user should create a user node in my graph with the expected labels and properties).
As a junior developer, I'm not really sure where to start.
My initial idea is to set up a fixture Neo4j database by executing a long Cypher query on an empty graph that generates the fixture data. The issue is that this fixture database will be affected by the different tests I run. For instance, if I want to test a DELETE endpoint, I will assert that some data has been deleted from my database.
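For illustration, the seeding step could be as simple as running that Cypher query with the neo4j-driver package; a simplified sketch (the query, URL and credentials below are placeholders, not my real fixtures):

const neo4j = require('neo4j-driver');

// Placeholder connection details for the test database.
const driver = neo4j.driver('bolt://localhost:7687', neo4j.auth.basic('neo4j', 'test'));

// Runs the (long) fixtures query against an empty graph.
async function loadFixtures() {
  const session = driver.session();
  try {
    await session.run(`
      CREATE (o:Organisation {name: 'organisation-test', address: 'here'})
      CREATE (a:User {email: 'admin@example.com'})
      CREATE (a)-[:IS_ADMIN]->(o)
    `);
  } finally {
    await session.close();
  }
}

module.exports = { driver, loadFixtures };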
I see two solutions for that:
Before testing an endpoint, I generate the data it needs, but then I have to make sure I delete that data after the test so it doesn't pollute the fixtures and affect other endpoint tests.
Before testing an endpoint, I clean the database, execute the fixtures query, and execute a second query that adds whatever extra data the endpoint test needs. That way, each endpoint test starts from a clean database, possibly with some extra data.
The first solution seems effort- and time-consuming, while the second seems a bit crude. Maybe my idea of automated tests is simply wrong.
Here is an example of a test I have written (using Mocha and Chai) where I change the admin of an organisation:
describe('UPDATE organisation', () => {
  before((doneBefore) => {
    const admin1 = updateOrganisationUtilities.createAdmin('[email protected]');
    const admin2 = updateOrganisationUtilities.createAdmin('[email protected]');
    Promise.all([admin1, admin2]).then(
      (admins) => {
        idAdmin = admins[0].id;
        idAdmin2 = admins[1].id;
        updateOrganisationUtilities.bindAdminToOrganisation(idAdmin, idOrganisation)
          .then(() => {
            doneBefore();
          });
      });
  });

  after((doneAfter) => {
    Promise.all([
      updateOrganisationUtilities.deleteExtraData(),
    ]).then(() => {
      doneAfter();
    });
  });

  it('Should update an organisation', (doneIt) => {
    const req = {
      method: 'PUT',
      url: `/organisations/${idOrganisation}`,
      payload: JSON.stringify({
        name: 'organisation-test',
        address: 'here',
        trainingNumber: '15',
        type: 'organisation.types.university',
        expirationDate: new Date(),
        idAdmin: idAdmin2,
      }),
      headers: {
        authorization: token,
      },
    };
    server.inject(req, (res) => {
      assert.equal(res.statusCode, 200);
      assert.exists(res.result.id);
      assert.equal(res.result.address, 'here');
      assert.equal(res.result.trainingNumber, '15');
      assert.equal(res.result.type, 'organisation.types.university');
      updateOrganisationUtilities.getLinkBetweenAdminAndOrga(res.result.id, idAdmin2)
        .then((link) => {
          assert.include(link, 'IS_ADMIN');
          assert.include(link, 'ROLE');
          doneIt();
        });
    });
  });
});
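These updateOrganisationUtilities helpers are essentially Cypher queries wrapped in small functions; getLinkBetweenAdminAndOrga, for example, could look roughly like this (a simplified sketch reusing the driver from above and assuming the nodes carry an id property):

// Simplified sketch, not the exact helper: returns the types of the relationships
// that exist between an organisation node and an admin node.
async function getLinkBetweenAdminAndOrga(idOrganisation, idAdmin) {
  const session = driver.session();
  try {
    const result = await session.run(
      `MATCH (o:Organisation {id: $idOrganisation})-[r]-(a:User {id: $idAdmin})
       RETURN collect(type(r)) AS types`,
      { idOrganisation, idAdmin }
    );
    return result.records[0].get('types');
  } finally {
    await session.close();
  }
}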
As you can see, it works, but I'm afraid it's going to be a pain to maintain this kind of structure once I have hundreds of tests that might alter the database. Moreover, I fear wasting too much time with this way of testing my endpoints, since I have to write specific setup and teardown functions/queries before and after each endpoint test.
I'm sorry if my question seems open-ended, but I'm very curious about how experienced developers deal with automated endpoint tests.
Upvotes: 1
Views: 347
Reputation: 66989
To ensure that unexpected changes from one test do not affect another test, you should completely empty out the DB after each test and create the appropriate fixture before each test. Not only does this ensure that you are testing with the data you expect, it also makes the tests easier to implement.
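With Mocha, this can be done once with root-level hooks instead of per-describe setup and teardown. A minimal sketch, assuming a shared driver instance and a loadFixtures() helper like the one sketched in the question (the ./test-helpers module is hypothetical):

// Root-level hooks apply to every test in every file Mocha loads.
const { driver, loadFixtures } = require('./test-helpers');

beforeEach(async () => {
  // Recreate the fixture data so every test starts from the same known state.
  await loadFixtures();
});

afterEach(async () => {
  // Completely empty the database, whatever the test did to it.
  const session = driver.session();
  try {
    await session.run('MATCH (n) DETACH DELETE n');
  } finally {
    await session.close();
  }
});

The per-describe before()/after() blocks then only need to create the extra data a specific test requires, and manual cleanup like deleteExtraData() becomes unnecessary.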
Upvotes: 2