Reputation: 249
My organization started using Pact to create and verify contracts between REST services/microservices written in Java about half a year ago. We are having a hard time deciding what the appropriate scope of a provider test should be, and would love input from the experience of other Pact users out there.
Basically the discussion revolves around where to mock/stub in the provider tests. In a service you have to mock external calls to other services at the very least, but you also have the option of mocking much closer to the REST resource class.
We boiled it down to two options:
1. The first option is that a provider test should be a strict contract test and only exercise the provider service's REST resource class, mocking/stubbing out the service classes/orchestrators etc. used from there. This contract test would be augmented with component tests that would test the parts stubbed/mocked by the provider test.
2. The second option is to use the provider test as a component test that would exercise the entire service component for each request. Only transitive external calls to other components would be mocked/stubbed.
These are our thoughts on the pros of each option:
Pros for option 1:
Pros for option 2:
I would be really interested to hear how your provider tests typically look in this regard. Is there a best practice?
Clarifying what we mean by "component": a component is a microservice or a module in a larger service application. We took the definition of 'component' from Martin Fowler's http://martinfowler.com/articles/microservice-testing/.
A provider service/component typically has a REST endpoint in a Jersey resource class. This endpoint is the provider endpoint for a Pact provider test. An example:
@Path("/customer")
public class CustomerResource {
@Autowired private CustomerOrchestrator customerOrchestrator;
@GET
@Path("/{customerId}")
@Produces(MediaType.APPLICATION_JSON)
public Response get(@PathParam("customerId") String id) {
CustomerId customerId = CustomerIdValidator.validate(id);
return Response.ok(toJson(customerOrchestrator.getCustomer(customerId))).build();
}
In the above example the @Autowired (we use Spring) CustomerOrchestrator could either be mocked when running the provider test, or you could inject the real "Impl" class. If you choose to inject the real CustomerOrchestratorImpl, it will have additional @Autowired bean dependencies, which in turn may have others, and so on. Eventually the dependencies bottom out either in a DAO object that makes a database call or in a REST client that performs HTTP calls to other downstream services/components.
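To make the dependency graph concrete, the real Impl might look roughly like this (a sketch only; CustomerDao, LoyaltyServiceClient and Customer are illustrative names, not our actual classes):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Illustrative sketch: CustomerDao, LoyaltyServiceClient and Customer are
// made-up names showing the typical shape of the dependency graph.
@Component
public class CustomerOrchestratorImpl implements CustomerOrchestrator {

    @Autowired
    private CustomerDao customerDao; // this chain ends in a database call

    @Autowired
    private LoyaltyServiceClient loyaltyClient; // this chain ends in an HTTP call downstream

    @Override
    public Customer getCustomer(CustomerId customerId) {
        Customer customer = customerDao.findById(customerId);
        customer.setLoyaltyStatus(loyaltyClient.fetchStatus(customerId));
        return customer;
    }
}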
If we adopted my "option 1" solution in the CustomerResource example, we would mock the customerOrchestrator field in the CustomerResource; if we adopted "option 2", we would inject the real Impl classes for every dependency in the CustomerResource dependency graph, and create mocked database entries and mocked downstream services instead.
As a side note, I should mention that we rarely use a real database in provider tests. In the cases where we adopted "option 2", we mocked the DAO-class layer instead of mocking the actual database data, to reduce the number of moving parts in the test.
We have created a "test framework" that automatically mocks any @Autowired dependency that is not explicitly declared in the Spring context, so stubbing/mocking is a lightweight process for us. This is an excerpt of a provider test that exercises the CustomerResource and initializes the stubbed CustomerOrchestrator bean:
@RunWith(PactRunner.class)
@Provider("customer-rest-api")
@PactCachedLoader(CustomerProviderContractTest.class)
public class CustomerProviderContractTest {

    @ClassRule
    public static PactJerseyWebbAppDescriptorRule webAppRule = buildWebAppDescriptorRule();

    @Rule
    public PactJerseyTestRule jerseyTestRule = new PactJerseyTestRule(webAppRule.appDescriptor);

    @TestTarget
    public final Target target = new HttpTarget(jerseyTestRule.port);

    private static PactJerseyWebbAppDescriptorRule buildWebAppDescriptorRule() {
        return PactJerseyWebbAppDescriptorRule.Builder.getBuilder()
                .withContextConfigLocation("classpath:applicationContext-test.xml")
                .withRestResourceClazzes(CustomerResource.class)
                .withPackages("api.rest.customer")
                .build();
    }

    @State("that customer with id 1111111 exists")
    public void state1() throws Exception {
        // Fetch the auto-mocked CustomerOrchestrator bean from the Spring context and stub it
        CustomerOrchestrator customerOrchestratorStub = SpringApplicationContext.getBean(CustomerOrchestrator.class);
        when(customerOrchestratorStub.getCustomer(eq(CustomerIdValidator.validate("1111111"))))
                .thenReturn(createMockedCustomer("1111111"));
    }
...
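For contrast, had we adopted option 2 for the same interaction, the @State method would let the real CustomerOrchestratorImpl run and stub the DAO layer underneath it instead. A sketch, where CustomerDao and createMockedCustomerEntity are illustrative names:

    // Option 2 variant: the real CustomerOrchestratorImpl runs; only the DAO layer is stubbed.
    @State("that customer with id 1111111 exists")
    public void state1() throws Exception {
        CustomerDao customerDaoStub = SpringApplicationContext.getBean(CustomerDao.class);
        when(customerDaoStub.findById(eq(CustomerIdValidator.validate("1111111"))))
                .thenReturn(createMockedCustomerEntity("1111111"));
    }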
Upvotes: 21
Views: 3158
Reputation: 3387
We abandoned Pact.
In my experience, pact solves some very specific problems that we didn't have. Like using a sledgehammer to drive a nail into the wall.
Back to your question:
Pact is basically an attempt at creating a shared mock -- a declared request and response -- that is trustworthy because it is shared. It solves the problem of untrustworthy mocks by having both client and service use the same declarations. If you are further mocking out your service to verify those mocks, then you're just sidestepping what pact is about.
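To make that concrete, a consumer-side declaration looks roughly like this. A minimal sketch using the pact-jvm JUnit consumer DSL; package and rule class names vary between pact-jvm versions, and the consumer name "customer-ui" is made up:

import org.junit.Rule;
import org.junit.Test;
import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.PactProviderRuleMk2;
import au.com.dius.pact.consumer.PactVerification;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.model.RequestResponsePact;

public class CustomerClientPactTest {

    // Starts a mock provider that replays the declared interaction during the test
    @Rule
    public PactProviderRuleMk2 provider = new PactProviderRuleMk2("customer-rest-api", this);

    // The declared request/response pair -- the "shared mock" both sides verify against
    @Pact(provider = "customer-rest-api", consumer = "customer-ui")
    public RequestResponsePact customerExists(PactDslWithProvider builder) {
        return builder
                .given("that customer with id 1111111 exists")
                .uponReceiving("a request for customer 1111111")
                    .path("/customer/1111111")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(new PactDslJsonBody().stringType("id", "1111111"))
                .toPact();
    }

    @Test
    @PactVerification("customer-rest-api")
    public void fetchesCustomer() {
        // exercise the real HTTP client code against the rule's mock server here
    }
}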
The problem with mocks is that they aren't trustworthy. They're incomplete, they go out of date, and it can be hard to tell whether they're correct just by looking at them; capturing realistic API output, whether by manual capture or automated replay, isn't always feasible.
The typical mitigation strategy for untrusted mocks or mock data is to run another integration test that doesn't use the mock, with some shared validation code.
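In practice that shared validation can be as small as one assertion helper that both the mock-based test and the mock-free integration test call. A sketch; the helper name and the expected response shape are made up:

import com.fasterxml.jackson.databind.JsonNode;
import static org.junit.Assert.assertTrue;

// Illustrative shared validation: the unit test runs it against the mock's canned
// response, and a separate integration test runs it against the live endpoint,
// so a drifting mock is caught by the same expectations.
public final class CustomerResponseValidation {

    private CustomerResponseValidation() {}

    public static void assertValidCustomer(JsonNode customer) {
        assertTrue("customer must have an id", customer.hasNonNull("id"));
        assertTrue("id must be a 7-digit string",
                customer.get("id").asText().matches("\\d{7}"));
    }
}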
So, I would say, the whole point of pact is to have option #2.
Why I don't like pact
I would point out that pact does not solve all integration testing problems by any stretch; the OP and some answers call this out. It seems like a heavy-handed approach to gatekeeping your environments, and probably indicates a lack of discipline/standards with those CI/CD quality gates - just one opinion here.
I feel like pact is selling a solution most don't need.
Here's how I would describe the use case for needing pact.io (must fit all of these)
HTTP REST interactions are king (events & streams are ignored)
Fairly mainstream tech stacks with good pact language support
You don't mind a custom API specification DSL - ignore jsonschema.org or OAS, etc.
Your mocks aren't trustworthy unless they are shared and verified regularly.
Consumers are driving the API changes (probably a rich UI)
Communication/coordination about API changes is difficult (maybe even dysfunctional):
You can't trust your dev/stage environment(s) -- they're too unstable, the service mesh too complex without fault tolerance, and creating an alt-stage or pre-prod environment is out of the question.
You can't validate your dev builds against stage instances in a meaningful way
You can't create shared integration tests due to high complexity and heavy build/test turnaround times.
It's worth teams spending huge amounts of time to understand pact, pact hooks, pact dependency tracking, can-i-deploy, and pact test design. And it's worth continuing to pay that cost for every new hire, and for every employee to spend extra time wrapping their head around the hidden intricacies of pact every 6-12 months.
Ok, so I am a bit turned off by Pact in general. Jaded? I feel a bit wary of any claim that most implementations of some technology failed because they lacked a top-down directive to use it.
Suggesting alternatives
Upvotes: 0
Reputation: 249
We have decided on option 2; that is, we will strive to include as much real code as possible in the provider tests. The primary reason was that achieving test symmetry between the beans mocked in the provider test and a complementary component test would be more complicated than living with the slightly more involved provider test that option 2 entails.
Thanks for your inputs!
Upvotes: 1
Reputation: 12847
I say go with Option 2.
The reason is that the whole raison d'être for Pact is to have confidence in your code change: that it won't break the consumer, or that if it does, you find a way to manage that change (versioning) while still keeping the interactions intact.
To be fully confident, you must use as much of the 'real' code as possible; whether the data is mocked or real doesn't matter much at that point. Remember that you want to test as much of the production code as possible before deploying it.
The way I use it, I have two types of tests: unit tests and Pact tests. The unit tests make sure my code doesn't break on silly mistakes or bad inputs, and they are a great place for mocking dependencies. The Pact tests verify the interactions between the consumer and the provider, and ensure that a code change doesn't affect the request or the data format. You could potentially mock dependencies here too, but that might leave something open to breaking in production, because the dependency might affect the request or the data.
In the end though, it's all up to preference on how you use Pact, as long as you use it to test out the contracts between consumer and provider.
Upvotes: 3
Reputation: 1318
This is a question that comes up often, and my answer is "do what makes sense for each service". The first microservices that Pact was used for were so small and simple that it was easiest to just test the whole service without any mocks or stubs. The only difference between a call to the real service and a call in the verification test was that we used SQLite for the tests. Of course, we stubbed calls to downstream services.
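In a Java/Spring setup like the one in the question, that database swap might look something like this (a sketch using the xerial sqlite-jdbc driver; the config class and bean names are illustrative):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.sqlite.SQLiteDataSource;

// Test-only Spring config: overrides the production DataSource so the real DAOs
// run against a throwaway SQLite database instead of the real one.
@Configuration
public class ProviderTestDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        SQLiteDataSource ds = new SQLiteDataSource();
        ds.setUrl("jdbc:sqlite::memory:"); // in-memory database, discarded after the test
        return ds;
    }
}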
If it's more complex to set up real data than it is to stub, then I'd use stubs. However! If you are going to do this, then you need to make sure that the calls you stub are verified in the same way that Pact works: use some sort of shared fixture, and make sure that for every call you stub in a Pact provider test, you have a matching test that ensures the behaviour is what you expect. It's like you're chaining collaboration/contract tests together, as in the sketch below.
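A sketch of that chaining under the question's Java setup; CustomerFixtures, the Customer constructor, and the test wiring are all illustrative. The provider test's stub would answer with CustomerFixtures.existingCustomer(), and the matching collaboration test proves the real collaborator actually honours that same fixture:

import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import static org.junit.Assert.assertEquals;

// Shared fixture: the single source of truth for "customer 1111111 exists",
// used both by the provider test's stub and by the collaboration test below.
final class CustomerFixtures {
    static final String EXISTING_ID = "1111111";

    static Customer existingCustomer() {
        return new Customer(EXISTING_ID); // illustrative constructor
    }
}

// Wired up with your Spring test runner of choice; the real orchestrator runs here.
public class CustomerOrchestratorCollaborationTest {

    @Autowired
    private CustomerOrchestrator realCustomerOrchestrator;

    // Matches the call stubbed in the Pact provider test: whatever the stub
    // promises there, this test proves the real collaborator delivers.
    @Test
    public void realOrchestratorReturnsTheExistingCustomer() {
        Customer customer = realCustomerOrchestrator.getCustomer(
                CustomerIdValidator.validate(CustomerFixtures.EXISTING_ID));
        assertEquals(CustomerFixtures.existingCustomer(), customer);
    }
}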
Upvotes: 5