Trond Nordheim

Reputation: 1436

Orchestrating local development or testing environments with Azure Service Fabric

After playing around a little bit with Azure Service Fabric and watching the streams from BUILD, I'm curious whether there is anything in the pipeline around tooling for orchestrating environments for more complex services.

Say I build a service "Service1" which calls upon actors and services in "Service2" and "Service3"; any developer checking out the "Service1" repository to make changes must not only check out "Service2" and "Service3" as well, but also build and deploy them to Service Fabric before being able to properly test his/her changes to "Service1". Compare this to, for instance, Compose for Docker (I'm aware that Azure Service Fabric isn't a container-style infrastructure as such; it's just an example), where you can create a manifest describing your service and its dependencies, and then easily bootstrap the entire environment necessary to run and test your service.

This would also be useful for automated testing or even QA of your service: you could spin up a new cluster, deploy your service - and its dependencies - and run actual live tests against your changes before you start a production deployment.
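To make this concrete, here is a rough sketch of what a manifest-driven bootstrap could look like today with the Service Fabric SDK's PowerShell cmdlets. The manifest shape, package paths and application names below are made up for illustration; only the cmdlets themselves are standard.

    # Hypothetical "environment manifest" describing Service1's dependencies.
    # The data structure and all names/paths are illustrative only.
    $dependencies = @(
        @{ Name = 'fabric:/Service2'; TypeName = 'Service2Type'; Version = '1.0.0'; PackagePath = '..\Service2\pkg\Release' },
        @{ Name = 'fabric:/Service3'; TypeName = 'Service3Type'; Version = '1.0.0'; PackagePath = '..\Service3\pkg\Release' }
    )

    # Connect to the local development cluster installed by the SDK.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

    foreach ($dep in $dependencies) {
        # Copy the application package to the cluster's image store, register the
        # application type, and create an application instance. The image store
        # connection string depends on your cluster configuration.
        Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $dep.PackagePath `
            -ImageStoreConnectionString 'fabric:ImageStore' `
            -ApplicationPackagePathInImageStore $dep.TypeName
        Register-ServiceFabricApplicationType -ApplicationPathInImageStore $dep.TypeName
        New-ServiceFabricApplication -ApplicationName $dep.Name `
            -ApplicationTypeName $dep.TypeName `
            -ApplicationTypeVersion $dep.Version
    }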

This might be a question or proposal better suited as product feedback, but it would be interesting to get some more input on this before formulating it into a suggestion. I'm not deeply familiar with Compose, which I only used as an example, nor with Azure Service Fabric, so there may be better solutions to this problem out there.

Upvotes: 6

Views: 1098

Answers (4)

Satya Tanwar

Reputation: 1118

The way we have handled the relationship between the various service and actor packages is to do a full deploy. With a full deploy, any change a developer commits in their local git branch is tested against a complete application deployment (all services and actors). We use custom PowerShell scripts to do this deployment in the local environment.

Once the app is deployed, the developer runs our complete suite of SpecFlow tests, which uses mock data and exercises all the pieces of our Service Fabric application along with any new changes. Currently the deployment is a manual step performed by the developer, and developers are required to attach the SpecFlow test run results to their commit/merge.

Coming from the Microsoft BizTalk world, where apps/orchestrations behave like services and actors do in Service Fabric, we have learned that a full deploy is a much cleaner approach than a partial deploy for managing dependencies between applications in the same environment.

Our application consists of 3 services:

1. Stateless router service
2. Stateful cache service
3. Domain-specific actor microservices (we have multiple services in the same actor project)

Services 1 and 3 depend on service 2. Any change in the 'Router' or 'Actor' service needs to be tested along with the 'Cache' service, as they all work together.

Any change to 'Actor' (this one changes most often due to business logic) generally requires our team to deploy all 3 services and execute our SpecFlow test suite to test the complete functionality. This makes sure that everything works as expected.
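For illustration only - this is not the actual script, and all names and versions below are placeholders - a full deploy of a single application package to the local cluster could look something like this:

    # Minimal "full deploy" sketch: tear down the existing application instance,
    # then register and create the new version. Names and versions are placeholders.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

    # Remove the old instance and type if present (ignored on the first deploy).
    Remove-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' -Force -ErrorAction SilentlyContinue
    Unregister-ServiceFabricApplicationType -ApplicationTypeName 'MyAppType' `
        -ApplicationTypeVersion '1.0.0' -Force -ErrorAction SilentlyContinue

    # Deploy the freshly built package; the router, cache and actor services all
    # live in this one application package, matching the full-deploy approach.
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\MyApp\pkg\Release' `
        -ImageStoreConnectionString 'fabric:ImageStore' `
        -ApplicationPackagePathInImageStore 'MyAppType'
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyAppType'
    New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' `
        -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.0.0'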

As an alternative, you can create multiple projects for different actors instead of having all of them in the same project. We are working on this approach in our v2 implementation. That way, we only deploy the actor-based microservices where a change happened, along with any required services.
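To illustrate that partial-deploy idea, the changed actor application could be rolled forward on its own with a monitored application upgrade. The names and versions below are placeholders:

    # Sketch of a partial deploy: upgrade only the actor application that changed.
    # All names and versions are placeholders.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

    # Register the new version of just the actor application's package.
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\ActorApp\pkg\Release' `
        -ImageStoreConnectionString 'fabric:ImageStore' `
        -ApplicationPackagePathInImageStore 'ActorAppType_2.0.0'
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'ActorAppType_2.0.0'

    # Roll the existing instance forward to the new version, with automatic
    # rollback if health checks fail.
    Start-ServiceFabricApplicationUpgrade -ApplicationName 'fabric:/ActorApp' `
        -ApplicationTypeVersion '2.0.0' -Monitored -FailureAction Rollback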

I agree that our current design of combining all the actors in a single project is not the best, but it helped us build the app at a faster pace. Refactoring is always important once you learn what works and what doesn't for your needs.

Upvotes: 1

staceyw

Reputation: 36

Not sure I see the issue yet. Assuming TFS (or another source control system), the team may have a root project that contains all the service projects. You will already have a local working set of ServiceB on your box, which you can build and deploy. You build and deploy ServiceB (if you haven't before) to your local dev fabric, and it stays there until you delete it or deploy over it. Now you check out ServiceA for edit, make your change, and hit F5 on the ServiceA solution; your ServiceA finds ServiceB, which is already in the dev fabric. If someone changes ServiceB later, you get fresh bits and deploy that app again, so it stays updated in your dev fabric until it changes again.
An Azure fabric situation (when released) would seem to be much the same: ServiceB will already be running, and you change ServiceA and deploy your change.

Upvotes: 0

Vaclav Turecek

Reputation: 9050

Pulling down application packages and deploying them to your cluster (whether local, test, prod, whatever) is actually fairly simple, as long as you have a central repository of application packages similar to what Compose has with Docker Hub. In other words, the capability is there, but we don't yet have a centralized place like Docker Hub to store application packages, and you'd have to do a little scripting to pull things down yourself. So for now, yes, you would need to check out the repo that has your dependencies and build them (or inject mocks for local testing).
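As a sketch of that "little scripting" - with a plain file share standing in for a Docker Hub-like package repository, and all paths, names and versions made up - pulling down and deploying a dependency only when the cluster doesn't already have it could look like this:

    # Sketch: fetch a dependency's application package from a hypothetical central
    # file share and deploy it only if the cluster doesn't already have it.
    # The share path, type names and versions are made up.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

    $typeName = 'Service2Type'
    $version  = '1.2.0'

    $registered = Get-ServiceFabricApplicationType |
        Where-Object { $_.ApplicationTypeName -eq $typeName -and $_.ApplicationTypeVersion -eq $version }

    if (-not $registered) {
        # Stand-in for a Docker Hub-like repository: a plain file share of packages.
        Copy-Item "\\build-share\sf-packages\$typeName\$version" "$env:TEMP\$typeName" -Recurse -Force

        Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "$env:TEMP\$typeName" `
            -ImageStoreConnectionString 'fabric:ImageStore' `
            -ApplicationPackagePathInImageStore $typeName
        Register-ServiceFabricApplicationType -ApplicationPathInImageStore $typeName
    }

    if (-not (Get-ServiceFabricApplication -ApplicationName 'fabric:/Service2')) {
        New-ServiceFabricApplication -ApplicationName 'fabric:/Service2' `
            -ApplicationTypeName $typeName -ApplicationTypeVersion $version
    }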

On that note, please do post this suggestion to our product feedback!

Upvotes: 3

VipulM-MSFT

Reputation: 476

Once you have well-defined contracts between the microservices, you can upgrade and test them independently. The various microservices can be packaged in a single application package, or they can be part of different application packages. You can create the needed instances of the microservices within a single application or across different applications.

Let's say that microservice Service A is part of Application Package A, and it relies on another microservice, Service B, which is part of Application Package B. You have installed and instantiated Application Package B as "fabric:/AppB" and created a microservice "fabric:/AppB/ServiceB". You can then deploy and instantiate Application A and Service A as "fabric:/AppA/ServiceA", configured to talk to "fabric:/AppB/ServiceB". You can also look up a service of a particular type at runtime and, based on metadata published in the service's endpoint or in the Naming property store, dynamically connect to the services you need.
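As a rough illustration only - all names, types and versions are placeholders, and the application types are assumed to be registered already - instantiating the two applications and resolving ServiceB's endpoint through the Naming service could look like this:

    # Illustrative only: instantiate AppB and AppA as separate applications, then
    # resolve ServiceB's endpoint through the Naming service.
    Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

    New-ServiceFabricApplication -ApplicationName 'fabric:/AppB' `
        -ApplicationTypeName 'AppBType' -ApplicationTypeVersion '1.0.0'
    New-ServiceFabricApplication -ApplicationName 'fabric:/AppA' `
        -ApplicationTypeName 'AppAType' -ApplicationTypeVersion '1.0.0'

    # Look up where fabric:/AppB/ServiceB is currently listening (assuming a
    # singleton partition) so a client or script can connect to it dynamically.
    $resolved = Resolve-ServiceFabricService -ServiceName 'fabric:/AppB/ServiceB' -PartitionKindSingleton
    $resolved.Endpoints | ForEach-Object { $_.Address }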

Upvotes: -1
