smolarek999

Reputation: 519

SOA by microservices - how to normalize/transform messages

We are developing a solution that combines messaging patterns and microservices to define and run business flows.

It should work like this:

We should be able to define multiple flows, where each step calls a different service.

The output of one service may be the input for another. The problem is that they can have different schemas, so messages must be transformed/normalized somehow.

But which component should be responsible for such transformations? It should be configurable, because we want to add new flows without redeployment.

Our first idea is to store the responses from each service, and have each step apply an XSLT transformation to produce its input XML from the previous responses. But that may become configuration hell, because creating and testing such XSLTs won't be easy.
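To make the per-step transformation idea concrete, here is a minimal sketch in Python. It uses the standard library's ElementTree with a hand-written mapping per step; a real setup would likely load XSLT stylesheets from configuration instead, but the shape of the pipeline is the same. All element and step names here (orderResponse, paymentRequest, "payment") are hypothetical.

```python
import xml.etree.ElementTree as ET

def order_to_payment(response_xml: str) -> str:
    """Map one service's response schema to the next service's input schema."""
    src = ET.fromstring(response_xml)
    dst = ET.Element("paymentRequest")
    ET.SubElement(dst, "reference").text = src.findtext("orderId")
    ET.SubElement(dst, "amount").text = src.findtext("total")
    return ET.tostring(dst, encoding="unicode")

# One transformation registered per flow step; new flows would add
# entries here (or load them from configuration) rather than require
# redeploying the services themselves.
STEP_TRANSFORMS = {"payment": order_to_payment}

previous = "<orderResponse><orderId>42</orderId><total>99.50</total></orderResponse>"
print(STEP_TRANSFORMS["payment"](previous))
```

The registry is the part that would grow painful with XSLT: each entry is a stylesheet that has to be written and tested against both the upstream and downstream schemas.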

Do you have any suggestions on how to solve this properly?

Upvotes: 4

Views: 915

Answers (3)

MetalLemon

Reputation: 1400

I found this paper useful in shaping my thinking on the subject; it may be of help to you too.

See pg. 21, but read the paper up to that point first, as you need the context:

Practical SOA for Solution Architect

Upvotes: 0

Sergey Alaev

Reputation: 3982

If you have a long flow of messages processed by multiple, independently written components, you should use component-specific data formats.

Due to Conway's law, your components will belong to different business entities that have different views of the domain model. For example, an "Order" can mean completely different things, and carry different data, for different departments.

As for your question: every component should send messages that are specific to its own business entity. The receiving side knows the boundary between the different business worlds, and should transform and enrich incoming messages for further processing.

Of course, additional data is needed for that enrichment. It must be provided by other services that own the dictionaries and configuration.
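The receiving-side approach can be sketched as follows. The incoming message stays in the sender's vocabulary; the receiver maps it into its own model and pulls extra data from a dictionary/configuration service. All field names and the dictionary service here are hypothetical stand-ins.

```python
def enrich_incoming(message: dict, dictionary_service) -> dict:
    """Transform and enrich a message at the boundary between two business worlds."""
    # Translate the sender's field names into the receiver's own model.
    internal = {
        "order_ref": message["orderId"],
        "amount": message["total"],
    }
    # Enrichment: look up data the sender does not (and should not) know.
    internal["tax_rate"] = dictionary_service.tax_rate(message["country"])
    return internal

class FakeDictionaryService:
    """Stand-in for a real service owning reference data."""
    def tax_rate(self, country: str) -> float:
        return {"PL": 0.23, "DE": 0.19}.get(country, 0.0)

incoming = {"orderId": "42", "total": 99.5, "country": "PL"}
print(enrich_incoming(incoming, FakeDictionaryService()))
```

The point is that the mapping lives in the receiver, which is the only party that understands both its own model and the edge it sits on.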

P.S. "Adding new flows without redeployment" is a main reason why many, many projects fail. First, there is no reason to fear redeployment in a modern architecture: all your services should be clustered and handle failure gracefully. Second, you should strictly define what can be changed through configuration/rules/etc. and what cannot, and, more importantly, WHO can make those changes. Do not expect business people to write business rules by default :-)

Upvotes: 1

ahoffer

Reputation: 6546

Assuming you have multiple systems providing the services, use a canonical data model to avoid embedding transformations in middleware. Here is a link to Gregor Hohpe's Enterprise Integration Patterns site about canonical models:

a Canonical Data Model that is independent from any specific application. Require each application to produce and consume messages in this common format.

The idea is that there is an agreed-upon standard which services use for inter-operation. Typically, each system that provides services has its own internal data model that is different from the canonical model. This happens because changing legacy systems is too onerous, or because the canonical model is a bad fit for a system's internal representation of data. Each system is then individually responsible for converting between its internal schema and the canonical model for service I/O.

Each system team can use whatever tool it wants to perform the transformations: XSLT, Python scripts, Java, or some WYSIWYG tool from Oracle or Microsoft. The upshot is that any system whose internal data model is not identical to the canonical model must perform some mapping. It's unavoidable.
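The mapping each system owns can be sketched as an adapter between its internal representation and the shared canonical format, so services only ever exchange canonical messages. The CanonicalOrder fields and the legacy schema below are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass
class CanonicalOrder:
    """The agreed-upon inter-service format."""
    order_id: str
    total_cents: int

class LegacyBillingAdapter:
    """Converts the billing system's internal schema to/from the canonical model."""

    def to_canonical(self, internal: dict) -> CanonicalOrder:
        # The internal model stores the amount as a float in major units.
        return CanonicalOrder(
            order_id=internal["ORD_NO"],
            total_cents=round(internal["AMT"] * 100),
        )

    def from_canonical(self, msg: CanonicalOrder) -> dict:
        return {"ORD_NO": msg.order_id, "AMT": msg.total_cents / 100}

adapter = LegacyBillingAdapter()
print(adapter.to_canonical({"ORD_NO": "42", "AMT": 99.5}))
```

Only the adapter knows both schemas; the rest of the flow sees canonical messages, which is what keeps the transformations out of the middleware.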

Upvotes: 1
