What does it mean that a service is a logical boundary?
As Philippe Kruchten’s 4+1 view of software architecture explains, we should not automatically force the logical view to be the same as the physical implementation or, for that matter, the deployment view.
This often means that a service isn’t a single implementation or deployment artifact.
Examples of business capabilities are: Sales, Shipping, Marketing, Billing, Policy Management, etc.
These capabilities are pretty broad in their scope. You’ve probably read that a microservice should follow the Single Responsibility Principle (SRP) – it should do one thing and do it well.
But if a microservice should cover an entire business capability, it would most likely be fairly big, which goes against many of the qualities we like about microservices, such as:
Small (easy to comprehend)
Replaceable (discard the old and write a new in 2 weeks)
Upgradable (upgrade just the parts you want without interrupting other parts)
A large service is still individually deployable, but from a scaling point of view it’s typically all or nothing: either you scale the entire deployable unit or you don’t.
What if only certain use cases need scaling? This is often harder with too big a deployable unit (what some people refer to as a monolith), because the individual components inside the unit are too tightly coupled, like a tangled ball of yarn.
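One way to resolve this tension is to keep the business capability as the logical service boundary while deploying and scaling its parts separately. A minimal sketch of such a deployment descriptor, with all component names, artifacts and instance counts invented for illustration:

```python
# Hypothetical deployment descriptor: the logical "Sales" service (one
# business capability) is implemented as several individually deployable
# and individually scalable components. All names here are invented.
sales_service = {
    "capability": "Sales",
    "components": {
        # hot path: scaled out aggressively
        "order-intake":     {"artifact": "sales-order-intake.jar", "instances": 12},
        # background work: modest scale
        "quote-generation": {"artifact": "sales-quotes.jar",       "instances": 2},
        "sales-reporting":  {"artifact": "sales-reporting.jar",    "instances": 1},
    },
}

def instances_for(component: str) -> int:
    """Look up how many instances a given component runs with."""
    return sales_service["components"][component]["instances"]
```

The point is that "Sales" remains one logical boundary (one owner of sales data and logic), while `order-intake` can be scaled to many instances without touching the reporting component at all.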
First of all, sorry to those who’ve been waiting for part 5. My schedule has been too busy to find focused time to write it before now 😳.
In part 4 we looked at building our services around functional areas, i.e. business capabilities/bounded contexts. We discussed that the business data and logic pertaining to a business capability must be encapsulated inside a service to ensure a single source of truth for the data. This also means that one service isn’t allowed to own the same data that another service owns (we want to avoid multi-master services). Since we want our services to be autonomous (i.e. able to make a decision without having to communicate synchronously with other services), we also looked at how to avoid 2 way communication (RPC, REST or Request/Response) between services. The options we looked at were Composite UIs and Data duplication over Events.

We also briefly discussed a 3rd option which involves a different view on services, where they’re not autonomous. Instead, services expose intentional interfaces and coordinate updates/reads between several Systems of Record (SoR) that themselves are autonomous. I believe that organizations with many large legacy systems (and most likely multi-master systems) should look into the 3rd option, as it may create less friction than trying to develop new autonomous services that are well aligned with business capabilities.
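The “data duplication over events” option can be sketched with a minimal in-memory example (all service, event and field names are invented for illustration): the owning service publishes a fact when its data changes, and the subscribing service keeps its own local copy, so it never needs a synchronous call back to get the data.

```python
from collections import defaultdict

# Toy in-memory event bus - a stand-in for durable messaging infrastructure.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

class SalesService:
    """Owns customer master data (invented example capability)."""
    def __init__(self, bus):
        self._bus = bus

    def change_billing_address(self, customer_id, address):
        # ...update own data store, then publish the fact (1 way, fire and forget)
        self._bus.publish("CustomerBillingAddressChanged",
                          {"customer_id": customer_id, "address": address})

class BillingService:
    """Keeps its own duplicated copy of the data it needs."""
    def __init__(self, bus):
        self._addresses = {}  # Billing's local copy
        bus.subscribe("CustomerBillingAddressChanged", self._on_changed)

    def _on_changed(self, event):
        self._addresses[event["customer_id"]] = event["address"]

    def address_for_invoice(self, customer_id):
        # Answered entirely from local data - no synchronous call to Sales.
        return self._addresses[customer_id]

bus = EventBus()
sales = SalesService(bus)
billing = BillingService(bus)
sales.change_billing_address("cust-42", "1 Main Street")
```

After the event has been handled, Billing can produce an invoice even if Sales is completely unavailable — which is exactly the autonomy property discussed above.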
In part 5 I will continue discussing SOA and Microservices in the light of autonomous services.
Last week we enjoyed the company of many other SOA- and microservice-interested developers and architects at the µService Conference in London – https://skillsmatter.com/conferences/6312-mucon#program
We gave a talk on Thursday called “Microservices – SOA reminded of what it was supposed to deliver?” – the video and slides are available online.
If you’re new to SOA, Microservices and/or DDD, we highly recommend watching Udi Dahan’s keynote before watching the video of our talk.
In part 3 we saw that, in order to ensure a higher degree of autonomy for our services, we need to avoid (synchronous) 2 way communication (RPC/REST/etc.) between services and instead use 1 way communication.
A higher level of autonomy goes hand in hand with a lower degree of coupling. The less coupling we have, the less we need to bother with contract and data versioning.
We also increase our services’ stability – a failure in another service doesn’t directly affect our service’s ability to respond to stimuli.
But how can we get any work done if we only use 1 way communication? How can we get any data back from other services this way?
The short answer is that you can’t, but with well-defined Service Boundaries you (in most cases) shouldn’t need to call other services directly from your service to get data back.
What is a service boundary?
It’s basically the term used to define the business data and functionality that a Service is responsible for. In SOA: synchronous communication, data ownership and coupling we covered Service principles such as Boundaries and Autonomy in detail.
Boundaries determine what’s inside and outside of a Service. In part 2 we used the aggregate pattern to analyse which data belonged inside the Legal Entity service.
In the case of the Legal Entity service we realised that Legal Entity and its Addresses belonged together, because a LegalEntity and its associated Addresses were created, changed and deleted together. By replacing two services with one, we gained full autonomy for the Legal Entity service, whereby we could avoid orchestration and the handling of all the error scenarios that can result from orchestrating data-mutating calls between services (the LegalEntity service and the Address service).
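The aggregate idea can be sketched roughly like this (class and field names are illustrative, not taken from the actual service): LegalEntity is the aggregate root and owns its Addresses, so they are created, changed and deleted through the root, inside one service and one transaction, with no orchestration against a separate Address service.

```python
from dataclasses import dataclass, field

@dataclass
class Address:
    street: str
    city: str

@dataclass
class LegalEntity:
    """Aggregate root: the only entry point for changing its Addresses."""
    legal_entity_id: str
    name: str
    addresses: list = field(default_factory=list)

    def add_address(self, street: str, city: str) -> None:
        # Address lifecycle is handled inside the aggregate boundary.
        self.addresses.append(Address(street, city))

    def remove_all_addresses(self) -> None:
        # Deleting the entity's addresses is a local, atomic operation -
        # no cross-service call that could partially fail.
        self.addresses.clear()

entity = LegalEntity("le-1", "Acme Ltd")
entity.add_address("1 Main Street", "Copenhagen")
```

Because both objects live inside the same service boundary, a single local transaction covers the whole change — there is nothing to compensate if it fails.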
In the case of the Legal Entity the issue of coupling was easily solved, but what happens when you have a more complex set of data and relationships between these data? We could just pile all of that data into one service and thereby avoid the problem of having data mutations across processing boundaries (i.e. different services that are hosted in other OS processes or on different physical servers). The issue with this approach is that it quickly brings us into monolith territory.

There’s nothing per se wrong with monoliths. Monoliths can be built using many of the same design principles described here, e.g. as modules instead of as microservices, which are bundled together and deployed as a single unit – whereas microservices are often deployed individually (that’s at least one of the major qualities that people talk about in relation to microservices).
In Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 2, we again discussed the problems with using (synchronous) 2 way communication between distributed (micro)services. We discussed how the coupling problems caused by 2 way communication combined with microservices actually result in the reinvention of distributed objects. We also discussed how the combination of 2 way communication and the lack of reliable messaging and transactions causes complex compensation logic in the event of a failure.
After a refresher on the 8 fallacies of distributed computing, we examined an alternative to 2 way communication between services. We applied Pat Helland’s “Life Beyond Distributed Transactions – An Apostate’s Opinion” (PDF format), which takes the position that distributed transactions are not the solution for coordinating updates between services. We discussed why distributed transactions are problematic.
According to Pat Helland, we must find the solution to our problem by looking at how we split our data into independently managed pieces (entities/aggregates) and how we coordinate between these pieces using messaging.
We also discussed how using 2 way (synchronous) communication between our services results in hard coupling and other annoyances:
It results in communication-related coupling (because data and logic are not always in the same service)
It also results in contractual, data and functional coupling, as well as high latency due to network communication
Layered coupling (persistence is not always in the same service)
Temporal coupling (our service cannot operate if it is unable to communicate with the services it depends upon)
The fact that our service depends on other services decreases its autonomy and makes it less reliable
All of this results in the need for complex compensation logic due to the lack of reliable messaging and transactions.
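The temporal-coupling point in particular can be illustrated with a toy one-way messaging sketch (all names invented): the sender only depends on a durable queue being available, not on the receiving service being up at the same moment.

```python
from collections import deque

class DurableQueue:
    """Toy stand-in for a durable message queue/broker."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)  # stored until consumed

    def drain(self, handler):
        # Called by the consumer whenever it is (back) online.
        while self._messages:
            handler(self._messages.popleft())

queue = DurableQueue()

# The sender keeps making progress while the consumer is down...
queue.send({"type": "OrderAccepted", "order_id": 1})
queue.send({"type": "OrderAccepted", "order_id": 2})

# ...and the consumer simply catches up once it comes back.
processed = []
queue.drain(processed.append)
```

Contrast this with a synchronous call: there, the sender would have blocked or failed the moment the consumer went down — the temporal coupling described above.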
If we combine (synchronous) 2 way communication with small/micro services, modelled according to e.g. the rule 1 class = 1 service, we are effectively sent back to the 1990s with CORBA, J2EE and distributed objects.
Unfortunately, it seems that new generations of developers, who did not experience distributed objects and therefore did not take part in the realization of how bad an idea they were, are trying to repeat history – this time with new technologies, such as HTTP instead of RMI or IIOP.
Jay Kreps summed up the current Micro Service approach, using two way communication, very aptly:
I read the other day that the new system Proask, for the National Board of Industrial Injuries in Denmark, was the first major project intended to realize the Ministry of Employment’s strategic decision to use a Service Oriented Architecture (SOA). For those who have not heard of Proask, it is yet another heavily delayed public project which, like most other public projects, tries to solve a very big problem in one large chunk. A lot can be written about that approach, but what I will focus on in this blog post is their approach to SOA. A related article reports that the new Proask system is 5 times slower than the old system from 1991.
The Proask project was initiated in 2008. It made me think back on another (private) SOA prestige project from the same period, on which I was an architect for a subcontractor. The entire project was built around SOA, with many subsystems delivering services. The architecture was built around an ESB that would act as facilitator in terms of mapping and coordination. All communication was done as synchronous WebService calls over HTTP(S) – classic SOA for the period 2003-201? (sadly, synchronous calls are still the predominant integration form today). This SOA realization was also characterized by very poor performance, high latency and low stability.