Series: Microservices: It’s not (only) the size that matters, it’s (also) how you use them – Part 1, Part 2, Part 3, Part 4, Part 5
Text updated 27th of June 2021
As I explained in Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 5, to me a service is a logical boundary that is technically responsible for a given business capability.
This means that the service owns all data, logic and UI for this capability everywhere it is used.
What does it mean that a service is a logical boundary?
As explained in Philippe Kruchten’s 4+1 view of software architecture, we should not automatically force the logical view to be the same as the physical implementation or for that matter the deployment view.
This often means that a service isn’t a single implementation or deployment artifact.
You’ve probably read that a microservice should follow the Single Responsibility Principle (SRP) – it should do one thing and do it well.
If we align microservices with business capabilities, such as Sales, Shipping, Marketing, Billing, Policy Management,
then the microservices would most likely be fairly big, which goes against many of the qualities we like about microservices, such as:
- Small (easy to comprehend)
- Replaceable (discard the old and write a new in 2 weeks)
- Upgradable (upgrade just the parts you want without interrupting other parts)
- Fast startup/shutdown
- Individually deployable
A large service is still individually deployable, but from a scaling point of view it’s typically all or nothing: either you scale the entire deployable unit or you don’t.
What if only certain use-cases need scaling? This is often harder with too big a deployable unit (what some people refer to as a monolith), because individual components inside the unit are too tightly coupled, like a tangled ball of yarn.
As explained in part 5, splitting services along business capabilities (or, in DDD terms, Bounded Contexts) has many advantages, such as Business/IT alignment, encapsulation and loose coupling.
It can serve us well to look at the smaller responsibilities within a given business capability.
We can benefit from breaking the business capability down into smaller parts or components.
The smallest responsibility for a component inside a service is the handling of a single message/use-case, i.e. either a Command or an Event. Personally, I prefer dividing services into components along transactional/consistency boundaries.
When we have decomposed a service into smaller implementation components, then there is no logic or behavior remaining outside of these components.
This effectively makes the service a logical container/boundary. The only artefacts a service has, other than its components, are its external schemas/contracts (the commands, events, datatypes, etc. it exposes).
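A minimal sketch of the idea above, assuming hypothetical message and component names (not from any particular framework): a component inside a service whose entire responsibility is handling one Command and emitting the resulting Event.

```java
// Hypothetical message types that belong to the service's external contract
record AcceptOrder(String orderId) {}     // a Command sent to the service
record OrderAccepted(String orderId) {}   // an Event the service publishes

// The smallest component responsibility: handling a single message type.
// The component owns its own state and emits an event as the outcome.
class AcceptOrderHandler {
    private final java.util.Set<String> acceptedOrders = new java.util.HashSet<>();

    OrderAccepted handle(AcceptOrder command) {
        // State change happens inside this component's transactional boundary
        acceptedOrders.add(command.orderId());
        return new OrderAccepted(command.orderId());
    }
}
```

The handler has no dependencies outside its own boundary; everything else it needs arrives in the message.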
Are the components inside the logical service then microservices?
They can be!
One of the qualities that many people care about, when discussing microservices, is that they own their own data (i.e. have their own database schema/model), which can give us the ultimate degree of autonomy for microservices (as long as they don’t perform RPC/REST against other microservices).
But as you know there are at least 50 shades of grey, and the same goes for shades of autonomy.
The components inside a service boundary can share endpoints, resources, processes and other artefacts in many combinations. Here are some of the most likely:
| Endpoint | Process* | Database | Storage |
|----------|----------|----------|---------|
| Shared   | Shared   | Shared   | Shared  |
| Own      | Shared   | Shared   | Shared  |
| Own      | Shared   | Own      | Shared  |
| Own      | Own      | Shared   | Shared  |
| Own      | Own      | Own      | Shared  |
| Own      | Own      | Own      | Own     |
* Process includes: co-deployed inside the same runtime (e.g. JVM), co-deployed in same OS instance, in the same Docker image, on the same physical hardware, etc.
Let’s be pragmatic
Going for full microservices every time is too dogmatic in my opinion.
We need to be pragmatic and case by case determine what solution solves the use-case best, where best involves time to market, price, changeability, scalability, future direction, etc.
There isn’t such a thing as a cookie cutter solution, sorry 😉
How should we split?
Should we split into a component per message, into components per aggregate, or into components per functional area inside the service (e.g. having two separate implementation lines of Order handling – one for VIP customers and one for regular customers)?
My rule of thumb is that we should strive to make our components autonomous, i.e. they shouldn’t need to request data or call logic in other services/components using RPC/REST/… (i.e. using two-way communication).
We should strive for them to interact only using Events.
We can perform RPC, e.g. for queries or *Commands, between components within the same Service boundary, but in this case we should consider co-locating/co-deploying them into the same process and use local calls instead of remote network calls.
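One way to sketch that co-location idea, using hypothetical names: a query contract between two components inside the same Service boundary, where the co-deployed implementation turns the "call" into a plain in-process method invocation rather than a remote network round-trip.

```java
// Hypothetical query contract between components in the same Service boundary
interface CustomerQueries {
    String creditRatingFor(String customerId);
}

// Co-deployed implementation: calling it is a local method call, not RPC.
// A remote implementation could sit behind the same interface if the
// components were later deployed separately.
class LocalCustomerQueries implements CustomerQueries {
    public String creditRatingFor(String customerId) {
        // Illustrative rule only
        return customerId.startsWith("VIP") ? "AAA" : "BBB";
    }
}

class OrderComponent {
    private final CustomerQueries customers; // local or remote is a deployment decision

    OrderComponent(CustomerQueries customers) { this.customers = customers; }

    boolean mayPlaceLargeOrder(String customerId) {
        return customers.creditRatingFor(customerId).equals("AAA");
    }
}
```

The design choice: depend on the contract, decide at deployment time whether the implementation behind it is in-process or remote.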
In the next blog post I will show how we have chosen to organize our services/components/microservices/gateways/applications as code artefacts.
Update 31st January 2016 (thanks to Trond Hjorteland for pointing out that the original text indicated that sending commands entailed using one-way communication)
* Note: There’s some debate about whether Commands strictly involve two-way communication (sync or async) or whether they can be used with one-way communication (e.g. with events to clarify whether they succeeded or not). My view is that commands should be dealt with using two-way communication, since you typically want to know if the processing of the command succeeded. For certain cases you may be fine with not knowing this (or you believe that the likelihood of the command failing is very low), in which case Commands could be dealt with using one-way communication. A good question to ask in such a situation: are the messages exchanged really commands, or are they in fact events? If the latter, one-way communication is the right way to exchange the message.
Hi Jeppe,
Great post as always. I’m glad that your post talks about autonomy, because I was hoping you could share your opinion on how authentication and authorization should be implemented in a microservice architecture.
From a DDD perspective, I believe Identity & Access should be its own bounded context and hence should be implemented as its own microservice. How should this data be shared between services?
In a monolith when a user is authenticated a session is created on the server and you probably store the user id in it. However, if different commands are sent to different services, how do you check if the user is authenticated or not?
Hi Sherif
I agree that Identity & Access is its own bounded context. In my view it falls within the responsibilities of the IT operations / technical infrastructure team, which means that Identity & Access (or components of this responsibility – such as authentication and simple authorisation servlet filters, if you’re using Java) are co-located/deployed together with the microservices/autonomous-components that need Identity & Access.
It’s generally a tricky subject (and sometimes a slippery slope) – it can be hard to determine when something is an IT operations concern and when it’s a business concern that belongs inside the business capability. IT operations should only be concerned with technical questions such as: is this user authenticated, may I call this endpoint, may I send this message, etc. Determining whether a user may access an aggregate or perform an operation (like transferring money between two accounts) belongs inside the business service that owns the aggregate.
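To illustrate that split with a hedged sketch (all names are hypothetical, and the servlet-filter plumbing is omitted): the technical-infrastructure component answers only "is the caller authenticated?", while the business rule about what the user may do lives in the business service.

```java
// Technical infrastructure concern (the kind of check a co-deployed
// authentication filter would perform): is the caller authenticated?
class AuthenticationCheck {
    private final java.util.Set<String> activeSessions = new java.util.HashSet<>();

    void sessionStarted(String token) { activeSessions.add(token); }
    boolean isAuthenticated(String token) { return activeSessions.contains(token); }
}

// Business concern: the service owning the aggregate decides whether
// this user may perform the operation.
class AccountService {
    private final AuthenticationCheck auth;

    AccountService(AuthenticationCheck auth) { this.auth = auth; }

    boolean mayTransfer(String token, String userId, String accountId) {
        if (!auth.isAuthenticated(token)) return false; // technical check first
        // Illustrative business rule: users may only touch their own accounts
        return accountId.startsWith(userId + "-");
    }
}
```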
/Jeppe
Hi Jeppe,
Great series, a lot of food for thought 🙂
One topic in particular that I’m struggling with is getting rid of synchronous communication between services.
Our platform is pay-per-use: users buy credits, and when they use certain functions in the application, credits are deducted from their credit balance. Those paid functions are spread across several unrelated services, so for adding and deducting credits I created a separate service (the ‘credit manager’) which is called from those services. The user cannot use a paid function unless credits have been successfully deducted (the credit balance cannot go below zero), so I can not rely on asynchronous communication.
If I want to eliminate synchronous communication to the credit manager, what options do I have ?
Hi Rik
Without knowing more details about how you have split knowledge/behaviour/integration between your services and the UI, I would say you’re caught in a classic Autonomy vs Authority problem.
You *can* obtain autonomy if you distribute knowledge + logic about the current credit balance to all pay-per-use services.
This pushes detailed knowledge about credits into every service, combined with the issues surrounding eventual consistency. It will take some time from an update of credits in one service until every other service knows about it, which increases the risk of granting access without the user having credits – which is something you most likely wouldn’t want (or maybe you do, if the risk is very small).
For a case like this you may want to exchange autonomy for authority – the service that knows about credits (the credit manager) is the authority on how many credits a user has and is able to deduct credits before allowing access, perhaps it also knows the price of each pay-per-use scenario (there are pros & cons to this last point). No one else gets to know this or touch the credit. This also means that in case marketing comes up with a discount scheme, then this change could be limited to the credit manager service alone (if it knows the price per use – otherwise the change needs to happen in each pay-per-use service or a pricing service – answering this question properly of course requires a better understanding of your business and how boundaries/responsibilities can be split)…
I would most likely see your credit manager as a technical service, much like authorisation and authentication. These types of services are usually co-deployed or co-located together with the services that need them in order to avoid unnecessary remote communication (RPC). If this is not possible then your only option is IMO to perform remote communication from the pay-per-use services to the credit manager service. This means the credit manager service needs to be highly available and scalable, so it doesn’t become the bottleneck or single point of failure.
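A minimal sketch of the authority approach, under the assumption (mine, not from the post) that the credit manager exposes a single check-and-deduct operation, so the balance can never go below zero:

```java
// Hypothetical credit manager: the single authority on credit balances.
// Deduction happens atomically before access is granted.
class CreditManager {
    private final java.util.Map<String, Integer> balances =
            new java.util.concurrent.ConcurrentHashMap<>();

    void addCredits(String userId, int amount) {
        balances.merge(userId, amount, Integer::sum);
    }

    // Deducts and returns true only if the user has enough credits.
    boolean tryDeduct(String userId, int cost) {
        int[] deducted = {0};
        // computeIfPresent runs atomically per key on a ConcurrentHashMap,
        // so check and deduct cannot be interleaved with another deduction
        balances.computeIfPresent(userId, (id, balance) -> {
            if (balance >= cost) {
                deducted[0] = 1;
                return balance - cost;
            }
            return balance; // insufficient credits: leave balance unchanged
        });
        return deducted[0] == 1;
    }

    int balanceOf(String userId) { return balances.getOrDefault(userId, 0); }
}
```

A pay-per-use service would call `tryDeduct` and grant access only on `true`; the invariant lives in one place.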
Hope this makes sense? 🙂
/Jeppe
These posts are fantastic! They explain SOA and microservices, and the differences between them, very well! Well done. I’ll recommend this article 🙂