Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 5

Part 1 – Microservices: It’s not (only) the size that matters, it’s (also) how you use them
Part 2 – Microservices: It’s not (only) the size that matters, it’s (also) how you use them
Part 3 – Microservices: It’s not (only) the size that matters, it’s (also) how you use them
Part 4 – Microservices: It’s not (only) the size that matters, it’s (also) how you use them
Part 6 – Service vs Components vs Microservices

Text updated on the 27th of June 2021

First of all, sorry to those who’ve been waiting for part 5. My schedule has been too busy to find focused time to write part 5 before now 😳 .

In part 4 we looked at building our services around functional areas or business capabilities/bounded-contexts.

We discussed that the business data and logic pertaining to a business capability must be encapsulated inside a service to ensure single source of truth for the data.

This also means that no two services are allowed to own the same data, because we want to avoid multi-master services.

Since we want our services to be autonomous, i.e. a service should be able to make a decision without having to communicate synchronously with other services, we also looked at how to avoid 2-way communication (RPC, REST, GraphQL or Request/Response) between services.

The options we looked at were Composite UI’s and Data duplication over Events.

We also briefly discussed a 3rd option that involved a different view on services, where they’re not autonomous and services instead expose intentional interfaces and coordinate updates/reads between several Systems of Record (SoR) that themselves are autonomous. I believe that organizations with many large legacy systems (and most likely multi-master Systems of Record) should look into the possibilities of the 3rd option, as I believe it may create less friction than trying to develop new autonomous services that are well aligned with business capabilities.

In part 5 I will continue discussing SOA and Microservices in the light of autonomous services.

Business Capabilities and Services

In part 4 I suggested building our services around functional areas or business capabilities/bounded-contexts.
I would like to tighten up that statement and rephrase to:

We should align our services with business capabilities.


Why? In http://bill-poole.blogspot.dk/2008/07/business-capabilities.html Bill Poole explains why he thinks using Business Capabilities for Service alignment is the right way to go:

Bill Poole:… a business capability is something that an organisation does that contributes in some way towards the overall function performed by the organisation.

The advantage of business capabilities is their remarkable level of stability. If we take a typical insurance organisation, it will likely have sales, marketing, policy administration, claims management, risk assessment, billing, payments, customer service, human resource management, rate management, document management, channel management, commissions management, compliance, IT support and human task management capabilities. In fact, any insurance organisation will very likely have many of these capabilities.

Business capabilities are the essential part of the software we develop. Dan North has the following to say on the subject:


Dan North: Business Capability is the asset

Finally in http://www.udidahan.com/2010/11/15/the-known-unknowns-of-soa/ Udi Dahan states why he thinks Services should be autonomous and the technical authority for a specific business capability:

Udi Dahan:…synchronous producer/consumer implies a model where services are not able to fulfill their objectives without calling other services. In order for us to achieve the IT/Business alignment promised by SOA, we need services which are autonomous, ie. able to fulfill their objectives without that kind of external help.

A service is the technical authority for a specific business capability.
Any piece of data or rule must be owned by only one service.

What this means is that even when services are publishing and subscribing to each other’s events, we always know what the authoritative source of truth is for every piece of data and rule.

I have summed the above statements into the following rule:

A Service is

  • The technical authority for a given business capability
  • It is the owner of all the data and business rules that support this business capability – everywhere
  • It forms a single source of truth for that capability

The consequence of this definition is that:

A service needs to be deployed and available everywhere its data/logic is needed.

Thinking about it that makes a lot of sense. In http://www.udidahan.com/2010/11/15/the-known-unknowns-of-soa/ Udi Dahan explains why:

Udi Dahan: …when looking at services from the lense of business capabilities, what we see is that many user interfaces present information belonging to different capabilities – a product’s price alongside whether or not it’s in stock. In order for us to comply with the above definition of services, this leads us to an understanding that such user interfaces are actually a mashup – with each service having the fragment of the UI dealing with its particular data.

Ultimately, process boundaries like web apps, back-end, batch-processing are very poor indicators of service boundaries. We’d expect to see multiple business capabilities manifested in each of those processes.

If we examine the domain of banking, we can see that it provides multiple UI’s, such as a customer facing mobile phone app, full web applications and back-office applications. Each of these UI’s will present data from many underlying business capabilities:


Multiple bank UI’s

To help make this less abstract, here’s an example of what a concrete composite UI could look like.
Each red box represents a UI widget delivered by a Service. Each widget forms a part of the complete UI.
The only data shared between the UI and the Service UI widgets is the id of the Product being displayed (this id can be shared as a cookie, url parameter, UI event, page shared variable, etc.)


Composite UI example Amazon

Layout wise, the page where the UI widgets are placed is owned by the application. The application doesn’t know how the widgets are implemented or how they fetch data. As mentioned, the only contract is the id of the product being displayed.

The composition of Service UI widgets can happen client side or server side. One way to think of it is that each service gets to render its UI into a designated DIV in the webshop UI page.
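To make the idea concrete, here is a minimal sketch (with hypothetical widget and slot names) of such a composition: the page knows only the slot ids and the shared product id, while each service-owned widget decides how to render itself.

```python
# A minimal sketch of client-side UI composition. The page owns the layout
# (named slots); each service registers a widget that renders itself given
# only the shared product id. All names here are hypothetical.

class PriceWidget:
    """UI widget owned by the (hypothetical) Pricing service."""
    def render(self, product_id: str) -> str:
        # In reality this would fetch from the Pricing service's own API.
        return f"<span class='price'>$12.50 (product {product_id})</span>"

class ReviewWidget:
    """UI widget owned by the (hypothetical) Review service."""
    def render(self, product_id: str) -> str:
        return f"<span class='stars'>4.2 (product {product_id})</span>"

def compose_page(product_id: str, widgets: dict) -> dict:
    # The page only knows slot ids and the shared product id --
    # not how each widget fetches or formats its data.
    return {slot: widget.render(product_id) for slot, widget in widgets.items()}

page = compose_page("book-42", {
    "Book:Price": PriceWidget(),
    "Book:Review": ReviewWidget(),
})
```

The application never inspects a widget’s output; it only places each rendered fragment into the slot (DIV) that carries the matching id.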


Composite UI – widgets on a page

Here’s an example of how loading a page can be coordinated on the front end.


HTML Composition

Clarification: If the UI composition is performed client side, then the communication between a client side UI widget and its server side counterpart API will 99.9% of the time be in the form of two-way communication (typically async Ajax calls against a REST interface). Using events between the UI and the backend is typically only used for notifications.

The advantage of Composite UI’s is that the application, here the WebShop, doesn’t need to know any details about each of the services that provide a UI partial to the page. The fact that a Review is a combination of Score, Number of Reviews and Number of Likes is completely encapsulated in the Review Service. The fact that the review score is rendered as stars, instead of a number, is a concrete frontend UI visualization decision.
The Review service might only output a “<score>4.2</score>”. How this is rendered in the UI is up to the styling.
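As an illustration of that separation, the star rendering could be a small, purely presentational function on the frontend (a hypothetical sketch, not taken from any concrete service):

```python
# Sketch of the frontend turning the Review service's raw score into a star
# visualization -- a pure presentation decision that lives with the UI
# widget, not inside the Review service itself.
import math

def render_stars(score: float, out_of: int = 5) -> str:
    full = math.floor(score)                 # completely filled stars
    return "★" * full + "☆" * (out_of - full)

print(render_stars(4.2))  # → ★★★★☆
```

If the design later switches from stars to a plain number, only this widget changes; the service keeps emitting the same score.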

From the application’s point of view, all the UI widgets share is the Page Context (e.g. contained in a page variable, cookie value, or shared using an Event that each UI composite listens to), the styling contract (e.g. CSS based) and the fact that the ReviewService’s UI should be rendered into a DIV with id “Book:Review”.

The advantage is that once a new requirement for Reviews is introduced, it only needs to be implemented inside the ReviewService’s code base and the ReviewService-owned UI widgets. Nothing inside the applications needs to change. The change is completely local to the ReviewService.

Another advantage of using composite UI’s is that services rarely need to subscribe to each other’s events just to build up caches/replicas of other services’ data. This problem is solved at the composition level.

The downside is that every Service must be able to provide its UI widgets to ALL applications on potentially MANY different platforms, such as an iOS app, a back office .NET app, a Java based Webshop, etc., as exemplified below where multiple services are part of rendering/printing an invoice in a composite way:


Composite UI example Invoice

In my opinion Composite UI’s work really well with autonomous services as it extends the Service encapsulation all the way to the UI.

Update: One of the challenges with composite UI’s is when it comes to updates (e.g. submitting a form across multiple services), because here we run into the classical problems of updating data across transactional boundaries without having XA/2PC transactions to help solve the problems that occur when one or more services fail to update whereas others succeed. Oftentimes this can be solved by using page transitions, but not always.
You also have to take into account that the browser is a less reliable platform than a server side implementation of the same orchestration, due to browser hangs or the user closing the page.

If you’re further interested in Composite UI’s I can recommend reading http://www.udidahan.com/2012/06/23/ui-composition-techniques-for-correct-service-boundaries/, http://www.udidahan.com/2014/07/30/service-oriented-composition-with-video/ and this video by Udi Dahan.

Service deployment model

Since a service is expected to be deployed wherever its data is needed, this makes service deployment the next issue we need to look at.

This begs the question: is a service a physical construction/boundary?

According to Philippe Kruchten’s 4+1 view of architecture the logical view and the physical view (or deployment view) should be independent of each other (i.e. they shouldn’t map 1 to 1).

If we combine this with our service definition, we arrive at the conclusion:

A Service must be a logical construction/boundary.

I’ve summed this up below into the following definition:

  • Systems/Applications are (runtime) process boundaries – A Process boundary is a physical boundary (the simplest example is a single .exe or .war deployed unit)
  • A Service is a logical boundary, not a physical one. You could choose to deploy a Service as a single Physical (runtime) process, but that’s just ONE way of deploying a service (as we will see later in this blog post) and not necessarily the best way to do it.
  • Therefore Process/application/system boundaries and service boundaries are often not the same

To support application composition across multiple services, each service should have the following deployment options:

  • Many services can be deployed into the same system/application/process
  • Parts of a service can be deployed into applications on many different platforms (Java, .NET, iOS, Android, Web, etc.) – e.g. the UI part of your service could be deployed/packaged up into a Web application, an iOS application, an Android Application (with each e.g. being a separate implementation package for the individual platform, but they all still belong to the same logical service)
    • An example: The price of a product can be displayed both on the web shop, on the backoffice application, on the iOS commerce application, etc.
  • Service deployment is not restricted to tiers – the same service can be deployed to multiple tiers / multiple applications
    • Part of service A and B can be deployed to the Web tier
    • And another part of Service A and B can be deployed to the backend/app-service tier of the same application
  • Many services can be deployed to the same server
  • Multiple services’ UI’s can be packaged/loaded into the same page (service mashup)

What is a Service made up of?

If a Service is a logical construction/boundary and it can be deployed multiple places, what is a service made up of?

In my opinion autonomous services, as described here, are made up of internal autonomous components or microservices that together support the various functionalities/use-cases of the Service, and thereby of the business capability it aligns with.

Microservices are effectively the implementation details of a logical service.


Service vs Microservices

We focus on and talk about services and the business capabilities/use-cases they support, which means that we’re not overly concerned with their implementation details, i.e. their microservices (or autonomous components).

This is a good thing, because Microservices are much less stable than Services. Focusing on the service and not the implementation details makes it much easier to rewrite the microservices (as long as their contracts are stable) or supplement them (in case we need another version of the microservice running in the environment).

Each microservice can have one or more endpoints with which applications/gateways/etc. can interact. An endpoint could e.g. be an HTTP endpoint that returns UI in the form of HTML, or a REST endpoint that performs a Query or handles a Command. It could also be a Message Queue endpoint where the microservice takes messages (e.g. Commands) off a Queue and handles them asynchronously.
The endpoint could also be a normal Java/.NET/etc. interface, which for IT Operations style integration can be called directly without incurring the cost of remote calls.

So how small should a microservice be?

In part 3, based on Pat Hellands “Life Beyond Distributed Transactions – An Apostate’s Opinion” (original article from 2007) / “Life Beyond Distributed Transactions – An Apostate’s Opinion” (updated and abbreviated version from 2016), we formulated a rule of thumb that says:

1 use-case = 1 transaction = 1 aggregate.

This means that every data-changing (write/update) operation (i.e. an operation that has side effects, such as changing business data) should in general only affect one aggregate, in order to ensure scalability and consistency without resorting to distributed transactions.

This means that the smallest microservice that we should create must be responsible for all data changing operations on a single aggregate. If we go smaller, then we can’t guarantee consistency for our aggregate and complexity will increase.

Said another way: A microservice is the division of Services along transactional/consistency boundaries.
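The write-side rule can be sketched in a few lines (hypothetical domain and names): each command handler loads exactly one aggregate, enforces its invariants, and commits it as a single transaction.

```python
# Sketch of "1 use-case = 1 transaction = 1 aggregate". Each command loads
# exactly ONE aggregate, applies the business rule inside it, and commits --
# no cross-aggregate (distributed) transaction is needed.

class OrderAggregate:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.lines: list = []
        self.version = 0  # optimistic-concurrency token

    def add_line(self, product_id: str, quantity: int) -> None:
        # Business rules/invariants live inside the aggregate boundary.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append((product_id, quantity))

def handle_add_line(repository: dict, order_id: str, product_id: str, qty: int) -> None:
    order = repository[order_id]      # load ONE aggregate
    order.add_line(product_id, qty)   # enforce invariants inside it
    order.version += 1                # commit as ONE transaction
```

Anything smaller than the aggregate could no longer guarantee its invariants; anything that spans aggregates would require distributed coordination.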

On the read side, e.g. reports or queries, the smallest microservice that we should create must be responsible for maintaining the given read model or report (e.g. through events published by the write side microservice in CQRS style).
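On the read side, a CQRS-style projection could look like the following sketch (the event shape and names are assumptions, not a concrete framework API):

```python
# Sketch of a read-side microservice that maintains a denormalized review
# summary by consuming events published by the write-side microservice.
# The event dictionaries stand in for whatever event format is used.

class ReviewSummaryProjection:
    def __init__(self):
        self.summaries = {}  # product_id -> (review count, total score)

    def on_review_submitted(self, event: dict) -> None:
        # Apply each event incrementally; the read model is eventually
        # consistent with the write side.
        count, total = self.summaries.get(event["product_id"], (0, 0.0))
        self.summaries[event["product_id"]] = (count + 1, total + event["score"])

    def average(self, product_id: str) -> float:
        count, total = self.summaries[product_id]
        return total / count

proj = ReviewSummaryProjection()
proj.on_review_submitted({"product_id": "book-42", "score": 4.0})
proj.on_review_submitted({"product_id": "book-42", "score": 5.0})
# proj.average("book-42") → 4.5
```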

This doesn’t mean a microservice should only be concerned with one of the concerns above. Depending on performance, scalability, consistency and availability requirements, we could choose to bundle more concepts into a single microservice (e.g. all writes/updates and queries related to one or more Aggregate types) or split a microservice into smaller parts (separate writes/updates from reads for a given Aggregate type into separate microservices).


Logical service components

Also, there’s nothing here that mandates or requires that a Service absolutely must use events to communicate between its internal parts (Microservices). It’s absolutely possible and reasonable to review alternative storage platforms that can handle distribution of data, e.g. NuoDB.

There’s also NOTHING that says that Microservices MUST/SHOULD be deployed in their own process. In my view Microservices are logically deployable units. This means they CAN be deployed individually, but it should only be done when it makes sense!

Individually deployed units of computing entail costs for serialization, deserialization, security, communication, maintenance, configuration, deployment, monitoring, etc. So only take on this expense when you have a (typically) non-functional requirement that mandates/requires processing units to be deployed individually.

Conway’s law

Finally, we haven’t covered what this way of working with autonomous services and microservices means for the organization in the light of Conway’s law, which states that:

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations

Said another way: The way you organize your teams has a direct influence on what your architecture will look like:

  • If you split a project between 3 teams, you will get 3 services.
  • If you split a compiler between 5 teams you will get a 5 step compiler.

One of the challenges with most organizations is that Teams are typically aligned with Applications.
We also need to have a Team that’s aligned with one or more Services/Business-Capabilities if we want to succeed with autonomous services and microservices.

What’s even worse in most organizations is that teams are typically only aligned with Projects. Every new project is set up with a new team.

Jay Kreps puts this problem into perspective:


Software is mostly human capital (in people’s heads): losing the team is usually worse than losing the code

That’s it for this blog post, in the next blog post I will investigate Service vs Components vs Microservices further.


32 thoughts on “Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 5”

  1. Great series of posts.

    When building complex clients that make server requests for composite UI partials, or partials data, this results in more client-server chatter as the composites become more fine-grained. I watched a Netflix dev video (which I can’t find) where they had problems with too much server chatter, especially on mobile clients over slow networks with unstable connectivity.

    It’s possible to “aggregate” server calls, i.e. ask for the prices of 20 books in one call rather than 20 calls, or ask for a book with price and review in one call rather than a price call and a review call. This leads to mutating server contracts or creating cross boundary aggregating services for UI clients.

    At the moment I go case by case, trying to stick with non-aggregated service calls, but with large lists represented with items of multiple composites I move to aggregate contracts.

    Any thoughts?


    1. Hi Ray,

      What I’ve done before in multiple applications is to define an event driven data fetching protocol. Here’s a simple example of how we did it using client-side coordination/loading, where corresponding serverside parts were deployed to the same application backend.

      The initial trigger for loading data is when the page loads for the first time and each partial needs to request its specific data. They each do this by handling a Page initiated PageLoading event (initial page rendering can also happen on the serverside). After the special page loading event, triggering of data fetching normally happens due to a partial publishing an event. Data fetching for partial published events also follows the same pattern as the PageLoading event (the PageLoading is just a trigger event that helps each partial know it needs to start fetching data and later render the data). Each partial has event handlers for the specific events it’s interested in. In the event handler the partial will request the data needed using its own (private) API, which is tailored to the needs of the specific partial (i.e. there’s no aggregated service call that tries to load all data for all partials in one go). The request for data is a message that is intercepted by a proxy. Together with this message the partial also registers a callback that will be called by the Proxy when the response is ready.

      The proxy understands the event handling lifecycle and waits for all partials to have handled the published event.
      When all partials have completed their event handling, the proxy will batch all request messages together into one server call. On the server the batch is expanded/unpacked and all the requests are handled in parallel (either using green threads or actual threads from a pool). When all request messages have been handled, the response from each of them (either the result or an exception) is batched up again and returned to the client as a single response. This removes the chattiness from the client side at the expense of handling it on the serverside.

      Back on the client, each response/exception is unpacked and fed back to the corresponding callback handler (that each partial registered together with the message), which allows the partial to handle the response, perform rendering, perhaps publish events, and the chain continues.
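      A rough sketch (with hypothetical names) of the proxy part of this pattern:

```python
# Sketch of the batching proxy described above: partials enqueue request
# messages together with callbacks; the proxy sends all pending requests as
# ONE batch and dispatches each response to its registered callback.

class BatchingProxy:
    def __init__(self, send_batch):
        # send_batch: function taking a list of requests and returning the
        # list of responses in the same order (the single server round trip).
        self.send_batch = send_batch
        self.pending = []  # (request message, callback) pairs

    def request(self, message: dict, callback) -> None:
        # Each partial calls this from its event handler.
        self.pending.append((message, callback))

    def flush(self) -> None:
        # Called once all partials have handled the current event.
        requests = [msg for msg, _ in self.pending]
        responses = self.send_batch(requests)  # one server call for all
        for (_, callback), response in zip(self.pending, responses):
            callback(response)  # hand each response back to its partial
        self.pending.clear()
```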

      Udi Dahan has made a video presentation of a similar pattern here: http://www.udidahan.com/2014/07/30/service-oriented-composition-with-video/

      I hope this makes the pattern more clear? 🙂

      /Jeppe


      1. I have considered a batch pattern, but I’ve liked the idea of keeping the plumbing as dumb as possible.
        For example my current system is event driven, and has a bus on the client and server that is connected through a single api on the server, with security, etc. So UI composites publish events to the bus, and services in the client or server can publish events in response, based on Fred George’s ideas https://vimeo.com/79866979. The bus is just pub/sub. An alternative is to create a client batch service that bundles up flagged events for a single server call, and a server batch service that unpacks and so on following your example; this moves the batching into services rather than the event plumbing.


  2. Last post in the series. I really enjoyed reading them! I have some comments:

    1) In your explanation of Udi Dahan’s Composite UI, you should mention explicitly what is the interaction mechanism between the web page (client) and services. Dahan’s posts suggest the interaction is sync over async implemented in Ajax asynchronous calls and callbacks. Since this post is a follow up to #4, the reader may think the interaction is event-based with the client- and server-side components sharing an event bus/message channel. My interpretation is that we move away from 2-way calls but in this case it’s not via a pub-sub event bus, but via AJAX (asynchronous XMLHttpRequests and callback functions).

    2) If I had to guess, based on your example I’d say you think Composite UI is a good idea for *rendering* web pages that show partials from multiple services. If that’s the case, I think the post should point that out clearly. If that’s not the case, there are downsides that come into play when creating a Composite UI for a web form submit (changing data as opposed to just rendering) that are overlooked in your post. One of Dahan’s links uses the Marriott reservation web form as example. Dahan’s text also fails to comment on these downsides:
    – What if the second service fails? There’s no ACID-like transaction being managed at the JS level, so you may need to implement compensating operations and invoke them at the JS level.
    – The JS code running on the browser is more susceptible to interruption (e.g., browser hangs, user closes the page), not to mention communication issues.
    – The additional complexity of the design (especially with the optimization Dahan introduces in his 5-minute video and the diagram with 14 steps) is a downside for maintainability and testability.
    Thus, I think the Composite UI for changing data (e.g., submit a web form) can improve performance but does not improve the reliability of the solution because controlling changes to multiple services is more reliably done at the server-side.

    3) Where you say “The downside is that the service needs to be able to provide its UI partials…”, it’s not clear what service you are referring to.

    4) You say “According to Philippe Krutchen’s 4+1 view of architecture the logical view and the physical view (or deployment view) should be independent of each other (i.e. they shouldn’t map 1 to 1).” In fact, Kruchten’s paper has a whole section about the correspondence between the views. That section starts with “The various views are not fully orthogonal or independent.” More modern and flexible view-based architectural approaches (e.g., “Documenting Software Architectures – Views and Beyond, Second Edition”) also emphasize views are related and the mapping between elements in one view to elements in another view in general is many-to-many.
    (Also, you could fix Krutchen to Kruchten.)

    5) I like your definition of a service at the beginning of the post. It explains the importance of the service functional cohesion, it mentions business alignment and the important principles of autonomy and encapsulation. However, section “Service deployment model” is confusing to me. It somehow concludes that “a Service must be a logical construction/boundary”. It goes on to say that “Systems/Applications are process boundaries” and “A Process boundary is a physical boundary”. Physical and logical boundaries are vague concepts. Also, I don’t know if you’re talking about “process” as in “business process” or as in “CPU process”.
    In any case, a more tangible framework to classify architecture elements is given in the book I mentioned in the previous bullet:
    – there’s an architecture view that shows (SOA) services. They are primarily runtime components. They communicate through different types of connectors (e.g., events, SOAP, REST, RMI), they use memory and CPU, they have a runtime lifecycle, there can be multiple instances of each service, they have runtime properties like latency, autonomy, and reliability. The architecture view that shows services as such is generically called a Component & Connector view or Runtime view.
    – there’s another architecture view that explains how a given software solution is packed for deployment. This deployment view shows the deployment units, how they are allocated to different runtime environments, and the relation to the runtime components. For a Java EE monolithic application for example, this view would show that all SOAP services of application xyz are packaged inside xyzWS.war.

    6) Still in section “Service deployment model”, you say “Parts of a service can be deployed into applications on many different platforms”. How can we deploy just part of a service? Maybe what you mean here is to package different versions of the service for different platforms, all these versions sharing the core logic implementation?

    7) The last bullet in that same section says “Multiple services can be deployed to the same page (service mashup)”. If this is a web page as in the Composite UI example, I think it would be more correct to say that a web page can be packaged along with multiple services. A page is not a deployment unit.

    8) The important discussion I was hoping to find in the “Service deployment model” section is: how many and which services should be inside a deployment artifact (e.g., war)? One service for each deployment artifact (MSA)? All services of a given scope (application scope) in the same deployment artifact? Other alternatives? What are the tradeoffs among the alternatives?

    9) The discussion of microservices around the web these days lacks consensus and is unclear at times. IMO one reason is that many authors don’t see the different architecture perspectives (or views) with clarity. But let’s stick with a commonly referred to definition by Lewis and Fowler: “In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.” In this definition, the only specific thing that does not match a general SOA service is “independently deployable by fully automated deployment machinery”. Indeed, Lewis and Fowler explain microservices by contrast with the monolith (which in Java EE is a single war or ear per application): “This server-side application is a monolith – a single logical executable. Any changes to the system involve building and deploying a new version of the server-side application.”
    Back to your post, I find it hard to understand microservices as subelements or submodules of autonomous services (second paragraph of “What is a Service made up of?”).
    Then you say microservices are effectively the implementation details of a logical service. Later you say microservices have endpoints, which are addressable points of interaction or interfaces. If a microservice exposes an endpoint, it’s not implementation details, unless what you mean is that a “logical service” is a controller service that calls microservices in a composition.

    10) You say “In my view Microservices are logically deployable units. This means they CAN be deployed individually, but it should only be done when it makes sense!”
    Your statement is open-ended. As I mentioned before there’s lack of clarity around this topic, but I’d rather be more assertive and stick with Lewis and Fowler: a microservice is independently deployed, so it has to be a unit of deployment. And I’d add: Microservices should only be used when it makes sense (weighing the rqmts against the tradeoffs).

    11) I like the last paragraph in that section (“Individually deployed units of computing entails costs for…”). It helps the reader to think of the tradeoffs of microservices.

    12) In the Conway’s Law section, you complain that teams are typically aligned with applications and not with services. Well, application development in general involves the user interface and controller logic, the data model and database, the business logic. Creation of services is part of application development. I think the challenge is that teams should be aligned with bounded contexts, better saying, applications should be scoped as a single bounded context.
    I also don’t understand the complaint that for every new project there is a new team. What’s the alternative to having a team working on a software project? Isn’t the scope of the project the real issue?


    1. Hi Paulo

      Thanks for your thorough comments 🙂

      1) I’ve added details about the UI (client) to server communication often being 2-way communication (in the form of sync over async).

      2) Thanks – I’ve added details about this concern. There’s more to say about it – e.g. the way you build your UI can make the need for coordination and compensation stronger, but this deserves a whole blog post 🙂

      3) Fixed it 🙂

      4) Fixed the name – thanks 🙂

      5+6+8+9+10) I added more details and description to make my points more clear. I hope to soon write a blog post detailing how the finer details of logical service & microservices are split and handled code-wise, build wise and deployment wise 🙂

      7) Thanks, I corrected the wording.

      12) In many cases I think it will be very hard to align services/bounded contexts directly with applications, because applications follow organisational structure (e.g. the support department handles both customers, orders and shipping problems) whereas services align with business capabilities. To me this means that an application uses many services. If an application and a service align 1-1 that’s great, but in my experience those are rare cases.
      The last point, about every new project getting a new team, reflects my experience that many organisations build new teams for every new project. It’s my experience that breaking up a well performing and tight knit team can be very expensive – it takes a long time to get a new team up and running and for everyone to get used to each other’s habits and idiosyncrasies, fix skill deficiencies, learn the new domain(s), etc. That’s not the same as saying you can’t move people around, you just have to be aware of the costs associated.


  3. Thanks for the series.

    Regarding the eventual consistency inherent in service oriented architectures, I can see the benefits.

    There is a tricky use case though: the creation of resources, followed by a read. The user expects the newly created resource to be available for use, but that might not be the case. How do you mitigate this? In case of updates it might be ok to read stale data, but for creates there is no data at all.

    A possible solution for UIs would be to register a websocket (or similar) that notifies the UI when the new resource is available, but what about pure APIs?

    Thanks,
    Adrian


    1. Thanks for the comment Adrian.

      Creates (writes) and reads of the same resource would belong within the same service boundary. Across boundaries it’s of course a different matter.

      Within a boundary, the only reasons I can think of that would cause eventual consistency issues between reads and writes are not related to service orientation (SOA); they are more likely due to handling writes asynchronously (e.g. storing Commands in a Queue) or to applying CQRS, where the read side(s) are updated asynchronously (e.g. over a Topic/EventBus/Service Bus).

      I’m not sure if this is what you meant?
      If so, then you could use a websocket to notify the client when the corresponding event(s) are published on the Topic/EventBus/ServiceBus. Vert.x, for instance, makes this pattern very simple to apply.

      In general, regarding handling Commands (as used in CQRS), there’s a lot of discussion about whether to handle them asynchronously. Commands CAN fail, which is why many (including Greg Young) argue that they should be handled synchronously so any errors can be propagated back to the client (of the UI or the API). Others, such as Udi Dahan, argue that if Commands are properly validated by the client (i.e. if you trust the client to perform this job, e.g. by using one or more read models to perform the validation) then Commands are very unlikely to fail, in which case they can “safely” be stored in a Queue. For the rare cases where they do fail, you would need another way (e.g. e-mail) to reach out to the user and notify them of the failure. The counter argument is that if the message sent is unlikely to fail, it should be an Event and not a Command.
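      The client-side validation idea above can be sketched roughly as follows. This is purely an illustration, not code from any framework: the command name, the read model and the queue are all made up for the example. The point is that validation happens synchronously against a read model before the command is enqueued, so commands on the queue are unlikely to fail.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Illustrative sketch: validate a Command against a read model before
// enqueueing it, so Commands placed on the Queue are unlikely to fail.
public class CommandValidationSketch {
    record PlaceOrder(String productId, int quantity) {}

    // Stand-in read model: the products the client knows are orderable.
    static final Set<String> orderableProducts = new HashSet<>(Set.of("book-42"));

    // Stand-in for a durable message queue.
    static final Queue<PlaceOrder> commandQueue = new ArrayDeque<>();

    // Returns true if the command passed validation and was enqueued.
    static boolean submit(PlaceOrder cmd) {
        if (cmd.quantity() <= 0 || !orderableProducts.contains(cmd.productId())) {
            return false; // reject synchronously, before the command leaves the client
        }
        commandQueue.add(cmd); // "safe" to handle asynchronously from here on
        return true;
    }

    public static void main(String[] args) {
        System.out.println(submit(new PlaceOrder("book-42", 1))); // true
        System.out.println(submit(new PlaceOrder("missing", 1))); // false
        System.out.println(commandQueue.size());                  // 1
    }
}
```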


      1. Let me come up with a more concrete example.

        Let’s say we have a system where users buy subscriptions in order to use it. Buying a subscription means that the user gets a set of resources ready for use. It is not feasible, though, to keep the user waiting for all resources to be created before acknowledging his subscription. So we decided to use asynchronous provisioning of these resources, which spans multiple services. It is also more decoupled: a SubscriptionCreatedEvent is fired and all the services react to it by creating the right resources. The user gets back the ack and now expects to be able to use the promised resources right away, but some services are still working on it. How do you handle such a case?


      2. Across services we have to be eventually consistent, unless we take on the pain of using distributed transactions (which I don’t recommend, for the several reasons listed in the previous blog posts).

        In your example this leaves you with the problem of how to deal with your users’ expectations. If they expect realtime resource creation and you, for several reasons, are doing it async, then they might feel that your solution is bad.
        One of the best ways to deal with this is to address the users’ expectations up front. This is typically something you need to figure out together with the business experts.
        The possible solutions differ a lot depending on the type of application (e.g. the natural flow vs. the possible flow), the type of users, etc.

        Here are a couple of general ideas that may or may not work for your particular case:
        – Before users press the “Purchase” button, have a text on the screen that explains to them that the resources will be created for them alone (they get their own) and that this takes a little time. Tell them how much time to expect.
        This works really well for certain types of domains/use-cases.

        – Start a Process-Manager/Saga that is triggered by the SubscriptionCreatedEvent and then listens for Events confirming that all the resources have been created. When this happens you can push a message back to the UI, send an SMS or an email. Having such a Process-Manager keep an eye on the creation process is IMO generally a good idea. Someone needs to care that all resources in fact get created, and to deal with issues if one or more resource creations fail.
        This approach works really well for some use cases (e.g. think of purchasing a book: you will receive an email telling you when we have packaged it up and it’s ready for shipping). For others it could feel unnatural.

        – A third option could be to use a composite UI. Perhaps you could show them a screen where each resource is represented as a tile. Each tile is owned and rendered by the specific service that also owns the resource being created. That service is responsible for rendering the status of the creation process, and possibly many other things later on. As the status changes from Under-Creation to Created, the service could notify the user (e.g. using server-side push) so the user remains informed.
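        The Process-Manager/Saga option can be sketched as a small in-memory example. Everything here is illustrative (the class name, event names and notification step are made up, and a real implementation would persist the saga state and handle timeouts/failures): the saga is started by the SubscriptionCreatedEvent, tracks which resources are still pending, and notifies the user once the last resource-created event arrives.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative in-memory sketch of a Process-Manager/Saga that tracks
// asynchronous resource provisioning across services.
public class SubscriptionProvisioningSaga {
    private final Set<String> pendingResources;
    private boolean userNotified = false;

    // Triggered by the SubscriptionCreatedEvent: record the resources we expect.
    public SubscriptionProvisioningSaga(Set<String> expectedResources) {
        this.pendingResources = new HashSet<>(expectedResources);
    }

    // Each owning service publishes an event when its resource is ready.
    public void onResourceCreated(String resourceName) {
        pendingResources.remove(resourceName);
        if (pendingResources.isEmpty() && !userNotified) {
            userNotified = true;
            // Here a real saga would push a message to the UI, send an SMS or an email.
            System.out.println("All resources ready - notifying user");
        }
    }

    public boolean isComplete() {
        return pendingResources.isEmpty();
    }

    public static void main(String[] args) {
        SubscriptionProvisioningSaga saga =
            new SubscriptionProvisioningSaga(Set.of("mailbox", "storage"));
        saga.onResourceCreated("mailbox");
        System.out.println(saga.isComplete()); // false
        saga.onResourceCreated("storage");     // prints the notification line
        System.out.println(saga.isComplete()); // true
    }
}
```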


  4. This is a great article. It breaks down the definition of business capabilities in relation to SOA and microservices with more clarity than I’ve read anywhere else.

    I like your definition of microservices as implementation details of a logical service. This definitely makes sense for services on the write-side that mutate data. However, as you alluded to earlier, a lot of read models require data across aggregates and even logical services. So I would say that you should align your data mutating services along business boundaries, but allow your read side models to be more amorphous. You could make a case for saying that if a microservice doesn’t mutate data, it needn’t be considered part of a BC, as it doesn’t own any data.

    There doesn’t seem to be much discussion on this point. Even in Udi’s case, where a BC goes all the way to the UI, there are still parts that live outside. I think the greatest benefit of BC alignment is on the data-mutating side, as that is where most of the rules live, and it is more stable than the read side. I think benefit could also be gained by treating the read side more loosely. This would manifest itself as strong team ownership of the write side, but looser ownership of the read side.


    1. Hi

      Thanks for the comments 🙂
      The read side is definitely a challenge, especially when it comes to queries. So far I have had good results with keeping both reads and writes inside a service boundary (IMO everything needs to belong inside a given service – nothing is outside) and solving the cross-service challenges with either API gateways or Composite UIs. Searching is trickier. Sometimes the use case aligns well with searching within a given service boundary and then using the UI to drill into other service boundaries, which keeps the services well decoupled.
      Other times users really want to do cross-service queries, in which case we’ve created Query service(s) which (based on events) aggregate data from different services into appropriate search views (e.g. in Elasticsearch). This smells like a top-level CQRS architecture solution, which it probably is, but I haven’t found better ways to solve it.
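      The event-fed Query service idea can be sketched like this. All names here are invented for the illustration, and a plain HashMap stands in for the real search store (e.g. Elasticsearch): the projection subscribes to events from different service boundaries and folds them into one denormalized search document per order.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a Query service that builds a denormalized search
// view from events published by different service boundaries.
public class OrderSearchProjection {
    // orderId -> denormalized search document (stand-in for a search index)
    private final Map<String, Map<String, String>> searchView = new HashMap<>();

    private Map<String, String> doc(String orderId) {
        return searchView.computeIfAbsent(orderId, id -> new HashMap<>());
    }

    // Event from the Sales service boundary
    public void onOrderPlaced(String orderId, String productName) {
        doc(orderId).put("product", productName);
    }

    // Event from the Shipping service boundary
    public void onOrderShipped(String orderId, String trackingNumber) {
        doc(orderId).put("tracking", trackingNumber);
    }

    public Map<String, String> find(String orderId) {
        return searchView.getOrDefault(orderId, Map.of());
    }

    public static void main(String[] args) {
        OrderSearchProjection projection = new OrderSearchProjection();
        projection.onOrderPlaced("order-1", "Microservices book");
        projection.onOrderShipped("order-1", "TRACK-123");
        System.out.println(projection.find("order-1"));
    }
}
```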
      I’m curious if others have found better solutions to this?

      /Jeppe


    1. Sorry for the late reply.

      In our current project we have a CI boundary per BC. This simplifies a lot of things, like having a single repository per BC.
      We’re prepared to create a CI boundary per microservice if/when it becomes necessary, but so far we’ve been content with having it at the service/business-capability level: we redeploy applications/gateways (which tend to be deployed with all dependent microservices co-located in the same deployment unit – aka a FAT jar) plus individually deployed microservices when needed.

      /Jeppe


  5. Great series, I read it twice as it covers so much information. One question I have regarding the composite UI approach: can the composition be achieved via a REST API which retrieves/aggregates the data from various services on behalf of the UI? The advantage of this approach is avoiding multiple remote service calls from the UI component (which could be a mobile app).


    1. Hi Dan

      It can – this is also known as the API gateway pattern.
      Depending on circumstances (e.g. technical/organisational constraints, or external API consumers where you can’t deliver UI components) the API gateway CAN be a good/better solution.
      I wouldn’t recommend it as an alternative to a real composite UI if the only goal is to avoid multiple remote service calls from the UI components, as there are simple solutions to this. An approach similar to what I’ve used on a previous project is described here: Service Oriented Composition (with video)

      /Jeppe


  6. Hi Jeppe,

    Just wondering what alternatives there are in the Java world for publishing and subscribing to events in the manner you described in your series of posts?

    I’ve looked into the .NET world and I see NServiceBus as a great solution, but in the Java world I mainly see solutions leaning towards an ESB, with things like Apache Camel for example. Is there something out there that can connect different microservices to message queues to build a reliable decentralized communication model?


    1. Hi Sherif

      I’m not aware of any Java alternatives to NServiceBus. The Java products I’ve seen follow the classical brokered ESB style.
      We’re working on a distributed/federated Bus, which can provide many of the same features, but it won’t be available until next year.
      You could perhaps use something like Kafka for 1-to-many event distribution.
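      The 1-to-many distribution pattern a topic gives you can be illustrated with a tiny in-memory sketch (this is not a Kafka client, just the shape of the pattern with made-up names): one published event reaches every subscriber independently, so each service can react on its own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal in-memory illustration of 1-to-many event distribution:
// one published event is delivered to every subscriber.
public class EventFanOut {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<String> subscriber) {
        subscribers.add(subscriber);
    }

    public void publish(String event) {
        for (Consumer<String> subscriber : subscribers) {
            subscriber.accept(event); // each subscribing service reacts independently
        }
    }

    public static void main(String[] args) {
        EventFanOut bus = new EventFanOut();
        List<String> billing = new ArrayList<>();
        List<String> shipping = new ArrayList<>();
        bus.subscribe(billing::add);
        bus.subscribe(shipping::add);
        bus.publish("OrderAccepted");
        System.out.println(billing);  // [OrderAccepted]
        System.out.println(shipping); // [OrderAccepted]
    }
}
```

A real broker such as Kafka adds durability, partitioning and consumer groups on top of this basic fan-out shape.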

      /Jeppe

