
Microservices and APIs: Defining application boundaries

By Kim Clark posted Tue October 09, 2018 12:25 PM

  
In a recent post on integration architecture Alan Glickenhouse touched on the question of when and where API management should be used in relation to microservice architecture. This post looks at that question at the next level of depth, exploring the positioning of API management to both embrace microservices architecture and yet still manage the complexity it can introduce.

It is not uncommon for a large enterprise to have hundreds or even thousands of core applications containing the data and functions that help them run their day-to-day business. If all these applications were refactored into microservice architecture, each application might result in tens or even hundreds of microservice components. Whilst many applications will never be refactored into microservice architecture, some will – or at least parts of them will. Certainly many new applications will be written using these fine-grained microservice components in order to gain the benefits of greater agility, more independent and elastic scalability, and truly independent resilience models.

A busy microservice landscape

Clearly microservices architecture by its very nature implies an enormous increase in the number of components sitting on the network, and we need to consider how to handle that complexity.

One aspect of this complexity is the interfaces that these microservice components make available. Most microservice components will make their capabilities available via an interface such as RESTful HTTP/JSON based APIs. Just as the number of microservice components on the network increases, so does the number of exposed APIs on those components.

Microservices exposing APIs

How do we find the APIs we want from the overwhelming set available? Which are we “allowed” to re-use in other contexts? Ideally, of course, microservice components are completely independent, but in reality there will always be some invocations between them. How will we know which sets of components are dependent on one another, and how far failures will permeate? We need to better understand this increasing number of possible linkages between components across the enterprise landscape.

Is this the level at which API management should work? It might be tempting to think that we should apply API management at this fine grained level and it would allow us to administer this increasing number of interfaces. However, whilst API management has its place in a microservice world, we first need to re-establish some notion of boundaries and ownership.

Things were a lot simpler with traditional siloed applications. These often represent only one large component sitting on the network, or perhaps two for high availability. Whilst within the silo there might be much communication between the different parts of the application, this was typically hidden, and indeed unavailable to anything beyond the application’s boundaries. It certainly wasn’t made available as APIs on the network. An example would be the calls between EJBs in a Java application. These are only internally available calls, and may well be done in memory as local calls, never reaching down to the network.

Intra- and inter-component communication: traditional application

Only capabilities that the siloed application wanted to make available to other applications would be exposed via a network level interface such as a web service, or more typically now, a RESTful JSON/HTTP based interface - we’ll generically refer to these as “APIs” for simplicity from this point on.

In microservices architecture, an application is broken down into multiple independent microservice components. Although ideally these microservice components are as independent as possible, there will always be a need for some intercommunication, and since each microservice is a separate network component, they will typically do this internal intercommunication via APIs too.

Intra- and inter-component communication: microservice application
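
To make this concrete, here is a minimal sketch of what one of these internal APIs might look like. The service, endpoint, and port names are purely illustrative (they do not come from the original post): what would once have been an in-memory method call within a siloed application now becomes a small JSON-over-HTTP interface that a sibling microservice in the same application calls over the network.

```python
# inventory_service.py - a hypothetical microservice within a "shop" application.
# What would have been an in-process call in a siloed application is now a
# JSON/HTTP API exposed on the network for sibling microservices to call.
from flask import Flask, jsonify

app = Flask(__name__)

STOCK = {"widget": 12, "gadget": 3}

@app.route("/items/<name>", methods=["GET"])
def get_item(name):
    # Return the current stock level for a single item as JSON.
    return jsonify({"item": name, "in_stock": STOCK.get(name, 0)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```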

So, now we have a plethora of APIs, some of which are really only used in the narrow context of a close-knit set of microservice components, and some that are intended for much wider re-use across the enterprise, and perhaps beyond. Technically, however, they all look the same. How do we find the ones we need, and indeed how do we stop consumers from calling the ones they shouldn’t? Indeed, the idea that there even is an application boundary has potentially been lost. It could be said that there is no boundary at all, unless we choose to create one.

Grouping microservices

Without this notion of an application boundary, anything can call anything. Perhaps more importantly, we have little indication of ownership and accountability. Who has the responsibility for ensuring that a set of microservice components work together reliably to provide a business capability? How do we provide the fine-grained access control to ensure that microservice components are only called by those that know how to use them appropriately?

It becomes clear that the only way to manage such a large number of components is to bring back some notion of the original application boundary concept. We want components within the boundary to be able to talk to one another’s APIs at will, and then only make some APIs available beyond the boundary.

Introducing boundaries for microservices

We can use network-level mechanisms to create protected communication within the boundary using, for example, Kubernetes namespaces, and perhaps further security mechanisms such as certificates or token-based authentication to ensure that internal communication is secured. We would not expect to see a full API management capability intercepting communication internal to the application. These components know about one another, and are likely created and maintained by the same group of people. The specifications of their interfaces are part of the internal design of the application. Ideally, these essentially internal components of the application should communicate directly with one another, with no additional latency or complexity introduced.
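
As a rough sketch of what this “light touch” internal communication might look like, the snippet below shows a sibling microservice calling the inventory API directly via its cluster-internal address. The Kubernetes service and namespace names, and the shared-token mechanism, are assumptions for illustration; the key point is that no API gateway sits in the path of the call.

```python
# orders_service.py (excerpt) - calling a sibling microservice inside the same
# application boundary. The cluster-internal DNS name keeps traffic within the
# namespace; a simple shared token (or mTLS) secures it without a gateway hop.
import os
import requests

INVENTORY_URL = "http://inventory.shop-app.svc.cluster.local:8080"  # illustrative
INTERNAL_TOKEN = os.environ.get("INTERNAL_API_TOKEN", "")

def get_stock(item: str) -> int:
    resp = requests.get(
        f"{INVENTORY_URL}/items/{item}",
        headers={"Authorization": f"Bearer {INTERNAL_TOKEN}"},
        timeout=2,  # internal calls should fail fast
    )
    resp.raise_for_status()
    return resp.json()["in_stock"]
```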

Next we need to explicitly expose specific URLs beyond the boundaries for use by other applications. We would likely use, for example, the Ingress functionality in Kubernetes to make these APIs available outside the namespace boundary. But how will the owners of the microservices make definitions of the APIs they want to expose easily discoverable? How will consumers explore what APIs are available? How will we administer access for them?
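
One common way to make only the externally exposed APIs discoverable is for the owning team to maintain a machine-readable definition of just those endpoints, which can then be published to the API management platform. The sketch below assumes a hand-written OpenAPI document for the single exposed path; the title and file name are illustrative.

```python
# openapi_stub.py - a minimal OpenAPI definition describing only the API that is
# exposed beyond the application boundary (internal endpoints are omitted).
# The resulting document is what would be published to the API manager.
import json

OPENAPI_DOC = {
    "openapi": "3.0.3",
    "info": {"title": "Shop - Stock API", "version": "1.0.0"},
    "paths": {
        "/items/{name}": {
            "get": {
                "summary": "Get stock level for an item",
                "parameters": [
                    {"name": "name", "in": "path", "required": True,
                     "schema": {"type": "string"}}
                ],
                "responses": {"200": {"description": "Stock level as JSON"}},
            }
        }
    },
}

if __name__ == "__main__":
    with open("stock-api.json", "w") as f:
        json.dump(OPENAPI_DOC, f, indent=2)
```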

Placing API management on the application boundary

This is where API management comes in, enabling us to control which consumers can discover which APIs are available, whether they can self-subscribe to use them, and enabling us to capture analytics on that usage.
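
From the consumer’s side, calling the managed API then looks quite different from the internal calls above: the request goes to the gateway’s published endpoint and carries the credentials issued when the consumer self-subscribed. The gateway hostname and credential header below are illustrative of the pattern rather than a specific product configuration.

```python
# external_consumer.py - a consumer outside the application boundary calling the
# API through the management gateway. The hostname and header name are
# illustrative; the client ID is issued when the consumer self-subscribes.
import os
import requests

GATEWAY_URL = "https://api.example.com/shop/v1"
CLIENT_ID = os.environ["STOCK_API_CLIENT_ID"]

def get_stock(item: str) -> int:
    resp = requests.get(
        f"{GATEWAY_URL}/items/{item}",
        headers={"X-IBM-Client-Id": CLIENT_ID},  # credential from subscription
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["in_stock"]

if __name__ == "__main__":
    print(get_stock("widget"))
```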

The value add of API management

So, summarizing all that, we’re saying that inter-microservice communication within an application boundary is different from inter-application communication that goes across different application boundaries. Although they may both be performed using web APIs, their implementation may be radically different.

Summary of when to use the API management gateway

Now, whether the boundaries we introduce in a microservice architecture represent the same groupings we would originally have had as siloed applications is an interesting question. We are certainly not tied to those boundary definitions. What is perhaps more interesting is that we could potentially change our mind on the shape of these new boundaries over time. It would often have been impractical, and maybe even impossible, to move code from one siloed application to another. Now, since the boundaries are just arbitrary decisions made by us and defined by mechanisms such as the exposure of services via API management, those boundaries could be changed more easily.

So, we have asserted a number of things in this blog post:

  • Some form of grouping of microservices, which we might describe as an “application”, is necessary to manage the increased complexity in the number of components this architectural style produces.

  • Each group of components must have an owner at the group level, in addition to the owners at the component level, in order to ensure a consistent design of the overall application.

  • These groups of components need to live within some form of enforceable boundary, perhaps via security models, or even down at the network level, to enable inter-communication within the boundary that should not be available beyond it.

  • Communication within the boundary should be “light touch”, meaning it does not need to go via a formal API gateway.

  • Any APIs exposed beyond these boundaries are destined for broader re-use by consumers outside the ownership domain of the boundary. As such they should be exposed using some form of API management to provide discovery, self-subscription, traffic management, and more.

There is certainly more to be said on this topic. For the time being, hopefully this serves to provide clear guidance on where API management itself fits within a microservices architecture.

Update, 13 Nov 2018: a further post has been published extending this discussion to explore the role of a Service Mesh in comparison to that of API Management, looking at how their roles differ and yet also complement one another.


#API
#apidesign
#APIDevelopers
#APImanagement
#applicationboundaries
#Microservices
#MicroservicesArchitecture

Comments

Wed July 03, 2019 08:42 AM

Roberto, hi

Apologies for being so slow to respond to this. In short, I totally agree with your analysis. Synchronous communication between microservices is common despite its potential effect on availability, due to its simplicity of implementation and the ubiquity of HTTP. Of course, other forms of synchronous communication are also coming in too (e.g. gRPC). Asynchronous communication in forms such as event-sourced programming is a recognized pattern, but is significantly more complex to implement and introduces challenges around data consistency, so it is rightly limited in use to use cases where there is a specific problem to be solved (e.g. bringing data closer to the surface to reduce latency on reads). So I think it's fair to say that we'll often see a combination of both APIs and events in use for microservice intercommunication, using each for their sweet spot. More broadly, we are seeing an increased interest in the use of events even across application boundaries, but again for specific use cases. Synchronous APIs are still the predominant style, and typically the first intercommunication implemented.

Mon March 25, 2019 03:13 PM

Hi Kim,

To join the discussion on here...

In terms of this article, I recognise the microservice communication patterns depicted here. Actually, from my observations, application owners are more readily relying upon synchronous rather than asynchronous based communications internally - this is not necessarily based on principles per se, but people tend to advocate what they know and HTTP / ReST is popular, and also a lot of off-the-shelf microservice components / SaaS applications that can simply be "configured" expose ReST interfaces.

I noticed a change in tone between this article and an earlier article "The fate of the ESB".

Under a sub-section "A comparison of SOA and microservice architecture", you noted "...However, within a microservice application, synchronous calls introduce real-time dependencies, resulting in a loss of resilience, and also latency, which impacts performance. Within a microservice application, interaction patterns based on asynchronous communication are preferred."

This makes a lot of sense, but as I said, many microservice application owners are coming to rely upon HTTP internally also, losing some of the loose-coupling / availability benefits that come with asynchronous styles of communication.

Is the change in tone intentional / just an evolution of a more prevalent communication style that's emerging?

With so many real-time dependencies, internal to microservice application boundaries, and also between microservice application boundaries, the scaling benefits of containerisation may continue to be realised but the ephemeral characteristics may be no different to where we started(!?)