Application boundaries combine the flexibility of microservices with the manageability of applications.

By Kim Clark posted Mon September 04, 2023 11:00 AM

Cloud-native techniques make it possible to assemble applications from increasingly fine-grained components, such as microservices deployed in containers. Since the code of such an application is not deployed as a single monolithic runtime, there is no longer an implicit or physical application boundary. Without application boundaries, it is difficult to manage performance or measure SLAs, to test these applications, or to predict the effects of change.

As each small piece of functionality is deployed independently, the overall landscape becomes exponentially more complex. The graphic below helps compare the neatly defined boundaries of monolithic applications with the seemingly chaotic granularity of microservices. 

We recommend re-introducing these boundaries to enable microservices-based applications to remain understandable, maintainable, and securable. Here are several key points to keep in mind when architecting a microservices-based application:

Fine-grained landscapes are unmanageable without application boundaries.

Without application boundaries, you need to consider every permutation of each fine-grained component talking to every other fine-grained component. As the number of components grows, the problem multiplies combinatorially. It therefore becomes increasingly difficult to: 1) understand the effects of change, 2) diagnose cross-component scenarios, 3) define the scope of regression tests, 4) optimize service level indicators, and 5) manage the people and processes involved. Application boundaries enable you to compartmentalize your current landscape so you can operate it safely and efficiently, and evolve it more easily as your needs change in the future.
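To see why grouping helps, here is a rough back-of-envelope sketch (the component and application counts are purely hypothetical). It counts the possible point-to-point interactions in an ungrouped landscape versus one partitioned into applications, where you only reason about interactions inside each application plus those between the far fewer application boundaries.

```python
# Back-of-envelope illustration only: the component and application counts below
# are hypothetical. We count ordered caller/callee pairs among components, with
# and without grouping into applications.

def pairwise_interactions(n: int) -> int:
    """Number of ordered caller/callee pairs among n components."""
    return n * (n - 1)

components = 60               # hypothetical estate of 60 microservices
apps = 6                      # hypothetically grouped into 6 applications
per_app = components // apps  # 10 components per application

ungrouped = pairwise_interactions(components)
grouped = apps * pairwise_interactions(per_app) + pairwise_interactions(apps)

print(f"Without boundaries: {ungrouped} possible interactions")  # 3540
print(f"With boundaries:    {grouped} possible interactions")    # 570
```

The exact numbers matter less than the shape of the curve: each component added to an unbounded landscape interacts with everything, whereas inside a boundary it only interacts with its application peers.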

Boundaries only exist if we explicitly define them.

There is no longer an implicit grouping based on which server is running which code. Communication may use the same protocols (e.g., RESTful APIs over HTTP) regardless of whether it is within, or across, applications. The good news is that this homogeneity in communication styles means you can define those boundaries wherever you want. In fact, you might even change the boundaries over time without significant refactoring. The bad news, however, is that for a boundary to exist, you must define it explicitly and actively enforce it in the implementation.
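As one concrete way to make a boundary explicit (a convention, not something the post prescribes; all names here are hypothetical), every workload can carry a label naming the application it belongs to, for example the well-known app.kubernetes.io/part-of label. Network policies, gateways, and dashboards can then all select on the same marker.

```python
import yaml  # PyYAML, used here only to render the manifest

# Hypothetical sketch: label every workload with the application it belongs to,
# so that policies and tooling can select on the boundary consistently.
boundary_labels = {
    "app.kubernetes.io/name": "order-service",   # hypothetical component
    "app.kubernetes.io/part-of": "orders-app",   # the explicit application boundary
}

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "order-service", "namespace": "orders-app",
                 "labels": boundary_labels},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app.kubernetes.io/name": "order-service"}},
        "template": {
            "metadata": {"labels": boundary_labels},
            "spec": {"containers": [{"name": "order-service",
                                     "image": "example/order-service:1.0"}]},
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```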

Software-defined networking can be used to group application components.

What makes an application boundary? The obvious starting point is network isolation. Kubernetes, for example, provides mechanisms such as network policies and namespaces. These enable you to lock down groups of containers within their own network, defining what’s on the inside. But that’s only half the job. You also need to decide what’s available on the outside of the boundary.
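Before turning to the outside, here is a minimal sketch of the "inside" half, assuming Kubernetes network policies (the namespace and policy names are hypothetical): every pod in the application's namespace may only receive traffic from other pods in the same namespace.

```python
import yaml  # PyYAML, used here only to render the manifest

# Minimal sketch: pods in the application's namespace accept traffic only from
# peers in the same namespace, which defines what is "inside" the boundary.
same_namespace_only = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-same-namespace", "namespace": "orders-app"},
    "spec": {
        "podSelector": {},                             # applies to every pod in the namespace
        "policyTypes": ["Ingress"],
        "ingress": [{"from": [{"podSelector": {}}]}],  # only peers in this namespace
    },
}

print(yaml.safe_dump(same_namespace_only, sort_keys=False))
```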

API management is required on the application boundary.

Some APIs will be exposed beyond the application boundary, to other applications. You cannot assume the other applications are on the same cluster, even if they are today. The default mechanism for exposing APIs beyond a Kubernetes cluster is an ingress gateway, which provides simple routing, load balancing, and SSL termination. This is insufficient on its own to share APIs beyond the application. You’ll often need features such as traffic management, authentication, authorization, complex routing for versioning, and the ability to add custom security policies. You’ll also need to consider how these APIs will be discovered, who should be allowed to use them, how they should gain access to them, and how you’ll measure their usage. For these reasons, you should use API management on the boundary of the application. The existence of the application boundary makes it easy to decide which APIs will benefit from exposure via API management, as opposed to those that just need internal communication within the application.
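To make the distinction concrete, the sketch below shows the baseline referred to above: a plain Kubernetes Ingress that routes and terminates TLS for one API (hostnames, services, and ports are hypothetical). Everything beyond that, such as discovery, subscription plans, authentication, rate limiting, and analytics, is what an API management gateway layers on top at the application boundary.

```python
import yaml  # PyYAML, used here only to render the manifest

# Baseline only, with hypothetical hostnames and service names: a plain Ingress
# provides routing, load balancing and TLS termination. API management adds
# discovery, plans, authentication, rate limiting and analytics on top.
orders_ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "orders-api", "namespace": "orders-app"},
    "spec": {
        "tls": [{"hosts": ["api.example.com"], "secretName": "orders-api-tls"}],
        "rules": [{
            "host": "api.example.com",
            "http": {"paths": [{
                "path": "/orders",
                "pathType": "Prefix",
                "backend": {"service": {"name": "order-service",
                                        "port": {"number": 8080}}},
            }]},
        }],
    },
}

print(yaml.safe_dump(orders_ingress, sort_keys=False))
```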

Despite the application boundary, we should still take a zero-trust approach.

Proponents of zero trust would argue that relying on network segmentation and gateways alone is an old-fashioned approach to security. In a zero-trust model, you assume that any such boundary might be compromised, and that each individual component should take independent measures to secure itself. In Kubernetes, this means each component should have a dedicated network policy, and all communication between components should be explicitly declared, encrypted, and access controlled. Kubernetes network policies enable the explicit declarations, while service meshes are an alternative that can simplify routing and encryption.
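As a sketch of what "explicitly declared" can look like with Kubernetes network policies (component names and ports are hypothetical): deny all traffic by default, then allow each required interaction individually.

```python
import yaml  # PyYAML, used here only to render the manifests

# Zero-trust sketch with hypothetical component names and ports: deny all
# traffic by default, then declare each allowed interaction explicitly. Here
# only order-service may call payment-service, and only on TCP port 8443.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny", "namespace": "orders-app"},
    "spec": {"podSelector": {}, "policyTypes": ["Ingress", "Egress"]},
}

allow_orders_to_payments = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-orders-to-payments", "namespace": "orders-app"},
    "spec": {
        "podSelector": {"matchLabels": {"app.kubernetes.io/name": "payment-service"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app.kubernetes.io/name": "order-service"}}}],
            "ports": [{"protocol": "TCP", "port": 8443}],
        }],
    },
}

print(yaml.safe_dump_all([default_deny, allow_orders_to_payments], sort_keys=False))
```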

Use a service mesh for communication within the application boundary, but not beyond.

A service mesh is an optional addition to a Kubernetes environment that invisibly intercepts all communication between containers and declaratively defines how they interact. Some of its features, such as rate limiting and access control, appear similar to those of API management, so it is important to draw a distinction between the two. Once again, the application boundary makes this much clearer since service mesh technology is best suited to managing communication within the application boundary, and API management is designed for socializing communication across boundaries.
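As one sketch of mesh-enforced security inside the boundary, assuming Istio as the service mesh (other meshes have equivalents; the namespace is hypothetical), a single PeerAuthentication resource can require mutual TLS between all workloads in the application's namespace, while API management continues to govern what crosses the boundary.

```python
import yaml  # PyYAML, used here only to render the manifest

# Sketch assuming Istio (other meshes have equivalents; namespace is
# hypothetical): require mutual TLS for all traffic between workloads inside
# the application's namespace. The mesh secures traffic within the boundary;
# API management governs what crosses it.
strict_mtls = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "orders-app"},
    "spec": {"mtls": {"mode": "STRICT"}},
}

print(yaml.safe_dump(strict_mtls, sort_keys=False))
```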

Wrapping it up

The loss of traditional implicit application boundaries is a threat to the manageability of fine-grained cloud-native landscapes. The good news is that there are increasingly mature mechanisms to re-introduce these boundaries, both from a network and an endpoint governance point of view. Even better, homogeneous platforms such as Kubernetes enable us to re-draw those boundaries over time to match the inevitable changes in the way the business functions, something that was harder, if not impossible, with traditional applications. For more in-depth coverage of this topic, see here.
