Software-defined networking can be used to group application components.
What makes an application boundary? The obvious starting point is network isolation. Kubernetes, for example, provides mechanisms such as network policies and namespaces. These enable you to lock down groups of containers within their own network, defining what’s on the inside. But that’s only half the job. You also need to decide what’s available on the outside of the boundary.
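As a minimal sketch of "defining what's on the inside", a single network policy can restrict every pod in an application's namespace to talking only to other pods in that namespace (the namespace name here is hypothetical):

```yaml
# Hypothetical policy: allow ingress only from pods in the same
# namespace, implicitly denying traffic from everywhere else.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-application
  namespace: orders-app    # hypothetical application namespace
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # a bare podSelector matches only pods
                           # in this same namespace
```

Because the policy selects all pods and allows only same-namespace sources, the namespace itself becomes the inside of the boundary.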
API management is required on the application boundary.
Some APIs will be exposed beyond the application boundary, to other applications. You cannot assume those applications run on the same cluster, even if they do today. The default mechanism for exposing APIs beyond a Kubernetes cluster is an ingress gateway, which provides simple routing, load balancing, and TLS termination. On its own, that is insufficient for sharing APIs beyond the application. You’ll often need features such as traffic management, authentication, authorization, more complex routing for versioning, and the ability to add custom security policies. You’ll also need to consider how these APIs will be discovered, who should be allowed to use them, how they gain access, and how you’ll measure their usage. For these reasons, you should use API management on the boundary of the application. The existence of the application boundary makes it easy to decide which APIs will benefit from exposure via API management, as opposed to those that only need internal communication within the application.
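To make the gap concrete, here is a sketch of that default mechanism (hostname, secret, and service names are hypothetical). Notice that the Ingress resource expresses routing and TLS termination, but says nothing about authentication, versioning policy, discovery, or usage measurement:

```yaml
# Hypothetical Ingress: routes one path to one backend service
# and terminates TLS using a pre-created certificate secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api
  namespace: orders-app        # hypothetical namespace
spec:
  tls:
    - hosts:
        - api.example.com      # hypothetical external hostname
      secretName: api-tls-cert # hypothetical TLS secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders   # hypothetical backend service
                port:
                  number: 8080
```

Everything beyond this, such as who may call `/orders` and how their usage is metered, is exactly what an API management layer adds at the boundary.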
Despite the application boundary, we should still take a zero trust approach.
Proponents of a “zero trust” approach argue that relying on network segmentation and gateways alone is an old-fashioned approach to security. Under zero trust, you assume that any such boundary might be compromised, so each individual component takes independent measures to secure itself. In Kubernetes, this means each component must have a dedicated network policy, and all communication between components should be explicitly declared, encrypted, and access controlled. Kubernetes provides network policies to enable this. Service meshes are an alternative with the potential to simplify routing and encryption.
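A common zero-trust pattern with network policies is to deny everything by default and then declare each permitted interaction explicitly. A sketch, with all names and the port hypothetical:

```yaml
# Hypothetical zero-trust setup: first deny all traffic in the
# namespace, then allow one declared component-to-component flow.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: orders-app      # hypothetical namespace
spec:
  podSelector: {}            # every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-orders-to-db   # explicitly declared interaction
  namespace: orders-app
spec:
  podSelector:
    matchLabels:
      app: orders-db         # hypothetical database component
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: orders    # hypothetical calling component
      ports:
        - protocol: TCP
          port: 5432
```

Each additional flow gets its own explicit allow rule, so the set of policies becomes a declaration of who may talk to whom, even if the outer boundary is breached.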
Use a service mesh for communication within the application boundary, but not beyond.
A service mesh is an optional addition to a Kubernetes environment that transparently intercepts all communication between containers and lets you declaratively define how they interact. Some of its features, such as rate limiting and access control, appear similar to those of API management, so it is important to draw a distinction between the two. Once again, the application boundary makes this much clearer: service mesh technology is best suited to managing communication within the application boundary, while API management is designed for sharing communication across boundaries.
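As an illustration of how little configuration the mesh side can require, here is a sketch using Istio (one popular mesh) to encrypt and mutually authenticate all traffic inside the application's namespace; the namespace name is hypothetical:

```yaml
# Hypothetical Istio example: require mutual TLS for all
# service-to-service traffic within the application's namespace.
# The mesh's sidecar proxies handle certificates and encryption
# without any change to the application containers.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: orders-app   # hypothetical application namespace
spec:
  mtls:
    mode: STRICT          # reject any plaintext traffic
```

Note the scope: this governs traffic inside the boundary. Calls arriving from other applications would still pass through the API management layer at the edge.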
Wrapping it up
The loss of traditional implicit application boundaries represents a threat to the manageability of fine-grained cloud native landscapes. The good news is that there are increasingly mature mechanisms to re-introduce these boundaries, both from a network and an endpoint governance point of view. Even better, homogeneous platforms such as Kubernetes enable us to re-draw those boundaries over time to match the inevitable changes in the way the business functions, something that was harder, if not impossible, with traditional applications. For more in-depth coverage of this topic, see here.