Not long ago, most software applications were much simpler, running either as a single process or as a small number of processes distributed across a few servers. Such legacy systems, with their slow release cycles, are still widespread and rarely updated. At the end of each release cycle, developers package up the entire system and hand it over to a group of system administrators for deployment and monitoring. When hardware fails, the administrators manually migrate the system to the remaining healthy servers.
Today, these huge monolithic legacy applications are divided into smaller, stand-alone components called microservices. Since microservices are independent of each other, they can be developed, deployed, updated, and scaled individually. Components can be changed quickly and as often as necessary to keep up with today's rapidly changing business requirements.
The downside of large numbers of deployable components and huge data centers is that they can be challenging to configure, manage, and maintain in ways that guarantee the smooth operation of the entire system. It is even more challenging to arrange each of these components in a way that conserves resources and reduces equipment costs.
Solving these issues manually is very laborious and time-consuming. An automated system that places components on servers, configures them and monitors them while handling emergency failures can save a great deal of time and money. This is where Kubernetes comes into play.
Kubernetes is a technology that allows developers to deploy applications on their own, as often as they want, without support from system administrators.
Kubernetes abstracts the hardware infrastructure and exposes your entire data center as a single huge computing resource. This allows you to deploy and run software components without having to know anything about the actual servers underneath. When deploying a multi-component application, Kubernetes selects a server for each component, deploys the component there, and makes it easy for it to find and communicate with the other components of the application.
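To make this concrete, here is a minimal sketch of a Kubernetes Deployment manifest. Every name and the image reference are illustrative assumptions, not part of any real system; the point is that you declare *how many* copies of a component you want, and Kubernetes decides *where* they run:

```yaml
# Hypothetical Deployment manifest; all names and the image are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3                # run three copies; Kubernetes picks the servers
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: example.com/order-service:1.0   # illustrative image reference
        ports:
        - containerPort: 8080
```

Other components would typically find this one through a Service object by a stable name rather than by server address, which is what makes the placement decision invisible to them.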
Before we begin exploring Kubernetes in detail, let's consider the changes that have taken place in the application development and deployment niche in recent years. These changes were conditioned by both the splitting of large monolithic applications into smaller microservices and transforming the infrastructure that runs these applications. Having insight into these changes will help you better understand the benefits of using Kubernetes and container technologies such as Docker.
Monolithic applications consist of components that are closely related to each other and must be developed, deployed, and managed as a single entity, since they all run as a single OS process. Changes made in one part of the application require changes to the entire application. Over time, the absence of rigid boundaries between the app's individual parts leads to increased complexity and subsequent deterioration in the quality of the entire system, due to unlimited growth in the relationships between these parts.
Running a monolithic application usually requires a small number of powerful servers capable of providing the necessary resources. To cope with a growing load on the system, you must either scale the servers vertically (scale up) by adding more processors, RAM, and other server components, or scale the whole system horizontally (scale out) by setting up additional servers and running several copies of the application.

Although scaling up usually requires no changes to the application, it quickly becomes expensive and always has an upper limit. Scaling out, on the other hand, uses relatively cheap hardware, but it may require significant changes to the application's code, which is not always possible.
Some application parts are difficult to scale horizontally or are totally unsuitable for this (for example, relational databases). If any part of the monolithic application does not scale, then the entire application does not scale, unless this monolith is divided.
These and other problems forced tech specialists to start breaking up complex monolithic applications into small independently deployed components called microservices.
Each microservice runs as an independent process and interacts with other microservices through simple, well-defined interfaces (APIs).
Microservices interact via synchronous protocols, such as HTTP, or via asynchronous protocols, such as AMQP (Advanced Message Queueing Protocol). These protocols are simple, well known to most developers, and not tied to any particular programming language. Each microservice can be written in whatever language is most suitable for the implementation of specific tasks.
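The synchronous case can be sketched in a few lines. The following self-contained example, using only the Python standard library, stands up a toy "inventory" microservice with one JSON endpoint and then calls it over HTTP the way a sibling service would. The service name, endpoint path, and payload are illustrative assumptions, not part of any real system:

```python
# Minimal sketch of synchronous microservice communication over HTTP.
# Service name, endpoint, and payload are hypothetical examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    """A toy "inventory" microservice exposing one JSON endpoint."""
    def do_GET(self):
        if self.path == "/stock":
            body = json.dumps({"item": "widget", "count": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 lets the OS pick a free port, so the sketch runs anywhere.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service (the same process here, for brevity) calls the API.
# The caller depends only on the HTTP contract, not on the other
# service's language, libraries, or location.
with urlopen(f"http://127.0.0.1:{port}/stock") as resp:
    stock = json.load(resp)

server.shutdown()
```

In a real deployment the two sides would be separate processes on separate machines, and the caller would reach the service through a well-known name instead of a hard-coded address.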
Since each microservice is an autonomous process with a relatively static external API, it is possible to develop and deploy each microservice separately. Changing it does not require changing or redeploying any other service, provided the API does not change, or changes only in a backward-compatible manner. Scaling is performed separately for each microservice, unlike monolithic systems that require scaling of the entire system.
As is often the case with any technology, microservices have their drawbacks. If the system is made up of only a small number of deployable components, it is easy to manage them. The question of where to deploy each component is solved quickly because there are not many options to choose from.
As the number of components grows, deployment decisions become more challenging, because not only does the number of possible deployment combinations increase, but so does the number of dependencies between components.
Microservices do their job together as a team, so they need to find each other and communicate. When they are deployed, someone or something must configure them all so that they operate as a single coherent system. As the number of microservices increases, this configuration becomes tedious and error-prone, especially since it falls to the system administrators to reconfigure everything whenever a server fails.
Microservices also create other problems. For example, they complicate debugging and tracing of the call execution, since they span multiple processes and machines.
Fortunately, these problems can now be addressed by distributed tracing systems. Components in a microservice architecture are not only deployed but also developed separately. Because they are independent, and because each component is typically developed by a separate team, nothing prevents each team from using different libraries and replacing them whenever necessary. Discrepancies between the library versions that application components require are then inevitable, since different components end up needing different versions of the same library.
Deploying dynamically linked applications that require different versions of shared libraries and/or other environmental features can quickly become a nightmare for the system administrators who deploy and manage these applications on production servers. The more components you need to deploy on a single host, the trickier it is to manage their dependencies in order to satisfy all their requirements.
No matter how many individual components you develop and deploy, one of the biggest problems developers and system administrators face is that applications run in different environments. These differences exist not only between the development environment and the production environment, but also between individual machines in production. It is also inevitable that the environment of any given production machine will change over time.
Differences can potentially emerge in any layer of the system, from the hardware through the operating system to the libraries available on each machine. Production environments are managed by system administrators, while developers manage their own development machines.
The two groups also know different things about system administration, so the two environments inevitably diverge, not to mention that system administrators pay close attention to the latest security updates, while developers, as a rule, do not give security the same attention.
In addition, a production system runs applications from several developers or development teams, which is not necessarily true of a developer's own machine. A production system must provide a suitable environment for every application it hosts, even when those applications require different, sometimes conflicting, versions of the same libraries.
To reduce the number of problems that surface only in production, applications should ideally run in exactly the same environment during development and in production: the same operating system, libraries, system configuration, network environment, and so on. It is also desirable that this environment not change over time. And the ability to add applications to a server without affecting the applications already running there is a further plus.
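Container images are the standard way to get this sameness. As a minimal, hypothetical sketch (the file names and base image tag are illustrative, not prescriptive), a Dockerfile pins the operating system userland, the language runtime, and the library versions into one immutable artifact that runs identically on a developer's laptop and on a production server:

```dockerfile
# Hypothetical example; file names and base image tag are illustrative.
FROM python:3.12-slim          # fixes the OS userland and interpreter version
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # pins library versions
COPY . .
CMD ["python", "app.py"]
```

Because each application ships with its own dependencies inside its image, two applications that need conflicting library versions can share the same server without interfering with each other.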
Over the past few years, we have observed changes in the entire application development process, and in how applications are served in the work environment. In the past, the job of the development team was to create an application and transfer it to a group of system administrators, who then deployed, monitored, and maintained it.
However, today's organizations understand that it is best to have the same team that develops an application also take part in its deployment and accompany it throughout its life cycle.
For optimal application performance, development teams, QA, and system administrators should closely collaborate throughout the entire process. This practice is called DevOps, and Kubernetes was designed to be extremely helpful here.