Introduction to Kuma

December 9, 2019

When service mesh first became mainstream around 2017, a few control planes were released by small and large organizations in order to support the first implementations of this new architectural pattern.

These control planes captured a lot of enthusiasm in the early days, but they all lacked a pragmatic, viable journey to service mesh adoption within existing organizations. The first generation of service meshes was hyper-focused on Kubernetes, hard to use, and built from many moving parts that increase operational costs.

Kuma, which means “bear” in Japanese, is a universal open source control plane for service mesh and microservices. Built on top of Envoy, an open source edge and service proxy designed for cloud native applications, Kuma can run and be operated natively across modern Kubernetes as well as more traditional platforms like virtual machines (VMs) and bare metal. This flexibility allows every team in the organization to adopt it, and for teams transitioning to Kubernetes it makes the journey more reliable by providing security, observability, and identity from day one, as the first step of the transition rather than the last.

Kuma can instrument any L4/L7 traffic to secure, observe, route, and enhance connectivity between any service or database. Developers don’t need to change their application code to use it: natively in Kubernetes via CRDs, or via a RESTful API across other environments.
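As a sketch of what that looks like on Kubernetes, a minimal Mesh resource enabling mutual TLS might resemble the following. This is an illustrative fragment, not an authoritative reference: exact field names vary across Kuma versions, and the built-in CA shown here is an assumption based on Kuma's early releases.

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  # Enable mutual TLS between all data planes in this mesh,
  # issuing certificates from Kuma's built-in certificate authority.
  mtls:
    enabled: true
    ca:
      builtin: {}
```

On VMs or bare metal, an equivalent policy can be applied through kumactl or the control plane's HTTP API instead of kubectl.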

While simple to use for most use cases, Kuma also provides policies to configure the underlying Envoy data planes in a more fine-grained manner. This allows both first-time users of a service mesh and the most experienced ones to use Kuma.
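As a hedged example of such a policy, a TrafficPermission resource restricts which services may talk to each other. The service names below are hypothetical, and the tag syntax has changed across versions (early releases used a plain `service` tag; later ones use `kuma.io/service`):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: web-to-backend
spec:
  # Only allow traffic from the "web" service
  # to reach the "backend" service.
  sources:
    - match:
        service: web
  destinations:
    - match:
        service: backend
```

Because the policy is enforced by the Envoy sidecars, neither service needs any code change to gain this access control.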

The case for Kuma

When building software architectures, developers and architects will use services that communicate with each other by making requests on the network.

For example, think of an application that communicates with a database to store or retrieve data, or think of a more complex microservice-oriented application that makes many requests across different services to execute its operations:

Each time these services interconnect via a network request, the user’s experience is put at risk. Connectivity between different services can be slow, unpredictable, insecure, and hard to trace, and it complicates concerns such as routing, versioning, and canary deployments.

To address these problems, developers typically take one of the following approaches:

Write more code

The developers build a smart client that every service must use, in the form of a library. This approach usually introduces a few problems:

  • It creates more technical debt.
  • It is typically language-specific, which limits technology choices and innovation.
  • When multiple implementations of the library exist, this causes fragmentation.

Build a sidecar proxy

The services delegate all the connectivity and observability concerns to an out-of-process runtime that sits on the execution path of every request. It proxies all outgoing connections and accepts all the incoming ones. With this approach, developers don’t have to worry about connectivity and can focus on delivering business value from their services.

It’s called a sidecar proxy because it’s another process running alongside our service process on the same host, like a motorcycle sidecar. There is one instance of the sidecar proxy for each running instance of a service. Because all the incoming and outgoing requests – and their data – always go through the sidecar proxy, it is also called a data plane (DP).


The sidecar proxy model requires a control plane (CP) that allows a team to configure the behavior of the data planes and keep track of the state of its services. Teams that adopt the sidecar proxy model will either build a control plane from scratch or use existing general-purpose control planes available on the market, such as Kuma.

Unlike a data plane, the control plane is never on the execution path of the requests that the services exchange with each other. It is used to configure the data planes and retrieve data from them (like observability information).

An architecture made of sidecar proxies deployed next to the services (the data planes) and a control plane controlling those data planes is called a service mesh. Service mesh usually appears in the context of Kubernetes, but a service mesh can be built on any platform (including VMs and bare metal).

With Kuma, the main goal is to reduce the code that has to be written and maintained to build reliable architectures. Therefore, Kuma embraces the sidecar proxy model by leveraging Envoy as its sidecar data plane technology.


By outsourcing all the connectivity, security and routing concerns to a sidecar proxy, developers can build applications faster and focus on the core functionality of services to grow their organization’s business and build a more secure and standardized architecture by reducing fragmentation.

By reducing the code that app teams create and maintain, developers can modernize their applications piece by piece.

Enter modernization

Before Kuma, service mesh was considered to be the last step of architecture modernization after transitioning to containers and perhaps to Kubernetes. We believe this philosophy is backwards. Service mesh should be available before implementing other massive transformations so that developers can keep the network both secure and observable in the process. The following diagram illustrates this:

Unlike other control planes, Kuma natively runs across any platform; its scope is not limited to Kubernetes. Kuma works on existing brownfield applications (the apps that deliver business value today) as well as on new and modern greenfield applications that will be used in the future.

By leveraging out-of-the-box policies and Kuma’s tagging selectors, developer teams can implement routing and connectivity behaviors across a variety of topologies, such as multi-cloud and multi-region architectures.
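To sketch how tagging selectors might express such a topology, the following TrafficRoute (shown in Kuma's universal, non-Kubernetes format, as applied with kumactl) shifts most traffic to one region. The service names, the `region` tag, and the weights are all illustrative assumptions; the exact schema differs between Kuma versions:

```yaml
type: TrafficRoute
name: backend-by-region
mesh: default
sources:
  - match:
      service: web
destinations:
  - match:
      service: backend
conf:
  # Weighted split across data planes selected by tags:
  # 90% of traffic to backend instances tagged region=us-east,
  # 10% to those tagged region=us-west.
  - weight: 90
    destination:
      service: backend
      region: us-east
  - weight: 10
    destination:
      service: backend
      region: us-west
```

Because selection happens on tags rather than addresses, the same policy keeps working as instances are added or moved between regions.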

Because Kuma covers services across the entire organization, enterprises can obtain greater value from a service mesh.

The post Introduction to Kuma appeared first on JAXenter.
