Friends or Foes?

More and more companies are not only using cloud services (software as a service), but also modernizing their own IT and existing applications (container as a service or platform as a service). This is hardly surprising, as the cloud offers many advantages. Applications run more stably in a cloud (self-healing), scale better (automated horizontal scaling) and reach the market faster thanks to the Dev(Sec)Ops approach. De facto open source standards like Docker, Kubernetes or Cloud Foundry also make applications portable, which is especially important in hybrid or multi-cloud infrastructures. Cloud-native apps are developed for this using the twelve-factor method. The range of architecture approaches (monolith, microservices, twelve-factor apps) results in a multitude of requirements for a cloud platform.

According to a Bitkom survey in 2017, 51 percent of companies were already using a private cloud, whereas only 31 percent of companies used a public cloud. This was to be expected after the adoption of the GDPR.

This is reason enough to contrast and compare Kubernetes and Cloud Foundry. What does the future hold for these two? This article focuses on a private cloud running in its own data centre. Do companies have to decide between these two, or is it better to use both? What would a combination of both solutions look like?

Fig. 1: Deployment in Kubernetes

Kubernetes

There are many configuration options in Kubernetes (via Kubernetes resources). The Open API specification of the current IBM Cloud Private Kubernetes installation (ICP v3.1.2, K8s v1.12.4) is over 75,000 lines long. A high level of training is required to master this universe. In return, the user is rewarded with a platform that can be fine-tuned to their own needs.

Kubernetes offers complex cluster topologies. Worker nodes (on which the containers are deployed) can run on different VMs with different hardware, software and sizing. For example, in addition to Linux worker nodes, it is also possible to use Windows worker nodes or different architectures (e.g. x86_64 and ppc64le or even System z) within a Kubernetes cluster. The deployment can influence which pods run on which worker nodes. Kubernetes also offers a variety of deployment strategies (rolling update, canary, blue-green, …) with the help of service meshes such as Istio or Linkerd.

SEE ALSO: What is Kubernetes and how does it relate to Docker?

Kubernetes consists of a master node (including the API server), proxy nodes and worker nodes. A minimal installation with few worker nodes needs about six VMs, an HA installation about nine. It is up to the operator whether the nodes are installed inside VMs or on bare metal. The operating systems of the nodes must be actively kept up to date; operators can continue to use their established management tools for the Kubernetes VMs. It is recommended to automate the creation of the VMs, for which Terraform, among other tools, can help.

Deployment in Kubernetes is comparatively complex. A simple application requires at least four resources (pod, deployment, service, and ReplicaSet). The pods marked green in Figure 1 start (Docker) images. The developer must therefore be aware that the Docker image has to be created during the build process, and must deal with the more complex deployment process and the required Kubernetes resources. In return, Kubernetes automatically ensures that the instances are distributed across several worker nodes where possible. While the deployment defines, via its ReplicaSet, among other things the number of instances, the service determines the visibility of the application.
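A minimal sketch of such a deployment could look like the following (the application name, image and registry are invented for illustration). The Deployment generates the ReplicaSet and pods; the Service finds the pods via their label:

```yaml
# Illustrative example -- "hello-app" and the image reference are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                  # Kubernetes spreads these across worker nodes
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app         # the Service below selects pods by this label
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
```

Applied with `kubectl apply -f`, this single file already covers the resources mentioned above, since the pods and the ReplicaSet are created by the Deployment.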

Kubernetes lets organisations subdivide projects into namespaces. Most Kubernetes resources are associated with namespaces. Role-based access control (RBAC) grants granular rights at the namespace level or cluster-wide. Another grouping option is labels that can be filtered. For example, a service includes all deployments with a specific label. In this way, several deployments can be managed behind a service if, for example, a new version is to be rolled out. Cluster-internal access to pods can also be configured via labels.
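As an illustrative sketch of namespace-level RBAC, the following Role and RoleBinding (the namespace, role and user names are made up) would grant one user the right to manage deployments in a single namespace:

```yaml
# Hypothetical example: team namespace "team-a", user "alice".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: deployer-binding
subjects:
- kind: User
  name: alice                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding follow the same pattern for cluster-wide rights.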

The variety of configuration options also allows you to install more complex, stateful applications. Operators, which monitor the state of the pods and intervene if necessary, can be helpful here. In addition to operators, StatefulSets (the counterpart to the deployment) also help operate stateful applications. Another special feature is the DaemonSet, with which background processes are deployed once to every single node. Kubernetes thus offers many ways to install applications and is not limited to twelve-factor apps. In combination with Helm Charts (a way to deliver all the necessary Kubernetes resources of a piece of software as a configurable ‘package’), this makes Kubernetes the next operating platform, on which third-party applications can also be installed. Kubernetes is the de facto standard for container operation.
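For example, installing a third-party application from a Helm Chart can come down to two commands (Helm 3 syntax; the Bitnami repository and PostgreSQL chart shown here are just one possible example, and the release name and value are invented):

```shell
# Register a chart repository, then install a configured release from it.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql --set persistence.size=10Gi
```

Everything the application needs (deployments, services, volumes, RBAC, etc.) is created from the chart's templates in one step.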

Of course, this section only gives you a glimpse of Kubernetes. Nonetheless, it points out the possibilities that Kubernetes offers and how it can be configured to suit in-house data centres and organisational structures.

Table 1: Comparison of Kubernetes and Cloud Foundry

Cloud Foundry

Cloud Foundry takes a different approach. Onsi Fakhouri (Senior Vice President, R&D for Cloud at Pivotal) said at the Cloud Foundry Summit 2015: ‘Here is my code. Run it on the cloud for me. I do not care how.’ Cloud Foundry targets the developer experience: developers should focus on developing and be spared dealing with middleware and infrastructure. As much as possible is hidden in a black box or automated in the platform. This brings a fast time-to-market, as developers do not have to learn many new things. Not so long ago, at a Cloud Foundry workshop, attendees acknowledged this by stating that they no longer wanted to practise with Cloud Foundry, but preferred to talk about transformation and DevOps culture. The reason: a short demo of the small, clear set of commands was already sufficient for understanding.

During deployment, the developer does not have to create a Docker image. So-called droplets are automatically created from the applications via buildpacks; this process is hidden inside the platform. If a buildpack suitable for the application exists in the environment, a simple cf push is enough and the application is deployed in Cloud Foundry. If there is no buildpack, you can either develop one yourself or fall back on Docker containers.
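A sketch of what such a deployment can look like, assuming a hypothetical manifest.yml (application name, memory, instance count and buildpack are illustrative):

```yaml
# Hypothetical manifest.yml placed next to the application code.
---
applications:
- name: my-app            # invented application name
  memory: 512M
  instances: 2
  buildpacks:
  - java_buildpack        # must be available in the target environment
```

With this file in place, running a plain `cf push` in the application directory stages the app through the buildpack and starts the requested instances.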

SEE ALSO: Cloud Foundry report: Serverless computing and container technologies are in full swing

CF also provides an intuitive concept for organisational mapping of applications to organisations and DevOps teams. An organisation consists of spaces. In spaces, applications are deployed and connected to (infrastructure) services. DevOps teams can be assigned appropriate permissions for the administration of the organisation/spaces/quotas and deployments.
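On the command line, this mapping is only a handful of standard cf CLI commands (the organisation, space and user names here are invented):

```shell
# Create an organisation with a space, target it, and grant a developer access.
cf create-org team-payments
cf create-space dev -o team-payments
cf target -o team-payments -s dev
cf set-space-role alice team-payments dev SpaceDeveloper
```

Quotas can then be attached to the organisation or space to cap memory and service usage per team.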

Cloud Foundry also wants to offer a similarly intuitive and automated experience for operations. BOSH exists for exactly this purpose: it ensures that the roughly 30 VMs of a Cloud Foundry installation are provisioned and kept running. If a VM stops behaving as expected, it is restarted or replaced. This simplifies day-to-day operation, but the black-box approach also makes monitoring harder. Operation has further challenges as well: operating system updates, for example, cannot be applied without a new Cloud Foundry version, and BOSH cannot resolve hard drive issues on its own. As usual, such an approach not only offers advantages, but also brings new challenges with it.

Cloud Foundry works well as long as you stick to the conventions. The apps must have been developed to be cloud-native (for example, according to the twelve-factor approach).

If an intervention in the VMs is required, you have to familiarize yourself with BOSH. At this point, if not earlier, there is a greater training effort.

Interim conclusion

What is meant by stateful/stateless?

For stateless applications, the state is stored in a backing service (database, volumes, etc.). The application itself does not store any state. As a result, all application instances share the same state and are interchangeable. For the client it does not matter which instance executes its request, because the state is the same for all instances. This makes it easy to add new instances as load increases or to remove them as it decreases.

Fig. 2: Example of a stateless application

Stateful applications, on the other hand, have their own separate state per instance. Databases that are replicated across multiple instances for fault tolerance are a typical example: each instance is meant to hold its own copy of the state. Kubernetes supports this type of application, but Cloud Foundry does not. In addition, Kubernetes guarantees a start-up order via StatefulSets and allows specific instances to be addressed.
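As a sketch, a replicated database could be described with a StatefulSet like the following (image, names and sizes are illustrative). Each replica gets a stable identity (db-0, db-1, …) and its own volume:

```yaml
# Illustrative StatefulSet; "db" and the storage size are invented.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless service giving pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:11
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one persistent volume per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Unlike a Deployment, scaling up or down here happens in order, and db-1 keeps its volume and DNS name across restarts.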

Fig. 3: Example of a stateful application

Fig. 4: Example interaction Cloud Foundry with Kubernetes via an Open Service Broker

Both Kubernetes and Cloud Foundry have their strengths. While Kubernetes shines as an operating platform, Cloud Foundry excites developers with its ease of use. If you want to bring new cloud-native applications to the cloud with as little platform knowledge as possible, Cloud Foundry certainly plays to its strengths. If you want to bring existing, stateful applications to the cloud first and modernise them to be cloud-native later, there is no way around Kubernetes. Ultimately, it is also a question of goals: how far established processes can be changed, and to what extent the overall conditions allow this. Table 1 compares both platforms.

Open Service Broker

DevOps processes and culture enable software developers to release new versions quickly and frequently. For a CI/CD pipeline, this applies above all to the packaging of the applications, the configuration and the deployment. Infrastructure services like databases are often not included. Many companies share the same starting point: there is a large database on the host that is used by all applications, and a database admin team takes care of its operation. Changes are sent to the DBAs, who then apply them. Depending on the organisation and availability, this process may take weeks. This bottleneck is an obvious starting point for automation.

For this, Cloud Foundry introduced the service broker. Following the self-service approach, an application team can pick the service it requires from a marketplace. As a reminder: the speed of development in cloud environments arises mainly from the ‘as a service’ approaches and the automation they make possible.

Service brokers can be created relatively easily in the form of a simple REST API. Ready-made open source service brokers already exist for most common subsystems. Examples of services range from persistence services (databases, messaging, volumes) to configuration services (application settings, secret management, certificates) to other actions that can be automated (OCR services, virus scanning, etc.). Note, however, that the services may also need to be equipped with backups, high availability, support and monitoring, especially for data that has to be kept for several decades.
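To illustrate how small that API surface is, here is a sketch of the three core broker responses as plain Python data structures. The service names, GUIDs and URLs are invented; a real broker serves these payloads via the REST endpoints defined in the Open Service Broker specification:

```python
import json

def get_catalog():
    """GET /v2/catalog -- advertise the services this broker offers."""
    return {
        "services": [{
            "id": "b2e9f1c0-0000-0000-0000-000000000001",  # hypothetical GUID
            "name": "example-postgres",                     # invented service name
            "description": "PostgreSQL database (illustrative)",
            "bindable": True,
            "plans": [{
                "id": "b2e9f1c0-0000-0000-0000-000000000002",
                "name": "small",
                "description": "Single instance, 1 GB storage",
            }],
        }]
    }

def provision(instance_id, plan_id):
    """PUT /v2/service_instances/{id} -- create the backing resource."""
    # A real broker would create the database here, e.g. in Kubernetes.
    return {"dashboard_url": f"https://broker.example.com/instances/{instance_id}"}

def bind(instance_id, binding_id):
    """PUT /v2/service_instances/{id}/service_bindings/{bid} -- hand out credentials."""
    # The credentials block is what the application receives through the binding.
    return {
        "credentials": {
            "uri": f"postgres://user:secret@db.example.com:5432/{instance_id}",
            "username": "user",
            "password": "secret",
        }
    }

if __name__ == "__main__":
    print(json.dumps(get_catalog(), indent=2))
```

Anything that can answer these requests, from a full database operator to a thin wrapper around an existing provisioning script, can appear in the marketplace.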

A service is created using a simple command. Necessary information, such as the URL, username and password for a database, is communicated to the application via a binding. This is how the as-a-service philosophy is put into practice, and it promotes a culture of offering things as automated services. Forms, tickets and long waiting times (until the requirements are met) are a thing of the past. The concept has proven itself in Cloud Foundry; the Open Service Broker is now maintained as a specification within the Cloud Native Computing Foundation.
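On the Cloud Foundry side, the whole workflow is a handful of standard commands (the service, plan and application names here are illustrative):

```shell
# Browse the marketplace, provision an instance, and bind it to an app.
cf marketplace
cf create-service example-postgres small my-db
cf bind-service my-app my-db
cf restage my-app        # pick up the new binding's credentials
```

After the restage, the application finds the database credentials in its environment and can connect without anyone having filed a ticket.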

It is therefore not surprising that this specification has also found its way into the Kubernetes world. Now the strengths of both platforms can come into play: stateful services can be run in Kubernetes and offered in the Cloud Foundry marketplace. Through the service broker, Cloud Foundry apps can provision and use services in Kubernetes. In Figure 4, a Cloud Foundry application creates a PostgreSQL database on Kubernetes via the service broker and then receives, via the binding, the information needed to access it.

Eirini and Quarks

In addition to the Open Service Broker, there are two other projects that demonstrate how Kubernetes and Cloud Foundry work well together. The project Eirini lets you replace the Cloud Foundry orchestrator Diego/Garden with Kubernetes. This is made possible by the Orchestrator Provider Interface (OPI). An abstraction layer is placed in front of the Diego engine so that the orchestrator can be exchanged as required. The developer continues to deploy the applications through Cloud Foundry.

However, the application is not operated as a Cloud Foundry droplet, but as a Docker container in Kubernetes. This simplifies monitoring, since everything can be operated uniformly in Kubernetes. In addition, solutions that were previously only available to Kubernetes can now be used for Cloud Foundry applications. For example, at the time of writing, the Istio service mesh is not yet ready for production use in Cloud Foundry.

The Quarks project (formerly CF Containerization) makes it possible to operate Cloud Foundry completely in Kubernetes. Docker images are created from the BOSH releases, which can be installed via Helm Charts. Kubernetes then takes care of the scaling. This also saves memory as the VMs are replaced by containers. Quarks works via operators, which monitor the state. However, the applications themselves continue to be deployed as Cloud Foundry droplets. Cloud Foundry remains as it is, just not based on VMs, but in containers run in Kubernetes.

So while Eirini runs the applications in Kubernetes, Quarks enables Cloud Foundry itself to run on Kubernetes. If both projects are combined, Cloud Foundry keeps its developer experience, while operators can use the same tools for monitoring, for example, rather than having to get familiar with BOSH. Both are Cloud Foundry Incubator projects nearing completion, and the goal of combining the best of both worlds is obvious.

Conclusion

What are the results of comparing Kubernetes and Cloud Foundry? We noted that Kubernetes can prove convincing as an operating platform. For the future, it can be assumed that third-party software will increasingly be provided as a Helm Chart for Kubernetes operation, as it has become the de facto standard for this. This is mainly because of the possibility to deploy stateful applications on Kubernetes and the option to configure Kubernetes according to your own ideas and requirements.

SEE ALSO: Automated build and deployment of Docker containerized OSGi applications on Kubernetes

Furthermore, it is easier for software producers to extend Kubernetes, as operators, among other things, show. Cloud Foundry, on the other hand, has focused on the developer experience and DevOps transformations. It lets DevOps teams prioritise development work, handle operational concerns such as deployment, and connect to environment-specific services, all without a steep learning curve. Both approaches involve learning curves (Dev, DevOps, platform operation) that differ in complexity. Not to be underestimated are the company processes and structures that have to be rethought in light of cloud computing, faster release cycles and new responsibilities in order to realise the potential of the new technologies.

It has been shown that the Open Service Broker can act as glue between the two platforms. The interaction between Kubernetes and Cloud Foundry becomes very clear here, as cloud-native apps installed in Cloud Foundry can access backing services deployed in Kubernetes. This is entirely in the spirit of DevOps and automation. Just as service brokers relieve the DevOps teams, the Eirini and Quarks projects offer benefits for operations. The two projects clearly demonstrate that both platforms can work well together or even run on top of each other, and both will soon be ready for production. It turns out that choosing the right cloud platform does not have to be an ‘either-or’ decision ‒ you can choose what you need. We have already gained experience with both platforms and we are certain: ‘Kubernetes and Cloud Foundry are friends, not foes!’

The post Friends or Foes? appeared first on JAXenter.
