Beyond Cloud: Public Cloud, yes or no?

November 28, 2018

Costs & cost transparency

Cloud providers list their prices transparently. And the best part: you only pay for the resources you actually use, billed in small increments. Ideal, don't you think?

The low individual prices can be dazzling, though. The sum produced by a provider's total cost calculator is usually good only for a rough prediction: as with any infrastructure sizing, the resources actually required sometimes deviate significantly from the calculated ones, and many details only surface during operation. For distributed applications (and which application isn't distributed today?), network throughput is the decisive scaling factor. A stable 10G network is indispensable, but at Amazon AWS, for example, it is only available for the large (i.e. expensive) instance sizes.
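
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. All rates and traffic figures are purely illustrative assumptions, not actual provider prices; the point is only how quickly operational details dwarf the calculator's estimate.

    # Naive monthly cost model: compute plus network egress.
    # All numbers are illustrative assumptions, not real provider rates.
    HOURS_PER_MONTH = 730

    def monthly_cost(instances: int, hourly_rate: float,
                     egress_gb: float, egress_rate_per_gb: float) -> float:
        compute = instances * hourly_rate * HOURS_PER_MONTH
        network = egress_gb * egress_rate_per_gb
        return compute + network

    # What the calculator suggests before launch ...
    planned = monthly_cost(instances=4, hourly_rate=0.10,
                           egress_gb=500, egress_rate_per_gb=0.09)

    # ... and what operation reveals: more instances for peak load and
    # far more inter-service traffic than anticipated.
    actual = monthly_cost(instances=6, hourly_rate=0.10,
                          egress_gb=4000, egress_rate_per_gb=0.09)

    print(f"planned: ${planned:,.2f}  actual: ${actual:,.2f}")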

DevOps culture

The public cloud is a catalyst for DevOps culture. Every developer has a burning desire to spin up whole server clusters, networking included, and to test their application under live conditions. It couldn't be better!

Before an organization gets there, however, it has to take big steps towards a "DevOps culture". That emphatically does not mean a dedicated DevOps position or even a DevOps department. It means nothing less than tearing down the wall between development and operations, including the insight that operations is not dark magic, but simply software development too (just in Ansible, Puppet or similar).

An organization can fail particularly impressively at the credo "You build it, you run it". The pager migrates to the development team, and after the first pager alarm at three o'clock in the morning, priorities change abruptly. In the worst case, employment contracts and works agreements get studied. In the best case, scaling, the continuous delivery pipeline and the testing pyramid are worked on together with operations.

The knowledge about modern distributed applications lives in the minds of the developers. When the going gets tough, they are the ones who have to pull their application out of the mud. Ops knowledge in software development is therefore extremely important: How do I monitor my application? How do I scale during peak loads?
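
For the first question, a minimal sketch of application-side monitoring with the Python prometheus_client library; the metric names, the simulated handler and the port are illustrative assumptions, not prescriptions.

    # pip install prometheus-client
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["status"])
    LATENCY = Histogram("http_request_duration_seconds", "Request latency in seconds")

    def handle_request() -> None:
        # Stand-in for real request handling, instrumented for scraping.
        with LATENCY.time():                       # records the duration
            time.sleep(random.uniform(0.01, 0.1))  # simulated work
        REQUESTS.labels(status="200").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # metrics on http://localhost:8000/metrics
        while True:
            handle_request()

A Prometheus server scraping this endpoint helps with the second question as well: peak-load metrics are exactly the signals an autoscaler needs.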

But modern development methods are just as indispensable in operations. The goal should be the complete elimination of "snowflakes": those servers that were manually configured under obscure circumstances. SSH login on a server must be elevated to an anti-pattern. Many methods can be adopted from software development along the way (version control, tests, continuous delivery and so forth), because the use of Ansible, Puppet and others is – surprise – mainly software development.
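
To illustrate the "tests" part: with pytest-testinfra, the state that an Ansible or Puppet run is supposed to produce can be asserted like any other software. Host, package and port here are illustrative assumptions.

    # test_webserver.py -- pip install pytest-testinfra
    # Run against a provisioned host, e.g.:
    #   pytest --hosts=ssh://web01 test_webserver.py

    def test_nginx_is_installed(host):
        assert host.package("nginx").is_installed

    def test_nginx_is_running_and_enabled(host):
        nginx = host.service("nginx")
        assert nginx.is_running
        assert nginx.is_enabled

    def test_http_port_is_listening(host):
        assert host.socket("tcp://0.0.0.0:80").is_listening

Such tests live in version control next to the playbooks and run in the same continuous delivery pipeline as the application itself.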

Lock-in effect

It feels like the number and vertical range of Amazon AWS' services double after each re:Invent conference. Specialized services such as Lambda or SageMaker remove superfluous boilerplate code and enable a uniquely quick start into complex topics.

But vertical integration has its price, and not only a monetary one. For lack of open interfaces, applications can then run only on that one provider's service. Cross-cutting aspects such as logging and monitoring (e.g. in Lambda environments) complete the vendor lock-in. With the cloud provider that looked so attractive at first, one may have brought a Microsoft of the 2000s or an IBM/SAP of the 2010s into the house.

Single-Vendor

Quite often, Amazon AWS appears to be a synonym for "public cloud", but other providers offer compute and storage of absolutely comparable performance in their basic services. Those who rely on several cloud providers from the outset not only minimize vendor lock-in, but also increase their reliability enormously.

Beyond Cloud

But why, in my opinion, is the question of the public cloud obsolete? Looking at the development of modern container schedulers, especially Kubernetes, it becomes clear that they are ideally suited to put a bracket around any infrastructure.

Nobody really wants to worry about which host in which data center their application runs on. It should simply run, with the resources it needs and with the redundancy and distribution that enable safe operation. This is exactly what container schedulers offer, as a bonus without any lock-in effect: the scheduler provides a uniform deployment and administration API for my entire infrastructure. Kubernetes runs on any compute hardware, so the data center (public cloud, private cloud or bare metal) merely acts as a CPU, GPU and RAM supplier for my Kubernetes clusters.
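
As a sketch of that uniform API, the official Kubernetes Python client below creates the same deployment regardless of where the cluster's nodes live. The image name, replica count and resource figures are illustrative assumptions.

    # pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()  # works against any cluster in your kubeconfig
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="my-app",
        image="registry.example.com/my-app:1.0",  # illustrative image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "256Mi"},
            limits={"cpu": "1", "memory": "512Mi"},
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="my-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # redundancy and distribution, declared once
            selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

Whether the nodes behind this call are EC2 instances, OpenStack VMs or bare metal machines is invisible to the deployment.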

And by the way: The answer to the opening question is a definitive “yes and no”.
