DevOps in an immutable world

January 2, 2019

Gartner predicts that by 2020, 50 percent of CIOs who have not transformed their IT capabilities will be displaced from the digital leadership team within their organizations. How is that for a reality check? If you are a leader in an IT organization at the epicenter of a digital transformation, that prediction probably rings true.

Now, if you are an IT leader in a cloud-native organization, and containers are the building blocks upon which your app teams build software, you probably don’t identify with messy handoffs between developers and operators, and this transformation probably does not seem quite so existential. Regardless, we are here to examine how IT leaders can leverage today’s technology trends as well as tried and true lessons learned from traditional software delivery to transform application software delivery in their organizations.

Since its emergence, the container-based technology ecosystem has demonstrated that it can accelerate DevOps transformations beyond what organizations achieve on traditional infrastructure. To validate this, it’s important to understand what it’s like to practice DevOps in the context of traditional infrastructure, examine its limitations, and evaluate what the advent of the container ecosystem (particularly the concept of immutability) does to those dynamics.

First, the ‘what’ and ‘why’ of DevOps: DevOps is a well-understood set of practices and cultural values that has been proven to help organizations of all sizes improve their software release cycles, software quality, security, and ability to get rapid feedback on product development. But why are organizations jumping on this bandwagon? Businesses see faster time to value, a sharper competitive edge and, in some cases, sheer survival. Developers see accelerated feature delivery and a great way to get rapid feedback from their customers. Ops teams gain better stability and more time to do what they love. And finally, quality teams can progress toward better automation with fewer bugs. Although the incentives of these various constituents are quite different, it is critical to take a multi-faceted approach and ensure that those incentives are all addressed in parallel.

With that said, IT organizations going through a DevOps transformation have a few things in common:

  • Your world is not all greenfield. Much of it is brownfield. Maybe even a little scorched. You have one of everything. The technology trends you read and hear about can feel out of touch with your reality. The pace of upgrades to the new stuff feels glacially slow, and while you have had pockets of success, it is hard to replicate them across the organization amidst so much explosive change.
  • What complicates your day to day is the integration or the seams between the old and the new. You need to balance conflicting constraints—compliance and regulation on the one hand, and competitive pressure on the other. Demonstrating ROI for much of your engineering investment is not easy.
  • Your organization is perpetually in pursuit of agility and the consequences of falling behind are very real. As part of your role, you must demonstrate productivity gains across the digital transformation process.

Here’s what the traditional software delivery workflow has generally looked like over the past decade:

  • Two different teams are responsible for different layers of the stack: operations teams own the operating system image and development teams own the application artifacts.
  • Application artifacts and their dependencies are delivered from development to operations using OS packaging constructs such as RPM or DEB packages.
  • The Ops team then deploys those artifacts onto OS images that meet the organization’s policies and include additional monitoring and logging software. The composite image is then run in production.
  • The Dev team evolves the application by handing new packages to Ops, and Ops deploys those updates and any other changes (such as patches that address operating system vulnerabilities) using scripts or configuration management software, as in the sketch below.
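
To make the Ops side of that handoff concrete, here is a minimal sketch of such a deployment script in Python, using the Fabric library; the host, package, and service names are hypothetical, and a real script would add error handling and verification.

from fabric import Connection

def deploy(host, package, service):
    conn = Connection(host)
    # Install the artifact Dev handed over as an OS package (hypothetical names).
    conn.sudo(f"yum install -y {package}")
    # Apply pending OS security patches in the same maintenance window.
    conn.sudo("yum update -y --security")
    # Restart the service so the new code and patches take effect.
    conn.sudo(f"systemctl restart {service}")

deploy("app01.internal", "myapp-1.2.3", "myapp")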

In this traditional infrastructure world, successful companies use the concept of infrastructure as code to gain speed, simplicity and configuration consistency in their deployments. This leads to lower risk, fewer failures and lower operating costs over time. Another cornerstone of agility in this world is the use of canary deployments on otherwise fairly static infrastructure: exposing a small slice of traffic to a new release yields fast feedback before full confidence in it has been established. Finally, managing infrastructure code in CI/CD also helps with fast feedback, giving operators more velocity in deploying changes at high scale with a low failure rate, along with systematic control over change. Imagine the ease of doing a pull request review in your pajamas at home as opposed to attending a change control board meeting!
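
To illustrate the canary decision itself, here is a minimal Python sketch; fetch_error_rate is a hypothetical stand-in for a query against your monitoring system, and the tolerance value is an arbitrary assumption.

def fetch_error_rate(deployment):
    """Hypothetical stub: return the fraction of failed requests for a deployment."""
    raise NotImplementedError("query your monitoring system here")

def canary_verdict(baseline, canary, tolerance=0.01):
    # Promote only if the canary is no worse than the baseline plus a tolerance.
    if fetch_error_rate(canary) <= fetch_error_rate(baseline) + tolerance:
        return "promote"
    return "rollback"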

I have observed companies make tangible changes to their software delivery cadence. For example, companies that develop software for other enterprises may feel blocked because compliance regulations slow the pace at which their customers can upgrade. But successful companies don’t let this stop them from examining their software change lifecycle, automating their tests, reducing their release hardening period, and steadily shrinking their release cycle from months to weeks to days, and in some cases, hours. Even if their customers’ pace of adoption is slow, teams can put their software in front of beta customers and internal users to get feedback. A bias toward working in small batches gives organizations the invaluable ability to make progress on burning down technical debt while also chipping away at business-driving features. Like I always tell my kids: “stop starting, and start finishing!”

Changes in processes and delivery cadence also need to be actively complemented by deliberate changes in organizational behavior: making and keeping commitments, improving safety so that people can take more risks and are able to make mistakes, truly practicing continuous improvement rather than paying lip service to it, and finally, building a better shared understanding of what it means to be agile. In my experience, an agile mindset trumps agile practices any day, and agile practices are far more important than agile tools.

With that said, traditional infrastructure will only take us so far on this journey. Even the most adaptable organizations are faced with limitations:

  • The pace of iteration is still slow.
  • The available platform choices are not great, and most still involve a messy handoff between Dev and Ops.
  • Infrastructure design choices continue to proliferate between teams and projects. For example, every project has a bespoke solution for high availability; monitoring, metrics collection and alerting tend to differ between projects.
  • Infrastructure tends to be tightly coupled so people become afraid to make changes, which is the antithesis of Agile.
  • Keeping dev, staging, and production environments exactly the same is expensive and time-consuming, but essential for testing.
  • Design decisions by software engineers can increase downstream operational complexity, and operators can’t do anything about it. Being agile is impossible in this scenario because no feedback loop exists.

This is where the container ecosystem comes into play. It provides a portable and consistent environment for the development, testing, and delivery of an application. Container technology offers the ability to separate applications from infrastructure, similar to how VMs separate the OS from bare metal. The term ‘ecosystem’ includes not only the images and containers themselves but also an underlying container scheduler, a container registry and a host of related tools and services. To get a sense of how fast this space has evolved, all you have to do is look at the members of the Cloud Native Computing Foundation (CNCF). [1]
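
As a minimal sketch of that portability and consistency, here is what running a single immutable image looks like with the Docker SDK for Python; the image name and port mapping are hypothetical.

import docker

client = docker.from_env()
# The same immutable image runs identically on a laptop, a CI runner,
# or a production host, wherever a container runtime exists.
container = client.containers.run(
    "registry.example.com/myapp:1.0.0",
    detach=True,
    ports={"8080/tcp": 8080},
)
print(container.status)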

To overcome the limitations of traditional infrastructure, developers can package all dependencies with the application itself. Applications are isolated from each other by default. The runtime environment cannot differ between the developer’s workstation, staging, and production, because the same container image is promoted between those environments. This immutable nature of container artifacts gives developers a new level of independence from the underlying OS. No more waiting for an OS upgrade to deploy software! In addition, organizations start to see similar deployment workflows across teams even when languages and projects differ, helping engineers move across teams and learn new skills while helping Ops teams develop standard rather than one-off tools.
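
Here is a minimal sketch of that “build once, promote everywhere” idea with the Docker SDK for Python; the registry, repository, and tag names are invented for the example.

import docker

client = docker.from_env()
# Build the image exactly once from the application's Dockerfile.
image, _ = client.images.build(path=".", tag="myapp:1.0.0")

# Promote the *same* artifact through environments by re-tagging and pushing;
# nothing is ever rebuilt, so staging and production cannot drift apart.
for env in ("staging", "production"):
    image.tag("registry.example.com/myapp", tag=f"1.0.0-{env}")
    client.images.push("registry.example.com/myapp", tag=f"1.0.0-{env}")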

Further, container schedulers have standardized metrics collection and monitoring. The industry has also seen standardization of service restart capabilities, since it is now much easier to take misbehaving hardware out of commission. Operators gain improved portability across hardware and cloud providers. If your company is using a platform like Kubernetes for automating deployment, scaling, and management, you know the value of having standardized abstractions across cloud providers, whether they are public or private.
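
As one small illustration, the official Kubernetes Python client issues the same call against any conformant cluster, whichever provider runs underneath; the deployment name, namespace, and replica count below are assumptions made up for the example.

from kubernetes import client, config

config.load_kube_config()   # read the current kubeconfig context
apps = client.AppsV1Api()

# Scale a Deployment by patching its desired replica count; the call is
# identical on any conformant cluster, public cloud or private.
apps.patch_namespaced_deployment_scale(
    name="myapp",
    namespace="default",
    body={"spec": {"replicas": 5}},
)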

In terms of organizational impact, working in the container ecosystem undeniably promotes agile behavior. Developers are held more accountable for the operational impact of the engineering decisions they make in their software, which drives better feedback. There is improved separation of concerns between the Dev and Ops roles. For example, Kubernetes provides an abstract representation of infrastructure to software developers and a clearly modeled set of application requirements to technical operations. This reduces how much those groups need to understand about each other’s domains and how much they need to coordinate: they cooperate across a well-defined API boundary rather than in meetings. Adding new capabilities and services can be difficult in traditional organizations, which often end up building giant, monolithic applications. Containerization encourages smaller, more maintainable services by lowering the barriers to provisioning new services, increasing the pace of innovation. Besides, the immutable nature of containers, i.e., the ease with which the ecosystem supports building up and tearing down artifacts in response to environment and configuration changes, means operators are no longer simply at the receiving end of developers’ actions.
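
To make that API boundary concrete, here is a hedged sketch, again with the Kubernetes Python client, of the contract a developer declares and operations fulfills; all names and resource figures are invented for the example.

from kubernetes import client

# The developer declares what the application needs (hypothetical figures)...
container = client.V1Container(
    name="myapp",
    image="registry.example.com/myapp:1.0.0",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)
# ...and operations provides a cluster that satisfies the declaration; the
# Deployment object is the whole conversation between the two roles.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)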

Some conclusions

Enterprises poised for a transformation from traditional infrastructure-based software delivery to container-based software delivery can take away a few lessons from these experiences. On the technology front, it is wise to start by containerizing a small set of less-critical in-house applications. Make sure that your developers learn to use Kubernetes or your container orchestration system of choice end-to-end, but don’t make them learn how to set it up. Using off-the-shelf tools can limit distractions and improve time to market for the applications. Finally, it is worthwhile to consider CI/CD tools that are opinionated about how to deploy into Kubernetes or a similar platform, so you don’t have to solve that on your own.

It is imperative that IT leadership continues to reinforce the tried and true principles of agile behavior. At the team level, this boils down to three things: always work on the most important thing, focus on continuous delivery of value, and be ruthless about eliminating waste. As leaders, you have an obligation to demonstrate to your teams that you are in this for real, that it is not mere lip service or theater. Ensure that the objectives of the transformation are aligned with the rest of the business; be deliberate and realistic about your organization’s strengths and needs; pay close attention to how people get work done and to the overall health of your teams; and be willing to change anything that’s not working. Teams need to see demonstrable proof that there is leadership sponsorship for change.

Finally, always keep learning. The immutability, portability and predictability offered by the container ecosystem can truly accelerate an organization’s DevOps transformation, revolutionize software delivery, and drive business outcomes— as long as the organization is in a constant state of introspection and learning. In the words of William Edwards Deming, an American engineer, statistician, professor, author and management consultant, learning is not compulsory, but neither is survival.
