Continuous Delivery Expert Check 2020 – CI/CD, security & rolling deployment (Part 1)


JAXenter: Continuous delivery is a key component of every DevOps endeavor. Why do you think it plays such an important role?

Today most businesses need to figure out how to deliver new features faster, while making sure services are solid and secure.

Tracy Miranda: A friend shared recently that he was changing banks because the one he used made it extremely difficult to make money transfers for his business online. This to me was a living example of how software has become a major differentiator for most industries. Today most businesses need to figure out how to deliver new features faster, while making sure services are solid and secure – and that is what continuous delivery is all about and why it is so critical today.

Priyanka Sharma: Every company is now a software company. And the most important determinant of winners and losers in tech is the speed of software delivery and iteration. By utilizing continuous delivery, engineering teams can tee up features for production and then push them live with a metaphorical click of a button. Based on my experience speaking to GitLab customers, and GitLab’s own engineering practices, folks can go from deploying once in weeks or months to daily with continuous delivery.

Clark Boylan: Continuous delivery ensures that your software is always releasable. This allows you to make many small releases that are easier to consume, rather than large, infrequent releases that require heavy-lift deployments. This makes it easier to discover what broke, and where, when necessary. The end result is reliability and consistency.

Baruch Sadogursky: There is a simple answer and a more interesting one. The simple answer is that when we talk about removing obstacles, improving integration within the team, and removing silos, it is all about moving faster. One of the key aspects of moving faster is removing manual work and automating everything, and continuous delivery is a key component of the automation of software delivery. The more interesting aspect, I would say, is that it contributes to DevOps in the way that it helps provide value. The end business goal of DevOps is to provide value, and continuous delivery improves two aspects of it.

SEE ALSO: How to keep your network secure & Agile during COVID-19

First of all, it helps concentrate on the value itself, on new work, and doing what matters to customers. Eliminating manual and tedious work minimizes errors, and improves quality. At the end of the day, people spend less time on work that doesn’t provide value. The other perspective of continuous delivery is that it provides value fast. Today, in our very demanding market, the organization that provides value faster than its competitors wins. This is why continuous delivery helps provide better results.

It is also important for improving security, because a key factor in mitigating security breaches is how fast you can deliver a patch to your systems. Here again, whoever can deliver those patches faster keeps production safer than the others.

Joonas Lehtimäki: Nowadays, companies need to be fast and efficient if they want to succeed in business. Manual work from the old data center / system admin days, in my opinion, has come to an end. Companies need a way to deliver their software reliably everywhere, maybe multiple times a day, and that’s where CD plays a significant role.

Nir Koren: I think that developers today don’t really want to spend a great deal of time deploying to all environments manually – they want to develop and see the fruits of their work instantly in the cloud. From the company perspective, it’s definitely speed of delivery (for both features and bug fixes).

Christian Uhl: The automation aspect of continuous delivery frees teams from many of the operational burdens that existed before. This allows us to re-invest that significant amount of time into more valuable activities. But this is just the tip of the iceberg – successfully implementing continuous delivery yields pure and measurable competitive advantages to a business. The reduction of risk and faster time to market are why every organization should strive to become better at this aspect.

JAXenter: Continuous delivery is the most common term when talking about continuously providing software, but there’s also continuous deployment and continuous integration. What is your favourite style and why?

Tracy Miranda: There is continuous confusion about the terms, for sure. In an ideal world, continuous integration and continuous deployment are fully automated parts of the software delivery lifecycle. In practice, full automation is very hard to achieve or is limited by constraints such as regulatory requirements. With continuous delivery, the focus is on the engineering approach where teams produce software in short cycles, ensuring the software can be reliably released at any time. It seems like a subtle change in focus, but it actually makes all the difference, because the emphasis now readily includes the human elements and the key role of the team in software delivery.

Priyanka Sharma: Continuous integration, delivery, and deployment are all part of the same process to speed up software lifecycles. Let’s define each to ensure we understand the terms.

Continuous integration: This is when developers merge code into the master branch often and regularly, on a daily basis, instead of waiting for a “release day”. In order to do this successfully, most organizations utilize a continuous integration pipeline where developers’ changes are validated by creating a build and running automated tests against the build. While running automated tests is not a precondition for a continuous integration pipeline, it is a best practice. GitLab is an avid user (and provider) of continuous integration pipelines.
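As a rough illustration (not from the interview) of what such a pipeline can look like, here is a minimal `.gitlab-ci.yml` sketch; the `make` targets are placeholders for whatever build and test commands a given project actually uses:

```yaml
# Minimal CI sketch: every push triggers a build and the automated tests.
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - make build        # placeholder for the project's real build command

test-job:
  stage: test
  script:
    - make test         # placeholder for the project's automated test suite
```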

Continuous delivery: Once an organization is running continuous integration, they can also automate their release process to be able to deploy with the click of a button. Continuous delivery is a great option for companies that want to be fast and nimble but have regulatory or business reasons that prevent them from practicing continuous deployment. GitLab is in that category as we need to manually deploy (which we do daily) to satisfy compliance requirements.
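In GitLab CI, that “click of a button” is expressed with a manual job. A hedged sketch, assuming a `deploy` stage exists and `deploy.sh` stands in for a real deployment script:

```yaml
# Continuous delivery sketch: the release is prepared automatically,
# but a human triggers the production deploy from the pipeline UI.
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder deploy script
  environment: production
  when: manual                 # the deployment waits for a button click
```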

Continuous deployment: When an organization has both continuous integration and delivery, they can automate their entire process by deploying without human intervention. Continuous deployment is the nirvana state for software developers since the only reason a commit pushed to production will not go live is if a test fails. For many companies, complete automation is not feasible due to compliance reasons and for them, continuous delivery is the next best option.
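In this simplified view, turning the previous sketch into continuous deployment mostly means dropping the manual gate, so that every commit to master that passes the earlier stages goes straight out; real setups usually add further safeguards such as canary stages:

```yaml
# Continuous deployment sketch: passing commits to master deploy
# without human intervention.
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder deploy script
  environment: production
  only:
    - master                   # deploy automatically, but only from master
```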

All three work together and are necessary components of a well-oiled machine.

Clark Boylan: All three work together and are necessary components of a well-oiled machine. Continuous integration is your first line of defense. Here you catch problems as early as possible, preferably before changes merge if you are gating code commits. Assuming CI gates code commits, your software branches should always be in a near releasable state. This feeds into continuous delivery, as the effort required to make releases is minimized, enabling automation of the release process. Finally, your continuous deployment automation can consume the outputs of your continuous integration and delivery tooling, going straight into production. Each layer feeds into the next in a very complementary way.

Baruch Sadogursky: Here I would say that they are not different styles of the same thing; they are a gradual evolution of continuous pipelines. Continuous integration is one piece of eliminating manual, tedious, and error-prone work by integrating continuously, very fast, in an automated manner. Continuous delivery automates another tedious process: delivering releases to the target servers. Continuous deployment is an evolutionary step forward. Not only do we now continuously craft the release and are capable of delivering it to the target servers, we actually do it continuously, every time, removing another manual step.

So, it’s hard to answer the question. What are my favorites? They all obviously have value and they are all needed. One can argue that continuous deployment should replace continuous delivery. I tend to agree. The fewer manual steps, the better. But continuous integration is definitely not a question of preference. Continuous integration is mandatory; it’s a part of both continuous delivery and continuous deployment.

Now, there is another continuous thing that you didn’t mention in your question, but I would like to bring up: continuous updates. It’s the next evolutionary step after continuous deployment. The difference is that we are now aware that we not only deploy new software to our servers, we actually deploy updates to existing servers and to edge devices. That might be your own server that you control on prem or in the cloud, but also edge devices that are out of our control, such as mobile devices, IoT devices, or computing agents in cell towers.

All of those are edges that need to be updated, and the usual techniques of continuous integration obviously apply, but you need to go beyond continuous delivery and continuous deployment and actually implement continuous updates. All of those are evolutions, and each one contains the previous ones. So, it’s really not a matter of personal preference or style.

Joonas Lehtimäki: Yes, there are many terms, and I often find people confusing them. I usually like to use the whole CI/CD term, because talking about just continuous deployment covers only the deployment phase of the software pipeline. Continuous integration does the automated tests, the building, and whatever else the software needs before it can be deployed to production.

Nir Koren: Continuous integration is the basis for all of them. You cannot have CD (delivery/deployment) without robust and proper CI. My favourite approach is continuous deployment (where you have fully automated processes, including production deployments), because it forces you to provide more solid and robust environments and test frameworks, and it makes production more dynamic. Continuous deployment empowers developers and makes them more accountable for the entire process.

Christian Uhl: I would not call them different styles, but rather concepts that build on top of each other – continuous integration is a requirement for continuous delivery, which is the foundation for continuous deployment.

JAXenter: Security is also a very important topic in terms of delivering software – our applications should not only be available as fast as possible, but also as secure as possible. Are there any best practices for how to implement the security aspect into your CI/CD pipeline?

Tracy Miranda: The guiding principle is to ‘Shift left on security’, and this is nicely summarized in the Accelerate book – the bible for software delivery – which talks about integrating security into the design and testing phases of software development. Additionally, there are many security principles to apply to your specific CI/CD pipelines and environment. For example, for cloud native CI/CD pipelines, Cosmin Cojocar has an excellent talk which outlines the principles, e.g. establish secure defaults, minimize attack surfaces, and then goes on to give specific ways to do this with Jenkins X by using configuration as code, isolating clusters, etc. I highly recommend taking a look at the principles and then working out how to apply them to your setup.

It is critical that security be baked into CI/CD pipelines.

Priyanka Sharma: Absolutely! If we think back to the recent security breaches in the news, in most cases they were not the result of complex attacks. Instead, they were cases of teams being unable to follow best practices for a variety of reasons. So, it is critical that security be baked into CI/CD pipelines so that developers can check for issues as part of their regular deployment. This makes a security issue not much different from any other failed test or bug, because it surfaces when code is committed and merged.

There are many elements of security tests and scans that can be included in pipelines. Here is a list; a pipeline sketch wiring several of these up follows below:

  1. SAST – Static Application Security Testing – scans the application source code and binaries to spot potential vulnerabilities before deployment
  2. DAST – Dynamic Application Security Testing – analyzes your running web application for known runtime vulnerabilities by running live attacks
  3. IAST – Interactive Application Security Testing checks runtime behavior of applications by instrumenting the code and checking for error conditions
  4. Dependency scanning analyzes external dependencies (e.g. libraries like Ruby gems) for known vulnerabilities on each code commit
  5. Container scanning checks Docker images for known vulnerabilities in the application environment
  6. License compliance searches project dependencies upon each code commit to check for approved and blacklisted licenses defined by custom policies per project
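As a hedged sketch of how several of these scans can be wired into a GitLab pipeline, the vendored templates can simply be included; the template paths below reflect GitLab 12.x/13.x and may differ in other versions, so check the documentation for your instance:

```yaml
# Security-scanning sketch: pull in GitLab's maintained scanner jobs.
# Template paths are assumptions based on GitLab 12.x/13.x naming.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
  - template: Security/License-Scanning.gitlab-ci.yml
```

Each included template adds jobs to the pipeline that run on every commit, so findings show up alongside ordinary test failures.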

Clark Boylan: Security in your CI/CD pipelines must start with the security of the CI/CD system itself. We are seeing purpose-built CI/CD tools, like Zuul, assume that developers are not necessarily trustworthy and build a security model around that. This helps ensure that the CI/CD system isn’t the weak link in your security threat model. Beyond that, code review performed by trusted individuals and automated tooling work together to keep insecure changes from being merged into your software.

Baruch Sadogursky: Securing applications requires continuous delivery, continuous deployment, and continuous updates. The faster you can deliver your patch to a device, the more secure you are. Another aspect is the security of the pipelines themselves. We have seen a lot of examples of the supply chain, which is a part of your continuous pipelines, being attacked. There are companies in our industry that secure your pipelines and your supply chain. You cannot forget this aspect, because the pipeline is what actually brings components into your product. So yes, the security of your supply chain is an important aspect that needs to be considered, funded, and implemented.

Joonas Lehtimäki: Security should be considered from architecture planning all the way to production. Every security tool that you can run quickly and automate is nice to have in your pipeline – for example, using Clair to scan your images before they go to staging/production environments.
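As a rough, hypothetical sketch of such a step (not from the interview), a scan job could sit between build and deploy. The `clair-scanner` invocation, the Clair endpoint, and the `$SCANNER_IP` variable below are all assumptions that depend entirely on how Clair is deployed in your environment:

```yaml
# Hypothetical image-scan job: fail the pipeline, and thereby block
# promotion to staging/production, if Clair reports vulnerabilities.
# Assumes a reachable Clair server and the clair-scanner CLI; exact
# flags and endpoints depend on your setup.
scan-image:
  stage: scan
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - clair-scanner --clair="http://clair:6060" --ip="$SCANNER_IP" "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```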

Nir Koren: Beyond the obvious manual code review and security inspection, we use automatic static code analysis tools like WhiteSource and are continuously seeking additional ways to secure our products.

Christian Uhl: The low-hanging fruit and obvious mistakes can nicely be caught by building SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) into the pipeline. Furthermore, we can (and did) include services that update our dependencies automatically as soon as updates are released. This keeps the system as up to date as possible, so we don’t miss out on security patches at any level of our product. When you deploy 10 times a day, a few more deploys for dependency updates don’t make a difference anymore. Including security best practices in the code review process also helps us find issues. But to be honest, we still do manual security audits of our system / penetration testing outside of our CI/CD pipeline as well, since there are always unknown problems to catch.
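The interviewee doesn’t name the dependency-update service he uses, but as one common illustration, a Dependabot configuration (`.github/dependabot.yml`, v2 syntax) looks roughly like this; the ecosystem is an assumption for the example:

```yaml
# Sketch of an automated dependency-update config (Dependabot v2 syntax);
# one of several tools that open update PRs as releases appear.
version: 2
updates:
  - package-ecosystem: "npm"   # assumed ecosystem, purely illustrative
    directory: "/"             # location of the manifest files
    schedule:
      interval: "daily"        # check for and propose updates every day
```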

JAXenter: What is the difference between a blue/green deployment and a rolling deployment? Which one do you prefer?

Tracy Miranda: I love how my colleague Viktor Farcic describes blue/green deployment as the “filthy rich” deployment strategy. It is a strategy that made sense in the days before virtual machines, Docker, and Kubernetes, when deployments took a long time, so it was better to keep the old release running in parallel just in case. With modern applications and cloud native platforms such as Kubernetes, rolling and canary deployments make much more sense in terms of cost-effectiveness, high availability, responsiveness, etc. For more on picking the best deployment strategy for your use case, I highly recommend Viktor’s recent FOSDEM talk, ‘Choosing the Right Deployment Strategy’.

With modern applications and cloud native platforms such as Kubernetes, rolling and canary deployments make much more sense.

Priyanka Sharma: Blue/green and rolling deployments are two ways organizations can release new features to production. Blue/green deployments mean there are two environments of an application with a load balancer used to route traffic from one environment to the other. The blue environment is the original version of the code and the green is the new one. Once the green environment passes all tests, the load balancer can redirect traffic from blue to green and if there are any bugs or issues, traffic can be redirected back to the blue environment. Rolling deployments are when new versions of the application slowly replace the old one in an incremental fashion. New and old versions coexist without affecting functionality or user experience. This process makes it easier to roll back any new component incompatible with the old components. GitLab practices rolling deployments.
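On Kubernetes, one common (though simplified) way to express that traffic switch is a Service whose label selector is flipped from the blue Deployment to the green one; all names and ports below are illustrative:

```yaml
# Blue/green sketch: two Deployments (labelled version: blue / green)
# run side by side; editing this selector re-routes all traffic at once.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: green   # change back to "blue" to roll traffic back
  ports:
    - port: 80
      targetPort: 8080
```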

Clark Boylan: In our world, the focus is on project gating. This means we check every commit before it merges to ensure that it doesn’t introduce regressions and bugs. The idea is to avoid as many problems in production as possible. The reality is that some issues will still sneak through, and that is where blue/green or rolling deployments can be helpful. In a blue/green deployment you deploy a second environment with your new software alongside the old environment. Once testing shows the new environment is working, you can switch to it in production.

With a rolling deployment, you replace small portions of the system at a time, enabling you to catch any errors early and roll back if necessary. The choice of strategy here often comes down to the software being deployed. Monolithic software is often happier in a blue/green deployment because it can’t be deconstructed into smaller parts. Software built out of easily replaceable microservices makes rolling deployments a good choice.
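In Kubernetes terms, this incremental replacement is the default RollingUpdate strategy on a Deployment; a hedged sketch with illustrative names, image tag, and numbers:

```yaml
# Rolling-deployment sketch: pods are replaced a few at a time, so old
# and new versions briefly coexist and a bad rollout can be rolled back.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:2.0   # illustrative image tag
```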

SEE ALSO: DevOps in 2020 – our big DevOps survey

Baruch Sadogursky: Delivery to those platforms opens up new opportunities. When we talk about containers and serverless, we’re talking about lightweight updates that can replace the heavier server rollouts or virtual machine setups that we used to do. Now when we need to replace or update a piece of our application, we can actually do a continuous update of only a tiny piece of the system: a single container image, or a single lambda, which is a microservice in our serverless application. Delivering smaller pieces allows us to update faster. That’s what helps bring in value faster and helps applications be more secure. Containers, Kubernetes, and serverless are great news because they allow us to move from continuous deployment and continuous delivery of a monolith to continuous updates of the smaller pieces of a microservice architecture.

Blue/green is considered to be a safer choice of implementation, but it has some cost overhead.

Joonas Lehtimäki: The main difference is that blue/green deployment has two environments, and rolling deployment has just one. Blue/green is considered to be the safer choice of implementation, but it has some cost overhead. In a blue/green deployment, you spin up a newer version of the environment and then slowly start load balancing traffic to it. When the rollout has ended, the old environment is either kept or terminated. A rolling deployment uses one environment and kills and spins up instances one by one until all are on the newest version. Personally, I prefer rolling updates, but it all comes down to the software and SLA needs.

Christian Uhl: The short and non-exhaustive explanation would be: blue/green means running the old and the new version of a deployment in parallel, doubling the instances. At some point, you switch the routing from old to new. The consumer of the system sees either one version or the other. For a rolling deployment, you need more than one instance of a deployment and you change/update one piece at a time – so a consumer would sometimes see the old and sometimes the new version during the deployment.

Both have advantages and downsides: I prefer blue/green for stateful services (like databases) to avoid consistency issues. Stateless components can be deployed more quickly and with less resource allocation using rolling deployments. Rolling deployments also allow you to abandon the deploy when too many errors occur in the new version, before all users are impacted (see canary releases).

