“In a Serverless world, sophisticated orchestration tools aren’t just nice-to-have, they’re non-negotiable”


JAXenter: Which new features are included in Puppet Enterprise 2019.1?

Matt Waxman: When we were planning this release, we wanted to build an automation portfolio that addresses both individuals and teams, who are asking for two different things. The individual wants to adopt automation quickly, while the team wants to scale automation; until now, there hasn’t been a single tool on the market that allows for both.

The updates in Puppet Enterprise 2019.1 focus on both ad-hoc automation and enforced-state management, so teams can get started and scale automation without throwing away existing automation efforts (Puppet modules, Bash, Python and PowerShell scripts, and more). We believe users shouldn’t have to choose between agentless and agent-based automation. With the new updates in Puppet Enterprise, users can choose between commodity transports like SSH or WinRM and agent-based connection methods for increased security.
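To make that choice of transport concrete, here is a minimal sketch of ad-hoc Bolt runs over SSH and WinRM. The hostnames are hypothetical, and the exact flags can differ between Bolt versions (older releases use --nodes instead of --targets):

```bash
# Agentless, ad-hoc run over SSH against a Linux target (hypothetical host):
bolt command run 'uptime' --targets ssh://web01.example.com

# The same style of run over WinRM against a Windows target:
bolt command run 'Get-Service puppet' --targets winrm://win01.example.com
```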

A few of my favourite features of this release that help with this include:

  • Deeper agentless support in the Puppet Enterprise console, with simplified onboarding workflows, including saved and reusable credentials, so you can easily automate targets that don’t have agents installed.
  • Enhanced scheduling capabilities that let users run Puppet runs and Bolt tasks on specific schedules, like every Saturday at 2 am.
  • Enhanced support for popular network device modules like Cisco and Palo Alto Networks.
  • Ability to install Continuous Delivery for Puppet Enterprise directly from the Puppet Enterprise console with the click of a button.

We also made updates to Bolt, our open source agentless orchestration tool, and to Continuous Delivery for Puppet Enterprise. Bolt now comes with YAML support, so anyone, regardless of skill level, can get started with automation. For Continuous Delivery for Puppet Enterprise, we’ve made it easy for users to adopt CI/CD practices for their infrastructure code.
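As an illustration of that YAML support, here is a minimal sketch of a YAML plan inside a Bolt project. The module name, plan name, and directory layout are assumptions made for this example, and step key names can vary between Bolt versions:

```bash
# Create a module with a YAML plan (illustrative layout; newer Bolt projects
# may use modules/ instead of site-modules/).
mkdir -p site-modules/mymodule/plans
cat > site-modules/mymodule/plans/restart_app.yaml <<'EOF'
# The plan takes one parameter: the set of targets to act on.
parameters:
  targets:
    type: TargetSpec
# Steps run in order against the supplied targets.
steps:
  - name: stop_app
    command: systemctl stop myapp
    targets: $targets
  - name: start_app
    command: systemctl start myapp
    targets: $targets
EOF

# Run the plan against a comma-separated list of targets (hypothetical hosts):
bolt plan run mymodule::restart_app targets=web01.example.com,web02.example.com
```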

JAXenter: Are these new features only available to users of the Enterprise Edition, or will they eventually be part of the open source suite, too?

Matt Waxman: Some of the features listed above are included in Bolt. We brought a lot of Bolt features into Puppet Enterprise so that folks can move seamlessly from Bolt to Puppet Enterprise when the need for more centralized automation grows. Teams can now leverage what they’ve built in Bolt to adopt enterprise-level practices and enforce governance and auditability from a centralized server with Puppet Enterprise.

We recommend Bolt if you want to get started with automation right now, as it lets you orchestrate your existing commands and scripts across distributed infrastructure without requiring anything other than SSH or WinRM connectivity. We’re seeing a lot of individuals pick up Bolt, including people who need to handle ops and automation tasks but fall more into a traditional developer role.
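For example, an existing shell script can be pushed out and executed as-is; the script name, hosts, and inventory group below are purely illustrative:

```bash
# Run an existing script on several hosts at once, over plain SSH:
bolt script run ./scripts/rotate_logs.sh --targets ssh://app01.example.com,ssh://app02.example.com

# Or run a one-liner against a group defined in a Bolt inventory file:
bolt command run 'df -h /var' --targets app_servers
```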

JAXenter: What exactly is Bolt and how does it work on a technical level?


Matt Waxman: As an agentless automation tool, Bolt lets you orchestrate tasks on your infrastructure on an as-needed basis, for example when you troubleshoot a system, deploy an application, or stop and restart services. Bolt automates the manual work of maintaining your infrastructure, connecting directly to remote nodes over SSH or WinRM.
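A typical ad-hoc example is restarting a service across a set of nodes with the "service" task that ships in Bolt's bundled modules; the hostnames here are hypothetical:

```bash
# Restart nginx on two targets using the bundled "service" task:
bolt task run service action=restart name=nginx --targets ssh://web01.example.com,ssh://web02.example.com

# List the tasks available in the current project and the bundled modules:
bolt task show
```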

Other tools often force users to choose between an imperative or a declarative model, whereas Bolt supports both, allowing users to take declarative Puppet code and combine it with imperative tasks written in any language. Users can work in the language of their choice and orchestrate changes without having to become an expert in a particular one. Additionally, with Bolt plans you can easily automate complex workflows, like an application deployment, that involve multiple steps and logic.
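As a rough sketch of that mix, recent Bolt releases can apply a small declarative manifest to agentless targets and then follow up with an imperative task; the manifest content and the mymodule::smoke_test task below are made up for this example:

```bash
# Declarative: describe the desired state in Puppet code and apply it
# (targets need the puppet-agent package, which Bolt can install via the
# bundled puppet_agent::install task).
cat > site.pp <<'EOF'
package { 'nginx': ensure => installed }
service { 'nginx': ensure => running, enable => true }
EOF
bolt apply site.pp --targets ssh://web01.example.com

# Imperative: follow up with a task written in whatever language you like
# (mymodule::smoke_test is a hypothetical task for this example).
bolt task run mymodule::smoke_test url=http://localhost --targets ssh://web01.example.com
```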

Organisations can also run tasks on agentless targets, such as network devices, enabling them to manage all of their infrastructure in a consistent way. Bolt can make use of all 5,000+ modules on the Puppet Forge, including out-of-the-box content to get you started quickly.
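Forge content is pulled into a Bolt project through a Puppetfile; the module versions below are only examples, and newer Bolt releases use "bolt module install" for the same job:

```bash
# Declare the Forge modules the project needs (versions are illustrative):
cat > Puppetfile <<'EOF'
mod 'puppetlabs-apache', '5.0.0'
mod 'puppetlabs-ntp', '8.0.0'
EOF

# Download them into the project's module path:
bolt puppetfile install
```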

Find more information on Bolt here.

You can also try Bolt: choose your operating system, follow the install link, and run the listed Bolt command from your command line.
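The install commands below are illustrative; the exact steps (and the repositories you need to add first) depend on your platform, so follow the official install page:

```bash
# Debian/Ubuntu, after adding Puppet's apt repository:
sudo apt-get install puppet-bolt
# RHEL/CentOS, after adding Puppet's yum repository:
sudo yum install puppet-bolt

# Sanity check, then a first run against your own machine (local transport):
bolt --version
bolt command run 'whoami' --targets localhost
```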

JAXenter: Chef announced a while ago that it is going completely open source and will only offer something like a paid support model. Ansible (Red Hat) does something very similar. Will Puppet be 100% open source at some point in time, too?

Matt Waxman: We believe that it’s possible to be an authentic open source company and follow an open core business model with a healthy community. This is a key part of Puppet’s strategy and serves us well, along with the 40,000+ organisations around the world that use Puppet’s technology. On the Puppet Forge there are 32,650+ unique releases, we have nearly 30,000 commits on GitHub, and our three most popular modules alone have been downloaded over 60 million times each.

The platform is independently useful, and building novel workflows and/or features on top of it is a positive thing. For example, people have built entirely masterless workflows around our open source components and have used them at massive scale. We think that’s fantastic, and we encourage it!


JAXenter: The rise of serverless computing makes people think much less about infrastructure automation. What is your take on serverless, and how will the world of infrastructure automation change to adapt to what is happening in that sector right now?

Matt Waxman: We believe that to properly manage a system, you must understand and control the inputs to that system, over time. This has remained true across a wide variety of fundamental shifts in infrastructure technologies—and is surely true with serverless.

At first, the inputs to systems were mostly files on disk. Operators had to meticulously manage the state and content of those files across their fleet in order for their infrastructure to work properly. Then package management (e.g. apt or yum) came along. Suddenly, the inputs weren’t individual files, but the versions of specific packages installed on specific systems. The advent of higher-level abstractions didn’t obviate the need to understand and control their parameters. Getting a package version wrong could have disastrous consequences for your applications: in 2012, Knight Capital experienced a version mismatch in their automated trading infrastructure, resulting in a major stock market disruption and a $440M loss in just 45 minutes.
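In package-manager terms, controlling that input means being explicit about versions; the version strings below are placeholders rather than real releases:

```bash
# Pin the exact package version so the input to the system is explicit:
sudo apt-get install openssl=1.1.1f-1ubuntu2      # Debian/Ubuntu
sudo yum install openssl-1.1.1k-5.el8             # RHEL/CentOS

# Hold the package so an unattended upgrade can't silently change that input:
sudo apt-mark hold openssl
```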


In the serverless era, the inputs look like configuration parameters for cloud services, entries in key/value stores, settings on individual Lambda functions, or a complex, interlocking set of permissions and access controls across your cloud resources. Getting a parameter wrong for an autoscaling group can have massive ramifications for your application (and your wallet!). Failing to insert the right value for a specific key can now quickly and easily ripple through your entire infrastructure, maybe even taking down half the internet in the process, as the Amazon DynamoDB service disruption showed.
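A single CLI call shows how small the surface of such an input is and how large its effect can be; the group name and sizes here are hypothetical:

```bash
# One mistyped number here changes how much capacity (and spend) you get:
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --min-size 2 --max-size 10 --desired-capacity 4
```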


These platforms do not manage themselves. The operational “surface area” of these platforms has changed, but it is by no means eliminated. One could argue that the operational surface area of distributed, cloud-native applications is actually larger than that of the monoliths they replaced. The cost of getting these things wrong is higher now than ever, because the building blocks we’re playing with are much more powerful and sophisticated.

Alongside this, there is the fundamentally distributed nature of serverless applications. They are composed of many individual functions, all talking to a huge variety of other cloud services. Everything is split out, everything is distributed, and much of it is ephemeral. In this universe, nearly any kind of operational task requires touching many “things”, and in a particular order. Thus, infrastructure automation becomes more about orchestrating tasks across large sets of targets. This is a world where sophisticated orchestration tools like Bolt aren’t just nice-to-have, they’re non-negotiable.

JAXenter: Thank you very much for the interview!

