Containers & Security – how to build more than another stage into software processes


Over the past two years, containers have grown in popularity. They enable developers to run instances of their software components in an isolated wrapper built from namespaces and cgroups. As with the adoption of the cloud, the popularity of containers is a response to the demands of businesses that want greater agility and scale, and the ability to innovate quickly.

Containers have made it easier to scale out distributed applications when more resources are required. Need more power to deal with an increased volume of requests? Add more container instances. Want to run across more than one location? Add more containers there. Want to change those images? Power down the existing containers and replace them with ones built from the new images.
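As a minimal sketch of that scale-out model, the snippet below uses the Docker SDK for Python (an assumption on my part; the article does not prescribe any tooling) to start additional instances of a service image when more capacity is needed. The image name and instance count are hypothetical placeholders.

```python
# Minimal scale-out sketch using the Docker SDK for Python (pip install docker).
# The image name and instance count below are hypothetical placeholders.
import docker

client = docker.from_env()

IMAGE = "nginx:alpine"   # hypothetical service image
EXTRA_INSTANCES = 3      # how many additional containers to add

def scale_out(image: str, count: int):
    """Start `count` additional container instances of `image`."""
    started = []
    for _ in range(count):
        container = client.containers.run(image, detach=True)
        started.append(container)
        print(f"started {container.short_id} from {image}")
    return started

if __name__ == "__main__":
    scale_out(IMAGE, EXTRA_INSTANCES)
```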


However, as developers use containers to support their applications, we have to be aware of the new security model that these deployments need. Containers communicate differently from OS hosts: they talk to each other directly, which means ports are exposed between them. Containers essentially create a situation where a firewall solution or host-based intrusion detection system (host IDS) is oblivious to their existence. In meeting demands at speed, developers can also open their work up to risks and challenges. Containers don’t come with a security model or provisions as standard, so we have to think about this in advance.

Where are the risks?

For containers, there are three main areas where risk can be introduced: the container images themselves, how those images are updated, and how containers are run over time.

Each container is built from a base image that includes everything needed to run a specific job. This image can be developed internally and stored in a private registry, or sourced from a public registry. Whether the image comes from a public or private source, it should be checked before it is deployed. The reason is simple: the image can contain security faults, such as an outdated OS layer or an exploitable version of an application, that are then carried into your application.
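A simple pre-deployment check, sketched below with the Docker SDK for Python, pulls an image and inspects its metadata to flag anything built too long ago, which often points to an old OS layer or outdated packages inside. The image name and age threshold are hypothetical; a full vulnerability scan of the image contents would sit alongside a check like this.

```python
# Pre-deployment sanity check: pull an image and flag it if it was built too long ago.
# Assumes the Docker SDK for Python; the image name and age threshold are hypothetical.
from datetime import datetime, timezone

import docker

client = docker.from_env()

MAX_AGE_DAYS = 90  # hypothetical policy: re-check anything older than this before deploying

def check_image(name: str, tag: str = "latest") -> bool:
    image = client.images.pull(name, tag=tag)
    created = image.attrs["Created"]  # e.g. "2023-05-09T13:47:32.123456789Z"
    built = datetime.fromisoformat(created.split(".")[0].rstrip("Z")).replace(tzinfo=timezone.utc)
    age_days = (datetime.now(timezone.utc) - built).days
    print(f"{name}:{tag} built {age_days} days ago "
          f"({image.attrs.get('Os')}/{image.attrs.get('Architecture')})")
    return age_days <= MAX_AGE_DAYS

if __name__ == "__main__":
    if not check_image("alpine", "3.19"):
        print("image is older than the allowed threshold – re-check before deploying")
```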

Application security company Snyk found that many of the most popular publicly available container images contained flaws and vulnerabilities, while at the same time many developers were not actively scanning those incoming containers for problems. Around 80 percent of developers did not carry out any security checking on the Docker images they were using. Every time such an image was pulled from a container registry, any existing vulnerabilities were introduced into the application.

Checking images as they are put into the company registries is therefore an essential step. However, these images have to be kept up to date as well. As each image is stored until it is needed, these images are effectively static. If a vulnerability is discovered after an image is created, the vulnerable image will continue to sit in the registry until it is called on.

This affects security for running containers too. A live container will carry on running for as long as the workload is required. For applications with large volumes of traffic, that can mean containers keep running for long periods, during which time new issues may be discovered.

Alongside scanning images in the registry, each running container should be scanned over time as well. Timing for this is interesting – after all, containers can be requested, used and removed based on demand levels. Carrying out a scheduled vulnerability scan weekly or even daily won’t catch potential issues if containers only exist for 15 minutes at a time. Instead, continuous scanning for vulnerabilities is required.
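One way to approach that, sketched below, is to listen to the Docker event stream and trigger a scan whenever a container starts, rather than waiting for a scheduled scan window. This assumes the Docker SDK for Python, and the scan itself is left as a placeholder for whatever scanner you use.

```python
# Continuous scanning sketch: react to container start events instead of a fixed schedule,
# so even containers that only live for a few minutes get checked.
# Assumes the Docker SDK for Python; scan_image() is a placeholder for your scanner of choice.
import docker

client = docker.from_env()

def scan_image(image: str):
    """Placeholder: invoke whatever vulnerability scanner you use here."""
    print(f"scanning {image} ...")

def watch_and_scan():
    # Subscribe only to container start events from the Docker event stream.
    events = client.events(decode=True,
                           filters={"type": "container", "event": "start"})
    for event in events:
        image = event.get("Actor", {}).get("Attributes", {}).get("image", "unknown")
        print(f"container {event.get('id', '')[:12]} started from {image}")
        scan_image(image)

if __name__ == "__main__":
    watch_and_scan()
```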

This continuous approach can also catch potential issues with containers that build up over time. While it is not best practice to add code or extras to a container after its image has been built and is running, you may find users making curl calls or using wget to pull in a package to set up an application. Such additions can introduce vulnerabilities that you wouldn’t even know about. Scanning live containers is therefore necessary as well.
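To illustrate spotting that kind of drift, the sketch below uses the Docker SDK’s filesystem-diff call to list files that have been added, changed or removed inside a running container since it started from its image. The container name is a hypothetical placeholder.

```python
# Runtime drift sketch: list filesystem changes made inside a container since it started,
# e.g. packages pulled in with curl or wget after the image was built.
# Assumes the Docker SDK for Python; "my-app" is a hypothetical container name.
import docker

# Docker reports each change as Kind 0 (modified), 1 (added) or 2 (deleted).
KINDS = {0: "modified", 1: "added", 2: "deleted"}

client = docker.from_env()

def report_drift(container_name: str):
    container = client.containers.get(container_name)
    changes = container.diff() or []
    if not changes:
        print(f"{container_name}: no changes since the container started")
        return
    for change in changes:
        kind = KINDS.get(change["Kind"], "unknown")
        print(f"{container_name}: {kind} {change['Path']}")

if __name__ == "__main__":
    report_drift("my-app")
```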

What approaches should we take?

So far we have seen that containers can introduce potential vulnerabilities into applications. Now, we can look at how to architect a security model around those containers so we can spot potential problems during the software development lifecycle (SDLC).

The first step is to know what containers are running at any given point. By tracking which container images are created on each Docker host, you can see new images as they appear and get accurate data on them. You can then gather metadata on those images and track their activity over time.
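A minimal version of that first step, again assuming the Docker SDK for Python, is sketched below: list the containers currently running on a host and record basic metadata (image, creation time, labels) that can be tracked over time.

```python
# Inventory sketch: record which containers are running on this Docker host,
# plus basic metadata that can be tracked over time.
# Assumes the Docker SDK for Python and a reachable Docker daemon.
import docker

client = docker.from_env()

def inventory():
    records = []
    for container in client.containers.list():
        image_tags = container.image.tags or ["<untagged>"]
        records.append({
            "id": container.short_id,
            "name": container.name,
            "image": image_tags[0],
            "created": container.attrs.get("Created"),
            "labels": container.labels,
        })
    return records

if __name__ == "__main__":
    for record in inventory():
        print(record)
```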

The second step is to scan these images for any vulnerabilities they might contain. This will provide information on any outdated components within the Docker image that should be updated, as well as potential misconfigurations that could expose data. To achieve this, there are a couple of options.

If you are dealing with a build-based environment and registry scanning, there is the ‘side-car’ approach, which acts as a native sensor for containers. Essentially, this is an unprivileged Docker image deployed alongside the workloads on each node or host as a side-car. This approach can be used with various orchestration environments, including Kubernetes, and side-cars are generally self-updating and configured to communicate over proxies. This model works well for performing scans during the build in Jenkins and for scanning the registry to identify when containers are drifting from their images.

The side-car approach has limitations, especially when moving to the runtime phase, where a ‘layered’ approach is more effective. This is where you instrument an image by adding a layer of protection that can be enforced with rules dynamically pushed at runtime. This grants the ability to provide access and read control at the file, network, and system call level.

A layered approach also enables the developer to identify the application and its characteristics inside the container, and then answer important questions regarding visibility: what system changes is it making? What files is the container accessing and what processes is it running? And what kind of network connectivity is occurring?
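As a small illustration of that visibility, the sketch below pulls the process list and network settings of a running container via the Docker SDK for Python. Deeper file and system-call visibility would come from the instrumentation layer itself, which is not shown here; the container name is a hypothetical placeholder.

```python
# Visibility sketch: what processes is a container running, and what networking does it have?
# Assumes the Docker SDK for Python; "my-app" is a hypothetical container name.
import docker

client = docker.from_env()

def describe(container_name: str):
    container = client.containers.get(container_name)

    # Process list, as reported by `docker top`.
    top = container.top()
    titles = top.get("Titles", [])
    for proc in top.get("Processes", []):
        print(dict(zip(titles, proc)))

    # Exposed ports and attached networks, taken from the inspect data.
    settings = container.attrs.get("NetworkSettings", {})
    print("ports:", settings.get("Ports"))
    print("networks:", list(settings.get("Networks", {}).keys()))

if __name__ == "__main__":
    describe("my-app")
```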

This level of visibility opens the door to building profiles for an image and establishing an enforcement mode. Although containers have, in theory, immutable behaviour, this approach can assess container behaviour over time to ensure this remains the case. By using a default profile to make constant comparisons, developers and security teams can check for any anomalies and then enforce rules and appropriate responses.
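A deliberately simplified sketch of that idea follows: record a baseline of the process names a container is expected to run, then compare what it is actually running against that profile and flag anything unexpected. A real enforcement layer would do far more, but assuming the Docker SDK for Python, the comparison itself can look like this; the baseline set and container name are hypothetical.

```python
# Baseline-comparison sketch: flag processes a container runs that are not in its expected profile.
# Assumes the Docker SDK for Python; the baseline set and container name are hypothetical.
import docker

client = docker.from_env()

# Hypothetical profile: the commands this image is expected to run.
BASELINE = {
    "nginx: master process nginx -g daemon off;",
    "nginx: worker process",
}

def running_commands(container_name: str) -> set:
    container = client.containers.get(container_name)
    top = container.top()
    titles = top.get("Titles", [])
    cmd_index = titles.index("CMD") if "CMD" in titles else -1
    return {proc[cmd_index] for proc in top.get("Processes", [])}

def check_against_profile(container_name: str):
    anomalies = running_commands(container_name) - BASELINE
    if anomalies:
        # An enforcement mode could stop the container or raise an alert here.
        print(f"{container_name}: unexpected processes: {sorted(anomalies)}")
    else:
        print(f"{container_name}: behaviour matches the profile")

if __name__ == "__main__":
    check_against_profile("my-app")
```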

The third step is to look at the Docker hosts as well. These instances host all the running containers, but they require a different approach as they are not architected in the same way. Instead, more conventional agent-based security scanning and vulnerability management approaches can be applied to check for potential issues. This ensures that the Docker host is not subverted and the containers running on top of it remain secure.

Finally, you will need to consider a web application firewall (WAF). These are commonly deployed in a container of their own. Because a WAF blocks certain types of network traffic and allows other types through, it will help protect application-layer traffic and control which apps and app services are accessible.

Putting container security into context

It’s also worth looking at the process that you take around container security and software development. Most implementations of containers will be part of wider continuous integration / continuous deployment (CI/CD) pipelines, commonly managed by tools like Jenkins, Bamboo or CircleCI. These pipelines will automate the creation of container images during the SDLC, pushing them through from initial development to deployment.


Integrating your security approach into the CI/CD pipeline tool – and therefore into the approach used to automate it – has two benefits. Firstly, it makes it easier to track assets like containers through the pipeline and into production. Secondly, it provides the security data directly to developers in their workflow, so they can take action as needed. This makes it easier for the security team to collaborate with developers in a practical way.
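To make that integration concrete, the sketch below shows the kind of gate that could sit in a Jenkins, Bamboo or CircleCI pipeline stage: scan the image just built and fail the build with a non-zero exit code if critical findings come back. Trivy is used here purely as an example scanner; the article does not name a specific tool.

```python
# CI/CD gate sketch: fail the pipeline stage if a freshly built image has critical vulnerabilities.
# Trivy is used here only as an example scanner; the image tag is passed in by the pipeline.
import json
import subprocess
import sys

def critical_findings(image: str) -> list:
    result = subprocess.run(
        ["trivy", "image", "--quiet", "--format", "json",
         "--severity", "CRITICAL", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    findings = []
    for target in report.get("Results", []):
        findings.extend(target.get("Vulnerabilities") or [])
    return findings

if __name__ == "__main__":
    image = sys.argv[1]  # e.g. supplied by the pipeline as registry/app:build-number
    findings = critical_findings(image)
    for finding in findings:
        print(f"CRITICAL {finding['VulnerabilityID']} in {finding['PkgName']}")
    if findings:
        sys.exit(1)  # fail the build so the image never reaches production
    print(f"{image}: no critical vulnerabilities found")
```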

Containers are increasingly popular as everyone strives to make their applications more agile and more scalable. However, they do not come with a security-by-design architecture in place. End-to-end security has to be built in from the start when building, shipping and running containers, so that everyone taking advantage of the technology can benefit. By designing container security to work with developers in their natural workflows, everyone can safely derive value.


Source: JAXenter