7 Ways to Deliver More Reliable Container Operation

By: Andreas Neeb, Chief Architect Financial Services Vertical at Red Hat

The key features of Linux containers can be summed up in a few sentences: They pack program code and its necessary dependencies into an isolated package that runs on a single instance of the host operating system. This can be done either directly on physical hardware or in a virtual machine, and either in an on-premise data center or in a public cloud.

Containers clearly differ from hypervisor virtualization: Each virtual machine carries a full guest operating system, which generates significant overhead. An application container, by contrast, packages only the application together with its required dependencies, such as middleware and the runtime environment. As a result, multiple containers on one server can share the kernel of the host operating system.

Containers promise quick and cost-effective development, as they can be easily ported between and within development, testing, and production environments. This enables enterprises to solve many of the organizational problems encountered in large-scale, conventional software development projects. Instead of dozens of programmers working on a single application, small teams focus on particular sub-tasks and services and can therefore work in a much more agile manner.

When containers are transferred into production, the simplest option is to start them via systemd. In current Linux distributions, systemd acts as the init system that starts, monitors, and stops other processes, and it can also be used for basic management of containers. For small, simple applications, systemd may be sufficient. Large-scale business applications, by contrast, place high demands on container management, which can be summed up in seven points.
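In this simplest case, a container is just another systemd service described by an ordinary unit file. The following Python sketch renders such a unit for a container started via podman; the image name, unit options, and runtime flags are illustrative assumptions and will vary by distribution and container runtime.

```python
#!/usr/bin/env python3
"""Sketch: generate a minimal systemd unit that runs a container via podman."""
from pathlib import Path

UNIT_TEMPLATE = """\
[Unit]
Description=Container {name}
After=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman rm -f {name}
ExecStart=/usr/bin/podman run --rm --name {name} {image}
ExecStop=/usr/bin/podman stop {name}
Restart=on-failure

[Install]
WantedBy=multi-user.target
"""

def write_unit(name: str, image: str, unit_dir: str = "/etc/systemd/system") -> Path:
    """Render the template and write <name>.service into the unit directory."""
    path = Path(unit_dir) / f"{name}.service"
    path.write_text(UNIT_TEMPLATE.format(name=name, image=image))
    return path

if __name__ == "__main__":
    # Hypothetical image; after writing the unit, a real host would run
    # `systemctl daemon-reload` and `systemctl enable --now web.service`.
    print(write_unit("web", "registry.example.com/myapp:1.0", unit_dir="."))
```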

1. Optimized alignment of resources and workloads 

In contrast to standard monolithic software or a customized solution, a container-based application consists of a number of mutually independent components that can largely operate on their own. Each of these components, and their relationships with one another, must be taken into account by a container management solution. Complexity increases further if the IT team follows the DevOps concept, which interlinks development and IT operations.

Container scheduling is required for the transition from development into live operation. Among other things, it determines how containers are distributed across the target infrastructure and what resources are available to them on the systems in place. These may be servers in an on-premise data center, but also servers in one or even several public clouds such as Amazon Web Services, Google Cloud, and Microsoft Azure. The aim of container scheduling is to ensure that containers make optimal use of the available computing resources, such as processing power, memory, SSDs, and disk and network capacity.
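To make the scheduling idea concrete, here is a minimal Python sketch of the placement decision: each container requests CPU and memory, and the scheduler greedily places it on the node with the most remaining headroom. All names and capacities are invented, and real schedulers weigh far more factors, such as affinity rules, zones, and network topology.

```python
"""Sketch: greedy placement of containers onto nodes by free CPU/memory."""
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float   # cores
    free_mem: int     # MiB

@dataclass
class Container:
    name: str
    cpu: float
    mem: int

def schedule(containers: list[Container], nodes: list[Node]) -> dict[str, str]:
    """Return a container -> node assignment, or raise if nothing fits."""
    placement = {}
    for c in containers:
        # Keep only nodes that can satisfy the request ...
        candidates = [n for n in nodes if n.free_cpu >= c.cpu and n.free_mem >= c.mem]
        if not candidates:
            raise RuntimeError(f"no node has room for {c.name}")
        # ... and pick the one with the most headroom (a spread strategy).
        best = max(candidates, key=lambda n: (n.free_cpu, n.free_mem))
        best.free_cpu -= c.cpu
        best.free_mem -= c.mem
        placement[c.name] = best.name
    return placement

if __name__ == "__main__":
    nodes = [Node("on-prem-1", 8.0, 32768), Node("cloud-1", 16.0, 65536)]
    apps = [Container("frontend", 2.0, 4096), Container("db", 4.0, 16384)]
    print(schedule(apps, nodes))
```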

Businesses that develop container applications very often plan for them to run in a public cloud, if not immediately then at a later date. Here, developers harness a key advantage of containers: they abstract away the underlying infrastructure, so it is irrelevant to the container whether it runs directly on a server, in a virtualized environment, or in a public cloud. The container management solution must accordingly allow application containers to be moved freely between the on-premise data center and one or more public clouds. At present, only a small number of users have implemented such a cross-data-center and cross-cloud architecture, but many more look set to follow soon.

2. Lifecycle management

A container management solution should not only start containers and ensure optimized resource utilization, which is the job of container scheduling. It should also monitor proper operation, identify and fix malfunctions at an early stage, and ensure availability. This includes restarting a container that has stopped running, for whatever reason, on the current server, or moving it if necessary to another server in the on-premise data center or in a public cloud.

To this end, a developer can supply a simple test, for example, which externally checks whether the container is working properly. The container management solution receives this test as an input parameter and then checks at predetermined intervals whether the container is still performing its service as intended.
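As a hedged illustration of such an externally supplied test, the following Python sketch polls a container's HTTP endpoint at a fixed interval and flags it for a restart after repeated failures; the URL, interval, and restart hook are hypothetical placeholders.

```python
"""Sketch: an external liveness probe run at a fixed interval."""
import time
import urllib.request

def http_probe(url: str, timeout: float = 2.0) -> bool:
    """The developer-supplied test: does the endpoint answer with HTTP 200?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def watch(url: str, interval: float = 10.0, max_failures: int = 3) -> None:
    """Run the probe periodically; act after repeated consecutive failures."""
    failures = 0
    while True:
        if http_probe(url):
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                # In a real system the management layer would restart or
                # reschedule the container here.
                print(f"{url}: {failures} consecutive failures, restarting container")
                failures = 0
        time.sleep(interval)
```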

Also very useful at this stage are functions for a more comprehensive health check of containers, which developers can integrate directly into their applications in the form of APIs. This is possible, for example, using API management software, which enables infrastructure administrators to manage the application container lifecycle from provisioning and configuration through to software management. Where the APIs are integrated directly into a container application platform, and therefore also into the management solution, and outside access is blocked, this configuration also helps meet regulatory and compliance requirements.
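A minimal sketch of such an application-embedded health API, using only the Python standard library, might look as follows; the /healthz path and the checks behind it are illustrative conventions, not a prescribed interface.

```python
"""Sketch: a health-check endpoint embedded directly in the application."""
from http.server import BaseHTTPRequestHandler, HTTPServer

def dependencies_ok() -> bool:
    """Placeholder for real checks: database reachable, queue connected, etc."""
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz" and dependencies_ok():
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(503)
            self.end_headers()

if __name__ == "__main__":
    # The management solution would poll http://<container>:8080/healthz
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```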

3. Security

The increasing prevalence of application containers in businesses poses specific IT security challenges. To address these, basic security measures need to be implemented as part of container management. The aim is to ensure the security of container images and container content throughout the entire application lifecycle. When creating container images, it is important, for example, that only trustworthy content is used, that the origin of all components and libraries in container images can be readily determined, that isolated environments are used, and that regular security scans are performed.

Role and rights management for containers, embedded in the container management solution, must be in place from the outset. The container management tool can make use of the LDAP-based solutions already in place in an enterprise. First, it is defined which users, depending on their role, may carry out which activities involving a container. Second, the actions that a container itself may perform must be specified. The spectrum ranges from full isolation through to root rights, where a container is given access to other containers and to the server operating system; this should be the exception rather than the rule, however.
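The first of these two questions, which user may perform which action, reduces to a role-to-permission lookup. The sketch below hard-codes that mapping for readability; a real container management tool would resolve roles and group memberships against the enterprise LDAP directory. All roles and actions here are invented examples.

```python
"""Sketch: role-based rights check for container operations."""
ROLE_PERMISSIONS = {
    "developer": {"build", "push", "start"},
    "operator":  {"start", "stop", "restart", "scale"},
    "auditor":   {"inspect"},
}

def is_allowed(role: str, action: str) -> bool:
    """May a user with this role carry out this activity on a container?"""
    return action in ROLE_PERMISSIONS.get(role, set())

# Operators may scale workloads; developers may not stop them.
assert is_allowed("operator", "scale")
assert not is_allowed("developer", "stop")
```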

Certification in the form of a digital signature improves the level of security. It makes clear who created the container image, for what purpose, and at what time. A general security strategy can be summed up as follows (a minimal content check along these lines is sketched after the list):

All components should originate from trustworthy sources. 
It should be clear that their security status is up to date and that they have not been changed without authorization. 
As an additional layer, SELinux should be used on the container hosts to shield running containers from the host and from one another. SELinux isolates the containers and only allows access to necessary resources. 
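Real image certification relies on public-key signatures; as a deliberately simplified stand-in, the following Python sketch pins trusted SHA-256 digests and rejects any image archive whose content no longer matches. File names and digest values are hypothetical.

```python
"""Sketch: reject container content whose digest no longer matches a pinned value."""
import hashlib

# Hypothetical allow-list, maintained out-of-band by a trusted party.
TRUSTED_DIGESTS = {
    "myapp-1.0.tar": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str) -> str:
    """Stream the file so large image archives do not load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, name: str) -> bool:
    """True only if the archive's digest matches the pinned, trusted value."""
    expected = TRUSTED_DIGESTS.get(name)
    return expected is not None and sha256_of(path) == expected
```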
Without further explicit security measures, every process running in a container context has access to the kernel of the container host. It is therefore theoretically possible for malicious content to work its way across the outside boundaries of a container, from one container image to the next, and finally even to the container host. In the worst-case scenario, an attacker exploits a vulnerability in the software running in the container; if they then also find a vulnerability in the Linux kernel, they have successfully made the jump to the container host.

This would also jeopardize all the other container processes on that host. As a preventive measure, the container host must therefore be regularly patched with the latest security updates.

4. Service discovery

Using technologies and processes such as microservices, containers, and DevOps, IT departments are able to respond quickly and flexibly to new business requirements. The prerequisites for this are provided by the microservices architecture concept: applications are broken down into small, loosely coupled microservices, packed into containers, and run on servers within the enterprise or in the cloud.

Since containers are inherently dynamic and volatile, and the container management solution places them as needed, there is no guarantee that a single container, or a group of associated containers, will always run on one particular server. The container management solution must therefore provide service discovery functions, so that associated containers can be found by other services, regardless of whether they run on-premise or in a public cloud.
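The contract behind service discovery can be shown in a few lines: instances register and deregister as the scheduler places or removes them, and consumers look up a logical service name rather than a fixed address. A real platform implements this via DNS or a distributed key-value store; this in-process sketch only illustrates the principle, with invented addresses.

```python
"""Sketch: a service registry mapping logical names to live endpoints."""
import random

class ServiceRegistry:
    def __init__(self):
        self._endpoints: dict[str, set[str]] = {}

    def register(self, service: str, address: str) -> None:
        """Called when the scheduler places a container instance somewhere."""
        self._endpoints.setdefault(service, set()).add(address)

    def deregister(self, service: str, address: str) -> None:
        """Called when an instance stops or is moved to another host."""
        self._endpoints.get(service, set()).discard(address)

    def lookup(self, service: str) -> str:
        """Return one live endpoint, picked at random as a crude load balancer."""
        addresses = self._endpoints.get(service)
        if not addresses:
            raise LookupError(f"no instances registered for {service}")
        return random.choice(sorted(addresses))

registry = ServiceRegistry()
registry.register("orders", "10.0.1.12:8080")   # on-premise host
registry.register("orders", "54.93.0.7:8080")   # public-cloud host
print(registry.lookup("orders"))
```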

5. Scaling applications and infrastructure

When it comes to scaling, a process the container management system must support, there are two different types:

Scaling of container instances with the application itself: Under peak load, it must be possible, for example, for an administrator to use the container management solution to manually launch additional container instances to cover current needs. For dynamic scaling, an automatic mechanism that works with stored metrics is appropriate: administrators can specify that, if a particular CPU load occurs, storage capacities are exceeded, or specific events happen, a predetermined number of additional container instances is started (a sketch of such a rule follows this list). 
Scaling of the container infrastructure: Here it must be possible to expand the applications running on the container platform to hundreds of instances, for example by extending the container platform into a public cloud. This is considerably more complex than starting new containers on existing servers.  
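The dynamic case from the first bullet can be reduced to a simple proportional rule, sketched below. It mirrors the idea behind Kubernetes' horizontal autoscaling (scale the replica count by the ratio of observed to target load), but the metric source, target value, and bounds here are illustrative assumptions.

```python
"""Sketch: metric-driven scaling of container instances."""
import math

def desired_replicas(current: int, current_cpu: float, target_cpu: float,
                     min_r: int = 1, max_r: int = 50) -> int:
    """Scale the replica count in proportion to observed vs. target CPU load,
    clamped to configured bounds so a metric spike cannot scale without limit."""
    desired = math.ceil(current * current_cpu / target_cpu)
    return max(min_r, min(max_r, desired))

# 4 replicas at 90% average CPU against a 60% target -> scale out to 6.
print(desired_replicas(current=4, current_cpu=0.9, target_cpu=0.6))
```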

6. Providing persistent storage

Introducing microservices into application architectures also has an impact on the provision of storage capacity. Storage itself should be packed and deployed as a microservice in a container, becoming what is known as container-native storage. The management of this persistent storage for application containers is therefore also a task for the container management solution.

Using the Red Hat OpenShift Container Platform, for example, infrastructure administrators can provide application containers and persistent container-native storage, which is managed by the Kubernetes orchestration framework. The Kubernetes Persistent Volume (PV) framework provisions persistent storage to a pool of application containers running on distributed servers. Using Persistent Volume Claims (PVCs), developers are able to request PV resources without needing extensive information about the underlying storage infrastructure.
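As an illustration, the claim a developer files can be expressed as a small manifest. The sketch below builds it as a Python dict that mirrors the corresponding YAML; the claim name, size, and storage class are hypothetical, and the available classes depend on the cluster's storage backend.

```python
"""Sketch: a Persistent Volume Claim as a developer would request storage."""
import json

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],          # one node mounts it read-write
        "resources": {"requests": {"storage": "10Gi"}},
        "storageClassName": "container-native",    # assumed class name
    },
}

# Printed as JSON (a valid subset of YAML), ready for `kubectl create -f -`.
print(json.dumps(pvc, indent=2))
```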

Managed by the container management solution, container-native storage should support the dynamic provisioning of different storage types, such as block, file, and object storage, as well as multi-tier storage via quality-of-service labels. Furthermore, persistent storage improves the user experience in the operation of both stateful and stateless applications, and it becomes easier for administrators to manage the use and provision of storage for applications. With container-native storage, IT departments benefit from a software-defined, highly scalable architecture that can be deployed in an on-premise data center as well as in public clouds, and that in many cases is more cost-effective than traditional hardware-based or purely cloud-based storage solutions.

7. Open source solutions offer greater potential in terms of innovation

Container technologies, in particular Linux containers, have grown from a niche product into a popular trend in the space of just a few years. Linux containers based on the Docker format have played a key role here. The open source Docker format is supported by many leading IT companies, from Amazon, Google, and Hewlett Packard Enterprise through to IBM, Microsoft, and Red Hat. A de facto industry standard has thus been created, one that is also valued by enterprises in all sectors that use Linux containers for development.

Precisely because so many users and software producers use Linux containers, a highly dynamic market has developed that follows the principles of open source software. Increasingly, enterprises are adopting a microservices architecture and supplying container-based applications. This creates new requirements that have to be implemented as quickly as possible in the form of new functionality, which would not be possible under a closed source model with only a single software vendor. The same applies to container management solutions.

According to GitHub, around 1,000 developers from software vendors and their customers are working on the Kubernetes open source project, which forms the basis for container management in many solutions. New releases also arrive more quickly than with proprietary software: there, release cycles of 12 to 18 months are the norm, compared with three months for Kubernetes. Open source container management solutions therefore have considerable advantages over vendor-specific solutions in terms of innovation and agility.
