Containers and Cloud Computing

Monday Apr 10th 2017 by Paul Rubens

By allowing far greater application portability, containers enable developers to work more efficiently across on-premises data centers and the cloud.

The Docker container system exploded onto the cloud computing scene just four years ago, but its impact since then has been profound and wide reaching. That's because containerization promises to change the way that IT operations are carried out just as radically as virtualization technology did a few years previously.

Although container technology itself dates back more than a decade, Docker made containerization easy to implement for the first time. It also arrived on the IT scene at just the right time: when many organizations were looking for a technology to help them move from purely on-premises computing to a hybrid approach that combines their own data centers with the power of public clouds.

What are Containers?

A shipping container is a standardized box that has been designed to be carried on many different types of transport, including ships, trains, and trucks. The idea of an application container is very similar: it's a way of packaging an application so that it can be moved around and run on different computer systems – from a developer's laptop, to a test environment, to production systems in a data center or in the cloud – without modification.

To achieve this, a container consists of an entire runtime environment. That includes not only the application (or microservice) but also all of its dependencies – libraries and other binaries – and the configuration files needed to run it, all bundled into one package.

By containerizing the application and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away. For example, if a developer's environment uses one version of Python but the test environment has an earlier version, then an application may not run in the test environment properly. But if the application is bundled in a container with the correct version of Python then it can be moved between the two (or any other environment) and continue to work properly. Likewise, containers overcome problems due to differences in network topology or storage environments, or even security policies.
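The Python version mismatch described above is exactly what a Dockerfile is designed to prevent. The following is a minimal sketch; the file names, application, and version pin are illustrative assumptions rather than anything from the article:

```dockerfile
# Pin the exact Python version the application was developed against, so
# every environment -- laptop, test server, production -- runs the same interpreter.
FROM python:3.6-slim

WORKDIR /app

# Bundle the dependency list and install it inside the image,
# so the container carries its libraries and binaries with it.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY app.py .

CMD ["python", "app.py"]
```

Built once, the resulting image behaves the same wherever it runs, regardless of which Python version (if any) the host itself has installed.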

Just as virtual machines run on a virtualization host, containers run on a container host. But whereas a virtualization host runs virtual machines that each contain their own operating system as well as the applications that run on it, a container host runs a single operating system, and each container shares that operating system kernel with the host and with the other containers running on it. Shared parts of the operating system are read-only, while each container has its own writable layer for any changes it makes.


Source: Docker

Containers allow easier portability between computing environments.

Benefits of Containers

  • Application portability One of the key benefits of containerization is that containerized applications can easily be moved from one environment to another without encountering dependency problems, or other problems caused by a change in operating environment. This is particularly beneficial for companies that want to be able to move applications between their data centers and public clouds. This can also make bug tracing easier because there is no difference in environments when an application in a container is run on a laptop, a test server, or in production.
  • Fewer resources required Unlike virtual machines, containers do not have their own complete operating system. Instead they share the operating system of the container host. That means while virtual machines may be several gigabytes in size, containers may be just tens of megabytes. As a result, a single server can host far more containers than virtual machines, reducing hardware requirements and related space, power, and cooling costs. 25% of companies that use Docker run an average of over 10 containers simultaneously on each host, according to research from Datadog.
  • Operational efficiency Since containerized applications can be moved easily from development to testing to production, containerization simplifies a move to the DevOps process for companies that wish to do so, and can also speed up the time required to develop an application, test it, and put it into production.
  • Low startup time Because they are so lightweight, containers can often be spun up in a fraction of a second. By contrast, a virtual machine may take several minutes to start up. That means containers can be provisioned whenever needed to react to spikes in demand, and shut down when no longer required, allowing resources to be used more efficiently. Research shows that the average lifespan of a container is 2.5 days, compared to 15 days for a virtual machine, suggesting that many companies use containers for dynamic workloads.
  • Microservices Many organizations are moving away from monolithic application architectures to more modular microservices architectures, where applications are split into separate modules. Containerization is ideal for this type of architecture because each microservice can be put into a container and fired up almost instantly when needed.


Source: Docker

Containers have key advantages over virtual machines.

Disadvantages of Containers

  • Lack of isolation One of the key worries about containers is that they don't provide the same level of isolation to applications as virtual machines do, because containerized applications share the host's operating system kernel. That means that if there's a vulnerability in the operating system kernel, this could provide a way into the containers that are sharing it. Of course that's also true with a virtualization hypervisor, but since a hypervisor provides far less functionality than a Linux kernel (which typically implements file systems, networking, application process controls and so on) it presents a much smaller attack surface.
  • Operating system dependency Since applications running in containers share the operating system kernel of the container host, containerized applications must run on a host with the same type of operating system. Unlike with virtual machines, a Linux application in a container can't run on a Windows container host, and vice versa.

Container Software Implementation

Docker has become almost synonymous with containers, but its container system is far from the only one in town: other companies, such as CoreOS and even Microsoft (in partnership with Docker), offer systems of their own.

But Docker's is the most significant, so it's instructive to look at the components that make up the Docker container system. Other systems are broadly similar.

  • Docker host: This is a computer that runs containers.
  • Docker client: The Docker client runs on end-user machines, providing a user interface to Docker. It accepts commands and configuration flags from the user and communicates with a Docker daemon.
  • Docker daemon: The Docker daemon runs on a Docker host and is responsible for building, running, and distributing Docker containers. The Docker client and daemon communicate using a REST API, over UNIX sockets, or a network interface.
  • Docker image: A Docker image is a read-only template with instructions for creating a Docker container. An image is built from instructions in a text file called a Dockerfile.
  • Docker container: This is the runnable instance of a Docker image, which can be run, started, stopped, moved or deleted using the Docker API or command-line commands.
  • Docker registry: A registry is a library of images. Registries can be public or private, and can be located on a Docker host or client, or on a separate server.
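Putting those pieces together, a typical workflow looks something like the following. This is a sketch, not part of the original article: the image name, tag, and registry address are invented for illustration, and the commands assume a Docker daemon is running on the host.

```shell
# Build an image from the Dockerfile in the current directory
# (the client sends the build request to the daemon).
docker build -t myapp:1.0 .

# Run a container from that image; the daemon creates and starts it.
docker run -d --name myapp-test myapp:1.0

# Stop and remove the container when it is no longer needed.
docker stop myapp-test
docker rm myapp-test

# Tag and push the image to a registry so other hosts can pull it.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```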

Other container systems that provide an alternative to Docker include rkt, developed by CoreOS.

Container Orchestration Systems

In order to extract the maximum benefit from containers in an enterprise environment, a container orchestration system is usually required. This can be used to automate the deployment and management of containers, and the scaling of multi-container applications in large clusters.
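As a sketch of what orchestration looks like in practice, a Kubernetes Deployment manifest like the one below asks the orchestrator to keep a fixed number of identical containers running across a cluster and to replace any that fail. The application name, image, and port here are invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical application name
spec:
  replicas: 3                   # the orchestrator keeps three containers running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web-frontend
        image: registry.example.com/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Changing the replica count (or letting an autoscaler change it) is how a multi-container application scales up to meet spikes in demand and back down when they pass.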

Docker provides an orchestration system called Docker Swarm, but important alternatives include:

  • Kubernetes: Originally developed by Google, this is an open source orchestration system that is also built into commercial orchestration products.
  • Amazon ECS: This is a highly scalable, high performance container management service that supports Docker containers and allows users to run applications on a managed cluster of Amazon EC2 instances.
  • Azure Container Service: ACS optimizes the configuration of open source orchestration systems such as Docker Swarm or Kubernetes for use with Microsoft Azure.
  • CoreOS Tectonic: This is a commercial container management system built around the open source Kubernetes.
  • VMware Integrated Containers: VMware maintains that the most sensible way to implement containers is within a VMware virtual machine, to provide extra isolation and in order to be able to take advantage of the maturity of its vCenter management software. It offers a system called vSphere Integrated Containers which allows Docker containers to be run and managed from vCenter by putting them into minimal virtual machines.
Copyright 2017 © QuinStreet Inc. All Rights Reserved