Docker and Kubernetes have always been among the most common terms in the vocabulary of cloud native development, especially when it comes to containers. They are open source technologies that manage the entire life cycle of containers, adding features on top of the underlying technologies and offering a simpler interface to users.

In this in-depth analysis we will focus in particular on the world of Docker, to understand first of all what it is, where it comes from, and what differentiates it from the traditional Linux container, as well as the reasons for its extraordinary success, which has made it a technological reference standard in every respect.

What is Docker

Docker is an open source software platform, developed by the company of the same name, that allows you to create, test and distribute containerized applications. Docker runs software in standardized units called containers: environments that provide everything needed for execution, including libraries, system tools, code and runtime.
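
As a first taste, here is a minimal sketch using the public hello-world image; both commands are standard Docker CLI calls:

    # Download the hello-world image and run it as a container; the image
    # already bundles everything the program needs to run.
    docker run hello-world

    # List the containers created so far, including stopped ones.
    docker ps -a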

Docker’s goal is therefore to facilitate the creation and management of containers, in order to distribute and scale the resources for an application in any environment, while always keeping the executed code under control.

Docker containers can run anywhere, in on-premise data centers or in the public and private cloud. Thanks to operating-system-level virtualization, typical of this technology, Docker can create containers natively in both Linux and Windows environments.

However, keep in mind that Windows images can only run on Docker installations on Windows hosts, while Linux images run on Linux hosts, although, thanks to third-party virtualization technologies, they can also be launched on Windows.
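
To check which operating system an image targets before running it, you can inspect its metadata; a small sketch with the standard CLI:

    # Pull a Linux image, then read the OS and CPU architecture it targets.
    docker pull ubuntu:22.04
    docker image inspect --format '{{.Os}}/{{.Architecture}}' ubuntu:22.04
    # Typical output: linux/amd64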

A brief history of Docker

Docker first appeared on the IT scene in March 2013, as an open source project released by the company then known as dotCloud. The following year the first stable release, Docker Engine 1.0, arrived, while 2016 was the turn of Swarm, the tool dedicated to orchestrating Docker containers, which made its appearance with version 1.12.

Swarm in some ways had the ambition of becoming a reference point in the container-as-a-service world, but here Kubernetes clearly prevailed: another open source technology, more complex overall, but even more complete in terms of functionality and able to adapt to a greater variety of operational contexts.

In 2017 Docker Enterprise made its debut, and in 2019 the Docker Enterprise business was acquired by Mirantis, which took over both the products and the related intellectual property. Initially intent on phasing out Swarm in favor of Kubernetes, Mirantis later revised its decision, continuing to support Docker features on both orchestration platforms, including its own proprietary one.

What is it for

To understand the purpose of a platform like Docker, it is enough to focus on the difference between Docker containers and generic Linux containers. Docker mainly serves to simplify the construction and management of containers compared to the standard approach.

While originally based on the same technology, namely LXC (LinuX Containers), Docker aims to guarantee a much more advanced experience than the standard implementation. LXC provides lightweight virtualization, independent of the underlying hardware, but is rather bare-bones and cumbersome to implement. In other words, basic LXC presents evident problems in terms of user experience.

Docker works specifically on many aspects to facilitate the interface between the container and the end user, for example through a series of automated procedures that guide the construction of the container step by step, with correct versioning of all the images created.
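
The Dockerfile is the automated procedure in question: a repeatable recipe that builds the image layer by layer and lets you record each version in a tag. A minimal sketch, assuming a hypothetical Python application called myapp:

    # Dockerfile (application name and files are illustrative)
    FROM python:3.12-slim                 # base image layer
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt   # dependency layer, cached between builds
    COPY . .                              # application code layer
    CMD ["python", "main.py"]

    # Build the image and record an explicit version in its tag:
    docker build -t myapp:1.0 .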

Docker also simplifies splitting an application across multiple containers, making overall orchestration more practical and faster, especially for applications made up of a very large number of components.

Differences between Docker images and Docker containers

In the broad lexicon surrounding Docker technology, it is worth avoiding the confusion that can arise between Docker images and Docker containers. They are not the same thing, and understanding the difference helps to place the platform in its correct operational context.

Docker images

A Docker image contains the application code together with all the libraries and dependencies the code needs to run as a container. When you run a Docker image, one or more container instances are generated from it.

Docker images are made up of several layers, each corresponding to a version of the image. The topmost layer corresponds to the most recent version, while the state of previous versions is preserved for rollback purposes or for reuse in other projects. This explains why developers often search for Docker images in community-powered repositories. Naturally, from a single Docker image it is possible to derive many others, all sharing the common starting layers.
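
The layered structure is easy to observe with the standard CLI; for example, on the public nginx image:

    # List the layers that make up an image, most recent layer first.
    docker pull nginx:latest
    docker history nginx:latest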

Docker containers

Within the Docker image structure, a container is simply an additional, specific layer, called the container layer. Container-level changes, such as adding or deleting files, are saved only in that layer while the container is running, without affecting the base image, which can therefore be used to boot multiple container instances at the same time.

To give a definition, Docker containers are therefore live instances of Docker images. Following this logic, it is easier to understand that Docker images are made up of read-only files, while the Docker container constitutes, for all intents and purposes, the editable part.
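
This read-only/editable split can be demonstrated with a short sketch using the public nginx image:

    # Start two containers from the same read-only image.
    docker run -d --name web1 nginx
    docker run -d --name web2 nginx

    # Change a file inside web1 only; the change lives in its container layer.
    docker exec web1 touch /tmp/scratch.txt

    # Show what changed in each container relative to the image
    # (web1 lists the new file; web2 and the image itself are unaffected).
    docker diff web1
    docker diff web2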

When to use Docker

Docker is a great choice when it comes to using containers in development projects. The reasons for using it are therefore strictly connected to the usefulness of containers themselves, now indispensable tools for developing software based on a microservices architecture. In particular, this applies to the DevOps methodologies used for developing cloud native applications, which rely on continuous CI/CD (continuous integration / continuous deployment) cycles to guarantee that the end user always receives the most up-to-date version.

The technology behind containers, and therefore behind Docker itself, can be summarized in three key terms:

  • Builder: the tools used to create containers. In Docker's case, this role is played by the Dockerfile;
  • Engine: the engine that runs the containers, namely the docker command and the dockerd daemon;
  • Orchestration: the technology used to manage containers, with complete visibility into their execution status (activity, server/VM, etc.), as with Docker Swarm or the famous Kubernetes (see the sketch after this list).
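
A minimal sketch of the three roles in sequence, using the standard Docker CLI; the image name demo:1.0 and its Dockerfile are hypothetical placeholders:

    # Builder: turn a Dockerfile in the current directory into an image.
    docker build -t demo:1.0 .

    # Engine: the docker CLI asks the dockerd daemon to run a container.
    docker run -d --name demo demo:1.0

    # Orchestration: the same image deployed as a replicated Swarm service.
    docker swarm init
    docker service create --name demo-svc --replicas 3 demo:1.0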

The key benefit of a container is that it packages both the application and the configuration of its execution environment. This allows you to manage the various images as instances, without having to install the application each time, as was the case with traditional procedures.

This feature goes hand in hand with increasingly refined practices based on the ephemeral nature of containers, which allows them to be started and stopped seamlessly, exclusively according to the workloads that require them.

If an anomaly occurs, such as a crash, or, more commonly, a container is no longer needed because the application it runs has ceased its activity in the meantime, its shutdown can be orchestrated according to certain context parameters, without forcing a systems engineer to execute each procedure manually.
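
At the single-container level, this kind of automation can be sketched with a restart policy and a health check; the image name myworker:1.0 is a hypothetical placeholder:

    # Restart policy: the engine restarts the container automatically if it
    # crashes, up to three attempts.
    docker run -d --restart=on-failure:3 --name worker myworker:1.0

    # Dockerfile HEALTHCHECK: lets the engine (or an orchestrator) detect an
    # unhealthy container and react accordingly.
    HEALTHCHECK --interval=30s --timeout=3s \
      CMD curl -f http://localhost/ || exit 1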

Platforms such as Docker and Kubernetes make it possible to create an end-to-end pipeline for managing the life cycle of containerized applications, generating high added value thanks to the level of automation they achieve.

Why use Docker

Despite its relatively young age, Docker has become a true point of reference, to the point that, rather improperly, it is often used as a synonym for container, a term that deserves a much broader connotation, considering that LXC was available on Linux long before Docker was born.

At a functional level, however, a little short-sightedness about the history of the technology is not an insurmountable problem. What we want to highlight are the practical advantages that a solution like Docker offers its users, the same reasons that convinced them to adopt it in their development pipelines. In this final summary we will therefore also take up some of the aspects listed in the previous paragraphs.

Docker is currently used by millions of developers around the world and is available in the major cloud ecosystems, with customizations that help users operate even more efficiently, especially when combining various technologies and services.

Conscious use of the Docker platform above all extends and improves on the standard Linux containerization functions, also on Windows Server. In particular:

Granularity and greater decoupling between processes

Unlike standard LXC, Docker is designed to run a single process inside each container. This produces a granular structure that ensures greater functional resilience, since maintaining, updating or modifying a process implies downtime limited to the process concerned, with no further knock-on effects.
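
A minimal sketch of this one-process-per-container layout, expressed as a Docker Compose file (service names and images are illustrative):

    # docker-compose.yml: each process gets its own container.
    services:
      web:
        image: nginx:latest        # serves HTTP, and nothing else
        ports:
          - "8080:80"
      db:
        image: postgres:16         # runs only the database process
        environment:
          POSTGRES_PASSWORD: example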

Greater portability of containers

Docker containers can be run, once the platform is installed, on any system that meets the host requirements. There are therefore no particular restrictions on moving between on-premise and cloud data centers. Standard LXC containers, on the other hand, are more finicky about the specific configuration of the machines, physical or virtual, on which they are expected to run.

Versioning of containers

Thanks to its native layered structure, Docker preserves the history of images, so that you can go back to a previous version at any time. Beyond the intrinsic advantage, this enables several very interesting functions, such as the ability to transfer only the differences between two versions.
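
A small sketch of this behavior, assuming the hypothetical myapp image from before and a configured registry:

    # After changing only application code, rebuild: unchanged layers come
    # from the build cache, so only the final layers are recreated.
    docker build -t myapp:1.1 .

    # Pushing the new tag uploads only the layers the registry does not
    # already have ("Layer already exists" is shown for the shared ones).
    docker push myapp:1.1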

Automating the creation of containers

Docker provides various tools for rapidly building containers on the basis of presets and highly automated procedures, oriented towards the self-service model typical of cloud services.

Repository and resource sharing

Thanks to a huge and steadily growing community, Docker users can draw on the many images made available in the main repositories used by developers. This greatly facilitates the start-up phase of development projects whenever a new function needs to be built.
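
For example, with the standard CLI you can look up and reuse a community image instead of building one from scratch:

    # Search Docker Hub for community images matching a keyword.
    docker search redis

    # Pull a ready-made image and use it directly.
    docker pull redis:7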

Reuse of containers for different applications

In full consistency with the decoupling on which the microservices architecture is based, Docker images can be reused in different projects, as templates for launching new containerized instances.
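
A sketch of this template-style reuse, with purely illustrative image and path names:

    # Dockerfile of project A, reusing a shared base image as a template.
    FROM mycompany/base-python:1.0
    COPY project_a/ /app

    # Dockerfile of project B, built from the same template.
    FROM mycompany/base-python:1.0
    COPY project_b/ /app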
