In the world of virtualization today, there are two words that you no doubt hear a lot about – Docker and Kubernetes.
The world of container technology has exploded in the last few years, and containers are the direction everyone is looking at to construct their microservices architectures moving forward. Containers present many advantages over the traditional virtual machine and allow your business to move in a fast-paced, agile way. Containers are also behind the tremendous shift to immutable infrastructure. When the two words Docker and Kubernetes come up, they are often looked at as competing technologies; however, this is not the case.
In this post, we will take a look at Docker vs Kubernetes architecture, components, and uses for each and see how they each fit into your container infrastructure.
Let’s first look at Docker containers and see how this popular container engine is allowing businesses to effectively use container technology.
What Are Docker Containers?
Docker is a container technology which was released as open source via Docker Engine in 2013. Docker containers have become the de facto standard in enterprise container technology.
Docker has streamlined the process for packaging software into standardized units for development, deployment, and consumption of containers. While virtual machines virtualize hardware, containers virtualize operating systems. Docker containers allow packaging up all code and its dependencies so applications are able to run quickly and reliably, regardless of the environment. The Docker container image is a standalone executable package that includes all the necessary requirements for running the application – code, runtime, system tools, system libraries, and settings.
Docker container images run on top of the Docker Engine. Containers share the host machine's OS kernel and do not require running a full OS for each application, unlike a virtual machine. This allows an extremely lightweight footprint per application and drives higher container densities per server. Docker is a multi-platform container technology that can run on Linux, Windows, on-premises, and in the cloud.
One of the most difficult aspects of software development is deployment.
In traditional environments, deployment involved building virtual machines, installing all the application dependencies, and using configuration management tools and package management systems to deploy software effectively. However, Docker containers have changed all of this. Docker makes deploying applications via containers easy, since it bundles the code, prerequisites, and dependencies into a Docker container image. The application can then be deployed simply by downloading the Docker container image and running it.
Additionally, deployment via Docker allows businesses to run the container image on different distributions of the host operating system. The same Docker image can be run across any number of Linux distros, including CentOS, Ubuntu, etc. As long as the Docker Engine is installed on each, the same Docker container image will run across all of them. This has drastically streamlined the process of deploying enterprise applications and services and made it much simpler, with fewer moving parts, requiring fewer tools and utilities to arrive at the desired continuous integration/continuous delivery (CI/CD) pipeline.
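As a quick sketch of this portability, the same public nginx image (used here purely as an example) runs identically on any host with the Docker Engine installed:

```shell
# Pull the image once; the identical image runs unchanged on any
# host with the Docker Engine installed (CentOS, Ubuntu, etc.).
docker pull nginx:alpine

# Run it detached, mapping container port 80 to host port 8080.
docker run -d --name web -p 8080:80 nginx:alpine

# The application is reachable the same way on every host.
curl http://localhost:8080
```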
Let’s take a deeper look into the Docker Engine itself as well as the architecture of Docker.
The Docker Engine
When looking at the architecture of Docker, we need to take a look at the Docker Engine. This is where the magic of Docker comes to life. The Docker Engine is essentially a client-server application that can be sliced into three major components:
- A server component that runs the dockerd daemon process
- A REST API that specifies interfaces programs can use to talk to the daemon and tell it what work needs to be done
- A command line interface (CLI) client that is the docker command
The CLI interacts with the Docker daemon using Docker REST API calls, either through scripting or direct CLI commands. Applications that interact with Docker make use of the REST API interface and CLI commands.
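To illustrate, the same daemon can be queried through the CLI or directly over its REST API on the local Unix socket:

```shell
# The CLI and the REST API are two views of the same daemon.
# Query the daemon version through the CLI...
docker version

# ...or hit the same daemon directly over its Unix socket
# using the REST API (endpoint paths may also be versioned,
# e.g. /v1.41/version).
curl --unix-socket /var/run/docker.sock http://localhost/version
```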
The server component that runs the dockerd daemon process also creates and manages Docker objects that include:
- Images
- Containers
- Networks
- Volumes
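Each of these object types can be listed from the CLI:

```shell
docker image ls      # images
docker container ls  # running containers (add -a for stopped ones)
docker network ls    # networks
docker volume ls     # volumes
```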
Docker Architecture
As mentioned earlier, the Docker architecture is a client/server-based architecture in which the Docker client (CLI) interacts with the Docker server daemon. The daemon and the client can run on the same system; however, you can also split the two and have the client connect to the daemon over the network using the REST API and other means.
A closer look at the overall Docker architecture contains the following components:
- Docker daemon
- Docker client
- Docker registries
- Docker objects
Docker daemon
The Docker daemon is the dockerd process. It is the server component that listens for any API requests and manages the objects.
What are these objects?
Images, containers, networks, and volumes are among the Docker objects it manages. Communication between the Docker daemon and other daemons can take place as well.
Docker registries
In simple terms, a Docker registry is an online repository of available Docker images that can be pulled down and run as containers. Public registries are available to anyone. Additionally, you can create your own on-premises or private cloud Docker registry containing custom Docker images for use in your Docker containers.
Image registries can be configured in Docker so you can choose which registry is the source for your images when you issue a docker pull or docker run command. When you want to upload an image to the configured Docker registry, you use the docker push command.
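A typical pull/tag/push round trip might look like the following, where registry.example.com stands in as a placeholder for your own private registry:

```shell
# Pull from the default registry (Docker Hub).
docker pull alpine:3.19

# Re-tag the image with the hostname of a private registry.
docker tag alpine:3.19 registry.example.com/base/alpine:3.19

# Push the image to that registry so other hosts can pull it.
docker push registry.example.com/base/alpine:3.19
```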
Docker objects
The Docker images mentioned above are just one type of object that you create and use in the Docker ecosystem.
There are various other types of objects that play a role in Docker. These include containers, networks, volumes, plugins, and more.
- Images – These are essentially templates that provide the instructions for the Docker Engine for creating the Docker container. Much like the term used to describe a customized workstation or server image in the virtual machine world that contain custom software, settings, drivers, etc, a Docker image contains the customizations you need over and above the default image.
- Container images are built in “layers,” which allows changing one layer without affecting the other layers in the image. When you change the Dockerfile and rebuild an image, only the changed layers are rebuilt. This results in extremely efficient, small, and lightweight constructs on which to build your production application infrastructure.
- Containers – The container is the word used to define the whole technology. It is the runnable instance of the image. Much like a virtual machine running on top of a hypervisor, you can start, stop, move, or delete containers. Additionally, you can connect them to one or more networks, attach storage, and “clone” a container based on the current state of the running container.
- Services – Services allow you to scale containers across multiple Docker daemons running together in Docker swarm mode, which consists of multiple managers and workers. Each member of the Docker swarm is a Docker daemon, and they all communicate via the Docker API.
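The layered image build described above can be sketched as a minimal Dockerfile (app.py and requirements.txt are hypothetical application files):

```dockerfile
# Each instruction below produces an image layer; unchanged
# layers are reused from the build cache on rebuild.
FROM python:3.12-slim
WORKDIR /app
# Copy the dependency manifest first so the pip layer is only
# rebuilt when requirements.txt changes...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# ...while day-to-day code changes rebuild only this final layer.
COPY . .
CMD ["python", "app.py"]
```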
Now that we have a better understanding of what Docker is, let’s take a look at what you can use Docker containers for and how they are changing the world of software delivery for enterprises today.
Docker Container Networking Model
What about the Docker networking model?
The Docker container network model is built on top of a set of interfaces known as the Container Networking Model (CNM).
The main goal and purpose of the CNM methodology is portability. It helps to provide portability of Docker containers across different underlying infrastructures.
What are the constructs of the CNM?
- Sandbox – The configuration of the container's network stack is contained in the sandbox. This includes configuration aspects such as the container's interface management, routing table, and DNS settings.
- Endpoint – The endpoint is the component that actually joins the sandbox to a network. The endpoint helps to abstract the network away from the containerized application. Again, portability is enhanced with this architecture with the ability to use different types of network drivers without concern for the actual connection to the network.
- Network – The container concept of the word network is different than in traditional networking environments. When thinking of traditional networks, you may think of the 7 layers of the OSI model. However, with the CNM model, the network is a collection of endpoints. This could be by way of a VLAN, bridge, or other network construct.
The CNM networking model for containerized microservice applications allows the following design challenges to be successfully met:
- Portability
- Service Discovery
- Load Balancing
- Security
- Performance
- Scalability
Another unique aspect of the CNM network model is the set of modular network driver interfaces it provides. These are pluggable and open for development.
- Network Drivers – These actually make network connectivity for Docker containers work, and they are interchangeable. Customized network drivers can be written for Docker networks as well.
- IPAM Drivers – Docker provides a default, native IP address management (IPAM) driver that assigns IP addresses if none are specified.
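As a small sketch of the CNM pieces in practice, the following creates a user-defined network with the bridge driver and attaches containers (endpoints) to it; my-api:latest is a placeholder image name:

```shell
# Create a user-defined network backed by the bridge driver
# (the -d flag selects any pluggable network driver).
docker network create -d bridge app-net

# Containers attached to the same network get an endpoint on it
# and can resolve each other by container name.
docker run -d --name db --network app-net redis:7
docker run -d --name api --network app-net my-api:latest
```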
Docker Container Use Cases
What are enterprise businesses today doing with Docker containers?
There are many viable use cases for Docker containers being utilized in the enterprise environment today. A few of these include:
- Replacing Virtual Machines
- Quickly Deploying Software
- Creating Sandbox Environments
- Facilitating Microservices Architecture
- Enabling Continuous Delivery (CD) Pipelines
Let’s look at each use case and see how they are being achieved with Docker containers.
Replacing Virtual Machines
While virtual machines have certainly revolutionized the world of server resources in the enterprise, there are many inefficiencies in running virtual machines compared to the lightweight technology of Docker containers. Since a container does not run an entire operating system, as is the case with a virtual machine, its footprint is much smaller. Containers allow enterprises to replace many of the VMs they have been running. Containers are also much easier and quicker to provision than virtual machines; replacing virtual machines with containers has drastically decreased provisioning and deployment time from perhaps hours down to seconds or minutes.
Quickly Deploying Software
All code, dependencies, and prerequisites are included in the Docker container image. This allows the software to be all rolled up into one package and deployed together as a single entity. Additionally, this helps to easily version and streamline the process of application development which aids in creating infrastructure that is immutable.
Creating Sandbox Environments
Due to their very small footprint and efficiency, Docker containers create a great way to have “sandbox” environments or environments that are completely segmented from production that allow testing, development, and application experimentation without impacting any other environments. Sandbox environments are extremely valuable in the world of software development and help organizations to effectively implement true CI/CD pipelines. Due to the quick provisioning time for Docker containers, a new sandbox environment or many environments can literally be provisioned in seconds.
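Spinning up such a throwaway environment can be as simple as a single command; the --rm flag removes the container entirely on exit:

```shell
# Start an interactive Ubuntu sandbox that is deleted (--rm)
# as soon as you exit the shell -- nothing persists on the host.
docker run --rm -it ubuntu:22.04 bash
```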
Facilitating Microservices Architecture
Docker containers help to facilitate microservices architecture. Microservices allow for much more agility in software development. With more traditional monolithic applications, any change made to the application affects the entire system. This may require the entire application to be redeployed when changes or updates are made.
However, as applications are divided up into smaller services and microservices, changes and updates can be made much more easily. Microservices lend themselves to making an application much more modular in nature which leads to the application being much more scalable.
Enabling Continuous Delivery (CD) Pipelines
Continuous delivery works around a “pipeline” where the system rebuilds with every change and then is delivered to production via automated or semi-automated processes. Docker container image builds allow for creating an exact state for a Docker container. The Docker containers contain all the code, prerequisites, and requirements needed for delivering the application. This is much more difficult to deliver with traditional infrastructure systems. The Docker container build process allows reproducible and replicable environments that allow software systems to be easily delivered.
What is Kubernetes?
Many people misunderstand the purpose of Kubernetes and assume it is a competitor or competing technology to Docker containers.
However, Kubernetes is an open source platform released by Google that is used for managing and orchestrating containers across multiple hosts. It helps to solve many of the technical problems that businesses run into including managing compute resources, storage, auto-scaling, rolling deployments, etc. These are some of the most common technical problems that organizations run into when thinking about using containers for production use cases.
Kubernetes fills these operational needs. Like containers, Kubernetes is designed to run on bare metal, in VMs, in on-premises data centers, in the public cloud, and across hybrid cloud environments.
What is Kubernetes used for? It allows organizations to:
- Deploy their containers
- Provide persistent container storage
- Monitor container health
- Manage container resources
- Scale containers as needed
- Provide high-availability
Kubernetes helps to solve one of the technical problems of containers: they are by nature ephemeral entities. Once a container has served its purpose, it is destroyed and any data it contained is lost. Host volumes can be mounted on specific hosts to preserve data in containers; however, if you have multiple hosts in a cluster, Kubernetes volumes and persistent volumes help solve the problem of keeping data persistent when a container might run on any of a number of hosts in the cluster.
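A minimal sketch of a PersistentVolumeClaim and a pod that mounts it might look like this (names and sizes are illustrative):

```yaml
# A PersistentVolumeClaim requests storage that outlives any one
# container; pods mount the claim rather than a specific host path.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A pod references the claim by name; whichever node the pod
# lands on, the same volume is attached.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```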
When containers are deployed in Kubernetes, it can ensure that a certain number of container groups (pods) are running to service applications. With Kubernetes monitoring, you can define metrics of application health and monitor your applications to ensure SLAs are met by the provisioned containers. To satisfy application availability requirements, Kubernetes is also designed with high availability in mind: multiple master nodes can be designated to prevent any single point of failure in the container/Kubernetes environment.
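As a sketch of these ideas, a Deployment manifest declares the replica count and a health check, and the control plane then keeps that many copies running (names are illustrative):

```yaml
# A Deployment declares how many replicas should be running; the
# controller manager continuously reconciles toward that count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
          # Health metric: restart the container if this check fails.
          livenessProbe:
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
```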
Kubernetes Architecture
Kubernetes contains two major components that provide the functionality to provision, manage, monitor, and orchestrate containers. The two components of a Kubernetes environment include:
- Master – This is the centralized management of a Kubernetes environment. The master controls and schedules the container activities in the Kubernetes cluster.
- Nodes – The nodes in the Kubernetes environment are the worker “bees” so to speak that actually run the containers.
The two major components of the Kubernetes system include subcomponents that carry out various activities in the Kubernetes system.
Master Components
Looking at the master components, these contain the following subcomponents. They can run on the same host in a single host scenario, or they can run across different hosts in a Kubernetes cluster.
- API Server – The API server is the interface that provides RESTful API access for the Kubernetes master. Using a “POST” API call, new resources can be created. With the “GET” API call, the resource status can be returned. The resource information that is served out by the Kubernetes API server is stored in the Kubernetes backend data store which is called etcd.
- Controller Manager – The controller manager is a daemon that runs the core control loops shipped with Kubernetes. It watches the state of the Kubernetes cluster by means of the API Server and constantly works to ensure the cluster state is as close to the desired state as possible.
- Scheduler – The Kubernetes scheduler looks at all the running nodes to determine which are suitable for running pods. The placement decisions made by the scheduler significantly affect availability, performance, and capacity. Individual and collective resource requirements – QoS requirements, constraints, affinity and anti-affinity specs, data locality, etc. – are taken into account when determining where workloads run.
- etcd – The Kubernetes etcd is an open source distributed key-value store that stores all the RESTful API objects and is the mechanism in Kubernetes that is responsible for storing and replicating data.
Node Components
The node components contain the following subcomponents in the Kubernetes cluster:
- Kubelet – The Kubelet is one of the primary processes running in the nodes. Its activities are reported to the API Server including various health metrics. The Kubelet process runs containers via container technologies such as Docker, etc.
- Proxy – The Proxy subcomponent handles the management of routing between the pod load balancer and the Kubernetes pods. Additionally, it takes care of routing from the Internet to services. Three proxy modes are available including userspace, iptables, and ipvs.
- Docker – Docker is the most popular container runtime in the market today. Kubernetes, by default, uses Docker as the container engine. Kubernetes does support other container runtimes as well, including rkt and runc.
The interactions between the master and nodes are carefully orchestrated by the API Server. The client uses the kubectl command-line interface to send requests and commands to the API Server, and the API Server responds to these requests by querying and writing information to etcd. New tasks are coordinated by the scheduler, which determines the best node for each task. The Controller Manager monitors processes to achieve the desired state, and node and container status are reported back to the API Server via the kubelet.
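A typical interaction through kubectl might look like the following (deployment.yaml and the node name are placeholders):

```shell
# Apply a manifest; kubectl sends it to the API Server, which
# persists the desired state in etcd.
kubectl apply -f deployment.yaml

# The scheduler places the pods; watch them come up on the nodes.
kubectl get pods -o wide

# The kubelet on each node reports status back to the API Server.
kubectl describe node <node-name>
```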
Differences between Docker and Kubernetes
So, we have covered quite a bit of territory between Docker and Kubernetes. Let’s take a quick look at the differences between Docker and Kubernetes.
What differences exist between Docker and Kubernetes?
- They each solve different challenges – Docker is a container engine, while Kubernetes is a container orchestration tool for managing microservice applications across container cluster nodes
- Kubernetes can work with multiple container platforms, not just Docker. Docker also has its own container orchestration engine, Docker Swarm
- Kubernetes was originally developed by Google, while Docker containers were introduced by Docker, Inc., which has no affiliation with Google
- Docker hosts the containers and Kubernetes orchestrates their lifecycle and other activities
Similarities between Docker and Kubernetes
Even though Docker and Kubernetes are different in purpose, they are not competing products. In fact, they are very complementary technologies that go well together and are often used together.
What similarities exist between Docker and Kubernetes?
- Both excel at servicing microservices application architecture
- Both are open source applications
- They are very symbiotic in nature and work well with each other
- They both support immutable infrastructure concepts and methodologies, and used together, they make this methodology achievable
- They are both at home in on-premises installations as well as cloud infrastructures
- They support hybrid cloud
Which One Should You Adopt? Docker or Kubernetes?
One thing to note: adopting either of these technologies will not be a bad decision. Both Docker and Kubernetes have massive followings among the major players and are the de facto standards when implementing container technologies.
Realistically, this is not a choice between Docker or Kubernetes. They are different technologies servicing different use cases.
The real question is, will you choose to use Docker with Kubernetes? Or will you use Kubernetes to orchestrate a different container solution like rkt? Docker has massive momentum behind it, and you will most likely be looking at running containerized applications inside Docker containers.
Using Docker, you have a choice to use Kubernetes or Docker Swarm. Will you choose Kubernetes? Kubernetes has for the most part won the war of orchestration engines. However, there is still the choice to be made on whether or not to use it as your orchestration tool of choice.
Docker and Kubernetes are almost made for one another. They work so well together, the hard question may be, why use anything else? With each being the market leader in container engine and container orchestration tools, it would most likely be difficult to find a reason not to use both for your container infrastructure.
Conclusion
The world of containers is a very exciting segment of the virtualization community. The trend of moving applications to containerized microservices architectures will no doubt continue to be the driving force in application development for quite some time to come. Docker containers and Kubernetes have largely been responsible for the success of containerized applications in the past few years. Both are extremely powerful and represent the best of breed for their respective use cases.
Using Docker containers together with Kubernetes gives you the most powerful container engine orchestrated by the most powerful orchestration tool for your container clusters. Using both together certainly provides you with the best of both worlds. While they have similarities and differences, they are a cohesive duo of container tools that can power your containerized microservices applications in on-premises and public cloud environments.