
Getting Started with Containers & Docker

Introduction

Containerization has revolutionized software development and has become a common building block in today's architectures; applications, big data environments, and data engineering workloads can all be developed and deployed inside containers. In this article, we will learn more about containers and their advantages, and we will discuss Docker, a container platform whose images package everything you need to run an application: the runtime, system tools, and system libraries.

Virtual Machines

Virtual machines (VMs) are an abstraction of physical hardware that turns one server into many servers. A hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, the application, and the necessary binaries and libraries, taking up tens of GBs. VMs can also be slow to boot. To summarize, VMs virtualize server resources.

From an application development perspective, deploying applications on VMs raises the risk of deployment failures: when you move an application from a development server to a production server, for example, there is a high potential for failure. VMs also increase the deployment effort for production teams. One additional point is that VMs consume server resources, because they virtualize physical resources such as CPU, storage, and memory.

The more VMs you have on one server, the more resources you consume from that server.

Containers

Containers, on the other hand, virtualize the operating system of the server rather than its physical resources, which makes them more portable, lightweight, and efficient. Imagine that all your application files, dependencies, libraries, and any other resources your application needs exist in one box, the container. No matter which server you deploy to, your application has everything it needs to function correctly; it is completely isolated from and independent of the server it is running on. Isn't that a great advantage?

Containers need a runtime engine, such as Docker, to run. You can think of a container as a separate, isolated process on your operating system.

Now, let's revisit the scenario we examined for VMs. With containers, application deployment carries less risk of failure and requires less effort from production deployment teams, because your application is totally independent of where it runs; it just needs a runtime and you are good to go. You can truly "run anywhere".
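As a quick illustration (a minimal sketch, assuming Docker is already installed and using the public nginx image from Docker Hub), the exact same commands start the application on a laptop, a test server, or production:

# Pull and start the same container image on any host that has a Docker runtime
docker run -d --name demo-web nginx
# The container appears as just another isolated process on the host
docker ps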

What is Docker?

Docker is an open platform designed to make it easier to create, deploy, and run applications by using containers. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

Containers help you package all the application components in one place and deliver or deploy them without any pre-configured requirements. By doing so, we eliminate the phrase "it works on my machine".

Docker provides the ability to package and run an application in a loosely isolated environment called a container. This isolation allows you to run many containers simultaneously on a given host. Containers are lightweight because they run directly within the host machine's kernel.
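For example, the following sketch (again using the public nginx image; the container names and port mappings are arbitrary) runs several isolated containers side by side on a single host:

# Run two isolated containers from the same image on one host
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
# Each container runs as its own isolated process within the host kernel
docker ps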

Docker Architecture and Components

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets, or a network interface.
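To see this client-server split in action, you can talk to the daemon's REST API directly over its default UNIX socket (a sketch assuming the default socket path /var/run/docker.sock and that curl is available):

# Query the Docker daemon's REST API over the default UNIX socket
curl --unix-socket /var/run/docker.sock http://localhost/version
# The Docker client reports the same information through the same API
docker version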

Docker Daemon

The Docker daemon “dockerd” listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker daemon is the processing and orchestration engine in the Docker architecture.

Docker Client

The Docker client is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The command uses the Docker API. The Docker client can communicate with more than one daemon.
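For instance, the same client can be pointed at a different daemon with the -H flag or the DOCKER_HOST environment variable (a sketch; remote-host is a placeholder, and connecting over SSH requires a reasonably recent Docker version):

# Use the local daemon (the default)
docker ps
# Point the same client at a remote daemon over SSH (remote-host is a placeholder)
docker -H ssh://user@remote-host ps
# Or set the target daemon for the whole shell session
export DOCKER_HOST=ssh://user@remote-host
docker ps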

Docker Registry

A Docker registry stores Docker images. There are two main options: Docker Hub, a public registry of images that anyone can access, and a local (private) registry that you can configure to host your own custom images. You can also configure the default registry that your Docker daemon pulls from when you run commands such as docker pull from the Docker client.
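For example, pulling from the default registry (Docker Hub) and pulling from your own registry differ only in the image name prefix (a sketch; registry.example.com and the image name are placeholders):

# Pull an image from the default registry (Docker Hub)
docker pull hello-world
# Pull an image from a private/local registry (placeholder host and image name)
docker pull registry.example.com:5000/myteam/myimage:1.0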

Images and Containers

An image is a read-only template that contains the instructions for creating a container; in a Dockerfile you define the steps required to build that image.

A container is a runnable instance of an image, created using the instructions defined in the Dockerfile for that image. You can create, start, stop, move, or delete a container using the Docker API or CLI.

Container = Image + Running instructions (Dockerfile)
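As a minimal sketch (the file contents, image tag, and port mapping here are illustrative), a Dockerfile defines the image, docker build creates the image, and docker run creates a container from it:

# Dockerfile (illustrative): build a custom image on top of the official nginx image
FROM nginx
COPY index.html /usr/share/nginx/html/index.html

# Build the image from the Dockerfile in the current directory and tag it
docker build -t my-web:1.0 .
# Create and start a container from that image
docker run -d --name my-web-container -p 8080:80 my-web:1.0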

Install Docker Engine

Installation methods:

  • Using Docker repositories: this is the recommended approach for ease of installation and upgrades.
  • Using RPM packages: you can download the RPM packages and manage the installation manually; this approach is useful if your server does not have internet access.
  • Automated scripts: this is useful in testing and development environments (a sample script is shown after this list).
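For reference, the third approach uses Docker's convenience script (only recommended for testing and development environments):

# Download and run Docker's convenience install script (test/dev environments only)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh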

We will use the first approach, which is the one recommended by the Docker docs, and walk through the following steps together. The machine used here runs Red Hat (RHEL) version 7.6.

1- Install the yum-utils package (which provides the yum-config-manager utility) and set up the stable repository.

sudo yum install -y yum-utils

2- Add the repository by running the following command

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3- Check that the repository has been added successfully by viewing the contents of the directory /etc/yum.repos.d

ll /etc/yum.repos.d/

4- Now, we can proceed to install the latest version of Docker Engine and containerd by running the following command

sudo yum install docker-ce docker-ce-cli containerd.io

If you would like to install a specific version of Docker, do the following

a- List all available versions in the repository

yum list docker-ce --showduplicates | sort -r

b- Choose the specific version to install by running the following command

sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
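For example, if the list from the previous step shows a version such as 19.03.9 (an illustrative value; use a version string that actually appears in your own list, and note that the exact string format may differ), the command would look like this:

# Example only: replace 19.03.9 with a version string from your own list
sudo yum install docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io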

Now, Docker is installed but not yet running. To run the Docker Engine, run the following commands.

Start Docker Engine

sudo systemctl start docker

Check Docker Engine Status

sudo systemctl status docker

Verify that Docker Engine is installed correctly by running the hello-world image.

sudo docker run hello-world
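Optionally, you can also configure the Docker service to start automatically at boot with a standard systemd command:

# Start the Docker service automatically on every boot
sudo systemctl enable docker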

For instructions for other operating systems, check the Docker documentation.

In the next article, we will go through some of the commands and operations you will need to work with Docker. Stay tuned 🙂


