In this article, we will talk about the concepts of containerization and virtualization. We will look at solutions such as Docker, install it, and run some functionality on a Linux operating system.
Before we start talking about Docker, we first have to understand the concepts of containerization and virtualization, so let's start with traditional deployment to see how it works.
In traditional deployment, if we want to run ordinary software written in Java on a Windows or Linux operating system, we need the operating system installed and working correctly on our machine, then we need the JVM and JDK installed, and an IDE to write and run our application. After installing the application dependencies, we can finally run it. In short:
- Installing or using the existing operating system
- Installing the tools needed by your software
- Installing software dependencies
- Finally, run the software
Architecture of Traditional Deployment
When we look at the figure above, which represents the traditional software deployment approach, we find the following:
- Hardware: the physical server that provides resources to the operating system and applications
- Operating system: the operating system that hosts the applications
- Applications: the software we want to run on our operating system
When we look at traditional deployment, we find some issues.
Traditional Deployment Issues
- Isolation issue: there is no way to define boundaries for applications on a physical server, which causes resource allocation problems; for example, App 1 can access the content and resources of App 2 and App 3
- Scaling issue: resources end up underutilized
Now let's move to the next evolution of software deployment, which is virtualization.
Virtualization is the approach of running different isolated operating systems on a single physical server.
- First, we virtualize the hardware, or physical server
- A single physical server allows us to run multiple virtual machines; each virtual machine includes a full copy of an operating system, binaries, and libraries
- Virtualization makes it easier to add, update, and delete applications, which solves the scalability issue
- Virtualization allows better utilization of resources
- Virtualization isolates applications between virtual machines
Now let's look at the architecture of a virtual machine compared with traditional deployment:
- Hardware: the physical server that has the resources
- Operating system: the host operating system that controls the resources
- Hypervisor: computer software that creates and runs virtual machines. A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing power
- Virtual machine: a virtual machine (VM) is an isolated computing environment created by abstracting resources from a physical machine. Each virtual machine has the following components:
- Operating system: the guest operating system that controls the resources allocated to the VM
- Binaries and libraries: the binaries and libraries the application needs in order to run
- Applications: the program or project we want to run
Now that we fully understand traditional deployment and virtualization, we are going to learn more about containerization.
A container, by definition, is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
In other words, we can say containerization is virtualization of the operating system.
- a container is a virtual operating system
- a container is an abstraction at the operating system layer that lets you package code and dependencies together as standardized units of software that can run
- containers take up less space than virtual machines, boot quickly, and are well isolated
- containerization eliminates wasted infrastructure resources and improves utilization
- OS-level virtualization refers to an operating system paradigm in which the operating system allows the existence of multiple isolated user-space instances (containers)
- OS-level virtualization solutions are container engines.
- A container engine is a managed environment for deploying containerized applications.
User-space instances have different names depending on the solution:
- user space – the generic OS-level virtualization term
- container – Docker
- VPS – OpenVZ
- virtual kernel – DragonFly BSD
Now let's go inside Docker and get to know its architecture.
What is Docker?
- Docker is an OS-level virtualization tool.
- Docker is an open platform for developing, shipping, and running applications.
- Docker provides tools, and a platform to manage the life cycle of your containers:
- Develop your application and its supporting components using containers.
- The container becomes the unit for distributing and testing your application.
- When you’re ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.
Docker uses a client-server architecture.
Docker Daemon (server)
- The Docker daemon (the dockerd process) listens for Docker API requests and manages Docker objects.
- A daemon can also communicate with other daemons to manage Docker services.
Docker Objects
- Images: An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
- Containers: A container is a runnable instance of an image.
- Volumes: Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
- The Docker client (docker) is the primary way that many Docker users interact with Docker.
- When you use commands such as docker run, the client sends these commands to the Docker server, which carries them out.
- The docker command uses the Docker API and can communicate with one or more docker daemons.
- A Docker registry stores Docker images.
- Docker Hub is a public registry that anyone can use, and by default, Docker looks for images on Docker Hub.
- Docker Hub is not the only registry on the market; you can also use your own Docker registry.
Installing Docker on Ubuntu operating system
$ sudo apt install docker.io
To check whether Docker is installed, type in your terminal:
$ docker --version
Now enable the Docker service with systemctl:
$ sudo systemctl enable docker
Now we can check the Docker service status:
$ sudo systemctl status docker
Now let's verify that we installed Docker correctly:
$ docker run hello-world
In this article, we walked through the concepts of virtualization and containerization, and we ended by showing how to install and test your Docker installation. In the next articles, we will go through container operations using the Docker platform.