Docker Easy
The Complete Guide on Docker World for Beginners
Table of Contents
Chapter 1: Introduction
1.1 Basic Concept & Terminology:
Let's start by clearing up the concepts and terminology:
1.1.1 Container
Containers are isolated parts of your operating system. They are almost like virtual machines. The difference is that they share a lot of resources, such as the kernel, with the host operating system, whereas virtual machines bring along complete operating systems of their own. Containers are much lighter to set up and run, but they are perfectly sufficient for running isolated software. Containers are not exclusive to Docker and can be used without it.
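As a sketch of that last point: containers are built on operating-system features, not on Docker itself. On a Linux host with the util-linux tools installed (and root access), you can create an isolated process with nothing but the unshare command:

```shell
# Containers are an OS feature, not a Docker invention: unshare creates a
# new PID namespace, so the shell inside sees only its own processes.
# Requires a Linux host, root privileges, and util-linux's unshare.
sudo unshare --pid --fork --mount-proc sh -c 'ps aux'
```

Inside the new PID namespace, ps reports only the shell and ps itself, even though the host may be running hundreds of processes. Docker layers tooling and packaging on top of exactly these kernel features.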
Containers are not a new concept: in shipping, they have been in use since the early 1930s, leading up to Malcom McLean's patent in 1956. The need to ship different products with different constraints (size, dimensions, weight) led to the standardization of a shipping model called a container. Metaphors aside, in the software production world we find the same needs. And here comes the software container: something that bundles the software product and manages its configuration for shipping.
1.1.2 Docker
Docker is a suite of tools for configuring, running and managing containers. The main command line tool, docker, can be used to quickly configure and start containers using pre-built images. The suite also includes tools like docker-compose, which is used to quickly start and stop a specific configuration of multiple containers.
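To make this concrete, here is a minimal sketch of both tools in action. The image names and port numbers are illustrative assumptions, and the commands require a working Docker installation:

```shell
# Start a container from a pre-built image (nginx is used here purely as
# an example), mapping host port 8080 to container port 80.
docker run --name web -d -p 8080:80 nginx

# List running containers, then stop and remove the one we started.
docker ps
docker stop web
docker rm web

# docker-compose works from a configuration file. A minimal illustrative
# two-service stack:
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  cache:
    image: redis
EOF

# Start and stop the whole multi-container configuration with one
# command each.
docker-compose up -d
docker-compose down
```

The point of compose is the last pair of commands: however many containers the configuration defines, bringing them all up or down is a single step.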
The software container is not a new concept either, but working with containers directly is a hard, low-level job for most engineers. Docker, however, turns out to be a fast, easy-to-use, and powerful tool for containerizing your software. Using Docker, you define images, which describe a software environment's settings and commands. And from those images, you run containers, which are the actual executable bundles.
Docker is a special kind of virtualization. The advantage over VMware, Hyper-V, KVM, or Xen is that Docker natively uses the kernel of the host operating system and has no hypervisor in between. That makes Docker very fast and performant.
1.1.3 Images
Images are pre-built containers for Docker. In virtual machine land they would be comparable to VM snapshots. Anyone can build an image and then share it, and others will be able to run it without having to build it themselves. Also, images can be extended.
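For example, extending an existing image takes only a short Dockerfile. The base image, file name, and repository name below are illustrative placeholders, and the commands assume a working Docker installation:

```shell
# A Dockerfile that extends the official nginx image with our own page.
cat > Dockerfile <<'EOF'
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
EOF

# Build a new image from it, then share it by pushing to a registry
# ("myuser/my-nginx" is a placeholder repository name).
docker build -t myuser/my-nginx .
docker push myuser/my-nginx
```

Anyone who pulls myuser/my-nginx can now run it directly, or in turn extend it with their own FROM line, without ever rebuilding the layers beneath it.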
1.2 Introduction to Docker
Docker is an open source containerization platform. Docker enables developers to package applications into containers: standardized executable components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.
Docker, an open source technology, is used primarily for developing, shipping, and running applications. Docker enables you to separate your applications from your underlying infrastructure so that software delivery is quicker than ever. Docker also makes infrastructure management simple, because infrastructure can be managed in the same way as the applications themselves. By bringing the advantages of Docker to shipping, testing, and deploying code, we can significantly reduce the delay between the development stages and hosting the same code in production.
While developers can create containers without Docker, Docker makes it easier, simpler, and safer to build, deploy, and manage containers. It's essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation.
Docker also refers to Docker, Inc., the company that sells the commercial version of Docker, and to the Docker open source project, to which Docker Inc. and many other organizations and individuals contribute.
Your .NET Core container can access a SQL Server database running in a container or a SQL Server instance running on a separate machine. You can even set up a cluster with a mixture of Linux and Windows machines all running Docker, and have Windows containers transparently communicate with Linux containers. Companies big and small are moving to Docker to take advantage of this flexibility and efficiency. The case studies from Docker, Inc. - the company behind the Docker platform - show that you can reduce your hardware requirements by 50% when you move to Docker, while still supporting high availability for your applications. These significant reductions apply equally to on-premises data centers and to the cloud. Efficiency isn't the only gain. When you package your application to run in Docker, you get portability.
You can run your app in a Docker container on your laptop, and it will behave in exactly the same way on a server in your data center and on a virtual machine (VM) in any cloud. This means your deployment process is simple and risk-free because you're deploying the exact same artifacts that you've tested, and you're also free to choose between hardware vendors and cloud providers.
The other big motivator is security. Containers add secure isolation between applications, so you can be confident that if one application is compromised, the attacker can't move on to compromise other apps on the same host. There are wider security benefits in the platform too. Docker can scan the contents of packaged applications and alert you to security vulnerabilities in your application stack. And you can digitally sign packages and configure Docker to run containers only from package authors that you trust. Docker is built from open source components and is shipped as Docker Community Edition (Docker CE) and Docker Enterprise Edition (Docker EE). Docker CE is free to use and has monthly releases. Docker EE is a paid subscription; it comes with extended features and support and has quarterly releases. Docker CE and Docker EE are available on Windows, and both versions use the same underlying platform, so you can run your apps in containers on Docker CE and EE in the same way.
1.3 Virtualization
Virtualization refers to running a guest operating system on top of our host operating system, allowing developers to run multiple operating systems on different VMs while all of them share the same host, thereby eliminating the need to provide extra hardware resources. Virtual machines are used in the industry in many ways:
Enabling multiple operating systems on the same machine
Cheaper than the previous methods due to less/compact infrastructure setup
Easy to recover and do maintenance if there's any failure state
Faster provisioning of applications and resources required for tasks
Increase in IT productivity, efficiency, and responsiveness
1.4 Virtualization Host
From the above VM architecture, it is easy to see that the three guest operating systems acting as virtual machines are running on a host operating system. In virtualization, the process of manually reconfiguring hardware and firmware and installing a new OS can be entirely automated; all these steps are stored as data in files on disk. Virtualization lets us run our applications on fewer physical servers. In virtualization, each application and operating system lives in a separate software container called a VM. While VMs are completely isolated, all the computing resources, such as CPUs, storage, and networking, are pooled together and delivered dynamically to each VM by software called a hypervisor.
However, running multiple VMs on the same host leads to degraded performance. Because each guest operating system has its own kernel, libraries, and many dependencies running on a single host OS, it takes up a large share of resources such as the processor, hard disk and, especially, RAM. Also, VMs take a long time to boot, which hurts efficiency for real-time applications. In order to overcome these limitations, containerization was introduced.