Why Docker


We use abstraction to cope with the vast amount of information our senses receive all the time, by filtering and focusing on what we really need; without that ability we would not be able to function properly in life.

When we talk about applications, it is useful to abstract them as packages. There are certain things we want to do with those packages, which will ultimately be delivered to a client or user.

Here are some of the steps (Simplified Software Lifecycle):

  1. Create the package
  2. Test the package quality
  3. Ship the package
  4. Monitor the package
  5. Deliver the package


An interface serves as a means of communication between two things (a "thing" being my abstraction of anything). In software we talk about user interfaces; for humans, language is also an interface. So interfaces are ways to interact or communicate with something. If our software package has well-defined interfaces, it becomes much easier to perform the necessary operations, regardless of the underlying technology.


So Docker provides an abstraction called a container, along with a well-defined interface to that container. Besides being a great abstraction for applications in general, it enforces isolation of the application at the OS level.

Docker is a relatively new technology, currently known for providing more efficient use of infrastructure resources and application isolation, by creating an abstraction on top of LXC (Linux Containers). In short, LXC provides a userspace interface to Linux kernel features that allow the creation of bounded processes which access the kernel directly. The key concept here is running an isolated process inside another OS, which resembles hypervisor virtualization or "virtual machines", a category we associate with software such as VirtualBox, VMware, or KVM. The key difference is that LXC's bounded processes access the host kernel directly, while virtualization simulates or virtualizes hardware in order to run a complete OS on top of it, which is not as performant as it could be.

VMs vs Docker

Using Docker instead of virtual machines to isolate our applications lets us use infrastructure resources efficiently, by packaging applications with only the resources they need.
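One quick way to see this difference for yourself (assuming Docker is installed and the daemon is running) is to compare the kernel reported inside a container with the host's:

```shell
# The container reports the same kernel version as the host,
# because it is just a bounded process -- no guest OS is booted.
uname -r
docker run --rm alpine uname -r

# Starting it is correspondingly fast: there is no hardware to
# virtualize and no OS to boot, just a process to start.
time docker run --rm alpine true
```

A full VM running the same Alpine userland would report its own guest kernel and take considerably longer to boot.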

Wrapping applications in Docker containers has many benefits, since you won't need to install language-specific dependencies on your hosts, only Docker itself. That lets you focus on automating and standardizing your deployment process without having to think about an endless list of prerequisites and dependencies for different languages and tools.

Docker Concepts

These are some of the basic Docker concepts, though not necessarily all of them.

Docker Image

A container image is a stand-alone, executable package of a piece of software, implemented in any language, which contains everything needed to run it. For example, for a Java application the image would include the JVM and your application's .jar file.
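For the Java example above, the image contents are typically described in a Dockerfile. This is a minimal sketch; the base image tag and the `target/app.jar` path are illustrative assumptions, not part of the original text:

```dockerfile
# Start from a base image that already contains the JVM
FROM eclipse-temurin:17-jre

# Copy the application's .jar file into the image
COPY target/app.jar /opt/app/app.jar

# Command the container runs when it starts
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Building this with `docker build -t my-app .` produces a stand-alone image containing both the JVM and the application.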

Docker Container

So you have a Docker-ready image, which you can build using the Docker tooling (the Docker CLI). Now you would like to run an application based on that image; that running instance of your image is called a container.
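Running a container from an image is a single CLI command. Here `my-app` and the port mapping are placeholders for whatever your image defines:

```shell
# Start a container (a running instance of the image) in the
# background, mapping container port 8080 to the host
docker run --detach --name my-app-1 -p 8080:8080 my-app

# List running containers, then stop and remove this instance
docker ps
docker stop my-app-1
docker rm my-app-1
```

You can start many containers from the same image; each one is an independent, isolated instance.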

Docker Registry

A registry is a server application that stores Docker images and lets you distribute them.
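Distributing an image through a registry comes down to tagging it with the registry's address and pushing it. The registry host `registry.example.com` and the image name `my-app` below are placeholders:

```shell
# Tag the local image with the registry address, then push it
docker tag my-app registry.example.com/team/my-app:1.0
docker push registry.example.com/team/my-app:1.0

# Any host with access to the registry can now pull and run it
docker pull registry.example.com/team/my-app:1.0
docker run --rm registry.example.com/team/my-app:1.0
```

Docker Hub is the default public registry; the same commands work against a private registry you run yourself.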

More information about containers and the Docker Registry: https://www.docker.com/what-container and https://docs.docker.com/registry/


In software development we normally select the right tool for the job, which helps us implement solutions to problems effectively, but we are often limited by the complexity of operating solutions built in different technologies and languages. Docker greatly reduces that complexity by providing a common interface, which makes reasoning about software problems such as deployment, security, and monitoring easier.
