How did the container become so ubiquitous in application hosting? Most people know that they're used to run an application, but can't a virtual machine also do that? Why did we ever move away from hosting our applications on a physical server ourselves?

Let's explore how we have previously hosted our applications, and why we moved on to containerised software!

To figure out how we arrived at almost exclusively using containers to host our applications, let's look at what we used before. What moved our applications from physical servers onto virtual machines, and from there on to containers?

Back at the start of software development, companies with applications that were crucial to their business needed somewhere to host and run them. They couldn't just sign up for an account at a cloud service provider like we do today and spin up whatever instances, with whatever specifications, they needed. The businesses had to procure actual physical servers.

The buyer of these physical servers had a few things to consider: how scalable the server would be, how many users it should be able to handle, how much disk space it needed, and so on.

Whichever server was obtained, chances were that it would either end up with unspent potential or, down the road, have to be replaced by a more powerful one if the hosted application turned out to be more compute-intensive than first thought. Substituting it with a new server could also mean downtime, and of course additional cost in the form of administering and installing the newly bought machine.

Maintaining a server once you have it isn't free either. There are a lot of operational costs associated with having a server of your own standing around. Moreover, you have to pay licences for the operating systems you run.

Also, chances are your business would want to run more than the one application currently in production. The two applications might need different operating systems or dependencies, or might contend for the same resources, which would mean that they cannot run together on the same server.

You end up either having to buy a separate server for each new application, or running a bunch of different apps on single servers with intricate dependency management that is hard to maintain. Both options bring increased operating costs, complexity, and management of both the physical and the virtual.

Physical servers are obviously painful to obtain, run, and maintain. So where do we go from here? Enter virtual machines (VMs)!

A virtual machine is, in a simplified definition, an abstraction of the physical server. On the hardware of the physical server lives the operating system, and in the operating system live the VMs. The OS runs something called a hypervisor, also described as a virtual machine monitor, which acts as an emulator and virtualises the hardware of the physical server it runs on.

Virtual machine abstraction visualisation

When running VMs on the server, you're no longer restricted to one application per physical server, or the perhaps equally painful alternative: many applications in one very confusingly configured and complex system. The virtual machines, each on its own virtualised server, run their own OS, referred to as a guest OS, in contrast to the host OS, which is that of the physical server. Running an OS of their own enables them to function as actual servers, and you can perfectly well run different guest operating systems on the same host machine.

Even though virtual machines come in very handy, they are not without problems. By running an OS of their own, the VMs incur a lot of overhead in computation and processing power just to stay alive, before even running the application they were set up for.

What the hypervisor basically does is split up the resources of the server. As a consequence, the VMs can't really share any resources if one of them receives very little traffic. The resources of the server are pre-distributed and can't be dynamically reallocated between the VMs as needed.

From the VMs' perspective, each is running on its very own server. They all need their own respective operating system, which causes the hosted application to take up far more resources than the application itself requires. Each VM needs a dedicated amount of CPU and memory, and if the application uses far less memory than expected, the surplus just sits idle.

So VMs are a huge improvement over bare physical servers, but they still pose some issues. Enter containers!

We've gone from unaugmented physical servers, which in practice could only host one application at a time, to virtual machines, which are encapsulated in their host's OS and conceptually are machines of their own. VMs still carry some complexity in the form of increased resource use.

So back to containers! What are they?

Like a VM, the container is also a virtualisation: not of the hardware of the server it runs on, as with the VM, but of the actual OS that the server is running. Just as VMs share the physical hardware of the server, containers share the OS of the host machine. And like a VM, the container thinks it is running on its own dedicated hardware and operating system.

Container visualisation
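
We can see this sharing in action. Here is a minimal sketch using the Python Docker SDK (assuming Docker and the `docker` package are installed): however many containers we start, and whatever userland they carry, they all report the host's kernel, because there is only one.

```python
import docker

client = docker.from_env()

# Each image ships its own userland (alpine, debian, ...),
# but `uname -r` prints the same kernel in both: the host's.
for image in ("alpine", "debian"):
    kernel = client.containers.run(image, "uname -r", remove=True)
    print(image, "->", kernel.decode().strip())
```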

A container runs on top of a container engine that offers an API, which lets the surrounding system communicate with the engine to orchestrate the containers within.
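
With Docker as the engine, for example, a few lines of Python against that API are enough to start, inspect, and stop containers. A sketch, where the container name `web` is purely illustrative:

```python
import docker

# Connect to the engine's API (the local Docker daemon here).
client = docker.from_env()

# Ask the engine to start a container in the background.
container = client.containers.run("nginx:alpine", name="web", detach=True)

# Query the engine about what is running, then clean up.
for c in client.containers.list():
    print(c.name, c.status)
container.stop()
container.remove()
```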

Containers can be described as self-contained packages: the application running inside the container and its dependencies are bundled together. This makes moving such a package around very easy.
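
As a sketch of that packaging, again with Docker: building an image from a hypothetical ./myapp directory (containing the application and a Dockerfile listing its dependencies) produces one artefact that runs the same wherever it is shipped.

```python
import docker

client = docker.from_env()

# Build an image from a hypothetical ./myapp directory; the image
# bundles the application together with all of its dependencies.
image, _build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# The image is the self-contained package: run it here, or push it
# to a registry and run the exact same package on another machine.
client.containers.run("myapp:1.0", detach=True)
```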

Since a container does not need an operating system of its own, the resources saved compared to VMs are immediately apparent. While a VM takes up disk space in the gigabyte range, a container image is measured in megabytes.

In addition to the decrease in resource use, a container is also much faster to set up, as we won't have to install a dedicated OS in every container. So no matter how many containers live on our server, only one OS has to be administered.
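
You can ask the engine to confirm the size claim yourself. A small sketch against the local Docker daemon (the tags listed will of course depend on which images you have pulled):

```python
import docker

client = docker.from_env()

# List local images and their sizes; common base images such as
# alpine weigh in at a few megabytes rather than gigabytes.
for image in client.images.list():
    size_mb = image.attrs["Size"] / 1_000_000
    print(f"{image.tags}: {size_mb:.0f} MB")
```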

Containers also, thankfully, work differently from VMs when it comes to sharing resources. When a container is idle, its resources can be dynamically allocated to and utilised by other containers on the same system.
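
In other words, nothing is carved up in advance: a container only consumes what it actually uses, and upper bounds are opt-in, set per container. A sketch with Docker:

```python
import docker

client = docker.from_env()

# No pre-allocation: this container shares the host's CPU and memory
# with its neighbours, consuming resources only while it has work.
client.containers.run("alpine", "sleep 60", detach=True)

# Optional caps, set per container instead of partitioning the server
# up front: here at most half a CPU core and 256 MB of memory.
client.containers.run(
    "alpine", "sleep 60", detach=True,
    nano_cpus=500_000_000,  # 0.5 CPU
    mem_limit="256m",
)
```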

Of course, there are drawbacks to containers as well, but as long as we're aware of them we can make the right choice of when to use what. A quite intuitive drawback worth mentioning is that persistent storage on disk can be quite tricky, since a container's writable layer disappears along with the container. As VMs are in control of their own OS and conceptually are their own servers, persistent disk storage is much less of a feat in the case of a VM.
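
The common workaround is to mount a volume managed by the engine, so the data outlives any single container. A sketch with Docker, where the volume name `appdata` is illustrative:

```python
import docker

client = docker.from_env()

# A named volume lives outside any container's writable layer.
client.volumes.create(name="appdata")

# Files written under /data land in the volume and survive even
# after this container has exited and been removed.
client.containers.run(
    "alpine", "sh -c 'echo hello > /data/greeting.txt'",
    volumes={"appdata": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```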

We've climbed the evolutionary ladder of application hosting, from self-managed physical on-premise servers, to VMs that can run either on-premise or in the cloud, and on to containers. Containers are lightweight, efficient, and portable, neatly packaging an application together with all of its dependencies into a finished piece of software.

We've certainly come a long way from running applications ourselves on bare metal!
