Docker-Hosted Containers

What are the commercial benefits of Docker-Hosted Containers?

There are many technology-driven benefits to moving traditional applications into Docker-hosted containers running on Platform as a service (PaaS).

These include lightweight isolation, availability, portability, agility, scalability, and control across the entire application lifecycle, but what about the commercial benefits?

For a start, organisations can containerise their 15-year-old web applications with Docker and deploy them to the cloud cheaply, without code changes or a rewrite. Compared with separate VMs, containers sharing a kernel can save between 50% and 80% of the underlying resources, so more workloads can be packed onto the same hardware. Containers can also significantly reduce the overhead costs of application lifecycle management, from development through to go-live in production, and on top of that they save guest OS licensing fees.

Let’s have a look at these benefits in more detail: 

Cheaper and faster time to market through containerisation 

If a container runs correctly on a developer’s laptop, it could in an emergency be deployed directly into production and should work straight away*. Significant savings in time, manpower and therefore cost follow from this simple fact. Production servers can run Linux or Windows Server 2016 or later with the Docker environment, and no other runtimes or software are involved. Azure PaaS offers at least three container options to deploy to, which means no more long waits while production VMs are built, runtimes installed and patched, and dependency or versioning incompatibilities painstakingly fixed.

* Assuming correct configuration to resources external to the container. 

Many old apps (2001+) can be containerised ‘as is’ very cheaply

Millions of business-critical on-premises apps around the world are still running that are at least 15 years old, and a high percentage of the web- or command-line-based ones can be containerised as a first step and then deployed immediately to Azure PaaS or AKS. Containerised apps are significantly easier and cheaper to refactor into modern, cloud-native, highly available, decoupled apps and, if desired, to migrate to a microservice (multi-container) model.
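As a concrete sketch of containerising such an app "as is", a short Dockerfile can copy the existing application tree into a base image that already carries its runtime. All names here (the PHP base image, the `legacy-app` directory, the registry) are illustrative assumptions, not a prescription:

```shell
# Sketch only: containerise a legacy web app "as is", with no code changes.
# The base image already contains the web server and runtime.
cat > Dockerfile <<'EOF'
FROM php:8-apache
# Drop the unmodified legacy code straight in - no rewrite required
COPY ./legacy-app/ /var/www/html/
EXPOSE 80
EOF

# Building and pushing would then be (requires a Docker daemon and registry):
#   docker build -t myregistry.azurecr.io/legacy-app:v1 .
#   docker push  myregistry.azurecr.io/legacy-app:v1
```

The same image that passes testing on a laptop is the artefact pushed to the registry and deployed, which is what makes the "no long VM build" claim above possible.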

Reduced hosting costs through containerisation and sub-second auto-scaling

Let’s imagine a scenario: a business has 20 line-of-business applications, each hosted on-premises in VMware. For isolation, each application runs in its own VM. Let’s also assume the servers have been over-provisioned with hardware resources (which is extremely common) and peak at only 50% utilisation. We then create 20 Docker images of these apps.

From a performance perspective, it is generally acknowledged that, because containers do not require a guest OS (in contrast to a VM), containers can achieve a packing ratio of greater than 5:1 compared with VMs. In the example above, like for like, containers could achieve up to an 80% saving in running costs compared with cloud-hosted VMs, by packing the containers together onto fewer underlying hosts* or by configuring smaller container hosts. Additionally, if we can live with a peak utilisation of, say, 70%, we can pack in even more containers and use sub-second auto-scaling to cope with transient demand peaks#.

* Containers still provide a good level of isolation from one another, although less than separate VMs.

# Very fast auto-scaling is not usually possible with VMs: boot times of the guest OS are measured in minutes, not sub-seconds, so a VM-based auto-scaling system may not react in time for transient peaks, and customer-affecting downtime may still occur.
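The arithmetic behind the packing example above can be sketched as a quick calculation. The figures are the article’s illustrative numbers (20 apps, a 5:1 packing ratio), not measurements:

```shell
# Back-of-envelope maths for the example above (illustrative numbers only).
vms=20            # one legacy app per VM today
packing_ratio=5   # containers per VM-sized host, using the >5:1 claim

# Hosts needed once all 20 apps run as containers (ceiling division):
hosts=$(( (vms + packing_ratio - 1) / packing_ratio ))
echo "container hosts needed: $hosts"     # 4 hosts instead of 20 VMs

# Saving in underlying instances versus 20 cloud-hosted VMs:
saving=$(( (vms - hosts) * 100 / vms ))
echo "instance saving: ${saving}%"        # 80%
```

This is where the "up to 80%" figure comes from; real savings depend on each app’s actual resource profile.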

Only pay a licence fee for the base host OS, not for each VM’s guest OS

In traditional on-premises and/or IaaS environments, a licence fee may be required both for the base OS (hypervisor) and for each guest OS in every VM. Containers only require a base OS plus Docker to run on.

Avoid buying vulnerability detection software

DevOps teams can use the vulnerability detection services built into container image registries when building images, without the cost of purchasing or managing dedicated scanning software hosted on their own systems.
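As an illustration (the tools and image name below are assumptions, not the article’s): Azure Container Registry can scan images on push via its Microsoft Defender integration, and the same class of check can be run from a build pipeline with freely available scanners:

```shell
# Illustrative only - requires the respective tools to be installed.
# Scan a built image with the open-source Trivy scanner:
#   trivy image myregistry.azurecr.io/legacy-app:v1
#
# Or with Docker's Scout plugin:
#   docker scout cves myregistry.azurecr.io/legacy-app:v1
```

Either way, the scan happens as part of the image build-and-push workflow rather than on separately purchased, self-hosted scanning infrastructure.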

Gain years of security patching from a modern base image

When a 15-year-old application is containerised on, say, a Windows Server 2022 Core or a modern Linux base image, it automatically gains 15 years of vulnerability patching out of the box. Additionally, fully up-to-date custom base images can be used that have been stripped down to the bare minimum, supporting only the dependencies the application requires, such as its runtime. These lean base images avoid many of the security vulnerabilities found in the bloated full Server parent. So if malicious code inside a traditional Windows VM tried to exploit a (fictitious) vulnerability in the Windows Server DHCP or WINS service, it might succeed; inside a container it would have a tough time, as these services are unlikely to be present in the custom base image.
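A lean custom base image of this kind is often built in two stages, so the SDK and build tools never reach the final image. This is a hedged sketch: the application name and the .NET 8 images are assumptions for illustration, not taken from the article:

```shell
# Sketch: a stripped-down final image carrying only the app's runtime.
# Names (LegacyApp, .NET 8 images) are illustrative assumptions.
cat > Dockerfile.lean <<'EOF'
# Build stage: full SDK image, discarded after publish
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# Final stage: minimal runtime-only image - no DHCP, WINS or other
# full-Server services to attack
FROM mcr.microsoft.com/dotnet/aspnet:8.0
COPY --from=build /out /app
ENTRYPOINT ["dotnet", "/app/LegacyApp.dll"]
EOF
```

Only the second `FROM` stage ships, which is what shrinks the attack surface relative to a full Server install.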

Containers can overcome runtime and sandbox limitations of standard PaaS

Azure, AWS and other public clouds provide an operating system and a choice of pre-deployed runtimes (.NET, PHP, Python, Java, Ruby etc.), along with sandbox limits, as a platform (PaaS). If you need to migrate an application with an unsupported runtime, or one with special configuration requirements, you often cannot use these services as-is and would most likely have to fall back to a VM. A container running in Docker has far fewer limitations.
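For example (a hypothetical case, not one from the article): an app tied to an end-of-life runtime that no standard PaaS sandbox still offers can simply carry that runtime inside its own image:

```shell
# Sketch: pinning a runtime the PaaS sandbox no longer offers.
# Python 2.7 and the file names are illustrative assumptions.
cat > Dockerfile.runtime <<'EOF'
FROM python:2.7-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
EOF
```

The platform only needs to run containers; the unsupported runtime, its configuration and its dependencies all travel inside the image.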