
When to use Docker vs Virtual Servers

Containers | 05 Jul 2016


The use case for Docker is quite different from that for Virtual Servers (which includes IaaS/VMs/VPC), and if you start using Docker containers on the assumption that they are functionally similar to Virtual Servers, you will quickly run into problems.

Scale-up or Scale-out?

Most people these days are familiar with the terms "scale-up" (make a server bigger) and "scale-out" (add more servers), and both are relevant to the Containers vs Virtual Servers discussion. By way of example, there are two ways to build a web service: a single large web server that handles all of the web traffic, or a number of smaller web servers fronted by a load balancer, with each smaller server taking a proportion of the overall traffic.

  • Virtual Servers are commonly used in a scale-up model;
  • Docker is designed exclusively for a scale-out model, whereby services span multiple small servers (containers); see the sketch below.
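
As a rough illustration of the scale-out model, the same stateless web container can be started on several Docker hosts and fronted by an external load balancer. The host names, image name and the use of the remote API port are assumptions for this sketch, not a prescription:

    # Start the same stateless web container on three different Docker hosts
    # (host names and image are placeholders; -H assumes each engine's remote
    # API is reachable on port 2375).
    docker -H tcp://host1:2375 run -d --name web -p 80:80 my-web-image
    docker -H tcp://host2:2375 run -d --name web -p 80:80 my-web-image
    docker -H tcp://host3:2375 run -d --name web -p 80:80 my-web-image
    # An external load balancer (e.g. HAProxy) then spreads traffic across
    # host1:80, host2:80 and host3:80.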

Unlike a single IaaS VM (which inherits HA resilience from the underlying virtualisation platform), a single Docker container is NOT resilient, because Docker hosts do not provide any underlying platform HA capability (although Docker Swarm offers basic recovery through automatic container restarts). To make a container-based application/service resilient, you must deploy multiple containers spread across multiple Docker hosts, with the containers themselves operating as a distributed (active/active) application cluster or load-balanced group. This principle applies not only to web servers, but also to application and DB servers that run as containers.
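
To make the distinction concrete: a restart policy (the automatic restart mentioned above) only brings a failed container back on the same host, so it is no substitute for redundancy across hosts. A minimal, illustrative example (the image and container name are placeholders):

    # A restart policy recovers a crashed container on the SAME host only;
    # it does nothing if the host itself fails.
    docker run -d --restart=always --name app1 my-app-image
    # Real resilience still requires further copies of the container on other
    # Docker hosts, behind a load balancer or in an active/active cluster.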

Changing the application deployment architecture

It is important to understand that if you are already using Virtual Servers and are used to running a single DB server with a few app/web servers in front (i.e. a traditional 3-tier app), then this design cannot simply be migrated "like for like", as that would see the DB deployed into a single container, which would severely impact availability.

If you want to deploy Docker, either for the first time or when migrating from Virtual Servers, the application deployment architecture must change. To provide maximum availability, performance, and flexibility, every element of the application must be deployed in a "shared nothing, designed for failure" model. This means that DB instances should span at least two containers in an active/active cluster, application servers should be load balanced across multiple containers, and the containers should be stateless, with any state (persistent data) stored centrally and available to all application containers simultaneously. Web servers should also be load balanced across multiple containers, with each web server completely stateless (read-only, pure HTML/PHP). When an application is deployed in this manner, not only does it become highly resilient, it can also be scaled instantly by adding extra containers wherever they are needed, be that in the Web, App, or DB layer.
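
As a minimal sketch of the "shared nothing, stateless" idea (image names and the shared-storage path are illustrative placeholders), web containers can run with a read-only filesystem while app containers keep their persistent data on storage that every Docker host can mount:

    # Stateless web tier: a read-only container filesystem means no state can
    # accumulate inside the container (image name is illustrative).
    docker run -d --read-only -p 80:80 my-web-image

    # App tier: persistent data lives on shared storage mounted into the
    # container, not in the container itself (paths are illustrative).
    docker run -d -v /mnt/shared/appdata:/var/lib/app my-app-image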

The container deployment model is best illustrated by the diagram below:

[Image: Container deployment model]

Because of the deployment architecture required, it is important to ensure that the back-end application components themselves support clustering. For example, MySQL would need to be deployed in a multi-master configuration, MariaDB would need to be deployed behind MaxScale, and Redis would be deployed as a multi-node cluster.
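
As one illustrative example (the container name is a placeholder), a Redis Cluster node can be started in a container with cluster mode enabled; a usable cluster needs at least three such master nodes, ideally spread across different Docker hosts, plus replicas:

    # One Redis Cluster node; repeat on other Docker hosts, then join the
    # nodes together into a cluster.
    docker run -d --name redis-node1 redis \
        redis-server --cluster-enabled yes --cluster-config-file nodes.conf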

Before placing a containerised application into production, it is considered best practice to ensure that the application can survive the random shutdown of at least one Docker Host, and of at least one node of each component in the application stack. Success means the application continues without service interruption and without any administrative intervention.
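
A crude but effective way to exercise this (purely illustrative; run it against a test environment first) is to stop a randomly chosen container, or an entire Docker host, and watch whether the application notices:

    # Stop one randomly chosen container on this Docker host and confirm the
    # application keeps serving traffic without any manual recovery steps.
    docker ps -q | shuf -n 1 | xargs docker stop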

Don't forget about networking

The other difference between Virtual Servers and Containers is networking. By default, each Docker Host forms its own isolated private NAT network, which means that direct container-to-container communication across hosts is not possible. The workaround is to have containers communicate over the host network through exposed (published) ports.
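
In practice this means publishing a container port onto the host's network, so that containers on other Docker hosts reach it via the host's IP address (port numbers and image name are illustrative):

    # Map host port 8080 to container port 80; containers on other hosts then
    # connect to <this-host-ip>:8080 rather than to the container directly.
    docker run -d -p 8080:80 my-web-image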

An alternative solution is to use a network overlay technology such as VXLAN, which means that all containers within a Docker cluster functionally reside on the same Layer 2 network. Whilst VXLAN adds support for cross-host communications, it is still best practice to ensure that these communications are asynchronous, meaning that an app server should not connect to a DB server across hosts; rather, the overlay network should be reserved for replication traffic between clustered containers.
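
A minimal sketch of the overlay approach (this assumes the Docker engines share a key-value store such as Consul, or a Swarm configuration that supports multi-host networking; the network name, subnet and image are placeholders):

    # Create a VXLAN-backed overlay network spanning the Docker hosts, then
    # attach the clustered containers to it for replication traffic only.
    docker network create -d overlay --subnet 10.10.0.0/24 replication-net
    docker run -d --net replication-net --name db-node1 my-db-image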

Use the differences to your advantage

Docker really is different in many ways from Virtual Servers, but by understanding its differences and using them to your advantage, it is possible to deploy a substantially more resilient, scalable application environment for a fraction of the cost of hosting on IaaS servers.


Author: Neil Cresswell

Neil is a true cloud evangelist and assists clients with the daunting task of transforming their IT operations to take advantage of the "new world".
