
Why containers are the future and how to get started


19 May 2016

Containers in the enterprise

In days past, IT was generally seen as a cost centre, with the only measure of success being "no users are complaining". Historically there was no means for IT to advise the business on how to innovate using technology, nor was IT's advice ever sought. Times are changing, however: with the ever-increasing drive for market differentiation and a business focus on digital initiatives, the pressure is on IT to be much more than simply a support function. IT is now being relied upon to help innovate.

The realities of a self-service portal

One of the ways IT has looked to help the business is through the creation of a private cloud, which in practice meant a virtualised server environment (e.g. VMware vSphere) fronted by a self-service portal (e.g. vRealize Automation). In reality, creating a self-service portal is much harder than anticipated, because every provisioning task must be completable from within an orchestration script. For a script to perform tasks, the underlying infrastructure must expose either a CLI or an API that the script can call to carry out the automation.
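To make that concrete, here is a minimal sketch (in Python, using the requests library) of the kind of call an orchestration script needs to be able to make. The endpoint, payload, and infra.example.local host are entirely hypothetical, standing in for whatever programmatic interface the infrastructure exposes:

```python
import requests

# Hypothetical provisioning API; the point is that the infrastructure must
# expose *something* like this before a self-service portal can automate it.
API = "https://infra.example.local/api/v1"

def create_load_balancer(name, members):
    """Ask the (hypothetical) infrastructure API to build a load balancer."""
    resp = requests.post(
        API + "/loadbalancers",
        json={"name": name, "members": members},
        timeout=30,
    )
    resp.raise_for_status()  # no API, no automation: the script dead-ends here
    return resp.json()["id"]

lb_id = create_load_balancer("portal-lb", ["10.0.0.11:80", "10.0.0.12:80"])
```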

The vast majority of existing IT technology simply cannot provide this programmatic interface, so automated provisioning quickly deteriorates into a portal that can only clone server templates and, at best, run a simple "sysprep" script to perform post-deployment personalisation; far from useful for business users.

Rapid vs robust

With modern applications and online services being built exclusively on web frameworks, the role of the legacy monolithic platform has diminished. Gone are the days of large, master/slave relational databases with client-server applications; in their place are NoSQL multi-master databases, web engines, and API gateways.

One of the most significant changes with the new platform services architecture is the switch from high availability being a fundamental infrastructure requirement to high availability being built natively into the application through scale-out nodes and load balancing. This switch means there is far less dependence on a robust infrastructure platform, and far more on being able to rapidly recreate application instances should a running instance fail.
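As a rough illustration of this "recreate rather than protect" model, the sketch below uses the Docker SDK for Python (pip install docker); the image name myapp:latest and the replica count are placeholders:

```python
import time
import docker

client = docker.from_env()

def ensure_replicas(image, count):
    """Keep `count` copies of `image` running: replace failures, don't repair them."""
    # Discard anything that has died rather than nursing it back to health...
    for c in client.containers.list(all=True, filters={"status": "exited"}):
        if image in (c.image.tags or []):
            c.remove()
    # ...then start fresh instances to cover the shortfall.
    running = [c for c in client.containers.list()
               if image in (c.image.tags or [])]
    for _ in range(count - len(running)):
        client.containers.run(image, detach=True)

while True:
    ensure_replicas("myapp:latest", 3)  # hypothetical application image
    time.sleep(10)
```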

Treat your applications like cattle, not pets

I have heard many analogies for this transition; one of the most common is "pets and cattle", which goes like this:

'With legacy applications, you treated them as pets. You cared for them when they were sick, and you spent considerable amounts of money making sure they were healthy. With modern applications, you treat them as cattle. If they are unhealthy, you kill them and get another. Rather than having just one pet that you invest all your attention and money into, you have many hundreds of cattle which, as a combined herd, you invest your time and money into.'

Now, whilst this analogy doesn't necessarily work for us Kiwis (because we love our cattle too), the point it makes is still valid.

So, how does IT create a platform that allows servers to be built, used, killed, and rebuilt at the drop of a hat? What mechanisms can be put in place that allow not only application images to be instantiated, but also all the requisite surrounding services, such as load balancers and firewalls, to be configured at the same time?

The answer is containers.

What are containers anyway, and how do they work?

To put it simply, containers do for applications what virtualisation did for infrastructure. Containers abstract the operating system away from the application, so rather than an application comprising a number of servers, each running its own OS (with the attendant patching, AV, firewall, and resource overheads), an application container comprises just the application runtime (be that Java, Tomcat, Apache, PHP, WebSphere, or any of a vast array of runtimes) and the application-specific files.

Because of this abstraction, an application can run on ANY OS, on ANY server, ANYWHERE an IP connection can be made. A developer could run the application in a container on their laptop; then, without any changes, that same container could run inside a DC environment on a Docker-powered container execution cluster, or equally from the likes of AWS or Google on a cloud container platform; all without ANY changes to the application whatsoever. Need the application to support more throughput? No need to mess around adding more (virtual) hardware; just spin up more container instances and load balance across them. Need to shut down a DC for maintenance? No need to go through complex DR failover procedures; just redeploy containers in the alternate DC and update the load balancer priorities. Simple really…
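Here is a hedged sketch of that portability using the Docker SDK for Python; the image name and port are placeholders. Because from_env() honours the DOCKER_HOST environment variable, the identical script targets a laptop daemon, an on-premises cluster, or a cloud endpoint:

```python
import docker

# from_env() reads DOCKER_HOST, so this same script runs unchanged against a
# laptop, a DC container cluster, or a cloud-hosted Docker endpoint.
client = docker.from_env()

# Need more throughput? Start more identical instances and load balance
# across them, rather than resizing any single (virtual) machine.
replicas = [
    client.containers.run("myapp:latest", detach=True,
                          ports={"8080/tcp": None})  # None = random host port
    for _ in range(3)
]
for c in replicas:
    c.reload()               # refresh attributes to see the assigned host port
    print(c.name, c.ports)   # feed these endpoints to your load balancer
```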

Embracing the new container paradigm

From an IT perspective, there are several ways to embrace this new container paradigm: either build and maintain a completely parallel execution cluster (powered by Docker, or Mesosphere, or Kubernetes, or …), or enhance a legacy VMware virtualisation investment by adding a translation layer that lets VMware understand, and execute, containers. Both methods have their own pitfalls and benefits, and a combination of the two may well give IT the best result. Regardless, one or both MUST be embraced if IT is to offer container execution to the business.

Understanding the complexities

ViFX are actively involved in the container landscape, and we have spent considerable time learning how containers work and how they fit into the corporate IT model. We understand where each approach works best, and we understand how to deploy them in a "production ready" state. Through the combined experience of our staff, we have deployed container platforms that power service providers, run mission-critical CRM applications for banks, run mission-critical eCommerce platforms, and drive large-scale web farms for the digital media industry. Read more about our Container Services.

As a result of these experiences, we have digested the complex, interwoven dependencies involved in building, running, and supporting a container execution environment, and needless to say, it's not all unicorns and rainbows. Containers might make things easy for the end user (the business), but they do so at the expense of complexity for IT.

[Image: technology complexity]

A container platform comprises not only a runtime (Docker being a common example), but also a cluster manager (Docker Swarm) and a service discovery controller (Consul, Zookeeper, etc.); and that's just to get the basic environment up. On top of this there are the overlay networking functions (VXLAN), persistent storage volume managers (Flocker, GlusterFS, etc.), and then the automation/management portals (e.g. Rancher, Mesos, Docker UCP) that allow the user to build their application landscape.
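To show how a few of those moving parts fit together, here is a sketch using the Docker SDK for Python against a Swarm manager; the network, service, and image names are placeholders. Swarm plays the cluster-manager role and provides built-in service discovery, while the overlay driver supplies the VXLAN-backed networking mentioned above:

```python
import docker
from docker.types import ServiceMode

client = docker.from_env()  # must point at a Swarm manager node

# Overlay networking (VXLAN under the hood) spans every node in the cluster.
client.networks.create("app-net", driver="overlay")

# The cluster manager schedules the replicas; service discovery lets them
# find each other by name ("web") on the overlay network.
svc = client.services.create(
    "myapp:latest",                            # hypothetical application image
    name="web",
    networks=["app-net"],
    mode=ServiceMode("replicated", replicas=3),
)
print(svc.name, svc.id)
```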

vSphere Integrated Containers

Taking a different approach, the "let's just add a translation layer to our VMware platform" route, is equally viable. It requires the addition of (still beta) technology from VMware called vSphere Integrated Containers (VIC). VIC emulates the standard Docker API and translates the calls into VMware provisioning tasks, which allows a container platform to be created without the need for Docker, Swarm, Consul, or any of that complexity.
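Because VIC speaks the standard Docker API, existing tooling only needs to be pointed at a different endpoint. A sketch, with a hypothetical virtual container host address:

```python
import docker

# Same Docker SDK calls, different endpoint: here the "daemon" is a VIC
# virtual container host that translates API calls into vSphere tasks.
client = docker.DockerClient(base_url="tcp://vch.example.local:2376", tls=True)

# Provisioning looks identical to native Docker, but each container is
# instantiated as a lightweight VM on the vSphere cluster behind it.
c = client.containers.run("nginx:alpine", detach=True)
print(c.short_id, c.status)
```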

There is, however, one consideration: the VMware platform needs to be operating at SDDC standard. Before VIC can be enabled, technologies such as VSAN (or VVols), NSX (or, at the very least, Distributed vSwitches), and vRealize Orchestrator need to be implemented. With VIC still at least nine months from being formally released, if this is your preferred approach, now is the best time to get the prerequisites addressed.

In summary, containers are definitely the way of the future. They enable the business to spawn fully configured application environments in minutes, and they require little to no IT input once the platform is operational. However, IT must take the lead and begin offering this technology to the business now; the alternative is that the business simply starts deploying in the public cloud.


Author: Neil Cresswell

Neil is a true cloud evangelist and assists clients with the daunting task of transforming their IT operations to take advantage of the "new world".
