
VMs and Containers - better together for sure

IT Infrastructure, SDDC, DevOps


19 Nov 2015


With the growing adoption of containers, Docker in particular, and the rising popularity of OpenStack, there has been talk that this could spell the end for virtualisation. The initial view amongst container and open source advocates was that you had to choose one over the other, but you can now integrate them and get more from the combination than from either approach on its own.

With digital strategies and customer engagement top of mind amongst forward-thinking businesses, application development is undergoing a resurgence, and IT environments need to align with cloud-native development to accommodate it. At ViFX, we’re excited about the prospect of application container technology revolutionising the way IT operations supports application development, just as virtualisation did for infrastructure availability and efficiency a few years ago.

Shaky foundations

The reality of application development is that code gets moved from one environment to another: from a developer’s laptop to a test environment, from staging into production, and then perhaps from the data centre into public cloud. When the supporting software environment is not identical at each step, results become inconsistent, problems arise, and time is lost troubleshooting and developing fixes and workarounds.

There will also likely be differences in the network topology, storage configuration and security policies. How do you develop a robust application when the very foundation it sits on keeps moving?

Ship in the containers

With a container you get an entire runtime environment: the application plus all the dependencies, libraries and configuration files needed to run it. Because the application platform and everything it needs to function are packaged together, the application is shielded from differences in the underlying infrastructure. While this sounds similar to virtualisation, containers don’t include a full operating system; they share the host server’s operating system instead. This makes the package smaller than a virtual machine and enables faster start-up times for containerised applications.
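As a rough sketch of what that packaging looks like in practice (the base image, file names and entry point here are all hypothetical), a Dockerfile declares the application and everything it needs in one image definition:

    # Hypothetical Dockerfile: package a small Python web app with its dependencies
    FROM python:2.7
    # dependencies are installed into the image at build time
    COPY requirements.txt /app/
    RUN pip install -r /app/requirements.txt
    # application code and a single, fixed entry point
    COPY . /app/
    CMD ["python", "/app/server.py"]

Built once, the resulting image runs the same on a developer’s laptop, a staging server or a production host, because everything above the kernel travels with it.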

Why would you need VMs?

One of the key benefits of using containers is to simplify application development. To achieve this simplicity, however, you need to be able to focus solely on the application without having to manage the underlying software or hardware. And this is where VMs come in.

Containers can be run in lightweight virtual machines, which means the underlying hardware infrastructure (networks, servers and storage) is already taken care of. The additional benefit of hosting an application container in a VM is increased isolation as well as security.
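As one illustration (a minimal sketch, not a recommendation), Docker Machine ships with a vSphere driver that can provision just such a lightweight Docker host VM on vCenter; the hostname, credentials and machine name below are placeholders:

    # Provision a small Docker host VM on vSphere (all values are placeholders)
    docker-machine create --driver vmwarevsphere \
        --vmwarevsphere-vcenter vcenter.example.local \
        --vmwarevsphere-username svc-docker \
        --vmwarevsphere-password '********' \
        docker-host-01

    # Point the local Docker client at the new VM, then run a container inside it
    eval "$(docker-machine env docker-host-01)"
    docker run -d nginx

The container gets the VM’s isolation, and the VM gets vSphere’s networking, storage and security, without the developer having to think about either.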

vSphere Integrated Containers (VIC)

VMware has recognised the need for seamless integration of containers and virtualisation, and recently launched vSphere Integrated Containers (VIC). Rather than leaving IT organisations to manage separate frameworks, VMware has optimised its hypervisor for Docker.

The result of this integration is a VM light enough to support a single instance of Docker, while allowing IT to preserve its existing investment in VMware management software and maintain centralised control. It also makes Docker containers not only easier to manage but more secure. This is essentially a big nod to developers, enabling more streamlined application development while giving IT operations more control over data centre resources.

Launching a VM to run a single micro-service may at first seem like a heavy-handed approach, but the Instant Clone technology introduced in vSphere 6 offers an appealing alternative. A single running base VM can be forked very quickly and efficiently for use with containers. The fork is a thin copy that avoids duplicating memory for common elements, while still preventing containers from inadvertently communicating with their neighbours.

Linux containers require a Linux kernel to execute, and in the case of VIC this kernel is derived from VMware’s Project Photon initiative, with only the Virtual Container Host (VCH) itself using Docker technology. The combination of a forked virtual machine and a bare-bones Linux kernel yields “just enough VM” to run a container.
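VIC has only just been announced, so the sketch below is an assumption about the intended workflow rather than a tested recipe, but the model is simply a remote Docker API endpoint: developers point a standard Docker client at the VCH’s address (a placeholder here), and each container they start is backed by a freshly forked “just enough” VM.

    # Assumed workflow: the VCH exposes a standard Docker API endpoint
    # (address and port are placeholders; TLS setup is omitted for brevity)
    docker -H tcp://vch01.example.local:2375 info
    docker -H tcp://vch01.example.local:2375 run -d nginx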

The idea is that Docker and VMware together deliver the best of both worlds for developers and IT operations teams.

Why VMs and containers are a better approach

Integrating VMs and containers is actually better than a bare-metal Linux container architecture: the integration is more seamless, and it adds hardware-level isolation. With VIC in particular you get the advantages of the existing management, compliance, networking and security processes and tools inherent in the vSphere platform, combined with the rapid application deployment and portability of Docker.

VMware CTO Chris Wolf says that the vCloud Suite can extend the value of containers by providing:

  • Multi-tenant security as well as separation of zones of trust in single-tenant environments.
  • Fault domain isolation (application and/or OS failures will only impact the single tenant running in the VM).
  • Continuous application and data availability – applications and application platforms with no native high availability can leverage the native HA capabilities of the VM. Furthermore, storage technologies such as VSAN ensure redundancy and data availability for any persistent data stored by a container (see the volume sketch after this list).
  • Automated infrastructure operations (compute, network, storage, security) – the entire infrastructure service stack can be provisioned in seconds to minutes, depending on requirements.
  • Seamless integration with massive third party ecosystem – containers can leverage the hundreds of turnkey third party integrations offered through the VMware Solutions Exchange.
  • Lower TCO, by leveraging a common management layer for both second- and third-platform applications.
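On the persistence point above, here is a generic sketch of how a container keeps its data outside its own lifecycle; the volume, container and image names are arbitrary, and how the backing disk maps to VSAN is environment-specific:

    # Named volumes (Docker 1.9 and later) keep persistent data outside the container
    docker volume create --name app-data
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
        -v app-data:/var/lib/mysql mysql

    # The container can be destroyed and recreated; the data in the volume survives
    docker rm -f db
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
        -v app-data:/var/lib/mysql mysql

In a VIC environment, that volume’s backing disk would live on vSphere storage, which is where VSAN’s redundancy policies come into play.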

How does it work with a Software Defined Data Centre?

Essentially, an SDDC environment is all about agility and innovation, and in today’s digital world that translates directly into supporting Agile development, micro-services and DevOps workflows.

By taking the automation and management tools central to the Software Defined Data Centre and combining these with the benefits of containers, your IT operations team can empower developers with the flexibility and speed that containers deliver. A win-win situation for everyone.

By deploying containers within an SDDC architecture you also get the added benefit of portability. When an application is redeployed to a new environment, the entire operational service stack goes with it, along with the same tooling and third party integrations.
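Docker Compose gives a feel for what that portability looks like in practice. This hypothetical two-tier definition (service names, images and ports are made up) can be brought up unchanged against any Docker endpoint, from a laptop to a VCH:

    # docker-compose.yml - a hypothetical two-tier application (v1 file format)
    web:
      build: .
      ports:
        - "80:8000"
      links:
        - db
    db:
      image: postgres:9.4

Running docker-compose up -d against a new endpoint brings up the same stack, and within an SDDC the surrounding network, storage and security services can be re-provisioned along with it.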

With VIC, developers can leverage their Docker tools of choice, while the operations team can provide the infrastructure to support Docker containers on the most efficient and flexible SDDC platform.


Leverage your existing investment

Many IT managers are looking for ways to support their business’s digital strategy, provide excellent customer engagement and cope with disruption, all while leveraging their current VMware investment. By taking a Software Defined Data Centre approach and combining it with the benefits of containers, you can protect that investment and evolve your data centre into one that delivers an agile application development framework: the best of both worlds.


Author: James Knapp

James focuses on the cloud technology landscape, developing cloud strategies for our clients, as well as being a regular conference presenter on new IT service delivery models.
