bright ideas

vSphere 6 - Our key takeaways for you

IT Infrastructure | 11 Feb 2015


Last week saw the beginning of VMware's 28 Days of February online event, where VMware outlined their strategy for 2015: One Cloud, Any Application.

Underpinning this capability is, of course, the hotly anticipated release of VMware vSphere 6. This release boasts 650 features and innovations, building on and expanding the technology capabilities underpinning the Software Defined Data Centre. I'm not here to discuss these at length, as they have been covered many times already; rather, I'll attempt to summarise the top six capabilities in this release that we believe have the most impact in addressing customer requirements.

VMware vCloud Air Native Integration

The concept of a Hybrid Cloud, where applications are delivered from a combination of existing on-premises infrastructure coupled tightly to infrastructure in the cloud, has generally been the more attractive choice for customers. It ticks all the boxes when looking to maximise existing investments, whilst lowering the complexity and risk associated with managing legacy applications ill-suited to the distributed computing paradigm on offer elsewhere with AWS and Azure (which, arguably, suit a more modern application architecture anyway).

With vSphere 6 and NSX, a simplified Hybrid Cloud solution is now within reach of every VMware customer - allowing for an extension of existing practices in manageability and security, whilst increasing availability and recoverability.

The biggest technical hurdle in the uptake of this service in New Zealand will be, of course, that we have no local vCloud Air Network Service Providers. The closest will be the Telstra vCloud Air offering in Melbourne, set to go live in April. However, this provides a nice segue into...

vMotion Enhancements

When vMotion was introduced, I remember hearing both excitement and bewilderment from engineers. Workloads can transition between hosts with zero downtime? Clusters can re-balance load dynamically? What sorcery is this?

vMotion has since become one of those features that are the de-facto standard, so well ingrained into our world that it's easy to dismiss these enhancements as more of the same. However, when you consider the immense cost and complexity of establishing Metro-Cluster solutions, or the headaches borne of developing migration strategies (whether through green-fields upgrades or moves to the Cloud), the announcement of Long Distance and Cross-vCenter vMotion allows these previously complex undertakings to be viewed with less fear, and opens up more possibilities for IT.

Previously, vMotion tasks were limited to a 10ms latency tolerance, which dictated the maximum distance between datacentres participating in Metro Clusters. With support for up to 100ms of latency, through the magic of socket-buffer resizing, the need for complex storage-ownership brokering solutions - along with the uncomfortable risk of keeping both datacentres within the same city or country (especially in earthquake-prone New Zealand) - has largely been removed. Suddenly, Melbourne doesn't seem that far away.
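Some back-of-the-envelope arithmetic shows why. The sketch below (in Python, with illustrative figures - the distance and overhead allowance are rough assumptions, not measurements) estimates the round-trip time for a trans-Tasman link:

```python
# Rough estimate: does an Auckland-Melbourne link fit within vMotion's
# new 100ms round-trip tolerance? All figures are illustrative.

FIBRE_SPEED_KM_PER_MS = 200.0   # light in fibre travels at roughly 2/3 c
AKL_TO_MEL_KM = 2600.0          # approximate great-circle distance (assumed)

def round_trip_ms(distance_km, overhead_ms=10.0):
    """Two-way propagation delay, plus a guessed allowance for
    routing and equipment overhead."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS + overhead_ms

rtt = round_trip_ms(AKL_TO_MEL_KM)
print(f"Estimated round trip: {rtt:.0f}ms")           # ~36ms
print("Within 100ms tolerance:", rtt <= 100.0)        # True
print("Within the old 10ms tolerance:", rtt <= 10.0)  # False
```

Even with a generous overhead allowance, a trans-Tasman link sits comfortably inside the new tolerance, while being well outside the old one.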

Multi-CPU Fault Tolerance

Fault Tolerance was always one of those impressive capabilities that was never fully embraced (at least in my experience), due to the limitations of supporting only a single CPU and requiring the primary and secondary machines to reside on the same storage platform. As a result, customers looking to protect business-critical workloads with multiple CPUs sought other solutions to address their zero-downtime availability requirements. With this announcement, these restrictions are lifted.

While the technical feat of maintaining lockstep for four CPUs and up to 64GB of memory is undeniably impressive, I'm sceptical as to the rate of uptake we'll observe - primarily due to the requirement for additional, dedicated 10GbE or 40GbE networking. The Blade Revolution provided significant savings in network cable consolidation by enabling a level of oversubscription. Lately, we've observed an about-face, with the emergence of Hyper-Converged platforms taking us back to requiring multiple 10GbE interfaces per host. Will the eventual decrease in network costs allow for greater uptake of multi-CPU Fault Tolerance, or will applications eventually evolve to better tolerate failure scenarios across shared-nothing architectures? The answer is probably a bit of both - but one development that will keep things interesting in this space is VSAN 6.0.

VMware VSAN 6.0

VMware's VSAN had a slightly turbulent introduction to the industry, following hotly in the footsteps of other software-defined, Hyper-Converged, scale-out storage solutions like Nutanix. With this release, we see a maturing of the technology stack. Although not quite at true feature parity with the competition - namely in storage-saving options like data de-duplication and compression - VSAN is finally ready for prime time, with significant improvements in scalability and performance and the introduction of support for all-flash nodes.

In terms of scalability, setting aside all the other impressive speeds and feeds, two figures stand out for me. The first is the maximum number of components: up from 3,000 to 9,000. VSAN datastores are object-based, where objects are items like individual virtual machine disks and their snapshots - so why would you want 9,000 of these? It makes sense in the VDI world, where the real-time application delivery capabilities of AppVolumes (previously Cloud Volumes) come into play, as well as the general availability of Instant Clone (previously Project Fargo, or VMFork), allowing for just-in-time desktop deployment.
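To see how quickly components add up, here's a simplified, illustrative calculation (the per-object and per-desktop counts below are assumptions for the sketch; real layouts vary with policy, snapshots and stripe width):

```python
# Illustrative VSAN component arithmetic (simplified, assumed figures).
# With a "failures to tolerate = 1" policy, each object is mirrored
# across two hosts plus a witness: 3 components per object.

COMPONENTS_PER_OBJECT = 3   # 2 mirror replicas + 1 witness (FTT=1)
OBJECTS_PER_DESKTOP = 3     # VM home namespace, one disk, swap (simplified)

def components_per_host(desktops, hosts):
    """Average component count per host for a simple VDI estimate."""
    total = desktops * OBJECTS_PER_DESKTOP * COMPONENTS_PER_OBJECT
    return total / hosts

# 800 just-in-time desktops on an 8-host cluster:
print(f"~{components_per_host(800, 8):.0f} components per host")  # ~900
```

Add a snapshot or two per desktop, or a denser consolidation ratio, and the old 3,000 ceiling starts to look close; 9,000 buys real headroom.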

The second is support for High Density Direct Attached Storage, or JBOD (Just a Bunch of Disks), configurations. This is big news for customers who have previously standardised on blade servers for the cable and rack-space consolidation benefits, as it opens up opportunities to leverage VMware's VSAN offering alongside their existing storage infrastructure. This is where the power of policy-driven storage automation can be realised, through the use of VVOLs.

Virtual Volumes

Virtual Volumes, or VVOLs, are a way to enable businesses to truly enter and embrace Software Defined Storage. With an impressive vendor ecosystem already providing support, the idea of policy-defined storage - where individual applications (as Virtual Machines) have their own storage profiles detailing their required attributes (performance, recoverability, availability) - can now be achieved. At first this may sound daunting, and you may ask: "But if the virtual disk is now the primary unit of data management on the storage array, instead of the LUN/Volume construct, then... now I've got thousands of policies to manage?" :(

Not so - VVOLs use vSphere's Storage Policy Based Management (SPBM) framework to capture service level requirements as policies or templates, and these are associated to the VMs. Placement of VM storage is then automated based on the virtual datastores that meet the policy requirements.
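Conceptually, the matching works something like the toy sketch below (all names and capability fields here are invented for illustration - the real SPBM engine matches policies against capabilities the array advertises through its VASA provider):

```python
# Toy sketch of policy-based placement. Policy names, datastore names
# and capability fields are hypothetical, for illustration only.

policies = {
    "gold":   {"min_iops": 5000, "replication": True},
    "bronze": {"min_iops": 500,  "replication": False},
}

datastores = [
    {"name": "allflash-01", "iops": 20000, "replication": True},
    {"name": "hybrid-01",   "iops": 2000,  "replication": False},
]

def compatible_datastores(policy_name):
    """Return datastores whose advertised capabilities satisfy the policy."""
    p = policies[policy_name]
    return [d["name"] for d in datastores
            if d["iops"] >= p["min_iops"]
            and (d["replication"] or not p["replication"])]

print(compatible_datastores("gold"))    # ['allflash-01']
print(compatible_datastores("bronze"))  # ['allflash-01', 'hybrid-01']
```

The administrator manages a handful of policies rather than thousands of per-disk settings; compliance with a policy then becomes something the platform can check continuously.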

In embracing this direction, the operational efficiencies of ever-maturing automation capabilities, provided by both native and third-party Cloud Management Platforms, should continue the trend towards faster deployment (time to market) and more granular Disaster Recovery options for new and emerging business services.

vCenter Server Appliance

I've long been a fan of this alternative to the native Windows installation, notably for its smaller footprint, simplified deployment and lower cost (in avoiding Windows Server and SQL/Oracle database licenses). But it has always lagged behind its feature-rich older brother, and ended up being selected against in architecture design decisions when considering all requirements and constraints. The direction is clear, though: VMware are looking to remove the dependency on Microsoft's OS for vCenter Server. The past few releases of the vCSA have shown continual progress towards that goal, and now, with the release of vSphere 6, the vCSA is pretty much at feature parity with the Windows flavour (ignoring VMware Update Manager for now). It has a far simpler deployment model, greater scalability of its embedded vPostgres database and support for a new Linked Mode architecture. This de-coupling from Windows follows the trend of other solutions in the vSphere management stack, and its appliance delivery model continues to reduce complexity in infrastructure management components.

Conclusion

With all the new capabilities of vSphere 6, it's easy to get mired in the minutiae. As always, the technology should not be selected because it's shiny and new, but because it enables or solves a requirement. With this new release, requirements that were previously too hard to deliver on, due to astronomical cost or complexity, are now within reach. Opportunities now arise for the more budget-constrained to achieve capabilities previously reserved for the big IT shops.

With this expansive toolkit, it's up to us to be creative solving these problems whilst reducing complexity.


Author: Nick Bowie

Nick specialises in cloud infrastructure and the Software Defined Data Centre, helping to facilitate implementation of first class, architecturally driven infrastructure solutions.
