Over the last six months we have been hearing a lot about VMware’s Virtual SAN solution, and how it will shake up the industry by providing the first true Software Defined Storage platform. But what is it really, and why would you use Virtual SAN vs traditional storage?
If you look at the storage industry, not much has changed in 10-15 years. Sure, things have got faster, but in reality we still have dual-controller subsystems with disks, RAID arrays, hot spares, and cache. Technologies like SSD and auto-tiering have attempted to make storage faster, and technologies like thin provisioning, deduplication, and compression have attempted to make it more efficient; in truth, however, these are just attempts to extend the useful life of an ageing architecture. Even the more modern hybrid or all-flash arrays are really just the old architecture with faster “disks”.
Because of “cloud” technology, people are now familiar with systems that leverage a distributed model to ensure data protection (i.e. keep multiple copies in different places), so it was natural that this cloud-like architecture would eventually find its way into mainstream storage.
Distributed data protection model
VMware Virtual SAN and hyper-converged compute systems share a common trait: a cloud-like architecture in which parity-based RAID is no longer the method of providing data redundancy; instead, a distributed data protection model is leveraged to ensure data availability/durability, performance, and resilience. In some ways this architecture is substantially more reliable than traditional storage, primarily because it simplifies data availability (by removing complex RAID algorithms), but also because, being physically distributed, it removes the points of weakness inherent in a more traditional storage system. I could almost instantly recall the number of dual-controller or double-disk RAID failures I have had to address in my career. These are failures that “should never happen” but do (regularly), and this is why the legacy “designed not to fail” architecture started to creak at the seams, and why a new “design for failure” model was needed.
Stick with legacy or embrace the new?
When considering your future storage platform, a conscious decision needs to be made: stick with legacy, or embrace the new. Legacy is well known, along with its strengths and weaknesses; and because of those weaknesses, organisations choosing legacy have to architect (purchase) around them, which means buying more hot-spare disks or using non-parity/double-parity RAID. Working around weaknesses lowers efficiency and starts to unbalance the storage financial equation. The new, in comparison, is based on commodity hardware (disks, SSDs) configured in a distributed duplication model, where (simply put) multiple copies of data are transparently held on physically separate, independent nodes. This “shared nothing” architecture is not vulnerable to failures in one node, or even two, because data is distributed across many nodes.
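The distributed duplication idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration of replica placement across independent nodes (the node names, placement rule, and two-copy policy are assumptions for the sketch, not vSAN internals):

```python
# Sketch of a "shared nothing" data protection model: each object is
# written as N identical copies, each on a different node, so losing
# one node never removes the last copy of the data.

def place_replicas(obj_id, nodes, copies=2):
    """Place `copies` replicas of an object on distinct nodes.

    Uses a simple deterministic starting offset derived from the object
    id; real systems use far more sophisticated placement logic.
    """
    if copies > len(nodes):
        raise ValueError("need at least as many nodes as copies")
    start = sum(map(ord, obj_id)) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

def survives(placement, failed_nodes):
    """Data stays available while at least one replica is on a healthy node."""
    return any(node not in failed_nodes for node in placement)

nodes = ["node-a", "node-b", "node-c", "node-d"]
placement = place_replicas("vm-disk-01", nodes, copies=2)
print(placement)                               # two distinct nodes
print(survives(placement, {placement[0]}))     # one node lost -> still available
```

The point of the sketch is the failure behaviour: any single node can fail and the data remains reachable, with no parity reconstruction step involved.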
Integration with VMware hypervisor
VMware Virtual SAN takes this distributed architecture model and integrates it tightly with the VMware hypervisor, providing a storage platform optimised for running virtual servers. Because of this deep integration, Virtual SAN is able to offer VM-centric capabilities that many legacy storage platforms cannot, such as per-VM replication and per-VM snapshots. Also, because Virtual SAN runs inside the hypervisor, there is less overhead than with virtual-appliance-based solutions such as those commonly found in hyper-converged compute products.
Gain CAPEX savings
When looking at the economics of Virtual SAN, many variables must be considered: you can either provide like-for-like availability, performance, and features against traditional storage, or exploit the new capabilities Virtual SAN enables to improve storage services. As a rule of thumb, providing a like-for-like storage environment with Virtual SAN will generate CAPEX savings in the order of 30-40%; alternatively, for the same capital outlay as traditional storage, Virtual SAN will deliver far more capability.
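As a back-of-envelope illustration of that rule of thumb (the dollar figure is purely hypothetical; only the 30-40% range comes from the text):

```python
# Hypothetical worked example of the 30-40% CAPEX rule of thumb.
traditional_capex = 500_000            # assumed cost of a like-for-like traditional array
savings_low, savings_high = 0.30, 0.40

vsan_high_estimate = traditional_capex * (1 - savings_low)    # 30% saving
vsan_low_estimate = traditional_capex * (1 - savings_high)    # 40% saving
print(f"Virtual SAN estimate: {vsan_low_estimate:,.0f} - {vsan_high_estimate:,.0f}")
```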
The "less is best" approach
Operationally, Virtual SAN takes a “less is best” approach; by removing the management of traditional RAID arrays, LUNs, and storage mapping, it dramatically reduces the degree of specialist storage administration needed. Instead, the VMware administrator provisions VMs directly within the VMware hypervisor, with a single provisioning act covering both VMware and the Virtual SAN platform. Further, moving critical data protection capabilities such as snapshots and replication from the LUN level to the VM level provides a much more granular and controlled approach. This reduces both the administrative overhead of managing multiple LUNs with different capabilities and the amount of “orphaned” capacity that might exist in each physical LUN.
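The granularity difference can be shown with a simple contrast. This is an illustrative sketch only (the VM names and policy fields are invented for the example, not a real policy schema):

```python
# Contrast: LUN-level vs VM-level data protection policy (illustrative only).

# With a LUN-level policy, every VM placed on the LUN inherits the same
# settings, whether it needs them or not.
lun_policy = {"replication": True, "snapshot_schedule": "hourly"}
vms_on_lun = ["web-01", "db-01", "test-01"]   # test-01 doesn't need hourly snapshots
inherited = {vm: lun_policy for vm in vms_on_lun}

# With VM-level policies, each VM carries exactly the protection it needs.
vm_policies = {
    "web-01":  {"replication": True,  "snapshot_schedule": "hourly"},
    "db-01":   {"replication": True,  "snapshot_schedule": "15min"},
    "test-01": {"replication": False, "snapshot_schedule": None},
}
```

In the LUN-level case, `test-01` is being replicated and snapshotted needlessly; in the VM-level case it is not, which is exactly the reduction in overhead and wasted capacity described above.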
Scale on demand
Finally, having a storage platform that can scale “on demand” to suit an evolving storage consumption model (organic growth plus project growth), across both performance and capacity, reduces the risk of “dead end” purchasing. Dead-end purchases happen because companies need to reduce initial capital outlay, so they buy legacy storage boxes only as big as they (think they) absolutely need. If growth exceeds expectations, the storage platform generally has to be replaced before its effective end of life (or depreciation cycle). With Virtual SAN, growth follows a scale-out model, with resilience, capacity, and performance able to be incremented independently of each other: larger drives add capacity, additional SSDs add performance, and additional nodes add resilience. Scaling is linear, with no overhead or degradation resulting from the scale-out model.
In summary, with today’s cloud computing model and the business acceptance of pay-as-you-grow financial models, traditional storage architecture struggles to remain relevant. Technologies such as VMware Virtual SAN are fundamentally different, and align far better with business consumption and financial models; this makes VMware Virtual SAN the logical choice for the next generation of compute platforms.