
SimpliVity Enterprise Storage
Hyperconverged Infrastructure Overcomes Storage Issues

Is your enterprise storage strategy holding you back from achieving the agility and scale you need to reach your cloud computing and Software-Defined Data Center goals? To date, siloed physical storage and network assets have stymied server virtualization, because virtualized workloads still depend on specialized hardware systems in the data center. These physical, non-scalable, non-pooled storage and network resources are often over-provisioned to support workloads on virtualized servers. The resulting environment is complex and costly to manage, and scale only exacerbates the problem.

Several issues facing virtualized workloads simply cannot be solved with siloed legacy infrastructure stacks.


Enterprise Storage Issue #1: Performance

Virtual infrastructure shifts the one-to-one relationship between a physical server and its storage to a many-to-one relationship between virtual machines and a single storage controller presenting a single LUN to the hypervisor. When the hypervisor multiplexes multiple workloads with different IO patterns, the result is a highly random IO stream in which workloads compete for the same resources, driving up the IOPS required to service the virtual workloads. To address the performance shortfall, storage administrators typically add more disk spindles, but each spindle also adds capacity that isn’t needed. It’s this over-provisioning that drives up the cost per gigabyte of storage allocated to every virtual machine.
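
A rough back-of-envelope sketch makes the over-provisioning visible. The workload and drive figures below are hypothetical illustrations, not SimpliVity benchmarks; the point is simply that sizing for IOPS forces you to buy far more capacity than you need.

```python
import math

# Hypothetical workload and drive characteristics (illustration only).
REQUIRED_IOPS = 20_000         # aggregate random IOPS from the virtual workloads
REQUIRED_CAPACITY_GB = 20_000  # capacity the workloads actually need

IOPS_PER_HDD = 150             # typical random IOPS from a single 10K RPM drive
CAPACITY_PER_HDD_GB = 900      # usable capacity per drive
COST_PER_HDD = 400             # assumed price per drive

spindles_for_capacity = math.ceil(REQUIRED_CAPACITY_GB / CAPACITY_PER_HDD_GB)
spindles_for_iops = math.ceil(REQUIRED_IOPS / IOPS_PER_HDD)
spindles_needed = max(spindles_for_capacity, spindles_for_iops)

provisioned_gb = spindles_needed * CAPACITY_PER_HDD_GB
cost_per_needed_gb = (spindles_needed * COST_PER_HDD) / REQUIRED_CAPACITY_GB

print(f"Spindles for capacity alone: {spindles_for_capacity}")
print(f"Spindles to satisfy IOPS:    {spindles_for_iops}")
print(f"Capacity provisioned: {provisioned_gb:,} GB for a {REQUIRED_CAPACITY_GB:,} GB need")
print(f"Effective cost per needed GB: ${cost_per_needed_gb:.2f}")
```

With these assumed numbers, roughly two dozen drives would cover the capacity, but well over a hundred are required to meet the IOPS target, and every extra drive inflates the cost per gigabyte actually used.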

Enterprise Storage Issue #2: Capacity

Virtualizing workloads increases enterprise storage capacity requirements. Because each virtual machine encapsulates its own operating system, application, and data, a virtualized data center accumulates a massive volume of redundant data, creating tremendous inefficiency in enterprise storage capacity. Virtual machine sprawl compounds the problem: the rapid deployment of virtual machines, each requiring its own allocation of enterprise storage, increases the volume required for virtualized infrastructure. And as the number of virtual machines grows, so does the associated volume of snapshots consuming enterprise storage capacity.
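
The redundancy is easy to see with a simple, purely hypothetical example: if hundreds of virtual machines are cloned from the same template, each one carries an almost identical copy of the operating system and application stack.

```python
# Hypothetical illustration of redundant data in a virtualized data center:
# VMs cloned from the same template carry nearly identical OS and application
# blocks, so logical capacity grows much faster than the unique data underneath.

VM_COUNT = 200
OS_AND_APP_GB = 40     # largely identical across VMs cloned from one template
UNIQUE_DATA_GB = 60    # application data unique to each VM

logical_gb = VM_COUNT * (OS_AND_APP_GB + UNIQUE_DATA_GB)
unique_gb = OS_AND_APP_GB + VM_COUNT * UNIQUE_DATA_GB  # shared image stored once

print(f"Logical capacity consumed:    {logical_gb:,} GB")
print(f"Unique data actually present: {unique_gb:,} GB")
print(f"Redundant copies of the same blocks: {logical_gb - unique_gb:,} GB")
```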

Enterprise Storage Issue #3: Management and Mobility

VM-centric management defines, allocates, and optimizes resources at the virtual machine level, free of physical constraints. All too often, however, enterprise storage constrains server virtualization instead.

Virtualization enables portability. However, the relationship between a virtual machine and its datastore in the virtualization domain ties it to a physical storage system. As a result, siloed enterprise storage hinders mobility and makes it inefficient, because storage system constructs such as LUNs (or NFS mounts), volumes, RAID groups (or aggregates), and physical disks dictate how virtual workloads are managed. For example, configuring policies for snapshots or replication at the level of an enterprise storage construct – a LUN – means that every virtual machine in the datastore backed by that LUN inherits the LUN-based policies. Snapshotting or replicating a LUN hosting tens to hundreds of virtual machines when only a single virtual machine copy is needed wastes both storage capacity and network bandwidth.
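
The overhead of LUN-granularity policies can be quantified with a quick, hypothetical comparison (the VM counts and sizes below are illustrative, not measured values):

```python
# Hypothetical comparison of LUN-level vs VM-level copy policies: replicating
# an entire LUN to protect a single VM moves far more data than necessary.

VMS_PER_LUN = 100
GB_PER_VM = 80
VMS_NEEDING_REPLICATION = 1

lun_level_gb = VMS_PER_LUN * GB_PER_VM              # whole LUN replicated
vm_level_gb = VMS_NEEDING_REPLICATION * GB_PER_VM   # only the VM that matters

print(f"Data replicated at LUN level: {lun_level_gb:,} GB")
print(f"Data replicated at VM level:  {vm_level_gb:,} GB")
print(f"Overhead from the LUN-based policy: {lun_level_gb // vm_level_gb}x")
```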

Enterprise Storage Can’t Keep Pace

Relentless data growth is a storage issue that has plagued organizations for years. Data is growing at 40 to 50% per year, according to IDC, and the total volume of information will be roughly 50x larger by 2020. Consequently, technology advancement has centered on minimizing the impact of data growth on capacity. But data growth is not the only problem.
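
A quick compounding check (simple arithmetic, not an IDC model) shows how those two figures relate: 40 to 50% annual growth sustained over a decade lands in roughly the 30x to 60x range, consistent with the projected ~50x increase.

```python
# Simple compounding of the cited annual growth rates over ten years.
for annual_growth in (0.40, 0.45, 0.50):
    multiple = (1 + annual_growth) ** 10
    print(f"{annual_growth:.0%} per year for 10 years -> {multiple:.0f}x the data")
```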

There’s another, bigger issue in today’s data center: IOPS requirements have increased by 10x in the post-virtualization world. Flash addresses this problem, overcoming the stagnant IOPS of hard disk drives that simply can’t keep pace with requirements. The catch is that flash is pricey, and it’s suitable for only portions of the data lifecycle.

Worrying about having adequate capacity to keep pace with data growth is no longer the primary concern that keeps you up at night; it’s ensuring adequate performance/IOPS to fuel application requirements, and achieving it in the most cost-efficient way. Layer data mobility, data protection, and management challenges on top of the capacity and performance issues, and it becomes clear that the culprit is both the data itself and the legacy infrastructure housing it. Today’s virtualized environments require a new data architecture to solve the data problem.

Hyperconverged Infrastructure Overcomes Storage Issues

OmniCube hyperconverged infrastructure combines server compute, storage, network switching, and virtualization software in a commodity x86 appliance, providing a highly scalable data center building block. Multiple OmniCube systems form a federation, and through the vCenter console, OmniCube provides unified management across the entire global federation.

Hyperconverged infrastructure has a software-centric design. Storage is managed simply as storage: there are no LUNs or volumes to worry about, just a scalable pool of elastic resources.

SimpliVity’s hyperconverged infrastructure addresses the storage challenges you’re grappling with today. OmniCube deduplicates, compresses, and optimizes data written to production storage – at inception – and maintains it in its optimized state for the life of the data. Data efficiency extends not only to capacity but also to performance: OmniCube reduces IOPS requirements and improves performance by writing and reading less data to and from disk.
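
The mechanics behind that efficiency can be sketched generically. The snippet below is a minimal, content-addressed deduplication example, not SimpliVity’s actual implementation: blocks whose fingerprint has already been stored are acknowledged without writing the data again, so redundant writes never reach the disk.

```python
import hashlib

store = {}        # fingerprint -> block; stands in for the optimized data store
disk_writes = 0

def write_block(block: bytes) -> str:
    """Record a block, skipping the data write if the content already exists."""
    global disk_writes
    fingerprint = hashlib.sha256(block).hexdigest()
    if fingerprint not in store:   # only unique data reaches the disk
        store[fingerprint] = block
        disk_writes += 1
    return fingerprint             # a reference is recorded either way

# 1,000 incoming 4 KB blocks, but only 200 distinct contents (e.g. cloned VM images).
blocks = [bytes([i % 200]) * 4096 for i in range(1000)]
refs = [write_block(b) for b in blocks]

print(f"Blocks written by VMs:  {len(blocks)}")
print(f"Blocks written to disk: {disk_writes}")   # 200, an 80% reduction in write IO
```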