SimpliVity Server SAN
OmniCube Hyperconverged Infrastructure Aligns With Server SAN

To gain a competitive edge, cloud providers seek to leverage low-cost infrastructure components and “sweat” their assets. They need to operate their IT efficiently and at scale. Their goal is to create higher margins for the services they offer, or to compete aggressively (and disruptively) on the pricing of cloud computing services. Leading cloud players such as Google, Amazon, and Facebook use commodity scale-out components in their data center designs: x86 servers, low-cost fabrics, and onboard storage. They benefit from economies of scale, and they leverage open source software to support the applications powering their business. They have also decided that true fault tolerance is too expensive to maintain at the hardware layer; instead, they rely on the business continuity and automation capabilities of the virtualization layer to provide failsafe measures.


This is in contrast to the norm at most enterprise and “carrier-class” data centers, where the focus has been on maximizing uptime and delivering service level agreements of “five nines” (99.999% availability). These data centers take a best-of-breed approach to hardware selection across compute, storage, and networking. They also build in many failsafe measures, including redundancy, replication between sites, and backup operations. Their services come with a much higher price tag as a result: expensive, name-brand equipment plus the added resiliency features.
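
To put “five nines” in perspective, a quick back-of-the-envelope calculation (a minimal Python sketch, not tied to any vendor tooling) shows how little downtime each availability tier actually permits per year:

```python
# Back-of-the-envelope: how much downtime does an availability SLA allow per year?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.3%}): {downtime:.1f} minutes of downtime/year")
```

Five nines allows roughly 5.3 minutes of downtime per year, which is why hardware-level guarantees at that level carry such a premium.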

The other aspect of cost is operational budget: staff, power, cooling, space, and so on. The new cloud architectures are better equipped for automation. With a homogeneous shared resource pool, management shifts from managing hardware elements to managing applications or workloads. Policies take a “top down” approach aligned with the workload, application, or virtual machine, versus a “bottom up” one aligned with a hardware construct such as a storage array LUN. Once policy is abstracted away from the hardware layer, automation becomes easier; VM-centric policies enable greater automation across the environment. This aligns the cloud architecture with operational efficiency: time and cost savings for the staff managing the environment.
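
As a purely hypothetical illustration of the “top down” versus “bottom up” distinction, the sketch below attaches policy to a virtual machine rather than to a LUN. Every name here is invented for illustration and does not reflect any specific product API:

```python
from dataclasses import dataclass

# Hypothetical "top down" policy: rules follow the workload, not the hardware.
@dataclass
class VMPolicy:
    vm_name: str             # keyed to the virtual machine, so the rules
    backup_every_hours: int  # travel with the VM wherever it runs,
    replicas: int            # independent of which device hosts it
    tier: str                # e.g., "gold", "silver"

# Hypothetical "bottom up" equivalent: the same intent expressed against hardware.
@dataclass
class LUNPolicy:
    array_id: str    # tied to a specific storage array and LUN, so it
    lun_id: int      # must be remapped whenever the VM moves or the
    raid_level: str  # array is replaced

# With VM-centric policies, automation can act on workloads directly:
policy = VMPolicy(vm_name="web-01", backup_every_hours=4, replicas=2, tier="gold")
print(f"{policy.vm_name}: backup every {policy.backup_every_hours}h, "
      f"{policy.replicas} replicas, tier {policy.tier}")
```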

Server SAN: ‘A New Phenomenon in Data Center Infrastructure’

The battle is no longer just with cloud computing vendors. Enterprise IT has caught on: enterprises want to emulate Google’s and Amazon’s data centers. A new phenomenon in data center infrastructure is what Wikibon calls a Server SAN: a pooled storage resource comprising more than one storage device directly attached to multiple, separate servers. A Server SAN can be built on commodity compute-based architecture. This eliminates the need for an external enterprise storage array, simplifies storage deployment and management, enables scalability, and helps IT organizations realize a software-defined data center.
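
The pooling idea can be sketched in a few lines. This is a generic illustration with invented names, not any product’s interface: each node contributes its direct-attached disks, and software presents them as one logical pool.

```python
# Hypothetical sketch: direct-attached disks on several servers,
# aggregated by software into a single logical storage pool.
nodes = {
    "server-1": [960, 960],         # capacities of direct-attached devices, in GB
    "server-2": [960, 960, 1920],
    "server-3": [1920],
}

pool_gb = sum(sum(disks) for disks in nodes.values())
device_count = sum(len(disks) for disks in nodes.values())
print(f"{len(nodes)} nodes, {device_count} devices, {pool_gb} GB in one pool")
```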

Hyperconverged Infrastructure Aligns With Server SAN

OmniCube hyperconverged infrastructure combines server compute, storage, network switching, and virtualization software in a commodity x86 appliance, providing a highly scalable data center building block. Deploying multiple OmniCube systems forms a federation, and OmniCube provides unified management of the entire global federation from within vCenter. OmniCube also integrates data protection and data efficiency features, converging even more data center functionality.

OmniCube’s Data Virtualization Platform provides unparalleled data efficiency. It deduplicates, compresses, and optimizes data at inception, as it is written to production storage, and maintains it in that optimized state for the life of the data. Data efficiency extends not only to capacity but also to performance: by writing and reading less data to and from disk, OmniCube reduces IOPS requirements and improves performance.
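
The general technique at work here, inline deduplication plus compression, can be sketched in a few lines of Python. This is a generic illustration of the concept, not SimpliVity’s implementation:

```python
import hashlib
import zlib

store = {}   # content hash -> compressed block (the deduplicated block store)
index = []   # logical write order, as a list of content hashes

def write_block(data: bytes) -> None:
    """Dedupe at write time: identical blocks are stored only once, compressed."""
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:               # new content: compress and keep it
        store[digest] = zlib.compress(data)
    index.append(digest)                  # a duplicate just adds an index entry

def read_block(i: int) -> bytes:
    """Reads decompress on the fly; data stays optimized at rest."""
    return zlib.decompress(store[index[i]])

# Three logical writes, two unique blocks: only two compressed blocks hit "disk".
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
    write_block(block)
print(f"logical blocks: {len(index)}, unique blocks stored: {len(store)}")
assert read_block(2) == b"A" * 4096
```

Because duplicates are caught before they reach disk, both the capacity consumed and the I/O issued shrink, which is the performance effect described above.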

Data protection is integrated. Backup policies established at the virtual machine level dictate the frequency of virtual machine copies, the storage destination (local, remote site, or public cloud), the retention time, and the preference for application-consistent or crash-consistent copies. This eliminates the need for separate backup software, backup hardware, replication, backup deduplication, and cloud gateway solutions. And because data remains in a highly optimized state, backup, recovery, and replication of even large data sets occur rapidly and consume minimal storage capacity.
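
To make the shape of such a VM-level policy concrete, here is a sketch using invented field names; it mirrors the attributes listed above rather than any actual SimpliVity interface:

```python
from dataclasses import dataclass
from enum import Enum

class Destination(Enum):
    LOCAL = "local"
    REMOTE_SITE = "remote site"
    PUBLIC_CLOUD = "public cloud"

# Hypothetical VM-level backup policy mirroring the attributes described above.
@dataclass
class BackupPolicy:
    vm_name: str
    frequency_minutes: int    # how often a VM copy is taken
    destination: Destination  # local, remote site, or public cloud
    retention_days: int       # how long copies are kept
    app_consistent: bool      # application-consistent vs. crash-consistent

policy = BackupPolicy(
    vm_name="sql-prod-01",
    frequency_minutes=60,
    destination=Destination.REMOTE_SITE,
    retention_days=30,
    app_consistent=True,
)
print(f"{policy.vm_name}: copy every {policy.frequency_minutes} min "
      f"to {policy.destination.value}, keep for {policy.retention_days} days")
```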