SimpliVity Data Center Infrastructure
Modernized with hyperconvergence to improve performance and minimize cost

The economic downturn of a few years ago forced many companies to extend hardware refresh cycles and wring out more from capital investments in their data center infrastructure. Today, it’s not “how long can I hold on to legacy data center infrastructure?” but rather “how long do I have to?”

Legacy data center infrastructure can no longer meet the demands of the modern data center. Virtualization, cloud computing, and new applications that require high-density, high-performance computing are a poor match for yesterday's enterprise storage, servers, and networking.


IT organizations are re-assessing data center infrastructure, looking critically at:

Servers

Virtualization and cloud computing have had an impact on traditional data center infrastructure. Large-scale virtualization deployments demand more powerful x86 servers, which can be expensive. More powerful multicore processors, as well as denser configurations of servers with more memory, higher IO, and faster onboard bandwidth are required to speed overall system throughput. x86 servers need to be refreshed to keep pace with changing requirements for data center infrastructure.

Performing a server hardware upgrade can address some of these issues—and at a fraction of the cost of a new x86 server. Adding CPUs or memory can increase performance. However, it cannot always fix poor-performing hardware.

Staying on top of your data center infrastructure is a chore, especially since refreshing servers is a continuous process. One way to take advantage of better x86 server technology is to deploy more virtualized workloads or take on more demanding ones. Addressing the problems of incumbent x86 servers, such as reduced efficiency, limited scalability, and unnecessary business risk, will pay off in short order.

The investment in more modern data center infrastructure pays back quickly. How? Consolidating aging x86 servers onto a single virtualized server delivers energy savings (power and cooling) and space efficiency that buy down the initial cost of the new server. It can also decrease maintenance costs, contribute to productivity savings, and reduce planned and unplanned downtime.
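As a rough illustration of that payback, the back-of-the-envelope calculation below compares the power and cooling cost of several aging hosts against one modern virtualized host. Every figure (server count, wattages, PUE, energy price) is an assumption chosen only for illustration, not vendor data.

```python
# Back-of-the-envelope consolidation savings; all figures are illustrative assumptions.
legacy_servers = 10         # aging x86 hosts being consolidated
legacy_watts_each = 400     # assumed average draw per legacy host (W)
new_server_watts = 750      # assumed draw of one modern virtualized host (W)
pue = 1.8                   # assumed power usage effectiveness (adds cooling overhead)
kwh_cost = 0.12             # assumed energy price ($/kWh)

hours_per_year = 24 * 365
legacy_kwh = legacy_servers * legacy_watts_each * pue * hours_per_year / 1000
new_kwh = new_server_watts * pue * hours_per_year / 1000
annual_savings = (legacy_kwh - new_kwh) * kwh_cost
print(f"Estimated annual power and cooling savings: ${annual_savings:,.0f}")
```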

Enterprise Storage

Servers are not the only component in your data center infrastructure affected by virtualization. Virtualization introduces complexity in setting up and managing enterprise storage for virtual workloads. That’s because virtual machines map to datastores in the virtualization domain, and datastores map to a complex storage topology. This includes LUNs (or NFS mounts), volumes, RAID groups (or aggregates) and physical disks.

Mapping virtual machines to the spindle is a necessary prerequisite for assuring workload storage performance. Workloads frequently move to different compute and storage locations, so continually tracking the relationships between the server and storage domains is required to control and optimize the environment effectively. Doing so is often challenging, complex, and time consuming.
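A minimal sketch of that layered mapping is shown below; the object names and topology are hypothetical, chosen only to show how a virtual machine resolves down to physical spindles.

```python
# Hypothetical topology: VM -> datastore -> LUN -> RAID group -> physical disks.
topology = {
    "datastores":  {"DS01": "LUN7"},                                # datastore -> LUN (or NFS mount)
    "luns":        {"LUN7": "RAIDGroup3"},                          # LUN -> RAID group / aggregate
    "raid_groups": {"RAIDGroup3": ["disk12", "disk13", "disk14"]},  # RAID group -> spindles
}
vms = {"web-vm01": "DS01"}                                          # VM -> datastore

def spindles_for(vm: str) -> list[str]:
    """Walk the chain from a virtual machine down to its physical disks."""
    datastore = vms[vm]
    lun = topology["datastores"][datastore]
    raid_group = topology["luns"][lun]
    return topology["raid_groups"][raid_group]

print(spindles_for("web-vm01"))   # ['disk12', 'disk13', 'disk14']
```

Every time a workload moves, one of these links changes, which is why tracking the mapping by hand quickly becomes unmanageable.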

Enterprise storage IO is an issue. For many years, technology innovation in the data center focused on minimizing the impact of data growth on enterprise storage capacity.

The bigger issue is that IOPS requirements have increased by 10x in the post-virtualization data center. Consolidating anywhere from several to hundreds of virtual machines on a single physical server, each with its own pattern for reading and writing data, produces an increasingly random IO workload on the underlying enterprise storage. This highly random IO stream, funneled through a single storage pipe, can adversely impact overall performance as virtual machines contend for disk resources.
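The snippet below is a toy illustration of that effect, using made-up block addresses: each VM issues perfectly sequential requests within its own region, yet the merged stream arriving at shared storage jumps between distant addresses.

```python
# Three VMs, each reading sequential block addresses inside its own virtual disk region
# (the addresses are illustrative, not real LBAs).
vm_streams = {
    "vm-a": list(range(0, 40, 8)),
    "vm-b": list(range(10_000, 10_040, 8)),
    "vm-c": list(range(52_000, 52_040, 8)),
}

# Round-robin interleaving at the shared storage pipe: sequential per VM,
# effectively random at the array.
interleaved = [addr for group in zip(*vm_streams.values()) for addr in group]
print(interleaved[:9])   # 0, 10000, 52000, 8, 10008, 52008, 16, 10016, 52016
```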

To address IO issues, IT organizations are turning to flash more often because hard disk drive IOPS are stagnant and simply can't keep pace with requirements. However, throwing hardware at the problem eats away at server virtualization ROI: an all-flash array is pricey, and it's only suitable for portions of the data lifecycle.
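To see why spinning disk struggles to keep up, here is a quick sizing comparison; the IOPS figures are ballpark assumptions, not benchmarks.

```python
import math

# Rough device-count comparison for an assumed 50,000 IOPS requirement.
required_iops = 50_000
hdd_iops = 180       # assumed per 10K RPM hard drive
ssd_iops = 75_000    # assumed per enterprise SSD

print("HDDs needed:", math.ceil(required_iops / hdd_iops))   # ~278 drives
print("SSDs needed:", math.ceil(required_iops / ssd_iops))   # 1 drive
```

Meeting IOPS targets with spinning disk forces the purchase of far more capacity than the workload needs, which is one driver of the over-provisioning described next.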

The desire to solve storage performance problems has also led to storage over-provisioning, which further exacerbates the storage cost problem. IT organizations need to balance adequate performance and IOPS for application requirements against cost efficiency.

The hyper-scale architecture of converged infrastructure can provide greater storage performance and functionality while dramatically reducing total cost of ownership. Converged infrastructure also significantly reduces the complexity of the storage supporting virtualized and cloud workloads.

Virtualization

Virtualization has been one of the biggest data center infrastructure initiatives of the last decade. It delivers compelling business value, including scalability, ease of management, cost savings, improved utilization, increased availability, system resiliency, rapid deployment, and IT agility.

Virtualization is vital for IT optimization and improved IT service delivery. Since virtualization abstracts, pools and automates server resources, organizations can overcome the limitations of rigid physical architectures. Virtualization turns physical devices into a set of resource pools that are independent of the physical asset they run on. It encapsulates the virtual machine into a single file to enable portability. Virtualization eliminates the economic and operational issues of infrastructure silos.

To date, siloed physical storage and network assets have stymied server virtualization. Virtualization still runs on specialized hardware systems in the data center, with physical, non-scalable, non-pooled storage and network resources supporting the workloads on virtualized servers. Those resources are over-provisioned and over-utilized. The resulting environment is complex and costly to manage, and scaling it only causes more headaches.

Converging silos into a single system with virtualization enables a fully-virtualized environment. Alternatively, hyperconverged infrastructure provides the building blocks to establish a modern software-defined data center at scale.

Server SAN

A new phenomenon in data center infrastructure is what Wikibon calls a Server SAN: a pooled storage resource built from multiple storage devices directly attached to multiple separate servers. A Server SAN can leverage commodity compute-based architecture. This eliminates the need for an external enterprise storage array, simplifies storage deployment and management, enables scalability, and helps IT organizations realize a software-defined data center.
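The idea can be sketched in a few lines; the node and device names below are hypothetical, and real Server SAN software adds replication, tiering, and failure handling on top of this simple pooling.

```python
# Conceptual sketch: pool direct-attached devices from several servers into one
# logical capacity pool (capacities in TB, names hypothetical).
nodes = {
    "node1": {"ssd0": 1.92, "ssd1": 1.92},
    "node2": {"ssd0": 1.92, "ssd1": 1.92},
    "node3": {"ssd0": 3.84},
}

pool_tb = sum(cap for devices in nodes.values() for cap in devices.values())
print(f"Server SAN pool: {pool_tb:.2f} TB across {len(nodes)} nodes")
# Adding another node grows compute and storage together, the scale-out model
# that hyperconverged infrastructure builds on.
```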

SimpliVity’s hyperconverged infrastructure offers the convergence of compute, storage, networking and management in a single device.

Deduplication

Relentless data growth gave rise to deduplication technology in data center infrastructure. Deduplication is a method of reducing bandwidth and storage capacity needs by eliminating redundant data and retaining only one unique instance of the data on storage media. It was first popularized in backup, archive, and WAN optimization solutions. Deduplication is a CPU-intensive process, and running it on production systems can cause contention for resources, slowing application performance.

There are different deduplication techniques, each with a different impact on performance. Performing deduplication "inline" means the process occurs as data is being written to disk. Alternatively, "post-process" deduplication writes data to disk in its regular state; at a later time, a process kicks off to deduplicate it, so the data is written to disk, read from disk, deduplicated, and written to disk again.
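A minimal sketch of inline, hash-based deduplication is shown below: each incoming block is fingerprinted before it reaches disk, and only previously unseen blocks are stored. The 4 KB block size and in-memory store are simplifying assumptions for illustration, not a description of SimpliVity's implementation.

```python
import hashlib

BLOCK = 4096
store: dict[str, bytes] = {}   # fingerprint -> unique block actually kept on "disk"
refs: list[str] = []           # logical layout, recorded as a list of fingerprints

def write(data: bytes) -> None:
    """Inline dedupe: fingerprint each block and store it only if it is new."""
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:        # duplicate blocks never land on disk
            store[digest] = block
        refs.append(digest)

write(b"A" * BLOCK * 3 + b"B" * BLOCK)   # three identical blocks plus one unique block
print(len(refs), "logical blocks,", len(store), "unique blocks stored")   # 4 logical, 2 stored
```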

SimpliVity performs deduplication in real time at inception, when data is first written to disk, and the data is maintained in its deduplicated state throughout its lifecycle. If a copy is taken for backup, the backup is performed on data in its optimized state; there is no process of "rehydrating" data to make backup copies and re-deduplicating it afterward. This saves IOPS and improves performance across all data lifecycle phases, tiers, data centers, and the cloud.
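Continuing the sketch above, the difference can be shown conceptually: a backup of already-deduplicated data only needs to copy the block references, not the blocks themselves. This is an illustration of the idea, not SimpliVity's internal design.

```python
def restore(reference_list: list[str]) -> bytes:
    """Rebuild the logical data only if and when it is actually needed."""
    return b"".join(store[digest] for digest in reference_list)

# Metadata-only backup: copy the references, never reread or rewrite the blocks.
backup_refs = list(refs)
assert restore(backup_refs) == b"A" * BLOCK * 3 + b"B" * BLOCK
print("backed up", len(backup_refs), "blocks without rehydrating any data")
```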

SimpliVity performs deduplication, compression, and optimization without incurring a performance penalty. It delivers zero overhead in its deduplication process via an accelerator card, which provides dedicated resources for this work.

WAN Optimization

WAN optimization is a valuable component of the data center infrastructure. It improves the efficiency of data transfer over a wide-area network. Technologies such as deduplication, which reduces the amount of data transferred, and compression, which shrinks the size of that data, limit bandwidth utilization.
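The sketch below shows how those two techniques combine in principle: chunks the remote site already holds are skipped, and whatever remains is compressed before crossing the WAN. The chunk size and the "remote_has" set are illustrative assumptions, not a description of any specific product.

```python
import hashlib
import zlib

CHUNK = 4096

def bytes_to_send(data: bytes, remote_has: set[str]) -> int:
    """Return how many bytes actually cross the WAN after dedupe and compression."""
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_has:           # deduplication: skip chunks the remote site has
            sent += len(zlib.compress(chunk))  # compression: shrink what must be transferred
            remote_has.add(digest)
    return sent

payload = b"2024-01-01 INFO heartbeat ok\n" * 50_000   # highly repetitive data
print("raw bytes:", len(payload), "| bytes over the WAN:", bytes_to_send(payload, set()))
```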

WAN optimization is especially impactful in two use cases: remote and branch office scenarios, and backup and disaster recovery. A typical example is the transfer of data between headquarters and remote and branch office locations. Backup of remote-site data to a central data center is streamlined with WAN-optimized transfer, where only unique data is sent over the WAN. Backup copies stored at an off-site location facilitate disaster recovery should the primary site be compromised. In both cases, cost, time, and risk are minimized.