
HPE SimpliVity Data Protection – Backup
A unique, highly-efficient approach to data protection

Unless your business relies only on paper records, you probably have backup processes and technology in place to make copies of important digital data. And if you don’t, you should. Typically, you’re making a backup copy of a volume, LUN, or file(s) at one or more points in time during the day. Best practices suggest maintaining multiple copies of backup sets – one stored off-site and one kept on-site – for a specified period of time.


But that’s not all. The backup process must complete within the designated window of time. Typically, this backup window is during off-peak hours when it won’t be too disruptive to other applications sharing the network and storage devices. Meeting your backup window depends on the method of backup, the volume of data that has to be copied, your storage media, how fast your network and systems respond, and, of course, how much time has been allocated to the process.
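To see how these factors interact, here is a back-of-the-envelope calculation – with purely hypothetical figures – showing how data volume and sustained throughput determine whether a backup fits its window:

```python
# Rough feasibility check for a backup window.
# All figures are hypothetical and for illustration only.

def backup_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to copy data_tb terabytes at a sustained
    rate of throughput_mb_s megabytes per second."""
    data_mb = data_tb * 1_000_000        # TB -> MB (decimal units)
    return data_mb / throughput_mb_s / 3600

# Example: a 20 TB full backup over a path sustaining 500 MB/s
hours = backup_hours(20, 500)
print(f"Copy time: {hours:.1f} hours")   # about 11.1 hours
print("Fits an 8-hour window?", hours <= 8)
```

In practice, compression, incremental backups, and competing traffic all shift these numbers, but the basic arithmetic explains why growing data volumes put pressure on fixed backup windows.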

While tape media remains a popular – and inexpensive – choice of backup media, your local copies of backup most likely reside on disk. If you don’t have a second site where you store off-site copies on disk, storing them in public cloud storage is an option.

Successful recoveries depend on successful backups. If backups are incomplete due to errors, you run the risk of not being able to restore a piece of data. Data loss translates into financial losses, but it also has an impact on operational, customer service and compliance objectives.

To minimize the risk of data loss or downtime, you may have service level agreements (SLAs) or service level objectives (SLOs) in place. The primary metrics are Recovery Time Objectives (RTOs), specifying the maximum acceptable time between an outage and the resumption of operations, and Recovery Point Objectives (RPOs), specifying the maximum amount of data loss you can tolerate, measured as the time between the last recoverable copy and the outage.
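As a simple illustration of the RPO side (using hypothetical schedules), the worst-case data loss from interval-based backups is the full interval between two consecutive copies:

```python
# Worst-case RPO implied by a fixed backup interval (hypothetical values).

def worst_case_rpo_hours(backups_per_day: int) -> float:
    """If backups run at an even interval, an outage just before the
    next backup loses everything written since the previous one."""
    return 24 / backups_per_day

print(worst_case_rpo_hours(1))    # nightly backups -> up to 24.0 hours of data at risk
print(worst_case_rpo_hours(24))   # hourly backups  -> up to 1.0 hour at risk
```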

Why Is Backup So Broken?

Is backup broken or just never fixed? It’s a perennial “top three” pain point in surveys on backup and recovery, and organizations continue to make huge investments in backup software, hardware and services. If it’s not completely broken, it’s limping along—and dragging you down with it.

It’s more likely that the challenges never cease and your backup suffers because of them. What challenges?

Backup Challenge: Data Growth

You are probably managing a large volume of data. And it’s relentlessly growing every year – upwards of 20% to 30% annually. Data growth directly impacts backup and recovery processes, systems and staff. More data means more backup capacity and network bandwidth. More data takes more time to back up – and recover. It also increases overhead for operational staff.
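The compounding effect is easy to underestimate. A quick projection – starting from a hypothetical capacity – shows how a 30% annual growth rate plays out:

```python
# Compound data growth projection; the starting capacity is hypothetical.

def projected_tb(current_tb: float, annual_growth: float, years: int) -> float:
    """Capacity after compounding annual_growth for the given number of years."""
    return current_tb * (1 + annual_growth) ** years

# 100 TB growing at 30% per year nearly triples within four years
print(round(projected_tb(100, 0.30, 4), 1))   # 285.6 TB
```

Every downstream resource – backup capacity, network bandwidth, and staff time – scales along that curve.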

Backup Challenge: Cost

The volume of data under management directly drives backup and recovery costs, so it shouldn’t come as a shock that data protection is an expensive proposition. With backup hardware, backup software, and network bandwidth all tied to capacity, data growth pushes costs up. Operational expenses add to the burden: space, power, cooling, and the operational staff needed to manage it all can account for up to one-third of total costs. That said, the risk and cost of not having backup and recovery in place could be higher – and even more detrimental to your business.

Backup Challenge: Complexity

Complexity often goes hand-in-hand with unabated data growth and high costs. The number of sites, systems, and the capacity under management all add to the operational burden of backup and recovery. IT organizations that use multiple data protection solutions further complicate backup operations, and troubleshooting errors and failures in systems or processes – or pinpointing sluggish performance in backup and recovery systems – introduces still more complexity.

Backup Challenge: Time

Time is a constraint. There are only 24 hours in a day, and often only a small portion of them is carved out for backup jobs. With more and more data to copy and move around, that puts tremendous pressure on systems – and staff – to get the job done. Given the volume of data to copy and move within and between sites, both the backup window and the RTO become increasingly constrained.

Backup Challenge: Remote Sites

Companies with remote and branch offices have corporate data widely distributed across sites, complicating backup and recovery. To make matters worse, remote and branch offices often lack on-site IT personnel to troubleshoot issues. Whether data is backed up locally or to the central site depends on the availability of on-site staff, the volume of data to protect, the available bandwidth, and the capabilities of the backup infrastructure.

Backup Challenge: Virtualization

How can virtualization introduce challenges in backup? Isn’t virtualization a good thing? Yes, virtualization is a good thing. Virtualization enables business continuity and greatly simplifies disaster recovery. However, it does introduce some challenges when it comes to making backup copies.

In a pre-virtualization environment, there was a one-to-one relationship between the application, host system and storage. A storage system LUN held data from a single server, and it was simpler to achieve an application-consistent backup.

After virtualization, it’s more difficult to efficiently capture array-based snapshots. A single datastore on a LUN contains multiple virtual machines’ data. Snapshots of the LUN include a mix of workloads. If your goal is to back up a single virtual machine, your LUN snapshot will unnecessarily include all virtual machines sharing the datastore on a LUN. Backup applications that back up at the file or virtual machine level, whether specifically designed for virtual environments or adapted for it, get the job done. However, they introduce unnecessary complexity and cost.

Backup Solutions

So, given all of the challenges you face today, what’s the best course for backup? If you’re like most IT professionals, you have stayed the course with your incumbent vendor, updating and augmenting the backup solution, and evolving it to meet your changing needs. Or maybe you’re one to completely re-architect and modernize your backup infrastructure and approach.

What’s Required in Post-Virtualization Backup?

Virtualization is a “change event” that creates opportunities for next-generation data protection and sets new requirements for a modern backup approach.

What about Deduplication in Backup?

Deduplication is an ideal way to reduce backup costs, and it has become a staple feature of backup hardware and software solutions. However, compartmentalizing deduplication within the backup process alone limits its efficiency and cost savings. Moreover, backup data held in a deduplicated, compressed state must be rehydrated before it can move out of backup storage.
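For readers unfamiliar with the mechanics, here is a minimal sketch of block-level deduplication via content hashing. It illustrates the general technique only – not any particular vendor’s implementation – and the block contents and hash choice are assumptions:

```python
# Minimal content-hash deduplication sketch (illustrative only).
import hashlib

def dedupe(blocks):
    """Store each unique block once, keyed by its SHA-256 digest.
    The returned reference list reconstructs the original stream."""
    store = {}    # digest -> block bytes (stored once)
    refs = []     # per-block digests, in original order
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # write only if unseen
        refs.append(digest)
    return store, refs

blocks = [b"alpha", b"beta", b"alpha", b"alpha"]
store, refs = dedupe(blocks)
print(len(refs), "logical blocks ->", len(store), "unique blocks stored")
# 4 logical blocks -> 2 unique blocks stored
rebuilt = [store[d] for d in refs]   # "rehydration": expand refs back to data
assert rebuilt == blocks
```

The final lines show why rehydration matters: the stored references must be expanded back into full data before it can leave deduplicated storage.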

Where Does The Cloud Fit In Backup?

The public cloud is emerging as a new storage tier for backup. It offers effectively unlimited capacity, doesn’t have to be directly managed, and comes with a predictable cost model. The public cloud is also a destination for storing copies of data for disaster recovery.

HPE SimpliVity Integrated Data Protection

HPE SimpliVity OmniCube takes a unique, highly efficient approach to data protection. OmniCube hyperconverged infrastructure combines server compute, storage, network switching, and virtualization software in a commodity x86 appliance, providing a highly scalable data center building block. Deploying multiple OmniCube systems forms a federation, and OmniCube provides unified management of the entire federation from within VMware vCenter.

OmniCube’s Data Virtualization Platform provides unparalleled data efficiency. It deduplicates, compresses, and optimizes data written to production storage – at inception – and maintains it in that optimized state for the life of the data. Data efficiency extends not only to capacity but also to performance: by writing and reading less data to and from disk, OmniCube reduces IOPS requirements and improves performance.

Data protection is integrated. Backup policies established at the virtual machine level dictate the frequency of virtual machine copies, the storage destination (local, remote site, or public cloud), the retention time, and the preference for application- or crash-consistent copies. This eliminates the need for additional backup software, backup hardware, replication, backup deduplication, and cloud gateway solutions. And, since data remains in a highly optimized state, backup, recovery, and replication of even large data sets occur rapidly and consume minimal storage capacity.
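As a rough sketch of the kind of information such a per-VM policy captures (the field names here are illustrative assumptions, not HPE’s actual API):

```python
# Hypothetical per-VM backup policy record mirroring the attributes
# described above; names and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    vm_name: str
    frequency_minutes: int    # how often copies are taken
    destination: str          # "local", "remote-site", or "cloud"
    retention_days: int       # how long each copy is kept
    app_consistent: bool      # application- vs crash-consistent copies

policy = BackupPolicy(
    vm_name="erp-db-01",
    frequency_minutes=60,
    destination="remote-site",
    retention_days=30,
    app_consistent=True,
)
print(policy.vm_name, "->", policy.destination,
      "every", policy.frequency_minutes, "min")
```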