Backup refers to the copying of data so that the additional copies can be restored after a data loss event. Backups differ from archives, and backup systems differ from fault-tolerant systems. Backups serve two primary purposes: restoring a computer to an operational state following a disaster (disaster recovery) and restoring small numbers of files after they have been accidentally deleted or corrupted.
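The second purpose above — recovering individual files after accidental deletion — can be illustrated with a minimal sketch. The function names and the timestamped-copy scheme here are illustrative, not any particular product's design:

```python
import shutil
import time
from pathlib import Path

def backup_file(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir under a timestamped name, preserving metadata."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")  # sorts lexicographically by time
    dest = backup_dir / f"{src.name}.{stamp}"
    shutil.copy2(src, dest)  # copy2 also preserves modification times
    return dest

def restore_latest(name: str, backup_dir: Path, target: Path) -> None:
    """Restore the most recent backup of `name` to `target`."""
    candidates = sorted(backup_dir.glob(f"{name}.*"))
    if not candidates:
        raise FileNotFoundError(f"no backups of {name} in {backup_dir}")
    shutil.copy2(candidates[-1], target)
```

Because each copy is timestamped rather than overwritten, older versions survive — the property that distinguishes a backup history from a single mirrored copy.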
In today’s global economy, companies are increasingly distributed. That puts IT in a precarious position, especially when it comes to backup processes. A new architectural approach allows IT to project virtual servers and data to the edge, providing for local access and performance. Learn to take advantage of this new approach to recover faster, reduce risk, and save money.
Many IT executives view cloud computing as an attractive platform for data backup. They know the cloud can help protect business-critical data and provide near-ubiquitous information access. But while a cloud-based solution is valuable, developing one in-house is tricky. That's why many organizations want to contract with a third-party cloud provider for cloud-based data backup. Read this buyer's guide to learn how to choose a provider to suit your business needs.
Are you considering in-house disaster recovery management? In the last five years, many companies have done just that. But did you know that without the proper resources in place, managing disaster recovery yourself can put a strain on your budget, your staff and your disaster preparedness? Read this IBM-Forrester global study “The Risks of ‘Do It Yourself’ Disaster Recovery” to learn the critical components that make a disaster recovery strategy successful, and the key questions you need to ask before bringing disaster recovery in-house.
Zurich Insurance suffered a major flooding incident in 2011 that put its office out of commission for almost a month. The company invoked its disaster recovery plan and moved 120 staff to IBM’s Damastown Technology Campus. Read the case study to find out how planning and working with IBM helped Zurich Insurance cope with this serious incident.
Virtualization introduces new challenges for managing your data center. Applications now compete for shared resources such as storage, CPU cycles and memory, often causing performance bottlenecks and unhappy customers. This paper details 20 metrics that indicate when capacity issues are occurring and recommends a corresponding tool that will analyze the data and resolve your VM performance issues.
Read the White Paper
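The idea of watching metrics for capacity trouble can be sketched as a simple threshold check. The metric names and threshold values below are illustrative assumptions, not the 20 metrics the paper covers:

```python
# Hypothetical per-VM metric thresholds; names and limits are illustrative only.
THRESHOLDS = {
    "cpu_ready_pct": 10.0,     # % of time a vCPU waited for a physical core
    "mem_ballooned_mb": 0.0,   # any ballooning hints at host memory pressure
    "disk_latency_ms": 20.0,   # sustained latency above ~20 ms hurts apps
}

def flag_capacity_issues(samples: dict) -> list:
    """Return the metrics whose sampled value exceeds its threshold."""
    return [metric for metric, limit in THRESHOLDS.items()
            if samples.get(metric, 0.0) > limit]
```

A real capacity-analysis tool would trend these values over time rather than test single samples, but the principle — compare contended shared resources against known-good limits — is the same.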
IOPS (I/O operations per second) is an easily understood and communicated unit of measurement, which is why it’s so widely used. Unfortunately, it’s also easy to oversimplify. IOPS describes only the number of times an application, OS or VM is reading or writing to storage each second. More IOPS means more disk I/O, and if all IOPS were created equal, we would be able to measure disk activity with IOPS alone. But they aren’t.
Read the White Paper
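One reason IOPS alone oversimplifies is that operations differ in transfer size: two workloads with identical IOPS can move very different amounts of data. A minimal sketch of that arithmetic (the block sizes are illustrative):

```python
def throughput_mbps(iops: int, block_size_kb: int) -> float:
    """Approximate throughput implied by an IOPS figure at a given I/O size."""
    return iops * block_size_kb / 1024  # MB/s

# Same IOPS figure, very different amounts of disk activity:
small_io = throughput_mbps(5000, 4)    # 4 KB operations  -> ~19.5 MB/s
large_io = throughput_mbps(5000, 64)   # 64 KB operations -> 312.5 MB/s
```

A 16x difference in data moved at the same IOPS number — which is why transfer size, read/write mix and access pattern matter alongside the raw operation count.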
For remote office protection, you can reliably get a backup and a DR copy of data at any location without the need for onsite expertise. Read this paper to learn more about the next generation of deduplication.
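The core mechanism behind deduplication can be shown in a toy sketch: split data into chunks, fingerprint each chunk, and store identical chunks only once. This is a simplified fixed-size-chunk illustration, not any vendor's implementation (production systems typically use variable-size, content-defined chunking):

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: identical chunks are stored once."""

    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # SHA-256 digest -> chunk bytes

    def write(self, data: bytes) -> list:
        """Split data into fixed-size chunks, store each unique chunk once,
        and return the list of digests needed to reassemble it."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicates are skipped
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble the original data from its chunk digests."""
        return b"".join(self.chunks[d] for d in recipe)
```

Because only unique chunks cross the wire or hit disk, this is what makes shipping a DR copy from a remote office over a thin WAN link practical.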
When rack-mounted servers first appeared on the scene in the 1990s, they offered considerable advantages over the behemoth boxes they replaced. Their small, standardized footprint went a long way toward making data centers easier to manage. In the ensuing decades, form factor size and compute power have had an inverse relationship.
Their universal standardization earned them the nickname “pizza box” servers, and it was a key driver of the scale-out computing model popular in the early 2000s. Populating a rack of eight servers and either clustering them or implementing failover from one to the other was far easier than previously possible.
As the amount of information we generate grows, and as our relationship with information grows more complex, the race intensifies to innovate new products and services that help us harness, manage, and tap into that information more easily. This paper discusses the continuing development of HP’s strategy for delivering Converged Storage that improves your business’s ability to capitalize on information. Building on a foundation of industry-standard technologies, federated scale-out software, and converged management, HP is now extending Converged Storage into new solutions and segments with an initiative that introduces the next evolution of the strategy and vision.
This IDC paper “Business Value of Blade Infrastructures” documents the considerable cost savings and improved IT infrastructure agility that come from migrating to an HP BladeSystem environment. In fact, HP BladeSystem cut data costs by 68%. Customers participating in this study were able to pay back their initial investment in just over 7 months, a significant factor given the financial constraints most IT organizations are facing.
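The payback figure follows from simple arithmetic: divide the initial outlay by the monthly savings. The dollar amounts below are illustrative assumptions, not figures from the IDC study:

```python
def payback_months(initial_investment: float, monthly_savings: float) -> float:
    """Months needed for cumulative savings to cover the initial outlay."""
    return initial_investment / monthly_savings

# Illustrative numbers only (not from the study): a $500,000 migration
# that saves $70,000 per month pays back in roughly 7.1 months.
months = payback_months(500_000, 70_000)
```

The same calculation lets an IT organization sanity-check a vendor's payback claim against its own projected savings before committing.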