The IBM i backup challenge
System i data protection has remained largely unchanged since the platform's introduction. Occasional updates have added support for new tape drives or media formats, but for the most part the status quo is an AS/400 connected to a locally attached tape drive or library. Management of those tapes is essentially manual, and there is virtually no way to replicate the backup jobs to an off-site location -- the tapes have to be physically shipped.
The challenge is that the data protection options available to the rest of the data center have expanded dramatically over the past few years. Technologies like disk-to-disk backup with data deduplication or MAID have become available. Even tape-drive technology has advanced, and many System i sites would like to support the newest LTO tape format or have the flexibility to switch to something other than an IBM tape library.
The overarching concern is that this lack of modern capability makes the AS/400 data protection strategy an island of its own. Attempts to integrate the protection of this environment into the rest of the enterprise's data protection process have been both complex and expensive. The concern is not just the inefficiency and extra cost of managing a separate process; it is also the risk that, because of the reliability of the i, data protection will be taken for granted. In the event of a failure, the recovery may not bring the system back to its most recent image because backups are not done on a regular basis.
Welcome to the future, IBM i
There have been past attempts to add System i backup support to open systems backup applications. Most were a loose collaboration between a primary backup application supplier and an AS/400 software specialist, and most met with failure: they were expensive, complex to integrate and required a high degree of operational attentiveness.
Finally, with backup virtualization technology, the opportunity to integrate the IBM i platform into the existing data protection infrastructure has become reality. Backup virtualization solutions typically run on an appliance that is highly optimized for transferring data from a backup server to a backup target. Similar to how server virtualization abstracts the OS from the hardware, backup virtualization provides a layer of abstraction between the server and the backup target. The result is the ability to back up from anything, to anything.
For this to work in the System i environment, the backup virtualization provider must be able to make the receiving end of its appliance look like an IBM Fibre Channel tape library. This is similar to how server virtualization software makes mixed pieces of server hardware appear to be the same reference platform. Once this front-end emulation is in place, the backup virtualization appliance can redirect these backup jobs to any of the modern backup targets, like MAID disk, deduplicating disk or advanced tape libraries.
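The abstraction described here can be sketched in a few lines of purely illustrative Python. Everything below is an assumption for illustration -- the class names, the emulated library model and the routing logic do not correspond to any actual appliance API; the point is only to show the front-end/back-end split the article describes.

```python
# Illustrative sketch of a backup virtualization appliance:
# the front end emulates an IBM tape library to the host,
# while writes are redirected to whatever back-end target
# is configured (dedupe disk, MAID or another vendor's tape).

class DedupeDiskTarget:
    """Hypothetical back-end target, e.g. deduplicating disk."""
    def __init__(self):
        self.stored = {}

    def store(self, job_name, data):
        # In a real appliance this is where dedupe/compression happens.
        self.stored[job_name] = data
        return f"{job_name}: {len(data)} bytes on dedupe disk"

class VirtualTapeLibrary:
    """Front end: what the System i host 'sees' on the SAN."""
    def __init__(self, emulated_model, backend):
        self.emulated_model = emulated_model  # e.g. an IBM library model
        self.backend = backend                # where data actually lands

    def write(self, job_name, data):
        # The host believes it is writing to IBM tape; the appliance
        # forwards the stream to the real target.
        return self.backend.store(job_name, data)

# The System i backup job addresses what looks like an IBM library...
vtl = VirtualTapeLibrary("IBM 3584", DedupeDiskTarget())
# ...but the data lands on modern disk-based storage.
print(vtl.write("SAVLIB-NIGHTLY", b"library contents"))
```

The design point the sketch makes is that neither side needs to know about the other: the host only ever talks to the emulated front end, so the back-end target can be swapped without touching the System i configuration.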
IBM i modernization without change
The problem with most upgrade or modernization projects is that they are time-consuming, expensive and painful. The outcome is that most environments would rather just "live with" what they have. The value of backup virtualization is that the System i environment can be integrated into the enterprise data protection process with essentially no change to the day-to-day operations of the System i environment. Even implementation of the appliance typically requires no change to the environment other than possibly pointing the backup jobs at what appears to be a different tape library.
The AS/400 administrator continues to use utilities like SAVLIB or BRMS to perform data protection tasks. The only difference is that backup data is now directed at this virtual representation of an IBM tape library. Once integrated into the backup process, backup virtualization allows the System i administrator to offload backup responsibilities to the backup team. For the backup team, operations are not affected -- this looks like just another backup job, and they can use their existing backup hardware to store data and send it offsite.
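From the IBM i side, "no change" might look like the following minimal sketch. SAVLIB is the platform's native save command; the library name PAYLIB and the device description TAPVRT01 are hypothetical placeholders -- in practice the device would be whatever device description the site configures to point at the appliance's virtual tape library.

```
/* Save a library to what the host believes is an IBM tape device;  */
/* the virtualization appliance redirects the stream to its real    */
/* back-end target (dedupe disk, MAID or another vendor's tape).    */
SAVLIB LIB(PAYLIB) DEV(TAPVRT01)
```

The administrator's procedures, and any BRMS policies built on them, stay exactly as they were; only the device they address has changed.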
Backup virtualization solutions like those from Gresham Storage or IBM also open the i up to using competitively priced tape libraries rather than being locked into the IBM offering. While the emulation presented by the backup virtualization appliance appears to be an IBM library, in reality it can be virtually any available tape library on the market. In most cases in which a new library is not purchased, the existing library used by the UNIX and Windows environments has enough capacity to handle the System i workload, further reducing costs.
The combination of letting the AS/400 administrator offload the backup process and leveraging the existing backup infrastructure should allow for a dramatic improvement in backup ROI. For example, tape drives can be shared between platforms automatically: backups can be sent to the drives from the System i backup utilities, and the drives can then be reassigned to the open systems backup application as needed. In fact, with the use of disk, backups can come from all backup applications on all platforms simultaneously and then be spooled to tape as needed.
IT budgets are going to be under intense scrutiny in 2009, and the consolidation of System i backups is an ideal way to contain the cost of the backup process while improving the reliability and recoverability of the System i environment. Administrators can focus on their core responsibilities while leveraging the strength of the backup team. The result is that backup virtualization reduces costs, increases efficiency and improves recoverability.
ABOUT THE AUTHOR: George Crump is President and Founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration and product selection.
This was first published in January 2009