
Why hard disks are important in performance tuning

Hard drives are the fastest of all auxiliary storage devices, but they are painfully slow compared with other components. Therefore, it's essential they are tuned to the utmost.

Of all the components in a computer, the auxiliary storage devices are the slowest. While the hard drives (i.e., DASD) are the fastest of all the auxiliary storage devices, they are painfully slow when compared to the speed of the other components (which operate at essentially the speed of electricity). It is, therefore, essential to make sure that your hard drives are tuned to the utmost.

The first priority is to even out the distribution of data among all available drives. As new data is written, the system automatically tries to spread the data out evenly among the drives -- subject to any special disk configurations on your system, such as auxiliary storage pools (ASPs) and preferred storage unit and contiguous storage attributes for files.

Normally this works great. But when new drives are added to your system, the system sees that there is an imbalance in the distribution of data because the new drives do not have any data on them. Consequently, almost all new data is written to the new drives.


This still seems like what we want until you consider how data is used in a typical application. Roughly speaking, about 90% of the data retrieved in a typical application is recently written data. That means the new disk drives will see a disproportionately high amount of activity, degrading application performance. To distribute disk activity evenly, we need to even out the data distribution not only at the system level (which the system will do), but also at the object level.

How to distribute data at the system level
As discussed in the previous tip, "Using the WRKDSKSTS command to increase performance," to determine the data distribution at the system level, simply execute the WRKDSKSTS command and examine the "% Used" column on the resulting display. If the numbers in this column are not roughly equal, the data on your system is not evenly distributed among the available disks. The most probable cause is the recent addition of new disk units to your system, although other factors may also contribute.
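An excerpt of the WRKDSKSTS display might look like the following sketch. (The unit numbers, disk type and figures here are invented for illustration; your display will show your own configuration.)

  WRKDSKSTS

   Unit   Type   Size (M)   % Used   I/O Rqs
      1   6718      8589     71.2       2.1
      2   6718      8589     70.8       1.9
      3   6718      8589     12.4       8.7

In this invented example, unit 3 was recently added: its "% Used" is far below that of the other units, while its I/O request rate is disproportionately high because most new data is being written to it.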

The easiest way to correct this and boost the performance of your system is to use the free-storage option of the SAVLIB command. This is done by specifying *FREE on the STG (Storage) parameter of the SAVLIB command.

Normally when objects are saved, the default value *KEEP is specified for the STG parameter. That leaves the objects intact on the system; if the objects are then restored, they are written back to the exact same storage locations they currently occupy. Specifying *FREE on the STG parameter deletes the data portion of the objects and frees the associated storage, while retaining the object descriptions on the system. When the objects are then restored, their data is written evenly distributed across the disks.

Use the following three-step procedure to accomplish this:

  1. SAVLIB LIB(*ALLUSR) STG(*KEEP) ...
    This will be your backup in case a media error is encountered. You can make two backups to be doubly safe.
  2. SAVLIB LIB(*ALLUSR) STG(*FREE) ...
    This deletes and frees the storage for all user libraries and the objects they contain.
  3. RSTLIB SAVLIB(*ALLUSR) ...
    This restores the libraries and objects while distributing the data evenly.

Warning: The steps should be done with the system in restricted state (use the ENDSYS command to put the system in the restricted state).
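Putting the steps together, the whole session might look something like this sketch. (The tape device name TAP01 and controlling subsystem QCTL are illustrative assumptions; substitute the names used on your system -- on many systems the controlling subsystem is QBASE.)

  ENDSYS OPTION(*CNTRLD)                       /* Put the system in restricted state   */
  SAVLIB LIB(*ALLUSR) DEV(TAP01) STG(*KEEP)    /* Backup copy, in case of media errors */
  SAVLIB LIB(*ALLUSR) DEV(TAP01) STG(*FREE)    /* Save again, freeing the storage      */
  RSTLIB SAVLIB(*ALLUSR) DEV(TAP01)            /* Restore; data is spread evenly       */
  STRSBS SBSD(QCTL)                            /* Restart the controlling subsystem    */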

How to re-distribute data at the object level
If, by using the DSPFILDST program discussed in the previous installment, you find large files that are unevenly distributed, you may be able to improve the performance of applications that use those files by following steps similar to those given for redistributing data at the system level. However, use the SAVOBJ command instead of the SAVLIB command, and the RSTOBJ command instead of the RSTLIB command.

You can also use the following steps, particularly if you are using referential integrity (RI) on your system (see step 3):

  1. SAVOBJ OBJ(<list of files>) STG(*KEEP) ...
  2. SAVOBJ OBJ(<list of files>) STG(*KEEP) ...
    As a backup in case of media errors.
  3. DLTF FILE(<list of files>) ...
    If you are using RI, consider specifying the *KEEP value for the RMVCST parameter on the above DLTF command. This deletes the files but keeps any referential constraints associated with them. When the files are restored in the next step, the constraints will still be defined, although you will need to re-enable them using either the CHGPFCST command or the WRKPFCST command.
  4. RSTOBJ OBJ(<list of files>) ...
    This restores the files while distributing the data evenly.

Remember to include in this process any logical files defined over the files, as well as the journal receivers used for the files. To be safe, this procedure should also be performed with the system in restricted state.
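For example, if referential constraints were preserved with RMVCST(*KEEP) on the DLTF command, a command along these lines re-enables them after the restore (MYLIB/MYFILE is an illustrative file name):

  CHGPFCST FILE(MYLIB/MYFILE) CST(*ALL) STATE(*ENABLED)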

More to come
In the next installment, we'll discuss some new OS/400 commands that automate some of the manual processes just discussed. Stay tuned.

About the author: Ron Turull is editor of Inside Version 5. He has more than 20 years' experience programming for and managing AS/400-iSeries systems.



This was last published in November 2003
