I have looked at most of the file reorganization tips on this site that drop deleted records. What I am interested in is something that will take the number of active records in a file, compare it to the file's total record capacity, and change the sizing accordingly.
For example: if a file was originally set up to hold 10,000 records but actually contains only 5, I want to downsize the record capacity to, say, 10 records so I regain the space. By the same token, if a file was originally set up at the 10,000-record mark but has been incremented many times, I want to resize it and raise the increment value so that extensions don't happen with any frequency. Is there any free code already available to do this?
There is a misconception here about the iSeries database: a file's record capacity bears no relation to the actual space used on disk. Indeed, a lot of sites set the size of their files to *NOMAX, ensuring no process will ever be stopped by a "file full" message (unless it reaches the maximum capacity defined by IBM or the disks fill up).
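As a sketch, removing the capacity cap is a one-line CHGPF command (the library and file names below are placeholders):

    CHGPF FILE(MYLIB/MYFILE) SIZE(*NOMAX)

After this change, the member grows as needed and no "file full" inquiry message is ever issued for it.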
The real problem is caused by deleted records. A deleted record uses as much space as a live one, so if you have processes that constantly create and delete records, this can become a nightmare. To check, do a DSPFD on all your files using the *MBRLIST option and drop the result into a file. If you find files with a lot of deleted records, they need to be reorganized (or cleared if they have no live records). Be careful: a reorganization may take a long time, especially if you have a lot of indexes or logical files built over the physical.

Apart from reorganizing, which is cumbersome, you can compile or change your files to reuse the space held by deleted records: REUSEDLT(*YES). Please note this is not compatible with a few features you may use in logical files, such as LIFO, FIFO or FCFO. It is also unsuitable if you need to process records in strict creation order. But there are ways to circumvent that problem, the easiest being a timestamp: add a timestamp field to the file and, instead of reading it by record number, read it by a key built on the timestamp.
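The check-and-fix sequence above can be sketched with three commands (library and file names are placeholders, and QTEMP is used here just as a convenient scratch library for the outfile):

    DSPFD FILE(MYLIB/*ALL) TYPE(*MBRLIST) OUTPUT(*OUTFILE) OUTFILE(QTEMP/MBRLIST)
    RGZPFM FILE(MYLIB/MYFILE)
    CHGPF  FILE(MYLIB/MYFILE) REUSEDLT(*YES)

Query the MBRLIST outfile for members with a high deleted-record count, reorganize (or clear) the offenders, and then set REUSEDLT(*YES) on the ones that churn records constantly so the problem does not come back.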
To come back to your question: there is (or was) a way of pre-allocating space on the machine for the records in a file. I have never used it, and I see little justification for it now that disk space is plentiful and cheap.
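If you do want to control the sizing explicitly rather than use *NOMAX, the SIZE parameter of CHGPF takes the initial number of records, the increment, and the maximum number of increments; the values below are purely illustrative:

    CHGPF FILE(MYLIB/MYFILE) SIZE(10000 5000 10)

This would allow the member to start at 10,000 records and be extended by 5,000 records up to 10 times before an operator message is issued, which addresses the "extend less frequently" half of the question.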