We're converting some old master files into a new format. These master files have approximately 58 million records each. We wrote COBOL programs to convert the format and write to the new files. To speed up the process, we split each master file into 20 parts (i.e., 20 files, spreading the records across them: records 1 to 20000 to FILE1, 20001 to 40000 to FILE2, etc.) and run the same program, in multiple jobs and after the necessary overrides, to populate the records in the new format in NEWFILE. It takes about five hours to complete all these jobs, even while using 85% of the CPU. I would like to speed up the process further. Can someone guide me on how to improve the performance? What is the best approach for conversions like this? We need to convert four such files.
The first thing to consider is the destination files. They should not be keyed, nor have any triggers. If possible, remove any logical files over them during the conversion; you will have to spend time rebuilding the logicals afterward, but the conversion itself will run faster. The second thing to consider is overriding the files to use record blocking. The third is to read and write without keys. A typical override to enable record blocking uses the OVRDBF command.
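As a sketch of that record-blocking override, the CL command below forces sequential-only processing with a blocked buffer on the output file. The file name NEWFILE is taken from the question; the block size of 1000 records is an assumed value you should tune for your record length and memory.

```
/* Force sequential-only access and block 1000 records per I/O     */
/* (assumed block size -- tune for your record length and memory). */
OVRDBF FILE(NEWFILE) SEQONLY(*YES 1000)
```

Issue the override in the same job (or routing step) before the COBOL program opens the file, since overrides are scoped to the job.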
Answer by John Brandt.