Application Performance

Recently I’ve been seeing a lot of posts on LinkedIn about application modernization and the performance gains that are being realized by using SQL and modern programming techniques.

This reminded me of an experience I had WAAAYYYY back when I was a programmer for a System/36 consulting firm.

There was a report that ran weekly, and it took (literally) HOURS to complete (somewhere in the range of 6-7 hours).

The end user asked me if I could take a look at it and see if the performance could be improved. I think they were hoping I could shave 30 minutes to an hour off the run time.

I looked at the OCL (remember, this was an S/36) and saw that the original developer had reversed the blocking factors on the files that drove the report.

They had coded the DBLOCK (data block size) on the input primary file very small, and the IBLOCK (index block size) on the randomly accessed master files very large.

This meant that when the program asked for more data from the main data file, it only got a small amount… and every time it asked for records from the master files (which were accessed dynamically), it pulled in a huge block of data.

I reversed the DBLOCK & IBLOCK settings on the FILE statements in the OCL and was able to get the report's run time down to 25 minutes.

By making the DBLOCK very large, I told the system that when it retrieved data from the primary file it should pull as much data as possible into the buffer… so subsequent reads of the primary file didn't have to go back to the disk. By making the IBLOCK relatively small, I told the system to fetch only a small amount at a time, because the next record it needed probably wouldn't be in the buffer anyway.
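
For anyone who never worked with S/36 OCL, the change lived entirely in the FILE statements. Here's a rough sketch of the before and after (the file names and block values below are purely illustrative, not the actual job's):

  // * Before: tiny blocks for the sequentially read primary file,
  // * huge blocks for the randomly accessed master files
  // FILE NAME-ORDHIST,DISP-SHR,DBLOCK-10
  // FILE NAME-CUSTMAST,DISP-SHR,IBLOCK-100
  // FILE NAME-ITEMMAST,DISP-SHR,IBLOCK-100

  // * After: big DBLOCK so sequential reads come out of the buffer,
  // * small IBLOCK since the next random key is rarely in the buffer anyway
  // FILE NAME-ORDHIST,DISP-SHR,DBLOCK-100
  // FILE NAME-CUSTMAST,DISP-SHR,IBLOCK-10
  // FILE NAME-ITEMMAST,DISP-SHR,IBLOCK-10

The general principle still applies today: block big for sequential access, block small for random access.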

I presented the modified report (which had ZERO changes to the report logic itself) to the user and told them the new run time.

Unfortunately, the user did not believe that the report could be correct after I cut the run time from 6 hours to 25 minutes. I tried to explain the changes I had made, but they were still skeptical.

I ended up having to run the old report and the new report (again, zero changes to the RPG that created the report) and do a sampling of calculations to validate that the results were the same and accurate.

I’m not sure why the original developer used the blocking settings they did… perhaps they didn’t understand what the settings did? Maybe it was just a typo. Or maybe it was deliberate, and they planned to ‘save the day’ later by miraculously fixing the report’s performance.
