We are using NetServer/TCP/IP on our AS/400, which is on a WAN. We also try to 'bring down' the users while we do our backups. To be sure the users are disconnected, we end some subsystems as well.
The down sequence is something like:
ENDTCP OPTION(*CNTRLD) DELAY(60) ENDSVR(*YES)
ENDSBS SBS(QINTER) OPTION(*IMMED)
ENDSBS SBS(QCMN) OPTION(*IMMED)
ENDSBS SBS(QSERVER) OPTION(*CNTRLD) DELAY(10)
ENDSBS SBS(QSNADS) OPTION(*CNTRLD) DELAY(10)
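Spelled out as a CL program, with the delays we have in there today shown as DLYJOB commands (the delay values and the MONMSG error handling here are only illustrative, not our exact settings), it is roughly:

PGM /* nightly shutdown sketch */
  ENDTCP OPTION(*CNTRLD) DELAY(60) ENDSVR(*YES)
  MONMSG MSGID(CPF0000) /* tolerate TCP/IP already ended */
  DLYJOB DLY(60) /* illustrative wait for the TCP/IP servers to end */
  ENDSBS SBS(QINTER) OPTION(*IMMED)
  MONMSG MSGID(CPF0000)
  ENDSBS SBS(QCMN) OPTION(*IMMED)
  MONMSG MSGID(CPF0000)
  ENDSBS SBS(QSERVER) OPTION(*CNTRLD) DELAY(10)
  MONMSG MSGID(CPF0000)
  ENDSBS SBS(QSNADS) OPTION(*CNTRLD) DELAY(10)
  MONMSG MSGID(CPF0000)
  DLYJOB DLY(120) /* illustrative wait for the subsystem jobs to finish ending */
ENDPGM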
The up sequence is something like:
STRSBS for QSERVER, QINTER, QCMN, and QSNADS
STRHOSTSVR SERVER(*ALL) RQDPCL(*TCP)
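In CL it is roughly the following (STRTCP is in there because ENDTCP ends all of TCP/IP, not just the servers; the DLYJOB value is one of the delays we are unsure about):

PGM /* morning startup sketch */
  STRSBS SBSD(QSERVER)
  MONMSG MSGID(CPF0000) /* tolerate subsystem already active */
  STRSBS SBSD(QINTER)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QCMN)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QSNADS)
  MONMSG MSGID(CPF0000)
  STRTCP
  MONMSG MSGID(CPF0000)
  DLYJOB DLY(30) /* illustrative wait for TCP/IP interfaces to come up */
  STRHOSTSVR SERVER(*ALL) RQDPCL(*TCP)
ENDPGM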
We have a couple of delays in the process, but we aren't sure if they are long enough or where they should be. It seems to be working, but there are messages in the job logs and in the QTCP message queue that indicate it could be cleaner. We also get 39 job logs for QP0ZSPWP every day.
Is there a better sequence, or are we missing something? What we want is as smooth a transition as possible--users want minimum downtime.
Without knowing what messages are appearing in the job logs that indicate it could be a cleaner shutdown, I'm not sure I can give you an answer on what to do differently. As long as things seem to be running OK, I wouldn't be overly concerned about it.
Here are a couple of suggestions on minimizing downtime for users while doing backups.
First of all, separate your volatile objects (data files, data areas, journals, etc.) from your non-volatile objects (programs, display files, printer files, etc.). If you want to keep users down while you do a backup, you only need to keep them down while the volatile objects are being saved. If you take this approach, you will probably want to use the save-while-active feature on the rest of your libraries, especially if they contain display files that users may have open.
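For example, if the volatile objects live in their own library, the save step could look like this (the library names APPDTA and APPPGM and the tape device TAP01 are made up for illustration):

ENDSBS SBS(QINTER) OPTION(*IMMED)           /* users off for the volatile save only */
SAVLIB LIB(APPDTA) DEV(TAP01)               /* data files, data areas, journals */
STRSBS SBSD(QINTER)                         /* let users back on */
SAVLIB LIB(APPPGM) DEV(TAP01) SAVACT(*LIB)  /* programs, display files: save while active */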
Also consider using save-while-active across the board, or a weekly full SAVLIB combined with a daily SAVCHGOBJ strategy.
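With the weekly/daily approach, the daily step could be a SAVCHGOBJ that uses the last full SAVLIB as its reference point, for example (library and device names again made up):

SAVCHGOBJ OBJ(*ALL) LIB(APPDTA) DEV(TAP01) REFDATE(*SAVLIB)

This saves only the objects that changed since the last SAVLIB of the library, which keeps the daily window short; the trade-off is that a recovery means restoring the weekly save plus the most recent daily save.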
For more information on backups, see the OS/400 Backup and Recovery guide.