
Journaling production files

I recently read an article in which someone said they journaled all of their production files. Is there a way to do this by library? Creating a journal and receiver for each production file individually could take weeks. Is there a way to estimate the space required to journal all of your production files? And is there a way to estimate the processor drain on the system when journaling all of your production files?
The preferred way is to do this by library, and the steps for setting it up are quite easy. Here is how I do it on my system. Note: I repeat this process for each individual application library I want to start journaling.

1. Create a message queue to receive journal messages (the library and object names in these commands are examples):

CRTMSGQ MSGQ(MYAPPLIB/JRNMSGQ) TEXT('Journal Messages')

2. Create a journal receiver (Tip: Put receivers in a different library):

CRTJRNRCV JRNRCV(RCVLIB/APPRCV0001) TEXT('Journal Receiver')
3. Create a journal referencing the receiver just created:

CRTJRN JRN(MYAPPLIB/APPJRN) JRNRCV(RCVLIB/APPRCV0001) MNGRCV(*SYSTEM) MSGQ(MYAPPLIB/JRNMSGQ)

MNGRCV(*SYSTEM) lets the system manage the receiver names.

4. Start journaling for all files in the library. (Note: STRJRNLIB is a TAATOOL. If you don't have the TAATOOLS on your system, you will have to start journaling manually using the appropriate system commands:)

STRJRNAP Start Journal Access Path
STRJRNOBJ Start Journal Object
STRJRNPF Start Journal Physical File
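Without the TAATOOLS, the manual equivalent of step 4 is to run STRJRNPF against each physical file in the library. A sketch, with example library, file and journal names (IMAGES(*BOTH) records before and after images, which you will want if you ever need to apply or remove journaled changes):

```
STRJRNPF FILE(MYAPPLIB/CUSTMAST) JRN(MYAPPLIB/APPJRN) IMAGES(*BOTH)
STRJRNPF FILE(MYAPPLIB/ORDHDR)   JRN(MYAPPLIB/APPJRN) IMAGES(*BOTH)
STRJRNPF FILE(MYAPPLIB/ORDDTL)   JRN(MYAPPLIB/APPJRN) IMAGES(*BOTH)
```

For a library with many files, you would typically generate these commands from a list of the physical files in the library rather than type them by hand.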


You might notice that in step 4 I also specified DLTRCV(*NO). This is because I prefer to delete journal receivers myself after I have saved them to tape; I don't want the system to automatically delete them for me.

Journaling your production objects is a good way to enhance your ability to recover in the event of an outage. However, if you lose the system and you haven't saved your receivers to tape, you're out of luck. Save your receivers to tape often and move those tapes off site along with your regular backup tapes. Better yet, use "remote journaling" to replicate your journal receivers to another system in real time. (That's a whole topic I won't go into right now, though.)

Another advantage to journaling data is that you have a detailed audit record of everything that has happened to your production data. This can be an invaluable resource to tap when you are trying to figure out "what happened" when something goes wrong.

The performance penalty for journaling has decreased significantly with just about every new release of OS/400. In today's environments, with faster disk drives and processors, I'd venture to say you won't even notice that you have turned journaling on.

I have also created a separate ASP (Auxiliary Storage Pool) for my production journal receivers. This moves the disk I/O associated with journaling to a separate set of disk arms, and it provides additional protection for your data. For example, if you were unfortunate enough to lose two disk drives in a RAID set that contained your production data, you would have to recover that data from tape. If your receivers are in a separate ASP, then after restoring your data you could apply the journaled transactions, fully recovering your production application data to the point of failure.

You ask, "How much disk space will this take?" It all depends on how "active" your data is. If you have 2,500 changes a day to a journaled set of objects, very little disk space is required. If you have 25,000,000 changes a day, like we do, you could eat up 60-100GB of disk space every few days! You do have control over how much disk space is used, though: back up your receivers to tape often and then remove them from disk using the DLTJRNRCV command.
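As a rough sanity check on those numbers (the per-entry size here is an assumption; actual journal entry size depends on your record lengths and on whether you capture before and after images):

```
25,000,000 changes/day x ~1,000 bytes per journal entry ~= 25 GB/day
25 GB/day x 3 days = 75 GB   (in line with the 60-100GB figure above)
```

To estimate for your own system, multiply your daily change volume by your typical record length plus journal entry overhead.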

Good luck! I hope you find this information to be helpful.

