Hi Ed,
Probably the most truthful answer to your question is that there is no universally 'too big' or 'too small' setting; as always, it all depends on your requirements.
My rule(s) of thumb in your situation would be:
1. Configure the LOGFILSIZ and LOGPRIMARY values to provide enough transaction log space for the usual (or expected?) workloads. If not sure, configure more rather than less.
2. Configure the LOGSECOND as large as the LOG file system space permits (this space normally won't/shouldn't be used - unless some very long, or large, or stuck transaction clogs up the LOG space).
3. With (1) in mind, I'd rather go for a larger file size than a larger number of primary log files (fewer files meaning a "cleaner" logdir, in my mind - but this is of course debatable).
4. However, if archiving the log files to Commvault is the only method of ensuring the required RPO (Recovery Point Objective) can be met in a Disaster Recovery situation, then it probably makes sense to keep the individual log files smaller. Smaller files fill up (and close!) quicker and are consequently archived [to the archive device] more often, which allows for a smaller RPO.
Otherwise, performance-wise, with modern storage devices (SSD, caching...), I don't think there is much difference (if any) in handling small(er) vs. large(r) log files.
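As a quick sanity check when juggling these three parameters, the upper bound on active log disk space is LOGFILSIZ (in 4 KB pages) x (LOGPRIMARY + LOGSECOND). A minimal sketch (the numbers below are purely illustrative, not recommendations):

```python
# Sketch: upper bound on Db2 active log disk usage.
# LOGFILSIZ is specified in 4 KB pages; values here are illustrative only.
LOG_PAGE_SIZE = 4096  # bytes per log page in Db2 LUW

def total_log_space_bytes(logfilsiz: int, logprimary: int, logsecond: int) -> int:
    """Maximum disk space the active logs can occupy (primary + secondary)."""
    return logfilsiz * LOG_PAGE_SIZE * (logprimary + logsecond)

# Example: 10240-page log files (40 MiB each), 13 primary, 50 secondary
space = total_log_space_bytes(10240, 13, 50)
print(space / (1024 ** 3))  # -> 2.4609375 GiB
```

This is the figure to check against the free space on the LOG file system (point 2 above), since with secondary logs fully allocated the database can legitimately use all of it.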
That's my two cents on the subject (all open for debate :-), hope you will find it helpful!
Regards, Damir
------------------------------
Damir Wilder
Senior Consultant
Triton Consulting
London
------------------------------
Original Message:
Sent: Fri November 22, 2024 09:35 AM
From: Edward Spencer
Subject: Determination of Log File Parameters
Thank you, but unfortunately I have read this information already, and it is basically telling me the same thing. And, yes, I do get it, there are some things I need to figure out as a DBA, but this is for databases I did not create, belonging to COTS products I do not work with. You would think the product vendor would offer some guidelines on the best values to use, but they offer no advice on it.
Currently the databases use circular logging, but we need to set them up for Commvault backups, which requires me to change everything.
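For anyone following along, switching from circular to archive logging is done via the log archive method. A hedged sketch (SAMPLE and the DISK: path are placeholders; Commvault normally plugs in via its own VENDOR: library, per its documentation):

```shell
# Enable archive logging (placeholder DISK: target shown; Commvault uses VENDOR:)
db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archlogs/

# The database then goes into BACKUP PENDING - take a full offline backup
db2 backup db SAMPLE
```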
Again, Thanks for the reply.
Ed
------------------------------
Edward Spencer
Original Message:
Sent: Fri November 22, 2024 03:19 AM
From: Jan Nelken
Subject: Determination of Log File Parameters
Hi Edward,
You may want to consider "infinite logging" - but this simply shifts the pain elsewhere (to the file system); the only good way to "solve the problem" is to find and fix the application that requires too much log space.
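For reference, "infinite logging" is enabled by setting LOGSECOND to -1; it requires archive logging to be active already. A sketch (SAMPLE is a placeholder database name):

```shell
# Infinite logging: log files beyond LOGPRIMARY are allocated on demand
# (requires LOGARCHMETH1 to be configured first)
db2 update db cfg for SAMPLE using LOGSECOND -1
```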
FYI - check also:
https://www.ibm.com/docs/en/db2/12.1?topic=logging-configuration-parameters
------------------------------
Jan Nelken
Original Message:
Sent: Thu November 21, 2024 10:45 AM
From: Edward Spencer
Subject: Determination of Log File Parameters
I hope this does not come across as a lame question, but ever since I started working with Db2 LUW, I have been taught to just make a best guess when setting the LOGFILSIZ, LOGPRIMARY and LOGSECOND parameters. That has never seemed right to me, and I have been researching this for some time and cannot seem to find anything that gives a good explanation of how to determine the most effective settings for these parameters.
For LOGFILSIZ, how big is too big (or too small)? Should I be looking at anything in particular about my database to determine the best size?
For the Primary and Secondary numbers, will setting them too high or too low cause the database to work less efficiently? (Yes, I know setting all of these parameters too low can lead to transactions failing to be written, but why not just set them all to the MAX?)
I don't feel the idea of just making them bigger until there are no more issues is a good practice, and I am hoping there is some sort of rule or formula to help in setting these parameters.
------------------------------
Edward Spencer
------------------------------