Db2

  • 1.  Determination of Log File Parameters

    Posted 3 days ago

    I hope this does not come across as a lame question, but ever since I started working with Db2 LUW, I have been taught that the way to set the LOGFILSIZ, LOGPRIMARY, and LOGSECOND parameters is to just make a best guess.  That has never seemed right to me, and after researching the subject for some time I cannot find anything that gives a good explanation of how to determine the most effective settings for these parameters.

    For LOGFILSIZ, how big is too big (or too small)?  Should I be looking at anything in particular about my database to determine the best size?

    For the primary and secondary numbers, will setting them too high or too low cause the database to work less efficiently?  (Yes, I know that setting any of these parameters too low can prevent transactions from being written, but why not just set them all to the MAX?)

    I don't feel that just making them bigger until there are no more issues is good practice, and I am hoping there is some sort of rule or formula to help in setting these parameters.



    ------------------------------
    Edward Spencer
    ------------------------------


  • 2.  RE: Determination of Log File Parameters

    Posted 2 days ago

    Hi Edward,

    You may want to consider "infinite logging" - but this simply shifts the pain elsewhere (to the file system); the only good way to "solve the problem" is to find and fix the application that requires too much log space.
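
    As a minimal sketch (the database name "mydb" and the archive path are placeholders): "infinite logging" means setting LOGSECOND to -1, which in turn requires that archive logging be configured.

    ```shell
    # Archive logging must be enabled before LOGSECOND can be set to -1
    db2 update db cfg for mydb using LOGARCHMETH1 "DISK:/db2/archlogs"
    # -1 = infinite logging: secondary logs are allocated as needed,
    # bounded only by file system space
    db2 update db cfg for mydb using LOGSECOND -1
    ```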

    FYI - check also:
    https://www.ibm.com/docs/en/db2/12.1?topic=logging-configuration-parameters



    ------------------------------
    Jan Nelken
    ------------------------------



  • 3.  RE: Determination of Log File Parameters

    Posted 2 days ago

    Thank you, but unfortunately I have read that information already, and it is basically telling me the same thing.  And yes, I do get that there are some things I need to figure out as a DBA, but these are databases I did not create, belonging to COTS products I do not work with.  You would think the vendor would offer some guidelines on the best numbers to use, but they offer no advice on it.

    Currently the databases are set up for circular logging, but we need to set them up for Commvault backups, which requires changing everything.
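
    For reference, the switch from circular to archive logging looks roughly like this (the database name and paths are placeholders; Commvault plugs in through a VENDOR: archive method, and its documentation gives the exact library path):

    ```shell
    # Setting LOGARCHMETH1 away from OFF switches the database from
    # circular to archive logging.  A DISK: target is shown here for
    # illustration; Commvault would use a VENDOR: target instead.
    db2 update db cfg for mydb using LOGARCHMETH1 "DISK:/db2/archlogs"
    # The database is now in backup-pending state; an offline backup
    # is required before it can be used again.
    db2 backup db mydb to /db2/backups
    ```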

    Again, Thanks for the reply.

    Ed



    ------------------------------
    Edward Spencer
    ------------------------------



  • 4.  RE: Determination of Log File Parameters

    Posted 2 days ago

    Db2 has the AUTOCONFIGURE command, which you can use as a starting point.
    It requires some basic knowledge of your workload.
    For example, you can run it as follows to get recommendations regarding logging.

    db2 connect to mydb
    db2 autoconfigure using workload_type mixed num_stmts 10 tpm 6000 num_remote_apps 100 apply none | grep -E '^ Description|^ -|\(LOG'

    In reality you should adjust these suggestions against the overall log space needed (from your experience of running this or other systems with a similar workload).
    Storage managers usually don't like archiving logs too often.  For example, if after applying these suggestions you find that log files are being closed every 1-5 minutes, you should increase LOGFILSIZ until the close rate is at least one file every 10-15 minutes.
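
    That sizing rule can be sketched as simple proportional scaling (all numbers here are hypothetical):

    ```shell
    current_logfilsiz=1024   # current LOGFILSIZ, in 4 KiB pages (hypothetical)
    observed_close_min=2     # observed minutes between log file closes
    target_close_min=10      # desired close rate per the advice above
    # Scale the log file size by the ratio of target to observed close rate
    new_logfilsiz=$(( current_logfilsiz * target_close_min / observed_close_min ))
    echo "LOGFILSIZ -> ${new_logfilsiz}"   # LOGFILSIZ -> 5120
    ```

    The new value would then be applied with `db2 update db cfg for <db> using LOGFILSIZ 5120` - which, as noted below, still requires a database deactivation to take effect.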

    P.S.:
    Note that LOGFILSIZ & LOGPRIMARY still (!!!) cannot be changed online (frankly, a big shame for Db2 for LUW).



    ------------------------------
    Mark Barinstein
    ------------------------------



  • 5.  RE: Determination of Log File Parameters

    Posted 2 days ago

    Hi Ed,

    Probably the most truthful answer to your question is that there is no universally 'too big' or 'too small' setting - as always, it all depends on your requirements.

    My rule(s) of thumb in your situation would be:
    1. Configure the LOGFILSIZ and LOGPRIMARY values to provide enough transaction log space for the usual (or expected) workloads.  If not sure, configure more rather than less.
    2. Configure LOGSECOND as large as the LOG file system space permits (this space normally won't/shouldn't be used - unless some very long, large, or stuck transaction clogs up the LOG space).
    3. With (1) in mind, I'd rather go for a larger file size than a larger number of primary log files (fewer files meaning a "cleaner" logdir - in my mind - but this is of course debatable).
    4. However, if archiving the log files to Commvault is the only method of ensuring the required RPO (Recovery Point Objective) can be met in a Disaster Recovery situation, then it probably makes sense to keep the individual log files smaller, so that they fill up (and close!) quicker and consequently get archived [to the archive device] more often, which allows for a smaller RPO.
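
    Rules (1)-(3) can be sanity-checked against the log file system: the worst-case active log footprint is LOGFILSIZ x 4 KiB x (LOGPRIMARY + LOGSECOND), and it should fit in the log path with headroom.  A sketch with placeholder values:

    ```shell
    logfilsiz=8192        # LOGFILSIZ, in 4 KiB pages (placeholder)
    logprimary=20
    logsecond=60
    fs_free_kib=4194304   # free space on the log file system, 4 GiB (placeholder)
    # Worst case: every primary and secondary log file fully allocated
    worst_case_kib=$(( logfilsiz * 4 * (logprimary + logsecond) ))
    echo "worst case: $(( worst_case_kib / 1024 )) MiB"   # worst case: 2560 MiB
    if [ "$worst_case_kib" -lt "$fs_free_kib" ]; then
        echo "fits in the log file system"
    else
        echo "too large for the log file system"
    fi
    ```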

    Otherwise, and performance-wise, with modern storage devices (SSD, cache...), I don't think there is much difference (if any) in handling smaller vs. larger log files.

    That's my two cents on the subject (all open for debate :-), hope you will find it helpful! 

    Regards, Damir



    ------------------------------
    Damir Wilder
    Senior Consultant
    Triton Consulting
    London
    ------------------------------



  • 6.  RE: Determination of Log File Parameters

    Posted 2 days ago

    Edward,

    We run Db2 Enterprise Edition v11.5.8.0 on Linux and use Commvault for backups.  Here is what I have set "globally" across all my servers for those parameters.  I don't really know why these values were selected, but they work for us.

    db2 get db cfg show detail

    Log file size (4KB)                         (LOGFILSIZ) = 1024                       1024
    Number of primary log files                (LOGPRIMARY) = 13                         13
    Number of secondary log files               (LOGSECOND) = 12                         12
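
    For what it's worth, those settings work out to a fairly small active log footprint, which can be checked with a quick calculation:

    ```shell
    logfilsiz=1024   # 4 KiB pages, per the cfg output above
    logprimary=13
    logsecond=12
    # 1024 pages x 4096 bytes = size of one log file in bytes
    per_file_mib=$(( logfilsiz * 4096 / 1048576 ))
    total_mib=$(( per_file_mib * (logprimary + logsecond) ))
    echo "${per_file_mib} MiB per log file, ${total_mib} MiB total"
    # 4 MiB per log file, 100 MiB total
    ```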



    ------------------------------
    Matthew Shalberg
    ------------------------------