Planning Analytics with Watson

Managing Transaction Logs

By Errol Brandt posted Thu July 29, 2021 03:00 AM

  


Following on from last month's post on managing server logs, here is a short post on managing the transaction logs.

I must admit, I rarely refer to the transaction logs these days. In the early days I relied on them when a user complained that 'somebody changed my number', but with better security and greater user familiarity, this is now a rare occurrence. Apart from hunting down unintended changes, I've also used transaction log data to determine the best time to schedule jobs. There's a four (or five) hour time difference between Auckland and Perth, so the log files can help me understand what kind of work people are doing at different times of the day.

Christoph Hein kindly shared an article by Yuri Kudryavcev explaining how to process the transaction logs using AWS Athena. I've not yet had a chance to try this myself, but I think the approach is interesting because the logs contain a lot of valuable information:

  • Date and time the change was made
  • Name of the client who made the change
  • Whether the new data is simple data (N) or string data (S)
  • Value before the change
  • Value after the change
  • Name of the cube in which the change was made
  • Elements that identify the cell that changed
It's good to have such information available, but the transaction log files can become enormous. Even with a diligent approach, using CubeSetLogChanges at the start and end of a TI process and restricting the number of cubes with logging enabled in the }CubeProperties cube, it's easy for the transaction files to consume a lot of space if they are left unmanaged.
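For anyone unfamiliar with that pattern, here is a minimal sketch. CubeSetLogChanges is the standard TI function for this; the cube name 'Sales Plan' is just a placeholder:

# Prolog tab: turn transaction logging off before the bulk load
# ('Sales Plan' is a placeholder cube name)
CubeSetLogChanges( 'Sales Plan', 0 );

# ... the Data tab loads the cube here ...

# Epilog tab: turn transaction logging back on
CubeSetLogChanges( 'Sales Plan', 1 );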

My preference is to keep only a week's worth of transactions before deleting the files. Perhaps once I've had a chance to test Yuri's idea I will load them into an external database before deletion, but I don't have much need for files older than this.
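If you did want to archive the files before deleting them, one option, assuming a Windows server and a purely hypothetical archive share, is to copy each file away with ExecuteCommand just before the ASCIIDelete call in the process below:

# Hypothetical archive step: copy the log file to an archive share before deletion
# (assumes a Windows server; \\archive\tm1logs\ is a placeholder path)
sArchivePath = '\\archive\tm1logs\';
ExecuteCommand( 'cmd /c copy "' | sLogFile | '" "' | sArchivePath | '"', 1 );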

If you would like to implement something similar on your database, here's what you'd need to do:

1. Create a deletion process

I delete the transaction logs using a process called 'sys.file.logs.delete'. It takes a single parameter, pLogRetentionDays, and deletes files older than this number of days.

#==============================================================================================
# sys.file.logs.delete
# This process deletes the tm1s*.log files on a nightly basis to prevent drive overflow.
# Parameter pLogRetentionDays specifies the number of days of files to retain.
#==============================================================================================

# Constants

cLogDirectory = GetProcessErrorFileDirectory;

# Variables

nToday = NOW();
sSearchString = 'tm1s*';

#==============================================================
# Process and Delete Transaction Logs
#==============================================================

sFile = WildcardFileSearch( cLogDirectory | sSearchString, '' );

WHILE( sFile @<> '' );

    # Found a file - retrieve its date code. Archived transaction logs
    # are named tm1sYYYYMMDDHHMMSS.log, so the date starts at character 5

    sFileYear  = SUBST( sFile, 5, 4 );
    sFileMonth = SUBST( sFile, 9, 2 );
    sFileDay   = SUBST( sFile, 11, 2 );

    nFileDate = DAYNO( sFileYear | '-' | sFileMonth | '-' | sFileDay );

    IF( nFileDate = 0 );
        # Not a dated log file (e.g. the active tm1s.log) - ignore it
    ELSE;

        # Calculate the age of the file in days

        nDays = nToday - nFileDate;

        # Delete if the age is greater than the retention days.
        # Additional processing steps could be added here before the file is deleted.

        IF( nDays > pLogRetentionDays );

            sLogFile = cLogDirectory | sFile;

            # Insert any additional processing logic here,
            # e.g. a step to copy the file to another server, or a step
            # to scan the log file for key information

            # Now delete the file

            ASCIIDelete( sLogFile );

            # Log the deletion

            sMessage = sLogFile | ' was deleted';
            LogOutput( 'info', sMessage );

            # Reset the file string so the wildcard search restarts from the
            # beginning of the directory - resuming the search from a file
            # that no longer exists is unreliable

            sFile = '';

        ENDIF;

    ENDIF;

    # Look for the next file

    sPriorFile = sFile;
    sFile = WildcardFileSearch( cLogDirectory | sSearchString, sPriorFile );

END;


This code is relatively straightforward. It loops through the log directory and works out the age of each tm1s file from the date code in its name. A file is deleted if it is older than the number of retention days specified by the parameter. Note that sFile is reset to an empty string after each deletion so that WildcardFileSearch restarts from the beginning of the directory rather than resuming from a file that no longer exists.

2. Overnight Chore

Once the process has been created and is working as intended, it's just a matter of adding it to the overnight chore. Remember to specify the number of days to keep. I use 7, but you can make this whatever you want.
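If you would rather drive it from a parent process than set the parameter in the chore itself, a one-line call like this would do the same job (7 being my own retention default):

# Run the cleanup process with a 7-day retention period
ExecuteProcess( 'sys.file.logs.delete', 'pLogRetentionDays', 7 );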

Conclusion

Transaction logs contain a lot of valuable information, and Yuri's idea of storing and processing the logs has merit.  This, however, does not diminish the need to manage the files and ensure they do not overflow the drives. 

Rather than waiting for a phone call from the server administrator, I prefer to automate the cleanup through a small nightly job.

I've shared the code that I use, but I'm sure there are other approaches. I'd love to hear what others are doing, especially if you've found a way to harvest information about user behaviour.

