Original Message:
Sent: Fri October 11, 2024 03:58 PM
From: Vlad Didenko
Subject: Prevent a TI Process from Running Multiple Instances
Apologies for the confusion earlier; I was thinking of CubeSaveData instead of SaveDataAll. You're absolutely right that we want to avoid unnecessary locking.
To handle bulk data updates efficiently, you can temporarily remove the cube rule with CubeRuleDestroy before processing updates, and then restore it afterward using RuleLoadFromFile.
Disabling and re-enabling cube logging will also improve performance during the updates.
I would recommend using a wrapper process where you:
- Disable cube logging with CubeSetLogChanges(cub, 0) to reduce overhead during data processing.
- Use a CMD or PowerShell command (via ExecuteCommand) to make a backup copy of the current RUX file.
- Remove the cube rule with CubeRuleDestroy(cub).
- Run your data update process using ExecuteProcess().
- Once the processing is complete, restore the cube rule with RuleLoadFromFile(cub, 'PATH_TO_COPIED_RUX_FILE').
- Finally, re-enable cube logging with CubeSetLogChanges(cub, 1).
Of course, keep in mind that the changes will not appear in the transaction log while logging is disabled, so this may not be an option for you.
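The steps above could be sketched as a single wrapper Prolog. The cube name, file paths, and child process name below are illustrative assumptions, not from the original post:

```
# Wrapper process Prolog (sketch). 'Sales', the RUX paths, and
# 'Load.Sales.Data' are illustrative assumptions.
sCube    = 'Sales';
sRuxPath = 'D:\TM1\Data\Sales.RUX';     # live rules file (assumed path)
sBackup  = 'D:\TM1\Backup\Sales.RUX';   # backup copy (assumed path)

# 1. Disable cube logging to reduce overhead during the load
CubeSetLogChanges( sCube, 0 );

# 2. Back up the current RUX file; the second argument (1) waits
#    for the copy to complete before continuing
ExecuteCommand( 'cmd /c copy /y "' | sRuxPath | '" "' | sBackup | '"', 1 );

# 3. Remove the cube rule so the bulk load skips rule evaluation
CubeRuleDestroy( sCube );

# 4. Run the bulk data update
ExecuteProcess( 'Load.Sales.Data' );

# 5. Restore the cube rule from the backup copy
RuleLoadFromFile( sCube, sBackup );

# 6. Re-enable cube logging
CubeSetLogChanges( sCube, 1 );
```

Because restore and re-enable happen in the same wrapper, the rule and logging come back even if the child process itself logs errors; you may still want to guard against the wrapper aborting mid-way.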
I hope this helps!
------------------------------
Vlad Didenko
Founder at Succeedium
TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
https://succeedium.com/teamone/
Succeedium Planning Analytics Cloud Extension
https://succeedium.com/space/
Original Message:
Sent: Fri October 11, 2024 02:23 PM
From: Asgeir Thorgeirsson
Subject: Prevent a TI Process from Running Multiple Instances
Thanks for the input @Vlad Didenko!
I use SaveDataAll cautiously, just once per night right before the daily backups, for two reasons: it can cause lock contention, and it affects users' ability to roll back their latest manual inputs.
I'd like to explore or learn more about your suggestion to detach cube rules with RuleLoadFromFile
during bulk updates. It sounds like a great way to optimize performance. Could you share more details or examples of how you've used this approach?
Appreciate the insights!
------------------------------
Asgeir Thorgeirsson
Financial Solutions Engineer
Icelandair
Reykjavik
+3548930750
Original Message:
Sent: Fri October 11, 2024 11:21 AM
From: Vlad Didenko
Subject: Prevent a TI Process from Running Multiple Instances
The best option depends on your specific requirements. If you need to run the process several times in sequence, then using the "Synchronized" option is recommended. In other cases, I have used a system cube to store the statuses and write logs, using "SaveDataAll" without any problems. Another simple solution is to use a text file as a flag; you just need to manage the logic so the file is deleted even if the process fails, which can be done in a wrapper process.
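For the "Synchronized" option, a minimal sketch (the lock name is an arbitrary illustrative string; every process that passes the same name is serialized against the others):

```
# First statement of the Prolog: all processes that call
# Synchronized with the same lock name run one at a time.
# 'DailyLoadLock' is an arbitrary illustrative name.
Synchronized( 'DailyLoadLock' );

# ... the serialized work follows here ...
```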
When performing extensive bulk cube updates, you may want to consider detaching the cube rules (using RuleLoadFromFile) and disabling cube logging. Then you can re-enable the rules and logging once the load is complete.
------------------------------
Vlad Didenko
Founder at Succeedium
Original Message:
Sent: Fri October 11, 2024 04:12 AM
From: Frederic Arevian
Subject: Prevent a TI Process from Running Multiple Instances
If only one process is involved, Synchronized( GetProcessName() ); is heavy-handed. Using a cube is not a good idea, as it places locks on the cube, and in my testing the write to the cube is not always detected. The simplest approach is to write a file somewhere and delete it at the end of the process: if the file exists, you stop the process, or wait inside it until the file is gone. Write the file via ExecuteCommand so you don't have to wait for the end of the Prolog for the file to appear.
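This file-flag guard might look like the sketch below in a TI Prolog, assuming an illustrative flag path and a Windows server (the shell commands would differ on Linux):

```
# Flag file path: an illustrative assumption.
sFlag = 'D:\TM1\Data\nightly_load.flag';

# If the flag already exists, another instance is running: stop here.
IF( FileExists( sFlag ) = 1 );
  ProcessQuit;
ENDIF;

# Create the flag via ExecuteCommand so it appears on disk
# immediately, without waiting for the Prolog to finish.
ExecuteCommand( 'cmd /c echo locked > "' | sFlag | '"', 1 );

# ... main work here, or an ExecuteProcess call to the real process ...

# At the end (e.g. in the Epilog), remove the flag so the next run can start.
ExecuteCommand( 'cmd /c del "' | sFlag | '"', 1 );
```

As noted earlier in the thread, the weak point is cleanup: if the process aborts before deleting the flag, the next run is blocked until the file is removed manually or by a wrapper.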
Regards
------------------------------
Frederic Arevian
Original Message:
Sent: Thu October 10, 2024 10:30 AM
From: Asgeir Thorgeirsson
Subject: Prevent a TI Process from Running Multiple Instances
Hey everyone,
I'm trying to figure out how to prevent a TI process from running if another instance of the same process is already in progress. I want to avoid data-locking conflicts, as I've experienced situations where processes end up waiting for each other and automatically stopping and starting endlessly.
Does anyone have any tips or strategies for checking if a TI process is currently running before allowing it to execute again? Any insights would be greatly appreciated!
Thanks!
------------------------------
Asgeir Thorgeirsson
------------------------------