Planning Analytics Best Practice: SaveDataAll

By Paulo Monte posted Thu December 15, 2016 02:46 PM


To improve the performance of shutting down and starting up your Planning Analytics database, run the SaveDataAll process periodically.  Running this process regularly also improves startup/shutdown performance during a maintenance window and provides a more stable environment.

Where is my Planning Analytics data stored?
Planning Analytics is powered by our TM1 in-memory database, so the data you enter and change is stored in memory for faster retrieval.  Changes are also simultaneously written to the transaction log on disk in real time.  Dimension, rule, and process data are persisted to disk in real time as well.

Where does my data go when Planning Analytics is shut down?
When the TM1 database is stopped, the in-memory data of any cube updated since the previous disk save is written to disk.  The transaction log file is also truncated and archived with a timestamp.

Can I store the data in memory to disk without stopping the TM1 database?
Yes. You can create a simple TurboIntegrator process containing the SaveDataAll command. When executed, this process writes to disk the in-memory data of any cube updated since the previous SaveDataAll or database shutdown.  The process can also be scheduled to run at regular intervals as a chore.
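The entire process can consist of a single statement on the Epilog tab:

```
# Epilog tab of the TurboIntegrator process
SaveDataAll;
```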

When and how often should I run SaveDataAll?
The general recommendation is to run SaveDataAll weekly.  IBM also asks that SaveDataAll be performed just before the 3rd Saturday maintenance window to ensure a fast, high-quality recycling of your database.  Plan sufficient time for the SaveDataAll process to complete before the maintenance window at 12:00 PM UTC.  The amount of time required varies considerably based on the size of the model and how much change has occurred since the last execution of SaveDataAll.  While the IBM team has seen cases of SaveDataAll taking 4 to 5 hours, it is more common for the process to complete in under 2 hours.

It is also recommended to run the process after a data load, especially if transaction logging was disabled during the load to improve performance.  This ensures the data integrity of the service, as the transaction logs are relied upon for backups.
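A common pattern for such a load process is sketched below (the cube name 'Sales' and the load step are placeholders; adapt them to your model):

```
# Prolog tab: disable transaction logging for the cube being loaded
CubeSetLogChanges('Sales', 0);

# Data tab: load records into the cube here

# Epilog tab: re-enable logging, then persist the loaded data to disk
CubeSetLogChanges('Sales', 1);
SaveDataAll;
```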

Some customers choose to run the process daily or more often because of high user interaction. If so, schedule it to run when the fewest users are in the system.

Does SaveDataAll block users from reading or writing to Cubes?
No. While this was historically a 'locking' activity, SaveDataAll will no longer interrupt concurrent user read and write activity on cubes being serialized to disk.

Where can I learn more?
To learn more about running a SaveDataAll process, please see the Planning Analytics documentation.

How to use SaveDataAll and schedule it?
These instructions use Architect, but the steps are very similar in Performance Modeler.

    1.   Open Architect and log in to the desired TM1 instance.

    2.   Right-click 'Processes' and select 'Create New Process'.

    3.   Select the 'Advanced' tab, then 'Epilog'.

    4.   Type 'SaveDataAll;' (don't forget the semicolon at the end).

    5.   Select the 'Schedule' tab.

    6.   Check the checkbox to schedule the process as a chore, and give it a name.

    7.   Select the date and time (please note that all instances run on UTC time), then Save.

    8.   Repeat the above steps for all other TM1 instances.

#ExpertPost
#GettingStarted
#PlanningAnalyticswithWatson
#PlanningAnalytics
#Resources