I would say running in parallel processes would be the way to go (as suggested by Philippe).
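A minimal sketch of the parallel dispatch, in case it helps. Everything here is an assumption on my side: the child process name 'Cube.Copy.Child', its pSlice parameter, and the 'Year' dimension used to slice the work. RunProcess needs a reasonably recent Planning Analytics (TM1 11+) server.

# Prolog of a hypothetical caller process: launch one child per leaf slice
sDim = 'Year';
i = 1;
WHILE(i <= DIMSIZ(sDim));
  sEl = DIMNM(sDim, i);
  IF(DTYPE(sDim, sEl) @= 'N');
    # RunProcess starts the child asynchronously on its own thread
    sJobId = RunProcess('Cube.Copy.Child', 'pSlice', sEl);
  ENDIF;
  i = i + 1;
END;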
If you have rules and feeders in the target cube, that might slow things down; you could start by detaching (deleting) the rules and then attach them again at the end. If there are heavy feeders, that can really slow down writes.
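A sketch of the detach/reattach step. I am assuming the target cube's rules live in TargetCube.rux in the data directory and that you keep an empty Blank.rux there; both file names are hypothetical, and back up the .rux first.

# Prolog: detach rules by loading a blank rule file over the existing ones
RuleLoadFromFile('TargetCube', 'Blank.rux');

# ...run the data copy...

# Epilog: reattach the original rules
RuleLoadFromFile('TargetCube', 'TargetCube.rux');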
Original Message:
Sent: Wed September 13, 2023 07:52 AM
From: Asgeir Thorgeirsson
Subject: TI process Array - Batch Processing for CellPutN
Thanks!
In my case, I want to "replicate" all cube data to another identical cube with an additional variant dimension.
What would be your suggestion for that?
------------------------------
Asgeir Thorgeirsson
Original Message:
Sent: Wed September 13, 2023 07:43 AM
From: George Tonkin
Subject: TI process Array - Batch Processing for CellPutN
For my time and money, I go straight to a text file and import that.
Basically, create your source view, ASCIIOutput it and then consume it.
Typically a caller process, a child to export and a child to import/update metadata etc. if needed.
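As a rough sketch of that pattern (every process, view, variable and file name below is an assumption on my side; vDim1..vDim3 stand in for however many dimensions the cube has):

# Export child, Prolog: build a zero-suppressed source view and use it as
# this process's data source
sView = 'zExport';
ViewCreate('SourceCube', sView);
ViewExtractSkipCalcsSet('SourceCube', sView, 1);
ViewExtractSkipRuleValuesSet('SourceCube', sView, 1);
ViewExtractSkipZeroesSet('SourceCube', sView, 1);
DataSourceType = 'VIEW';
DataSourceNameForServer = 'SourceCube';
DataSourceCubeview = sView;

# Export child, Data tab: write one line per populated cell
ASCIIOutput('export_SourceCube.csv', vDim1, vDim2, vDim3, NumberToString(vValue));

# Import child, Data tab: the file is the data source, variables read as strings
CellPutN(StringToNumber(vValue), 'TargetCube', vDim1, vDim2, vDim3);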
I find this far quicker on many models. In a recent case where it took over 3 hours to snapshot a version, it now takes under 30 minutes (lots and lots of data in the model).
Another benefit is that we keep the extracted files and zip them, so we have a sort of archive if we need to import again in the future after deleting, or want to import to Dev, etc.
Hope that helps. As always with TM1, others will have their views, and statements like "It depends..." will be heard.
------------------------------
George Tonkin
Business Partner
MCI Consultants
Johannesburg
Original Message:
Sent: Wed September 13, 2023 07:35 AM
From: Asgeir Thorgeirsson
Subject: TI process Array - Batch Processing for CellPutN
Thank you @George Tonkin
I am exploring best performance practices when copying from cube to cube.
------------------------------
Asgeir Thorgeirsson
Original Message:
Sent: Wed September 13, 2023 06:48 AM
From: George Tonkin
Subject: TI process Array - Batch Processing for CellPutN
Short answer is No.
Arrays, Dictionaries etc. are things we only dream about.
Can you give more context into what you are doing?
When dealing with SQL, you can obviously aggregate in the Select statement.
When copying from cube to cube, you could read from C levels (see the sketch below).
When reading from text files you are limited, but these are in my experience really fast, unless you have ForceReevaluationOfFeedersForFedCellsOnDataChange=T, which could be firing feeders each time a value is updated.
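For the cube-to-cube case, the Data tab can be as small as the line below; the source view, the vDim/vValue variable names and 'TargetCube' are assumptions. If you want to read from C levels instead, leave ViewExtractSkipCalcsSet off and restrict the view's subsets to the consolidations you need.

# Data tab: data source is a view on the source cube; vDim1..vDim3 are the
# generated element variables and vValue is the cell value
CellPutN(vValue, 'TargetCube', vDim1, vDim2, vDim3);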
------------------------------
George Tonkin
Business Partner
MCI Consultants
Johannesburg
Original Message:
Sent: Wed September 13, 2023 06:08 AM
From: Asgeir Thorgeirsson
Subject: TI process Array - Batch Processing for CellPutN
I would like to increase the performance of writing data to a cube.
Instead of writing data cell by cell,
- is it possible to use batch updates to accumulate changes and write them in larger chunks?
- and is it possible to declare an array as a datatype in a TI process and use it for a bulk update?
Perhaps like this:
CellPutN(ValueArray, CubeToUpdate, ElementArray);
Or something like this
DataBatch = [
{123, Cube, V1, V2, V3, ...}
,{456, Cube, V1, V2, V3, ...}
];
CellPutN(DataBatch);
------------------------------
Asgeir Thorgeirsson
------------------------------