Planning Analytics

  • 1.  TI process Array - Batch Processing for CellPutN

    Posted Wed September 13, 2023 06:08 AM
    Edited by Asgeir Thorgeirsson Wed September 13, 2023 06:10 AM

    I would like to increase the performance of writing data to a cube.

    Instead of writing data cell by cell, 

    1. is it possible to use batch updates to accumulate changes and write them in larger chunks?  
    2. and is it possible to declare an array as a datatype in a TI process and use it for a bulk update? 

    Perhaps like this:

        CellPutN(ValueArray, CubeToUpdate, ElementArray);

    Or something like this

        DataBatch = [
             {123, Cube, V1, V2, V3, ...}
            ,{456, Cube, V1, V2, V3, ...}
          ];

        CellPutN(DataBatch);



    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------



  • 2.  RE: TI process Array - Batch Processing for CellPutN

    Posted Wed September 13, 2023 06:48 AM

    Short answer is No.

    Arrays, Dictionaries etc. are things we only dream about.

    Can you give more context into what you are doing?

    When dealing with SQL, you can obviously aggregate in the Select statement.
    When copying from cube to cube, you could read from C levels.
    When reading from text files you are more limited, but in my experience these are really fast, unless you have ForceReevaluationOfFeedersForFedCellsOnDataChange=T, which could fire each time a value is updated.
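
    As a sketch of the C-level idea: in the Prolog you can point the source view at a consolidated element, so one value is read instead of every leaf underneath it (cube, dimension and element names here are illustrative):

        # Build a source view that reads one consolidated value
        vView = 'zCopySource';
        If(ViewExists('SalesCube', vView) = 1);
          ViewDestroy('SalesCube', vView);
        EndIf;
        ViewCreate('SalesCube', vView);
        If(SubsetExists('Month', vView) = 1);
          SubsetDestroy('Month', vView);
        EndIf;
        SubsetCreate('Month', vView);
        SubsetElementInsert('Month', vView, 'Total Year', 1);
        ViewSubsetAssign('SalesCube', vView, 'Month', vView);
        # Include consolidated values when the view is extracted
        ViewExtractSkipCalcsSet('SalesCube', vView, 0);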



    ------------------------------
    George Tonkin
    Business Partner
    MCI Consultants
    Johannesburg
    ------------------------------



  • 3.  RE: TI process Array - Batch Processing for CellPutN

    Posted Wed September 13, 2023 07:36 AM

    Thank you @George Tonkin

    I am exploring best performance practices when copying from cube to cube.



    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------



  • 4.  RE: TI process Array - Batch Processing for CellPutN

    Posted Wed September 13, 2023 07:43 AM

    For my time and money, I go straight to a text file and import that.

    Basically, create your source view, AsciiOutput it and then consume it.

    Typically a caller process, a child process to export, and a child process to import/update metadata etc. if needed.

    I find this far quicker on many models; in a recent case where it took over 3 hours to snapshot a version, it now takes under 30 minutes (lots and lots of data in the model).

    Another benefit is that we keep the extracted files, zip them, and then have them as a sort of archive in case we need to re-import in the future after deleting data, or want to import into Dev etc.
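
    A minimal sketch of the export/import pattern, assuming the export process has variables vMonth, vProduct and vValue coming from its source view (file, cube and variable names are illustrative):

        # Export process, Data tab: stream each record to a CSV
        AsciiOutput('copy_data.csv', vMonth, vProduct, NumberToString(vValue));

        # Import process, Data tab (datasource = copy_data.csv):
        CellPutN(StringToNumber(vValue), 'TargetCube', vMonth, vProduct);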

    Hope that helps and like always with TM1, others will have their views and statements like "It depends..." will be heard.



    ------------------------------
    George Tonkin
    Business Partner
    MCI Consultants
    Johannesburg
    ------------------------------



  • 5.  RE: TI process Array - Batch Processing for CellPutN

    Posted Wed September 13, 2023 07:52 AM

    Thanks!
    In my case, I want to "replicate" all cube data to another identical cube with an additional variant dimension.
    What would be your suggestion for that?



    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------



  • 6.  RE: TI process Array - Batch Processing for CellPutN

    Posted Tue September 19, 2023 10:52 AM

    I would say running in parallel processes would be the way to go (as suggested by Philippe). 

    If you have rules and feeders in the target cube, that might slow things down; you could start by detaching (deleting) the rules and then attaching them again at the end. If there are heavy feeders, they can really slow down writes.
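
    One way to sketch the detach/re-attach, assuming you keep a saved copy of the cube's rule (.rux) file (cube name and file paths are illustrative):

        # Prolog: detach the rules by loading an empty rule file
        RuleLoadFromFile('TargetCube', 'empty.rux');

        # Epilog: re-attach the original rules once the load is done
        RuleLoadFromFile('TargetCube', 'TargetCube_backup.rux');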



    ------------------------------
    Emil Malmberg Fosdal
    Solution Architect
    CogniTech A/S
    ------------------------------



  • 7.  RE: TI process Array - Batch Processing for CellPutN

    Posted Wed September 13, 2023 07:49 AM

    Hello,

    There is no "bulk" equivalent.

    You can seek to parallelize your processing by using RunProcess and creating multiple source views to "split" your flow:

        # ProcessCopyMaster, Prolog:
        RunProcess('CopySlave', 'pMonth', '1');
        RunProcess('CopySlave', 'pMonth', '2');
        .....
        RunProcess('CopySlave', 'pMonth', '12');

    Important note: when the source and target are the same cube, it's best to use two TI processes: 1 - export to a file, then 2 - import the file.
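
    For the slave, a minimal sketch of the Data tab, assuming its source view is filtered by the month parameter passed from the master and the variables vMonth, vProduct and vValue come from that view (names are illustrative):

        # CopySlave, Data tab: write each leaf value to the target cube
        CellPutN(vValue, 'TargetCube', vMonth, vProduct);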

    Regards,

    Philippe



    ------------------------------
    Philippe CHAMPLEBOUX
    ------------------------------



  • 8.  RE: TI process Array - Batch Processing for CellPutN

    Posted Thu September 21, 2023 11:22 AM

    There is a Bedrock process I use for this sort of thing. I have a cube where a LOT of string values are defined by some fairly complex rules (it is what it is, I will hear no dissent on this matter). I got the processes that depend on these values to run significantly faster by using the Bedrock cube copy process to "make the values static", i.e. copying the rule-derived values to a "*_Static" companion of the given measures. I run the processes in parallel by kicking off the Bedrock copy process filtered to particular tranches, running 5 chores at the same time.

    IIRC the Bedrock processes for copying have a flag where you can tell them to do a direct cube-to-cube or intra-cube copy, OR do a CSV export/import sequence. I can't remember off the top of my head which one was faster for this particular use case, but sometimes it makes a big difference in performance.



    ------------------------------
    Tom Cook
    ------------------------------



  • 9.  RE: TI process Array - Batch Processing for CellPutN

    Posted Tue September 19, 2023 07:13 PM

    You can try the latest version of TM1Py. There is a new TM1Py function (write_async) which performs much faster than single-threaded TI.

    But parallel processing can also help a lot. I use RushTI now to orchestrate the multithreading.

    ------------------------------
    Ardian
    ------------------------------