Planning Analytics


Lightweight method to obtain TM1 Object JSON representation

  • 1.  Lightweight method to obtain TM1 Object JSON representation

    Posted Tue November 26, 2024 07:41 AM

    Hi,

    Is it possible to obtain the JSON representation of a TM1 object, as used by the Git integration, through the REST API without setting up the complete Git integration?

    Potential use cases are:

    1. Ship a TI Process Library: Storing a library of TI processes in JSON format to deploy them across multiple TM1 servers that do not have Git integration enabled.

    2. View and Subset Templating and Deployment: Adding templating capabilities to a subset or view JSON representation, enabling you to easily modify and deploy them with different values.

    I look forward to your feedback. 

    Regards,

     Florian



    ------------------------------
    Florian Scherzberg
    ------------------------------


  • 2.  RE: Lightweight method to obtain TM1 Object JSON representation

    Posted Tue November 26, 2024 09:52 AM

    Hi Florian,

    The short answer is yes.

    The long answer: you can save all the objects to the File Manager, to a document store of your choice (Dropbox, Box, etc.), or to GitHub (it does not have to be linked directly from TM1), and still use Git, GitLab, DevOps, etc. to 'store' the JSON objects for later recall.

    For v12 this is kept inside TM1 with ExecuteHttpRequest. I have working examples of moving cubes, dimensions, elements, processes and data between environments. I'm building up the content (blogs/videos) as time allows and testing the various JSON functions to help with backups, branch development, integrity testing, process libraries, synchronisation, etc.

    For v11 this requires dependencies such as TM1py or the various other REST API access methods to do the same, but outside of TM1/TI.
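    For the v11 route, a minimal sketch of dumping a process definition with TM1py might look like this (the connection details are placeholders; the result is the same machine-readable JSON body the REST API returns):

    # pip install tm1py
    from TM1py import TM1Service

    # placeholder connection details
    with TM1Service(address="tm1-server", port=8001, user="admin", password="secret", ssl=True) as tm1:
        process = tm1.processes.get("}bedrock.cube.data.clear")
        # .body is the JSON representation of the process as one string
        with open("bedrock.cube.data.clear.json", "w") as f:
            f.write(process.body)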



    ------------------------------
    Edward Stuart
    Solutions Director
    Spitfire Analytics Limited
    Manchester
    ------------------------------



  • 3.  RE: Lightweight method to obtain TM1 Object JSON representation

    Posted Tue November 26, 2024 10:02 AM

    Thanks, Edward, for the examples you wrote before!



    ------------------------------
    Vitalij Rusakovskij
    ------------------------------



  • 4.  RE: Lightweight method to obtain TM1 Object JSON representation

    Posted Tue November 26, 2024 10:42 AM

    Thanks, Edward. When I use TM1py with v11 to dump a process, the JSON output differs slightly from the one generated through Git integration. The main difference is that the scripts for the prolog, metadata, data, and epilog are stored as long strings within the JSON, rather than being saved in separate files.

    This format makes it straightforward to clone a process, but it's not ideal for tracking changes or using tools like syntax highlighting. Is there a REST API endpoint that provides the output in a format with separate files?

    [Screenshots: the TM1py-dumped JSON vs. the TM1 Git integration output]



    ------------------------------
    Florian Scherzberg
    ------------------------------



  • 5.  RE: Lightweight method to obtain TM1 Object JSON representation

    Posted Tue November 26, 2024 11:20 AM

    TM1py gives you the same output ExecuteHttpRequest will give you, as the raw JSON is designed to be machine readable.

    GitHub has parsed the response and auto-formatted the JSON to be human readable; there are a number of tools that can do this.

    For v12 things get interesting: you can use something like JsonDiff to compare production to development and return the names of objects that differ (i.e. have been changed).

    My traditional backup script would take two hours to zip a model, but using JsonDiff I can merge only the objects/data that differ, and this is done in a fraction of that time.

    https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=functions-jsondiff

    A simple example from the IBM Docs:

    # the source document
    jSource = '{ "baz": "qux", "foo": "bar" }';

    # the target document
    jTarget = '{ "baz": "boo", "hello": [ "world" ] }';

    # create the patch
    jPatch = JsonDiff( jSource, jTarget );

    Creates the output:

    [
      { "op": "replace", "path": "/baz", "value": "boo" },
      { "op": "remove", "path": "/foo" },
      { "op": "add", "path": "/hello", "value": ["world"] }
    ]

    Again, this is not human readable as such, but with JSON as a data source you can parse it, return the relevant messages to a text file, Slack channel, Teams channel, email, etc., and run subsequent processes to act on this data.

    For v11 you could use TM1py and further Python scripting to replicate the JsonDiff function (https://github.com/xlwings/jsondiff); this relies on knowledge of Python and on supporting the scripts generated, as in the sketch below.
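    A rough sketch of that v11 approach, comparing one process definition between two environments with TM1py and the jsondiff package (server names and credentials are placeholders):

    # pip install tm1py jsondiff
    import json
    from TM1py import TM1Service
    from jsondiff import diff

    name = "}bedrock.cube.data.clear"

    # placeholder connection details for the two environments
    with TM1Service(address="tm1-dev", port=8001, user="admin", password="secret", ssl=True) as dev, \
         TM1Service(address="tm1-prod", port=8001, user="admin", password="secret", ssl=True) as prod:
        dev_json = json.loads(dev.processes.get(name).body)
        prod_json = json.loads(prod.processes.get(name).body)

    # an empty result means the two definitions match
    print(diff(prod_json, dev_json))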

    You could also use Yuri's script to backport ExecuteHttpRequest to v11, export the JSON to text files and compare them in TI scripts (this relies on PowerShell).

    For my v12 testing I used Visual Studio Code to open the JSON files and the "Prettify JSON" extension to review them. I'm sure there is a file-compare extension for Visual Studio Code that could help with the human-readable aspect.

    A lot depends on your environment and on what is or is not allowed by your IT/infrastructure team.



    ------------------------------
    Edward Stuart
    Solutions Director
    Spitfire Analytics Limited
    Manchester
    ------------------------------



  • 6.  RE: Lightweight method to obtain TM1 Object JSON representation

    Posted Wed November 27, 2024 05:39 PM

    Hi @Florian Scherzberg,

    Could you elaborate more on "View and Subset Templating and Deployment: Adding templating capabilities to a subset or view JSON representation, enabling you to easily modify and deploy them with different values"?

    Regarding "1. Ship a TI Process Library: Storing a library of TI processes in JSON format to deploy them across multiple TM1 servers that do not have Git integration enabled."

    This functionality is nicely implemented in SPACE. 

    You can leverage the SPACE TM1 Lifecycle Manager to create and deploy releases (or just snapshot your code), or you can use the SPACE Advanced Git integration, which has several cool features:

      • You can deploy releases directly or through Git, and you can cherry-pick items for each release and Git connection.

      • You can configure different Git repositories or branches for your different TM1 servers.

      • You can compare and see the difference between your current model and the release or Git version. Besides processes, you can track other items: chores, rules, views, dimensions, hierarchies, attributes and even cube data.

      • You can see the difference in a human-readable format too.

      • And you can even automate your CI/CD using SPACE jobs.



      ------------------------------
      Vlad Didenko
      Founder at Succeedium
      TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
      https://succeedium.com/teamone/
      Succeedium Planning Analytics Cloud Extension
      https://succeedium.com/space/
      ------------------------------



    • 7.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Thu November 28, 2024 10:15 AM

      Thanks @Vlad Didenko, the Git integration in PAW looks very promising. The cherry-picking feature was new to me; that's very cool, and more or less exactly what we need to give our TM1 experts a low barrier to using Git. There is an open ticket as to why we can't use it yet (cf. Versioning of all project-related code in a single repository).

      Besides controlling it via the GUI, it is very important for us to control everything via the REST API, e.g. to use it in our data or CI pipelines.

      By templating I mean that you have a subset or view that contains a placeholder, for example:

      SELECT 
          {[Time].[2024].[Jan], [Time].[2024].[Feb]} ON COLUMNS, 
          {[Product].[All Products].[Product A], [Product].[All Products].[Product B]} ON ROWS 
      FROM [Sales]
      WHERE ([Region].[North America], [Scenario].[Actual])

      In a simple case I want to replace the year with a placeholder {{ year }}:

      SELECT 
          {[Time].[{{ year }}].[Jan], [Time].[{{ year }}].[Feb]} ON COLUMNS, 
          {[Product].[All Products].[Product A], [Product].[All Products].[Product B]} ON ROWS 
      FROM [Sales]
      WHERE ([Region].[North America], [Scenario].[Actual])

      You can then load the view with different names for different years. Using the same logic, you can use placeholders for dimension, cube and element names.

      In more sophisticated cases, you can think about using Jinja to create MDX queries, similar to how dbt does it for SQL.
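      As a minimal illustration of the placeholder idea (not tied to TM1 at all), Jinja2 can render the MDX above with a given year:

      # pip install jinja2
      from jinja2 import Template

      mdx_template = Template("""
      SELECT
          {[Time].[{{ year }}].[Jan], [Time].[{{ year }}].[Feb]} ON COLUMNS,
          {[Product].[All Products].[Product A], [Product].[All Products].[Product B]} ON ROWS
      FROM [Sales]
      WHERE ([Region].[North America], [Scenario].[Actual])
      """)

      # render the template with a concrete value for the placeholder
      print(mdx_template.render(year="2024"))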

      For me it would be ideal: 

      • Getting the nice human-readable format directly from a REST API endpoint, without the need to configure the TM1 Git integration. 
      • Optional: Convert it to a template.
      • Optional: Commit it and push it to its own standalone Git repository.
      • Load it again via the REST API from a script, a CI pipeline or a command-line interface. 


      ------------------------------
      Florian Scherzberg
      ------------------------------



    • 8.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Thu November 28, 2024 02:25 PM

      @Florian Scherzberg, the SPACE Advanced Git integration differs from the native PAW or TM1 REST API Git integrations. It offers a user-friendly interface to preview and compare objects and it integrates with the SPACE TM1 Lifecycle Manager, enabling you to manage your code through structured releases. This approach is specifically tailored to the needs of TM1 projects, providing a more intuitive and project-focused alternative to the traditional Git workflows typically used in general software development.

      It looks like you have already done good research on view templates!

      Below I created a simple example using the SPACE Python integration in PAW: a Python script that loops through all the .mdx files and updates the corresponding cube views using the provided template values.

      You can execute this script manually or automate its execution by scheduling it as a SPACE job.

      You can also use the Jinja2 library to handle more complicated templates.
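      A rough sketch of what such a loop could look like with TM1py and Jinja2 (this is not the SPACE script itself; the file layout, the <cube>.<view>.mdx naming convention and the connection details are assumptions):

      # pip install tm1py jinja2
      import glob, os
      from jinja2 import Template
      from TM1py import TM1Service
      from TM1py.Objects import MDXView

      template_values = {"year": "2024"}

      with TM1Service(address="tm1-server", port=8001, user="admin", password="secret", ssl=True) as tm1:
          for path in glob.glob("templates/*.mdx"):
              with open(path) as f:
                  mdx = Template(f.read()).render(**template_values)
              # assumed file naming convention: <cube>.<view>.mdx
              cube_name, view_name = os.path.basename(path)[:-4].split(".", 1)
              view = MDXView(cube_name=cube_name, view_name=view_name, MDX=mdx)
              # assumes the view does not exist yet; otherwise use the update method instead
              tm1.cubes.views.create(view, private=False)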



      ------------------------------
      Vlad Didenko
      Founder at Succeedium
      TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
      https://succeedium.com/teamone/
      Succeedium Planning Analytics Cloud Extension
      https://succeedium.com/space/
      ------------------------------



    • 9.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Fri November 29, 2024 03:01 AM

      @Vlad Didenko Thanks! I got it! SPACE is not just slang for Planning Analytics Workspace; it's a third-party browser extension that extends the scripting and deployment capabilities inside PAW.

      I have invested a little time and implemented a feature to dump and load process objects as JSON or YAML here (Link Github). The YAML files use multiline strings for the Prolog, Metadata, Data and Epilog scripts. It feels a bit like reinventing the wheel, and it's not (yet) the most ideal solution, but it is good enough for now.

      tm1cli process dump <name> --folder <path> --format <json|yaml>
      tm1cli process load <name> --folder <path> --format <json|yaml>
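      (This is not the tm1cli implementation itself, just a sketch of one way to produce such YAML: a PyYAML representer that emits the procedure scripts as literal block scalars, with placeholder connection details.)

      # pip install tm1py pyyaml
      import json
      import yaml
      from TM1py import TM1Service

      class LiteralStr(str):
          """Marker type: dump this string as a YAML literal block scalar (|)."""

      def literal_representer(dumper, data):
          return dumper.represent_scalar("tag:yaml.org,2002:str", data, style="|")

      yaml.add_representer(LiteralStr, literal_representer)

      with TM1Service(address="tm1-server", port=8001, user="admin", password="secret", ssl=True) as tm1:
          process = json.loads(tm1.processes.get("}bedrock.cube.data.clear").body)
          # keep the four procedure scripts as readable multiline blocks
          for key in ("PrologProcedure", "MetadataProcedure", "DataProcedure", "EpilogProcedure"):
              process[key] = LiteralStr(process.get(key, ""))
          with open("bedrock.cube.data.clear.yaml", "w") as f:
              yaml.dump(process, f, sort_keys=False)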


      ------------------------------
      Florian Scherzberg
      ------------------------------



    • 10.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Fri November 29, 2024 06:20 AM

      Love these posts, not least because Git integration is involved, but also because people get really creative.

      Allow me to add my 2 cents here, hoping this might help you @Florian Scherzberg...

      Starting with your main question about getting the format used by the Git integration: that's not exactly possible, not least because, whilst we follow OData standards wherever we can, we don't exactly follow the schema or the format there. The nice differences in the JSON portions of it take advantage of improvements/additions in the OData v4.01 JSON format (something we don't support in the REST API just yet, but could, as we already implement some of the other newer capabilities that came with OData v4.01). But the main thing you are after, the rules, TI code and MDX in plain text (@Edward Stuart, GitHub doesn't do any parsing or formatting here, it is all TM1 ;-), where we use annotations (and an accompanying vocabulary specific to our Git integration), is not something that will show up in the REST API as such, not least because it would imply returning multiple pieces of content in one response (which could be done with a multi-part response, but let's not go there). That said, you most definitely can grab all those pieces in the shape you like them using the REST API today!

      Let's use your TI example: you are looking at the }bedrock.cube.data.clear process. You can ask for the plain-text values of individual properties, and therefore in this case of the procedures, as in:

      `GET http[s]://<<tm1server>>:<<http-port>>/api/v1/Processes('}bedrock.cube.data.clear')/PrologProcedure/$value`

      which will return just the prolog code in plain text.

      What you might be even more interested in is the complete 'Code' block as you see it in our Git repository. 'Code' is actually a 'hidden' property of a process as well and, like the procedures, you can ask for its plain-text value and get the complete code block using:

      `GET http[s]://<<tm1server>>:<<http-port>>/api/v1/Processes('}bedrock.cube.data.clear')/Code/$value`

      Personally I don't think that over-the-wire human readability is that important, especially if you are grabbing heaps at a time and the only difference is JSON string encoding, which a client can easily (and typically already does) unwind, but I hope this helps nevertheless.
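      For illustration, a minimal Python sketch of grabbing that Code block and writing it to a file (host, port and credentials are placeholders; plain requests is used here rather than any TM1-specific client):

      # pip install requests
      import requests

      base = "https://tm1server:8010/api/v1"
      url = base + "/Processes('}bedrock.cube.data.clear')/Code/$value"

      # basic auth shown for simplicity; use whatever authentication your server requires
      resp = requests.get(url, auth=("admin", "secret"), verify=False)
      resp.raise_for_status()

      with open("bedrock.cube.data.clear.ti", "w") as f:
          f.write(resp.text)  # plain-text code block, ready for diffs and syntax highlighting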

      PS: I saw you referring to Jinja-based templating for MDX; that's under way (you might not have seen my comments on this topic in another thread in this community, but it's coming ;-).



      ------------------------------
      Hubert Heijkers
      STSM, Program Director TM1 Functional Database Technology and OData Evangelist
      ------------------------------



    • 11.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Fri November 29, 2024 09:01 AM

      Thanks @Hubert Heijkers! The hint about the hidden Code property is brilliant. I will utilize this.

      A well-formatted, human-readable output is important, as we want to perform line-based diffs and code reviews with GitLab/GitHub. With an additional code file, it is also easier to use syntax highlighting or autocompletion in your IDE. 



      ------------------------------
      Florian Scherzberg
      ------------------------------



    • 12.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Fri November 29, 2024 10:21 AM

      @Florian Scherzberg, great job on tm1cli! It feels like you could easily add features like MDX templating there...

      @Hubert Heijkers, could you share any insights or hints about how MDX templating might look/work? And maybe some ETA, so we can hold off on duplicating efforts :)



      ------------------------------
      Vlad Didenko
      Founder at Succeedium
      TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
      https://succeedium.com/teamone/
      Succeedium Planning Analytics Cloud Extension
      https://succeedium.com/space/
      ------------------------------



    • 13.  RE: Lightweight method to obtain TM1 Object JSON representation

      Posted Fri November 29, 2024 03:30 PM

      @Vlad Didenko, those who know me know that I only like to talk about things for which we already have at least a working prototype in hand, but since it was my proposal anyway...

      The short-term goal is to support drill-through to MDX views. Unlike native views, which are pretty rigid and only let you update the slicer, I feel we would do our users a disservice if we didn't give them the option to freely consume the context provided by the cell tuple, in whatever way they come up with, in another MDX query. And since users are typically even more creative than us developers who build this lovely engine, templates were IMHO the way to go. So I've proposed to implement support for templates first and then to add support for drill-through to MDX building on this new feature.

      As for the templates themselves, we'd be using something very similar to Jinja. We develop in C++ and will likely use Inja (which is heavily inspired by Jinja, as you can imagine), but the tricky part is the 'data' we make available when applying the template. I've got ideas as to how to go about that, but it is too early to commit to something.

      As for timing (as always, everything can change on a daily basis), I'd say we'd have this for v12.6, currently tentatively slated for Q2 2025.



      ------------------------------
      Hubert Heijkers
      STSM, Program Director TM1 Functional Database Technology and OData Evangelist
      ------------------------------