Planning Analytics


Sharing data between TM1 servers

  • 1.  Sharing data between TM1 servers

    Posted Wed April 27, 2022 01:14 PM
    Hi all

    I currently use text files to share data between TM1 servers/models (on-prem. PAW 2.0.72).
    Is there a better way?

    All the TM1 models/servers are physically located on the same Windows server.

    In a TI Process
    Can I read cube data from another TM1 model?
    or
    Can I write data to a cube in another TM1 model?

    If not, what would be the best approach?
    Is there a simple "hello world" example available?

    Thanks, Asgeir

    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------

    #PlanningAnalyticswithWatson


  • 2.  RE: Sharing data between TM1 servers

    Posted Thu April 28, 2022 02:42 PM

    Hi Asgeir,
    Going with flat files is certainly the simplest and a very common approach.
    I did confirm that ODBO (a datasource type via TurboIntegrator) was deprecated as of Planning Analytics 2.0.9.6.
    That leaves the best option, which is certainly trendy but also genuinely powerful: TM1py. TM1py is an open-source Python library that you can install.

         Excerpted from the TM1py PDF available on the web: "By wrapping the IBM Planning Analytics (TM1) REST API in a concise Python framework, TM1py facilitates Python developments for TM1. ... TM1py offers handy features to interact with TM1 from Python."

    Reading data from cubes and writing data to cubes is relatively painless once you have the tools in place.  While configuration involves some effort, you would then be able to do much more with TM1py as needs evolve.  Code examples are readily available.

    As a first step, you could visit https://code.cubewise.com/blog/getting-started-with-tm1py or https://www.ibm.com/docs/en/wsd?topic=notebook-adding-data-from-planning-analytics-connection

    Hope this helps,



    ------------------------------
    Sarah Milius
    Master Consultant
    QueBIT Consulting
    United States CT
    ------------------------------



  • 3.  RE: Sharing data between TM1 servers

    Posted Fri April 29, 2022 02:40 AM
    Hi, 

    Have you tried the ODBO data source?

     

    Regards,
    Veronika


    ------------------------------
    Veronika Gultom
    ------------------------------



  • 4.  RE: Sharing data between TM1 servers

    Posted Fri April 29, 2022 07:23 AM
    ODBO is deprecated, so I have not used that option in Architect.

    I am now trying to use the PAW modeling workspace as much as possible.
    Oddly, I can share views from various TM1 databases in the same PAW book but I cannot share data between them in PAW.
    I would like to have the option in PAW to create a process in one database and simply read a cube in another database. 
    Perhaps this is something for the PAW development team to consider.

    There are also still some options missing in the PAW modeling environment, e.g.:
    - Creating a TI process with a text file as the data source if the file is on a shared drive.
    - Removing or adding dimensions to a cube that I just created.
    - Copying dimensions, cubes, and other objects.
    - Saving a view for a drill-down process. The view does not work unless I save it in PM.
    For the above, I still use the Performance Modeler. 

     


    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------



  • 5.  RE: Sharing data between TM1 servers

    Posted Fri April 29, 2022 12:13 PM
    Hi,
    On one of your points, about TI processes using a source TXT file on a shared drive: when developing the TI process you do need to upload a file to use as the source, but I just use a template file that has the header and a few rows of data. Then in the prolog I use the "DatasourceNameForServer" variable to set the actual location, on the shared drive, of the full file that is to be used as the source. My understanding of the reason for this change is that Workspace does not have access to the shared drive, and any two modelers might not have the same drives mapped on their computers. By requiring the upload of the file needed for development, Workspace and the TM1 instance can save that file in a location to which they will have access.
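    A minimal sketch of that prolog redirection (the UNC path and the pFileName parameter are my own placeholders, not Scott's actual setup):

    # Prolog
    # The uploaded template (header plus a few sample rows) only served to generate
    # the variables at design time; at run time, point the process at the real file.
    sSharedDrive = '\\fileserver\tm1\imports\';
    DatasourceNameForServer = sSharedDrive | pFileName;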

    Good luck!

    ------------------------------
    Scott Brown
    ------------------------------



  • 6.  RE: Sharing data between TM1 servers

    Posted Fri April 29, 2022 04:58 PM
    I would like to have the option in PAW to create a process in one database and simply read a cube in another database. 
    I guess ODBO and even the Cognos Package Connector were deprecated because they are not easy to support in PA Cloud and via the REST API.
    I would recommend using tm1py to create a generic parameterised script which could copy data between 2 models.
    If Google Sheets is an option, the TeamOne add-on can be used to import/export data between GS and TM1 models.

    - Creating TI-process with text as a data source if the file is on a shared drive.
    In my TM1 models, I have a placeholder cube which has a placeholder view with dimensions 1 to 50 (or more if needed). That view is used as the datasource for each data process, providing the V1-V50 variables. Then in the Prolog you can change the datasource to whatever you need using DatasourceType, DatasourceNameForServer, DatasourceASCIIDelimiter and other variables, and still use the V1-V50 variables to reference each file column. The advantage of this approach is that you never get messages like "unable to open a view or file" because it was deleted or is not accessible.
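    For example, a prolog along these lines (the file name, delimiter and header count are placeholders of mine) switches the placeholder-view datasource over to a text file while the V1-V50 variables stay available:

    # Prolog
    # The process was created with the placeholder view as its datasource,
    # so V1-V50 already exist; here we swap in the real source file.
    DatasourceType = 'CHARACTERDELIMITED';
    DatasourceNameForServer = '\\fileserver\tm1\imports\actuals.csv';
    DatasourceASCIIDelimiter = ';';
    DatasourceASCIIHeaderRecords = 1;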

    - Removing or adding dimensions to a cube that I just created.
    The only option is: Export data > Recreate cube > Import data
    So you can control how the data should be loaded into the new cube structure

    - Copying dimensions, cubes, and other objects.
    In my projects I use TeamOne for these kinds of tasks. Basically I create a GS document containing a separate tab for each cube, dimension and hierarchy. Then TeamOne creates/updates dimensions, hierarchies, attributes, subsets, cubes and views from the sheets. This way I have transparent documentation which is used to automatically update the model. You can also compare 2 models side-by-side, see the differences and deploy objects between models.

    - Saving a view for a drill-down process. The view does not work unless I save it in PM.
    Is it a drill-down or a drill-through? What do you mean by "The view does not work" ?

    ------------------------------
    Vlad Didenko
    Founder at Succeedium
    TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
    https://succeedium.com/teamone/
    ------------------------------



  • 7.  RE: Sharing data between TM1 servers

    Posted Sun May 01, 2022 10:03 AM
    Thank you @Vlad Didenko for your detailed reply.
    Regarding the PAW drill-down processes: they do not work if I tamper with the drill-down view in PAW or the PAW modeling environment. If I open the same view in PM and save it from there, then everything works fine. So there is something wrong with the view manipulation in PAW when the view is saved to the server.


    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------



  • 8.  RE: Sharing data between TM1 servers

    IBM Champion
    Posted Mon May 02, 2022 02:43 AM
    Edited by System Fri January 20, 2023 04:13 PM
    Vlad,

    On your suggestion to have a placeholder cube. It is true that it can solve some issues, but it creates others in my opinion.
    You would need to define (many of, or all of) your variables in the Metadata tab and the Data tab - double work and a probability of errors, and not even necessary if you use the dimension names or the header record in the file.

    So you have 50 or more spare dimensions in the TM1 model?

    The data preview is not possible either. Typically you want to set string or numeric data types for the variables: for numeric this will not work, since dimension elements are always text? I understand that you can change it inside the process, but that is not ideal IMO.

    ------------------------------
    Wim Gielis
    ------------------------------



  • 9.  RE: Sharing data between TM1 servers

    Posted Mon May 02, 2022 10:55 AM
    Hi @Wim Gielis

    The solution is super simple and it works for all data sources with up to 50 fields (and you can easily adjust that). The naming below is from my own TM1 framework. It uses the "}" prefix for some objects so they do not show up when I hide control objects in Architect or in PAW.

    1. Create 50 placeholder dimensions: }.TM1.dim1, }.TM1.dim2, ..., }.TM1.dim50
    2. In each placeholder dimension create 1 element: TM1.Elm1 in }.TM1.dim1, TM1.Elm2 in }.TM1.dim2, ..., TM1.Elm50 in }.TM1.dim50
    3. Create a placeholder cube containing all the dimensions above: TM1.Placeholder
    4. Create a placeholder view in that cube: TM1.Placeholder.View

    Important: that view should have zero suppression ON

    5. Use TM1.Placeholder > TM1.Placeholder.View in your load processes. Now you have 50 variables named V1-V50 automatically (TM1 uses the V1-V50 variable names because the dimension names start with "}"; otherwise the variables would be named after the dimensions).
    You can rely on these variables in each process where you use this Placeholder View (much easier to work with than other named variables).


    6. Set Contents = Other for as many variables as you need in a given process (if the file contains only 10 columns, set it for V1-V10 only).
    7. Use DatasourceType, DatasourceNameForServer, DatasourceASCIIDelimiter to change your data source to another view or file.
    8. Use V1-V50 in the Metadata or Data tabs (the variables will remain named V1-V50).

    Since the Placeholder View has zero suppression ON, the Metadata and Data tabs will never execute as long as the datasource stays on that view; you can rely on this fact if you want to skip the load for some reason and don't want to use ProcessBreak or fail the process.
    You never have issues such as "data source view does not exist" or "cannot open the source file".
    You don't need to change your process variables if the source file columns change or the source view changes (because you always use V1-V50).
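    As a rough sketch of the skip trick described above (pSkip, the source cube name and the view name are hypothetical):

    # Prolog
    # If the load should be skipped, leave the datasource pointing at the empty,
    # zero-suppressed placeholder view: with no records to read, the Metadata
    # and Data tabs never execute.
    IF( pSkip <> 1 );
      DatasourceType = 'VIEW';
      DatasourceNameForServer = 'Sales';        # real source cube (assumption)
      DatasourceCubeview = 'Load View';         # real source view (assumption)
    ENDIF;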

    ------------------------------
    Vlad Didenko
    Founder at Succeedium
    TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
    https://succeedium.com/teamone/
    ------------------------------



  • 10.  RE: Sharing data between TM1 servers

    IBM Champion
    Posted Mon May 02, 2022 11:13 AM
    Edited by System Fri January 20, 2023 04:39 PM
    Hello Vlad

    I do understand what you are doing technically. Can you also address the disadvantages I listed?

    For example, what do the Metadata and Data tabs look like in terms of variables?

    How do you treat numeric variables, since elements in dimensions are strings?

    You don't have a preview for a text file data source?

    For an ODBC data source, do you never change the query in Architect, for example a WHERE statement, to dig into issues with the data (like restricting it to 1 cost center, account, month and year)? I find it easy in Architect to just add WHERE clauses until I find the culprit.

    ------------------------------
    Wim Gielis
    ------------------------------



  • 11.  RE: Sharing data between TM1 servers

    Posted Mon May 02, 2022 11:48 AM
    Hi Wim

    For example, what do the Metadata and Data tabs look like in terms of variables?

    As I mentioned previously, you use the V1-V50 variables in the Metadata / Data tabs, for instance when reading a file with 6 columns or a cube view with 5 dimensions.

    You don't have a preview for a text file data source?
    Notepad++ will provide a much better preview than Architect :)
    For TI debugging: AsciiOutput( fileLog, V1, V2, ..., V50 );

    For an ODBC data source, do you never change the query in Architect, for example a WHERE statement, to dig into issues with the data (like restricting it to 1 cost center, account, month and year)? I find it easy in Architect to just add WHERE clauses until I find the culprit.
    Sure you can do it as well:
    DatasourceType = 'ODBC';
    DataSourceNameForServer = 'ConnectionName';
    DataSourceQuery = 'Your dynamic SELECT statement';

    I don't see any disadvantages.

    ------------------------------
    Vlad Didenko
    Founder at Succeedium
    TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
    https://succeedium.com/teamone/
    ------------------------------



  • 12.  RE: Sharing data between TM1 servers

    Posted Mon May 02, 2022 01:25 PM
    For convenience you can also do this at the beginning of the Metadata/Data tabs:

    ##@ Let's make it look clear
    elmCc = V1;
    elmAccount = V2;
    elmVersion = V3;
    ...
    amount = StringToNumber(V6);
    ...

    ##@ Now you know what you are doing here:
    CellPutN( amount, cub, elmCc, elmAccount, elmVersion, ... );

    ------------------------------
    Vlad Didenko
    Founder at Succeedium
    TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
    https://succeedium.com/teamone/
    ------------------------------



  • 13.  RE: Sharing data between TM1 servers

    IBM Champion
    Posted Mon May 02, 2022 02:10 PM
    Edited by System Fri January 20, 2023 04:49 PM
    Hi Vlad,

    So in any process, except the obvious "someone screwed up the budget and we need to reload from yesterday's budget" kind, you are going to rename the variables in the Data tab and, most of the time, in the Metadata tab. Seems not much of a gain to me, on the contrary, but YMMV. Converting numbers, which arrive from a text file as strings, back to numbers again does not seem like something I fancy a lot. You could use StringToNumberEx, but then the format has to be written in every such variable definition.

    If the datasource is a view, you get the variable names for free - no work to be done.

    Tastes can differ, that is clear :-)

    ------------------------------
    Wim Gielis
    ------------------------------



  • 14.  RE: Sharing data between TM1 servers

    Posted Mon May 02, 2022 02:38 PM
    you are going to rename the variables in Data tab and most of the time in Metadata tab.
    Why would you need to rename the variables? You always deal with V1-V50, they never change, this is the beauty of this approach.

    Converting numbers, coming to a text files as strings, to numbers again does not seem what I fancy a lot
    It is not that scary. In my current projects I often deal with users who can upload files saved in different locales (English and French), so I created a simple script to automatically identify the field, decimal and thousands delimiters and use StringToNumberEx to correctly parse the numbers.
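    A rough sketch of that kind of parsing (the one-line detection rule below is a simplification of mine, not Vlad's actual script; V6 as the amount column is also an assumption):

    # Metadata/Data tab
    # Guess the decimal separator from the raw value, then parse accordingly.
    sRaw = V6;
    IF( SCAN( ',', sRaw ) > SCAN( '.', sRaw ) );
      # e.g. French-style '1.234,56': comma is the decimal separator
      nAmount = StringToNumberEx( sRaw, ',', '.' );
    ELSE;
      # e.g. English-style '1,234.56'
      nAmount = StringToNumberEx( sRaw, '.', ',' );
    ENDIF;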

    ------------------------------
    Vlad Didenko
    Founder at Succeedium
    TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
    https://succeedium.com/teamone/
    ------------------------------



  • 15.  RE: Sharing data between TM1 servers

    IBM Champion
    Posted Mon May 02, 2022 03:03 PM
    Edited by System Fri January 20, 2023 04:35 PM
    Why would I need to rename variables? With code like the following against V1...V50: good luck.

    Processes that are simple enough to handle with the TI wizard I could force to V1...V50, but those processes are almost one-liners.

    nCalcForIFRS16 =  cellgetn(sSourceCube, vContract, vCompany, vVersion, 'calc4ifrs16');
    vStatus = attrs('Contracts', vContract, 'contractStatus');
    
    #INTERNAL, CURRENCY AND INDEX ATTRIBUTES
    #these are both cube-cells and attributes for ease of future calcs. Put the currency in the contractCurrency and index on contractIndex
    
    sCurr = cellgets('Contracts',vContract,vCompany, pVersion, 'CURRENCY');
    sIndex = cellgets('Contracts',vContract,vCompany, pVersion, 'INDEX_USED');
    nInternal =  cellgetn('Contracts',vContract,vCompany, pVersion, 'IS_INTERNAL_LEASE');
    sTransitionMethod = cellgets('Contracts',vContract,vCompany, pVersion, 'TRANSITION_METHOD_Q');
    
    attrputs(sCurr, 'Contracts', vContract, 'contractCurrency');
    attrputs(sIndex, 'Contracts', vContract, 'contractIndex');
    attrputn(nInternal , 'Contracts', vContract, 'Internal');
    attrputs(sTransitionMethod,'Contracts', vContract, 'transitionMethod');
    
    
    #ALL LEASE CONTRACTS ARE CALCULATED. ALSO LOW VALUE AND SHORT TERM.
    if(  nCalcForIFRS16 = 1  & vStatus @<>'VALIDATED'  &  vStatus @<> 'TERMINATED' &  vStatus @<> 'ERROR'  );
    	
    	# get classification
    	vClassification =   cellgets(sSourceCube, vContract, vCompany, vVersion, 'CONTRACT_CATEGORY');
    
    	#get currency and update the attribute
    	vCurrency =   cellgets(sSourceCube, vContract, vCompany, vVersion, 'CURRENCY');
    	attrputs(vCurrency, 'Contracts',vContract,'contractCurrency');
    
    	#get Accounting, Operative Org and free dimension 1
    	vAccounting =  'Monthly';
    	vDim2 =   cellgets(sSourceCube, vContract, vCompany, vVersion, 'CONTRACT_BANK');
    	vDim1 =   cellgets(sSourceCube, vContract, vCompany, vVersion, 'CONTRACT_ORG');
    
    	#START MONTH
    	sTransitionMethod = attrs('Contracts',vContract,'transitionMethod');
    	if(sTransitionMethod @= 'Simplified 2' % sTransitionMethod @='');
    
    		nStartdate =  attrn('period', cellgets(sSourceCube, vContract, vCompany, vVersion, 'DATE_START'), 'Periodnumber');
    		nInitialstart = cellgetn('Sys.Assumptions Central','Start Period', 'number');
    		nPassedPeriodSinceInit = 0;
    		if(nStartDate < nInitialStart);
    			nPassedPeriodSinceInit = nInitialstart - nStartDate;
    		endif;
    		nStartDate = if(nStartDate < nInitialStart, nInitialStart, nStartDate);
    		nLength =  cellgetn(sSourceCube, vContract, vCompany, vVersion, 'Contract lenght (Months)');
    
    	elseif(sTransitionMethod @= 'Simplified 1' );
    		nPassedPeriodSinceInit = 0;
    		nStartdate =  attrn('period', cellgets(sSourceCube, vContract, vCompany, vVersion, 'DATE_START'), 'Periodnumber');
    		nLength =  cellgetn(sSourceCube, vContract, vCompany, vVersion, 'Contract lenght (Months)');
    	endif;
    
    	# PERIOD IS LOCKED
    	# Dont allow contracts that has passed a locked/reported period
    	ncheckDate = if(attrs('Contracts', vContract, 'modificationAsOfPeriod') @<>'', attrn('Period', attrs('Contracts', vContract, 'modificationAsOfPeriod') ,'periodNumber'), nStartDate);
    	if(cellgetn('Sys.Assumptions', 'Period Lock' ,pVersion , numbertostring(nCheckDate) , 'No Country', 'number') = 1);
    		asciioutput(sLogDirectory | sLogFile,'--------'| vContract |'---------');
    		asciioutput(sLogDirectory | sLogFile,'------- Period is locked-------');
    		itemskip;
    	endif;
    
    	# MODIFIED CONTRACTS,
    	# Modifed contracts have rules on them on periods prior to modificationAsOfPeriod. Increase the start period to match modificationAsOfPeriod
    	# Set the isModified flag on those passed periods. (which is the only measure exempt from rule). Use that flag to feed other calcs.
    	vModifiedDate = attrs('Contracts', vContract, 'modificationAsOfPeriod');
    	nPassedPeriodsForModifiedContract = 0;
    	if( vModifiedDate @<> '');
    		nPassedPeriodsForModifiedContract = attrn('Period', vModifiedDate , 'periodnumber') - nStartdate;
    		a = nStartDate;
    		while( a <= nPassedPeriodsForModifiedContract + nStartDate  - 1);
    			cellputn(1, sTargetCube,'CC', vContract, tVersion, numbertostring(a), vDim1, vDim2, vCompany, vAccounting, 'isModified' );
    			a = a +1;
    		end;
    	endif;
    
    
    	# ORIGINAL TERM
    	# update 1.10. Add a measure to count the original term. Can be either shorter or longer than the effective term
    	nOrgStartdate =  attrn('period', cellgets(sSourceCube, vContract, vCompany, vVersion, 'DATE_START'), 'Periodnumber');
    	nOrgEnddate =  attrn('period', cellgets(sSourceCube, vContract, vCompany, vVersion, 'DATE_CONTRACT_END'), 'Periodnumber');
    	while(nOrgStartdate <= nOrgEnddate );
    		if(cellisupdateable(sTargetCube,'CC', vContract, tVersion, numbertostring(nOrgStartdate ), vDim1, vDim2, vCompany, vAccounting, 'Count Original Term') = 1 );
    			cellputn(1, sTargetCube,'CC', vContract, tVersion, numbertostring(nOrgStartdate ), vDim1, vDim2, vCompany, vAccounting, 'Count Original Term');
    		endif;
    		nOrgStartDate = nOrgStartdate +1;
    	end;
    
    
    	# VARIABLES
    	nRate 		= cellgetn(sSourceCube, vContract, vCompany, vVersion, 'DISCOUNT_RATE_MONTHLY');
    	nContractValue	= cellgetn(sSourceCube, vContract, vCompany, vVersion, 'Contract Value');
    	nIndexRate 	= cellgetn(sSourceCube, vContract, vCompany, vVersion,'INDEX_Y_CHANGE_PERCENT');	
    	nFreq		= cellgetn(sSourceCube, vContract, vCompany, vVersion,'PYMT_FREQUENCY');	
    	nPrepaid		= cellgetn(sSourceCube, vContract, vCompany, vVersion,'PYMT_IS_PREPAID');	
    	nFreeMonths 	= cellgetn(sSourceCube, vContract, vCompany, vVersion,'NUMBER_FREE_MONTHS' );
    	nVAT	 	= cellgetn(sSourceCube, vContract, vCompany, vVersion,'VAT_RATE' ) + 1;
    	nPriorFraction	= cellgetn(sSourceCube, vContract, vCompany, vVersion,'PRIOR_PAYMENT_FRACTION' ) ;
    	nFreq = if( nFreq = 0, 1, nFreq );
    
    
    
    	j =  nStartDate+ nPassedPeriodsForModifiedContract ;			
    	k = nstartDate + nLength - nPassedPeriodSinceInit ;
    	mn = 1 + nPassedPeriodsForModifiedContract + nPassedPeriodSinceInit ;
    	mnFreq = mn;
    
    	# FREE MONTHS 
    	# According to IAS17 transition rules, free months before 1901 should adjust all payments. Find how many periods were free prior to 1901
    	# This does not work when payments are also itemized over P1 usage
    	#Update v1.091 rev2 - 9 Jan 2019 - updated the below nPymt of free periods to calculate right for all payment frequencies.
    	#3.6.2020/NM: Calculate FreeMonthsBeforeInit to only include free months that take place before IFRS16 initialisation.
    	IF(nPassedPeriodSinceInit >= nFreeMonths);
    	  nFreeMonthsBeforeInit = nFreeMonths;
    	ELSE;
    	  nFreeMonthsBeforeInit = nPassedPeriodSinceInit;
    	ENDIF;
    	#nFreeMonthsBeforeInit = nFreeMonths;
    
    	#18.6.2020/NM: Changed nFreeMonthsBeforeInit to nFreeMonths in below formulas.
    	nPymtValueOfFreePeriods = cellgetn(sSourceCube, vContract, vCompany, vVersion, 'MONTHLY_PYMT' ) * nFreeMonths / nFreq;
    	if( cellgetn(sSourceCube, vContract, vCompany, vVersion, 'P1 - Contract Monthly Payment') <> 0 );
    		nPymtValueOfFreePeriods = cellgetn(sSourceCube, vContract, vCompany, vVersion,'P1 - Contract Monthly Payment' ) * nFreeMonths / nFreq;
    	endif;
    	
    	nFreeMonths = nFreeMonths - nPassedPeriodSinceInit;
    	cellputn(nPymtValueOfFreePeriods, sTargetCube,'CC', vContract, tVersion, numbertostring(j), vDim1, vDim2, vCompany, vAccounting, 'VALUE_OF_FREE_PERIODS_RECEIVED');
    
    	#3.6.2020/NM: No point in adding measures here - as these are already rule referenced and must be in place.
    	#dimensionelementinsertdirect('IFRS16 Measures', '', 'NUMBER OF FREE PERIODS BEFORE INIT', 'n');
    	#3.12.2019/NM: Changed "nPassedPeriodSinceInit" to "nFreeMonthsBeforeInit".
    	cellputn(nFreeMonthsBeforeInit, sTargetCube,'CC', vContract, tVersion, numbertostring(j), vDim1, vDim2, vCompany, vAccounting, 'NUMBER OF FREE PERIODS BEFORE INIT');​


    I can't help but believe that the processes you are talking about are of the simplest kind and will probably never change again.
    For any real-world TI process, for instance view-based, you are each time going to swap in real variable names - IMHO.
    Hence I see no benefit. Even more, for the Metadata tab, you do the work twice with the variables.
    The disadvantage of "The view does not exist" is easily circumvented by creating a "ZZZ_Tmp" view containing 1 cell and dimensions ordered like the cube order. It can be easily scripted if you don't want to do it manually. Opening a TI process when that message comes up: it's easy to dismiss the message box, in my opinion.

    What do co-workers and former co-workers or customers think of the approach?

    In Architect/Perspectives, have you never sorted the TI processes by datasource type in the Properties pane?

    I don't say that I never use this approach, but only for the occasional TI process where I cannot know the incoming variables upfront. When I do know that information, it's pointless to use 50 spare variables.

    Apologies to the OP for hijacking the thread but this way of working intrigues me.



    ------------------------------
    Wim Gielis
    ------------------------------



  • 16.  RE: Sharing data between TM1 servers

    Posted Mon May 02, 2022 03:31 PM

    Hi Wim, that's a lot of code :)

    Anyway, using my approach it is just a few lines at the beginning:

    vContract = V1;
    vCompany = V2;
    vVersion = V3;
    # more if needed​
    # all other code stays as is


    For any real-world TI process, for instance view-based, you are each time going to swap in real variable names - IMHO.
    Why? Only if you change the number of dimensions in a cube or the order of the dimensions, but in that case you will have to modify the process no matter what approach you use.

    What do co-workers and former co-workers or customers think of the approach ?
    They love it! We have many file uploads where we use my generic process, which can:
    • export data from Excel to a TXT file (even without installing Excel on the TM1 server)
    • transform the file (add/change columns, add/replace values, transpose rows/columns)
    • detect the file locale
    • load the data into a specified TM1 cube
    I don't say this approach is the best, but I've been using it a lot. It just depends on your particular needs and preferences.
    In the end those are just the variable names... The process logic is what we focus on
    Let's agree to disagree here :)

    ------------------------------
    Vlad Didenko
    Founder at Succeedium
    TeamOne Google Sheets add-on for IBM Planning Analytics / TM1
    https://succeedium.com/teamone/
    ------------------------------



  • 17.  RE: Sharing data between TM1 servers

    Posted Thu September 21, 2023 02:59 PM

    I usually use the template cube approach as well, and I have another cube, }SysCubesDims (with dimensions }Cubes, }DimensionType, }SysCubesDims_m).

    Every dimension is mapped to a dimension type (Account, Org Unit, Cost Center, Product, Customer, Version, Time Period, Measure, etc.).

    One of the measures in this cube is populated with the position of that dimension in the cube.

    Then in the Metadata / Data tab, I have a few lines of code (using the EXPAND function) to assign the generic variable names to meaningful variable names.
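    Presumably something along these lines (the 'Position' measure and the 'Cost Center' and 'Version' type elements are my guesses at the naming; sSourceCube would hold the name of the cube being loaded):

    # Metadata/Data tab
    # Look up which generic variable (column position) holds each dimension type
    # for this cube, then read that variable by name via EXPAND.
    nPos = CellGetN( '}SysCubesDims', sSourceCube, 'Cost Center', 'Position' );
    elmCostCenter = EXPAND( '%V' | TRIM( STR( nPos, 3, 0 ) ) | '%' );

    nPos = CellGetN( '}SysCubesDims', sSourceCube, 'Version', 'Position' );
    elmVersion = EXPAND( '%V' | TRIM( STR( nPos, 3, 0 ) ) | '%' );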



    ------------------------------
    Ardian Alikaj
    ------------------------------



  • 18.  RE: Sharing data between TM1 servers

    Posted Sun May 01, 2022 10:21 AM

    Regarding some of my points here:
    I would like to see future enhancements planned within the PAW environment rather than having to rely on some workarounds or third-party solutions. 
    Nevertheless, thank you all for your suggestions, ...very much appreciated :)

    At least one of those points (saving a drill-down/server view within PAW) is clearly a bug that needs to be fixed. I do not know where else to report those.

    The PAW modeling environment is very user-friendly and I am hopeful that the PAW development team will take this further, building in some object manipulation features like copying and renaming objects and enhancing the cube manipulation for adding and deleting dimensions in a cube. A built-in, simple DB connection between TM1 models would be very nice to have. Also, some feeder reports or help to simplify feeder manipulation would be much appreciated.

    Again, thank you all!



    ------------------------------
    Asgeir Thorgeirsson
    ------------------------------



  • 19.  RE: Sharing data between TM1 servers

    IBM Champion
    Posted Tue May 03, 2022 12:20 PM
    I use a placeholder/dummy cube for SQL queries, as this removes the risk of the ODBC name changing, and I use it for known tables/data sources so the variables can be set up against the dummy cube for each process and used in the TI. Any debugging is done within the process via AsciiOutput to the logging directory, to identify issues with the dynamic SQL query and with the data being output. I also use a similar process for Excel loads from prescribed templates, which works well providing the users don't go rogue with the templates.
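    For instance, a prolog of roughly this shape (the DSN, table, log path and pPeriod parameter are placeholders of mine):

    # Prolog
    # Build the dynamic SQL, log it for debugging, then point the dummy-cube
    # process at the ODBC source.
    sLogDir = '\\fileserver\tm1\logs\';
    sQuery = 'SELECT CostCenter, Account, Period, Amount FROM GL_ACTUALS WHERE Period = ''' | pPeriod | '''';
    AsciiOutput( sLogDir | 'sql_debug.txt', sQuery );
    DatasourceType = 'ODBC';
    DatasourceNameForServer = 'FinanceDW';
    DatasourceQuery = sQuery;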

    Regarding the original request: theoretically you could run a web service with REST API connectivity (to move data between servers) and embed that web page in a PAW book to achieve the solution directly within PAW. Not a perfect, or necessarily sustainable, solution, but one that could work.

    Again, it would be possible to build an application with the PAW API and REST API integrations, but you would be moving away from PAW as the chosen development/user interface, which I agree is not ideal.

    We are seeing an increase in requests to access source system APIs as opposed to ODBC connections, and we would welcome the ability to run JavaScript/Python directly within PAW.

    ------------------------------
    Edward Stuart
    ------------------------------