Not an administrator and need to see what is in those TM1 Process Error logs?
Need to preview other files in your cloud environment?
No doubt you answered yes, so this article is for you!
In one of our client environments, Custodians who are not administrators need access to TM1 Process Error logs. They also need to review other files in the logs, import and backup folders. Without access to the rich environment, PAW administration or FTP, they were reliant on the administrator.
That has changed with a solution that relies on the undocumented ReturnCSVTableHandle function to make these files visible.
We created a solution in PAW with a book to show available files to users.
Each tab in our book deals with a particular requirement e.g. review process errors, import files, backup logs etc. For each tab a dimension was created to store the list of files found. Processes were created to clear and populate the dimension based on a wildcard file search. We also added attributes to allow users to flag to delete or email the relevant files to themselves.
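The clear-and-populate step can be sketched in TurboIntegrator roughly as follows. The dimension name and the log path mask are assumptions for illustration; substitute your own.

```
# Prolog sketch: clear the file-list dimension, then repopulate it
# from a wildcard file search. Names and paths are examples only.
sDim  = 'S-File List-Process Errors';
sMask = '..\Logs\TM1ProcessError*';
DimensionDeleteAllElements( sDim );
sFile = WildcardFileSearch( sMask, '' );
WHILE( sFile @<> '' );
  # Each file found becomes a string element in the dimension
  DimensionElementInsert( sDim, '', sFile, 'S' );
  sFile = WildcardFileSearch( sMask, sFile );
END;
```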
After configuring each tab, we created a drill process on the }ElementAttributes cubes for each dimension. We then configured the drill process to use the filename being passed during the drill as the datasource name for the server.
The critical piece of the puzzle is the ReturnCSVTableHandle function that we add into the Epilog. After jumping through a couple of hoops, users are able to right-click and drill to the TM1ProcessError logs or other files configured for drill. These are viewable in a PAW window and can be downloaded via the browser.
This is a great workaround for users who otherwise cannot see the process errors, and it also lets them review other files, e.g. import files, before uploading them.
Hope this will be an exciting addition to your company or your client's solutions!
Some notes and disclaimers:
I have only tested this with PAW on the cloud, not local.
The drill to CSV does not work in Architect/Perspectives, only PAW for now.
PAW does not recognise the ReturnCSVTableHandle function – you need to edit the process in the rich environment using Architect.
As with all data, consider who has access to these PAW books to avoid disclosure of possible sensitive information in error logs etc.
The Configuration Process
Create a dimension in which to list the files found by your process that populates using a wildcard file search
Add any attributes you may want to use later. You need to add at least one so that a view can be built on the }ElementAttributes cube.
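For example, the flag attributes mentioned earlier could be created once with something like the following. The attribute names 'Delete' and 'Email' are illustrative, not prescribed.

```
# One-off setup: add string attributes users can flag per file.
# 'Delete' and 'Email' are example names only.
AttrInsert( 'S-File List-Process Errors', '', 'Delete', 'S' );
AttrInsert( 'S-File List-Process Errors', '', 'Email', 'S' );
```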
PAW File management book
Create a PAW book for your file lists to be displayed and drilled through to.
In my example, I added the }ElementAttributes cube for my Process Errors dimension as an exploration.
I had the filenames on the rows and the attributes on columns.
Configure the Drill Assignment
Create the Drill Assignment on the relevant cube; in my case }ElementAttributes__S-File List-Process Errors with:
['CubeDrillString']=S:'Drill to Process Errors';
Create the Drill Process
Leave the details on the first screen for cubename and dimensions as is.
On the next step, set the Datasource Type to Other then Launch TurboIntegrator.
Go to Advanced then Parameters and change the dimension for your file list to a variable like pFilename.
Go to the Epilog and add ReturnCSVTableHandle; after the generated statements.
Save the process with the name you used in your drill assignment e.g. Drill to Process Errors
Close the process.
Create a dummy datasource
You will need a sample/dummy file to act as the initial datasource.
Create a text file or CSV file then upload this to the model_upload folder on the server.
Note: If using a CSV, the process looks at the first record to determine how many columns are present and thus how many variables are required. Configure accordingly, or set the DatasourceASCIIDelimiter to something like Char(1) to avoid splitting into columns. This is useful for TM1ProcessError logs, where you may not want to split into columns.
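In TurboIntegrator terms, the delimiter override is a single Prolog line. Char(1) is simply an unlikely character; any character that never appears in the files would do.

```
# Prolog: use a delimiter that never occurs in the file so each
# line arrives as a single column instead of being split.
DatasourceASCIIDelimiter = Char(1);
```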
Configure the Datasource
Open the process again in Architect
Change the Datasource Type to text
Update the Data Source Name and Data Source Name on Server to the sample file you just uploaded. In my case I added the following:
..\model_upload\Drill to Process Errors.txt
Save and close the process. Expect a flood of error messages; just OK all of them.
Reopen the process and ignore any further error messages.
Go to Advanced then Prolog.
Update the Prolog to build the datasource name on the server from the parameter (add the required delimiter, or omit the delimiter to keep the CSV columns):
DatasourceNameForServer='..\Logs\' | pFilename;
Save the process and close.
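Putting it together, the relevant parts of the drill process end up looking roughly like this. The Logs path and the pFilename parameter follow the examples above; adjust to your environment.

```
# --- Prolog ---
# Point the datasource at the file the user drilled on.
DatasourceNameForServer = '..\Logs\' | pFilename;
# Optional: suppress column splitting for plain log files.
DatasourceASCIIDelimiter = Char(1);

# --- Epilog ---
# After the generated drill statements, return the file to PAW
# as a downloadable CSV table (undocumented function).
ReturnCSVTableHandle;
```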
Testing in PAW
Open your book or refresh if already open.
Right click on one of the cells in the exploration then drill and select the process you created.
You should see your file displayed in a new window with an option to download.
Congratulations if you successfully navigated all the hoops and hurdles and got it working!
If you did not, it is typically a path problem with the source file: check the necessary slashes, and write the constructed path out to a text file to confirm it.
Share the book with the relevant users and let them review Process Errors and other files without the need to ask an administrator or someone with FTP access to the server.
Hoping that in the future this configuration will be made simpler and possible entirely within PAW.
Please send me any feedback on this, especially if you spot any mistakes, so that I can correct them.
Edit: The drills work via PAfE as well, but you would need to ensure that the relevant sources are available and the cubes updated.