Planning Analytics


Import IBM PAoC users from a CSV file - an Excel formula approach converting }ClientGroups cube data

By Wim Gielis posted Mon January 22, 2024 06:34 PM

Hello all,
As a new IBM Champion 2024, I am dedicated to serving the IBM PA Community even more than last year and the years before. This blog post will focus on how to generate an input CSV file to create PAW users in an IBM cloud model (both production and development environments). For those who might not know me, in the past I have also been recognized 4 times as a Microsoft Excel Most Valuable Professional. This means that I regularly bring those 2 worlds together in my solutions.
What we usually do is export the current PAW users as a CSV text file. There's an icon to do so in the Administration part of PAW. Next, we add new lines to the CSV text file and offer the new file for import. We can often start from another PAL/TM1 model (be it on-premises or similar). This implies that a manual dump from the }ClientGroups cube to Excel can be your starting point.
In the picture below, the left-hand side is an example of what we can generate or, as consultants, what the customer delivers to us.
IBM PAoC expects a certain columnar format for the new users. You can see which columns are needed by first exporting the existing users.
But basically, we need to rework the output on the left-hand side to what IBM PAW expects.
Expected columns are:
- Login Id
- First Name
- Last Name
- Role
- Environment
- Directive
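As an illustration, a finished import file could look like the fragment below (user details and role values are made up for the example):

```
Login Id,First Name,Last Name,Role,Environment,Directive
jdoe@example.com,John,Doe,Analyst,Production,ADD
jdoe@example.com,John,Doe,Analyst,Development,ADD
```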
The email address, first name and last name were already added in the dump of the }ClientGroups cube. They could also be stored as attributes of the }Clients dimension.
Column K lists the output that can be used as-is for an import in PAW:
The output above is made up of 3 parts, neatly stacked together with a formula:
- First, we have the headers. This is a hardcoded list as we should not make things more difficult.
- Second, we have a block of imports for the production environment. Background color is green.
- Third, we have a block of imports for the development environment. Background color is orange.
The Excel VSTACK() function does the trick of vertically stacking the 3 intermediate blocks.
The list of users that we need to bring in, is found with the help of the FILTER() function to filter out empty cells.
The directive is always the word "ADD". The directive "REMOVE" is possible too, but it is the opposite of what we are trying to achieve in this exercise.
I do like the use of the LET() function to define each component of the formula only once. Using descriptive variable names like "users" and "directive" is an added benefit, I find.
The real magic is in the BYROW() Lambda function. Basically, for every user "row", we generate a string by joining several pieces: the required columns are taken from the row, followed by the environment name and the directive.
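For readers who think in code rather than in formulas, the FILTER + BYROW + TEXTJOIN combination can be mirrored in a few lines of Python. This is only a sketch: the list of tuples stands in for the spreadsheet range D2:G171, and the user data and role values are invented for the example.

```python
# Each tuple mirrors one spreadsheet row: (Login Id, First Name, Last Name, Role).
rows = [
    ("jdoe@example.com", "John", "Doe", "Analyst"),
    ("", "", "", ""),  # empty row, dropped just like FILTER() drops it
    ("asmith@example.com", "Ann", "Smith", "Modeler"),
]

environment = "Production"  # plays the part of cell I2 in the workbook
directive = "ADD"           # always ADD in this exercise

# FILTER( D2:G171, D2:D171 <> "" ): keep rows whose first column is non-empty.
users = [r for r in rows if r[0] != ""]

# BYROW + TEXTJOIN: one comma-joined string per remaining user row.
lines = [",".join([*r, environment, directive]) for r in users]

for line in lines:
    print(line)
```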
The production and development environment names are taken as simple cell references.
What is nice is the Dynamic Array Formula (DAF) in cell K2, which spills down over as many cells as needed. How many cells do we expect in our output?
- 1 row for the titles
- plus 1 row for every user in the production environment
- plus 1 row for every user in the development environment
Obviously, in real projects, we will have many more users to import at once.
The above approach extends nicely to however many users you need. Did you spot the light blue border around K2:K22, indicating the Dynamic Array Formula?
You can simply copy the column K contents (K2:K...) and paste them into the CSV text file for import. For IBM PAW, the order of records is not important.
For ease of copy/paste, here is my formula in International format:
=LET(
    titles, "Login Id,First Name,Last Name,Role,Environment,Directive",
    users, FILTER( D2:G171, D2:D171 <> "" ),
    directive, "ADD",
    VSTACK( titles,
        BYROW( users, LAMBDA(row, TEXTJOIN( ",", , row, I2, directive )))))
Note that the formula here only targets the production environment. When uploading users, you cannot add a user to the development environment if that user is not yet active in the production environment. Hence, we store this half of the output in the CSV and upload it to PAW first.
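The same end result can of course be produced outside Excel altogether. Below is a minimal Python sketch of the full pipeline: it reads a headerless dump of the }ClientGroups data and writes the import lines for one environment. The file contents, column order and role values are illustrative assumptions, not something prescribed by PAW.

```python
import csv
import io

# Illustrative, headerless dump of the }ClientGroups data:
# Login Id, First Name, Last Name, Role (blank line mimics an empty sheet row).
dump = """jdoe@example.com,John,Doe,Analyst

asmith@example.com,Ann,Smith,Modeler
"""

HEADER = ["Login Id", "First Name", "Last Name", "Role", "Environment", "Directive"]

def build_import(dump_text, environment, directive="ADD"):
    """Stack a header row on top of one import line per non-empty user row."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    writer.writerow(HEADER)
    for row in csv.reader(io.StringIO(dump_text)):
        if row and row[0]:  # skip empty rows, like FILTER() in the sheet
            writer.writerow(row + [environment, directive])
    return out.getvalue()

print(build_import(dump, "Production"))
```

Running the function a second time with "Development" as the environment would give the second block of the stacked output, to be uploaded once the users exist in production.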