A Personal History with TM1: Remembering Processing Worksheets

By Ann-Grete Tan posted 13 days ago


My first experience with TM1 (now IBM Planning Analytics, or “PA”) development was in 1996, which was also the year that Applix acquired TM1 Software Corporation (formerly known as Sinper Corporation). At the time the first Windows NT version of the TM1 Server (“Server 6”) had just come out, but most of our clients were still using the Spreadsheet Connector 4.0 server, which ran on Novell NetWare, the leading network operating system of the day. Rules were a brand-new feature with this release, but a usable version of TurboIntegrator (TI) was still two years away. Even though the internet reports that Microsoft Excel was already the dominant spreadsheet by 1996, Lotus 1-2-3 was still very widely used among our client base, though not for much longer. As I recall, the Perspectives 2.0 add-in was available for Lotus 1-2-3 before there was one for Excel.

And Perspectives mattered because it was the only end-user interface available for development, reporting and modeling.

This was a good thing, because TM1 really lent itself to being owned by the business. Finance users who were comfortable working in spreadsheets could build and maintain dimensions, cubes, and calculations on their own, without any support from IT, which in those days rarely understood the agility that finance needed to react to business situations in near real time. Often the TM1 Server ran on a machine that was literally placed under the desk of the TM1 administrator in the Finance department, hidden from IT’s controlling eyes.

As many people know, cubes were originally called “tables” and TM1 was originally TM/1, or “Table Manager 1”. In the early days cubes could have a maximum of 16 dimensions, and all names were limited to 8 characters with no spaces, following the MS-DOS operating system conventions.

But some user interface elements from Perspectives 2.0 still look familiar, such as Edit Formula:

And even though rules were still written in spreadsheets, the “wizard” had already taken shape:

The next version of the Excel add-in was Perspectives 2.5, and with it came the cube viewer and subset editor that we more or less recognize from Architect today (albeit not for much longer):

Fun fact: the ∀ symbol (“forall”) was borrowed from mathematical logic to represent the “all” subset. Manny Perez, the Father of TM1, had a strong mathematical background, so ∀ must have seemed an obvious choice for that icon!

TM1 Processing Worksheets

Before we had TurboIntegrator (TI), bulk data loads to TM1 were done using TM1 Processing Worksheets, which I still consider one of the most elegant and business-user-friendly mechanisms I have ever seen. Our clients relied on TM1 Processing Worksheets well into the 2000s for two reasons:

  • TI scripting required a technical skillset that the average finance person did not have (this was before business and accounting students routinely took Python and similar programming classes in college)
  • For the first few years after TI was released, it was sold as an add-on to the core product, which slowed down adoption until its value proposition was established.

TM1 Processing Worksheets were built around the concept of DBS (Database-Send) formulas, each of which takes one value and sends it to a specified intersection in a cube. In those days you had to specify the dimensionality of the cube in your DBS (and DBR) formulas, so you would be using anything from DBS2 to DBS16 depending on the target cube!
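
For a cube with N dimensions, a DBSn formula followed this general shape (a sketch from memory rather than exact syntax; the element arguments had to appear in the cube’s dimension order):

    =DBSn(value, "server:cube", "element1", "element2", …, "elementN")

The matching DBRn formula took the same cube and element arguments, but returned the value at that intersection instead of writing one.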

Our Sample Data Source

Suppose you have a simple general ledger extract in comma-separated values (CSV) format that looks like this, where the Account String segments represent the GL Account (e.g. 4000), the Subsidiary (e.g. 05 = “Cadence 90 China”), and the Department (e.g. 100):
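
For illustration only, a couple of rows of such an extract might look like the following; the delimiter in the account string, the period column, and the amounts are all made up for this example:

    Account String,Period,Amount
    4000-05-100,Jan,125000
    4010-05-100,Jan,-8750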

These data are to be loaded into the Actual version, Baseline scenario, Local currency, and Value measure of the Financial Summary cube, which has the following dimensions:

For the purposes of this data load, Region is derived from the Subsidiary by looking up an attribute of the Subsidiaries dimension.
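
In Perspectives worksheet terms, that kind of lookup can be written with a DBRA formula. Something like the line below would do it, assuming the TM1 server is registered as sdata and the attribute is named Region (both names are illustrative):

    =DBRA("sdata:Subsidiaries", "05", "Region")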

Building the Processing Worksheet

Now that we know how each column of the CSV extract is to be mapped to a cell in the Financial Summary cube, we can construct our Processing Worksheet. We begin by pasting the first data row of the extract (highlighted below) into the first row of a blank worksheet. Then we fill in everything we need elsewhere on the spreadsheet (anywhere BUT row 1) to build our DB Send (DBS9) formula:

Apart from the hardcoded assumptions for Version, Scenario, Currency and Measure, all the other dimension references are derived from the values on Row 1 using Excel or TM1 formulas.
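
As a rough sketch of how this fits together (the cell addresses, the account-string parsing, the server name sdata, and the dimension order of Financial Summary are all illustrative):

    Row 1 (pasted source record):   A1: 4000-05-100    B1: Jan    C1: 125000

    Helper cells elsewhere on the sheet:
    A3: =LEFT(A1,4)                                GL Account, e.g. 4000
    B3: =MID(A1,6,2)                               Subsidiary, e.g. 05
    C3: =RIGHT(A1,3)                               Department, e.g. 100
    D3: =DBRA("sdata:Subsidiaries",B3,"Region")    Region, looked up from the Subsidiary

    The send formula:
    A5: =DBS9(C1,"sdata:Financial Summary","Actual","Baseline",A3,B3,C3,D3,B1,"Local","Value")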

Using the Processing Worksheet to Load Data

Once the processing worksheet had been built, all that was needed was to set an import in motion. This was done using an entry under the Perspectives TM1 menu called Import/Export → Process Data. From here you could select Flat File, point at your source file, and then sit back!

The Process Data mechanism would loop through the source file, reading each record one by one (this is similar to how TI loops through the data source in its Metadata and Data sections). On each pass, the record was pasted into row 1 of the processing worksheet, and then the sheet was recalculated to fire the DBS formula(s). Here’s how it would look on the second record (though in reality the screen was only refreshed every 100 records, to speed things up):

Considering that this was client-side processing, it was fast! A typical general ledger trial balance load would be done in a couple of seconds.

Processing worksheets could be reused over and over again for the same data sources, and they were very flexible, allowing for tremendous creativity. Cube views could also be used as data sources instead of flat files.

All the following were commonly done:

  • Cumulative data loads (by adding a DBR formula to pull the prior value; see the sketch after this list)
  • Clearing data (by sending 0s)
  • Updates of multiple values at the same time (e.g. imagine 12 months of data on the same row, or a mix of attribute values and data values)
  • Mappings using VLOOKUPs or HLOOKUPs (lookup tables could be stored on other sheets in the same workbook)
  • Batch calculations
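
To illustrate the cumulative case, the prior value can be read with a DBR9 into a helper cell and added to the incoming amount inside the DBS9 (same caveats as before about cell addresses and dimension order):

    E3: =DBR9("sdata:Financial Summary","Actual","Baseline",A3,B3,C3,D3,B1,"Local","Value")
    A5: =DBS9(C1+E3,"sdata:Financial Summary","Actual","Baseline",A3,B3,C3,D3,B1,"Local","Value")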

All this, and no scripting skills were required! It truly put a lot of control in the hands of business users.

The Advent of TurboIntegrator

The first version of TI was released with TM1 Server 7, but it was not a stable product until TM1 Server 8 came out a year or so later. Though it took a while for TI to gain traction, it opened a whole new world of capabilities and possibilities, including:

  • Super-fast server-side processing, which leveraged more powerful server hardware and eliminated network latency between the client and the server
  • Automated chore scheduling
  • Automated metadata creation and maintenance
  • Dynamic view and subset creation
  • And more!
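
For comparison with the processing worksheet above, the Data tab of a TI process doing the same load might contain something like the lines below. The variable names, the attribute name, and the dimension order are illustrative, not taken from a real model:

    # vAccountString, vPeriod and vAmount are the variables assigned to the
    # three source columns on the Variables tab
    sAccount    = SUBST(vAccountString, 1, 4);
    sSubsidiary = SUBST(vAccountString, 6, 2);
    sDepartment = SUBST(vAccountString, 9, 3);
    sRegion     = ATTRS('Subsidiaries', sSubsidiary, 'Region');
    CellPutN(vAmount, 'Financial Summary',
        'Actual', 'Baseline', sAccount, sSubsidiary, sDepartment,
        sRegion, vPeriod, 'Local', 'Value');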

There’s no question that TI was a game-changer that enabled TM1 to rise to the challenge of solving large-scale modeling problems at the enterprise level.

But reflecting on TM1 Processing Worksheets reminds us that TM1’s ultimate customers are business users, who can be affected by rapid changes in the business climate (think Pandemic! Climate disaster! Bridge collapse! New CFO!), and who need the ability to explore the impact of sudden changes on their business with immediacy.

At such times they will gravitate towards tools that they control, because there is simply no time to explain requirements to a developer. The TM1 of 1996 was certainly such a tool. I hope that as IBM Planning Analytics continues to add ever more technically sophisticated features, it doesn’t lose sight of its ultimate customers and what they need.

#IBMChampions #planninganalytics



Comments

12 days ago

Thanks for the trip down memory lane AG!

Recall some nightmares with Novell and loading those network drivers in DOS then Windows via batch files etc. even before you could get to TM1.

Things are definitely much better now and many things are more seamless, but not quite there yet as we eagerly await JSON handling, arrays, dictionaries, and other "tools" in Turbo Integrator and the rest of the product to keep PA ahead of the pack.