Database procedures for more robust data processing in Planning Analytics

By Pål Risa Zachariassen posted Tue October 15, 2024 05:14 PM

  

Background

I have worked with ETL and data warehouses for many years. With this experience, it is natural to use the database to reduce the amount of logic in Planning Analytics. I have taken over solutions with long chains of information inside Planning Analytics that can be very demanding to untangle and very difficult to troubleshoot if some information has become incorrect along the way. My examples use Oracle, but the same can be done with other technologies. I will talk about this topic in a Tech Byte session at TechXchange 2024 - Cohesive planning of large construction projects, how can project and group management collaborate [2569]:

 https://reg.tools.ibm.com/flow/ibm/techxchange24/sessioncatalog/page/sessioncatalog?tab.sessioncatalogtabs=option_1601178495160&search=2569

Connecting to a stored procedure in a TI process

Planning Analytics can run any SQL against a database. This is done through an ODBC connection, and how to set this up is well documented. Below is an example of what an Epilog could look like.
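As a minimal sketch (the data source name 'OracleDWH' and the credentials are assumptions for illustration, not from the original solution), an Epilog that calls the stored procedure from Example 01 could look like this:

# Epilog: call the database procedure after the export has finished.
# 'OracleDWH' is a hypothetical ODBC data source name; adjust to your setup.
sODBCSource = 'OracleDWH';
ODBCOpen( sODBCSource, 'TM1_USER', 'password' );
# Run the stored procedure from Example 01 on the database side
ODBCOutput( sODBCSource, '{call TM1.P_DEMO_UPDATE_INSERT()}' );
ODBCClose( sODBCSource );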

  

Information flow in Planning Analytics

A forecast process consists of many steps. A prerequisite for using this solution is that there are separate steps that can be solved with a process and not a rule. This is an example of my process steps, from the start of data collection through to the group forecast reported to the board.

Very often I have seen both rules and processes moving data between many cubes. This is easy to build, but it can be difficult to maintain and debug. If there are questions like "what was transferred during the night" or "what has changed since yesterday", it is challenging to answer them by going through logs.

 

In order to use what the database is good at, we have designed the flow like this:

In this scenario, a process is handled with Planning Analytics. Instead of building the whole chain within Planning Analytics, I export data to the database with a TI process. When doing this, I add database procedures in the Epilog to do different tasks, such as:

  •  Writing the data to a temp table first; if there are no records, the export is not copied to the export table and the data from the last export is kept (see the sketch after this list).
  •  After the export, we run a stored procedure that does an update/insert of the data. That means we never lose any data and we know when a value first occurred in Planning Analytics. This corresponds to SCD2 and SCD1 from Kimball's data warehouse methodology. I have enclosed an example built in Oracle: 01. Update/insert.
  •  When exporting data, we add a batch number, and the last 20 batches are kept in the database. This means we can see which numbers we transferred yesterday or the day before. We can also see on which day the numbers changed, to easily find when something went wrong. This is example 02. Insert into.
  •  Enriching the data for consolidations, mapping, or other uses. An export from Planning Analytics is often 6-8 columns; when exporting, we would normally increase this to 12-14 columns.
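The first bullet could be sketched like this (a minimal illustration; the names TM1.DEMO_EXPORT_TEMP, TM1.DEMO_EXPORT and TM1.P_DEMO_COPY_EXPORT are assumptions, not objects from the original solution):

CREATE OR REPLACE PROCEDURE TM1.P_DEMO_COPY_EXPORT AS
  LVN_ROWCOUNT INTEGER;
BEGIN
  -- Only replace the export table when the temp table actually received rows
  SELECT COUNT(*) INTO LVN_ROWCOUNT FROM TM1.DEMO_EXPORT_TEMP;
  IF LVN_ROWCOUNT > 0 THEN
    DELETE FROM TM1.DEMO_EXPORT;
    INSERT INTO TM1.DEMO_EXPORT SELECT * FROM TM1.DEMO_EXPORT_TEMP;
    COMMIT;
  END IF;
  -- If the temp table was empty, the data from the last export is kept
END;
/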

Example 01. Update/insert - this will create one new record for each day.

First, create the table:

create table TM1.DEMO_UPDATE_INSERT as
select sysdate        as FIELD_DATETIME,
       trunc(sysdate) as FIELD_DATE,
       '1'            as current_version_flg
from dual;

Then the procedure:

CREATE OR REPLACE EDITIONABLE PROCEDURE "TM1"."P_DEMO_UPDATE_INSERT" authid current_user is
begin
  -- Reset the current-version flag on all rows before the merge
  update TM1.DEMO_UPDATE_INSERT set current_version_flg = '0';

  -- Update today's row if it exists, otherwise insert a new row for today
  MERGE INTO TM1.DEMO_UPDATE_INSERT a
  USING (
    select sysdate as FIELD_DATETIME, trunc(sysdate) as FIELD_DATE, '1' as current_version_flg
    from dual
  ) b
  ON ( a.FIELD_DATE = b.FIELD_DATE )
  WHEN MATCHED THEN
    UPDATE SET a.FIELD_DATETIME      = b.FIELD_DATETIME,
               a.CURRENT_VERSION_FLG = b.CURRENT_VERSION_FLG
  WHEN NOT MATCHED THEN
    INSERT ( a.FIELD_DATE, a.FIELD_DATETIME, a.CURRENT_VERSION_FLG )
    VALUES ( b.FIELD_DATE, b.FIELD_DATETIME, b.CURRENT_VERSION_FLG );
end;
/

FIELD_DATETIME is SCD1 and FIELD_DATE is SCD2. Since the table is not used for facts, SCD keys and valid-from/valid-to dates are not added.
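To verify the behavior (a hypothetical check in SQL*Plus, using the objects defined above), run the procedure twice on the same day: the second run only updates FIELD_DATETIME on today's row, while a run on a new day inserts a new row.

exec TM1.P_DEMO_UPDATE_INSERT
select * from TM1.DEMO_UPDATE_INSERT order by FIELD_DATE;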

Example 02. Insert into - creating batch versions.

First, create the table:

create table TM1.DEMO_INSERT_INTO as
select sysdate        as FIELD_DATETIME,
       trunc(sysdate) as FIELD_DATE,
       '1'            as current_version_flg,
       1              as BATCHVERSION
from dual;

Then the procedure:

CREATE OR REPLACE EDITIONABLE PROCEDURE "TM1"."P_DEMO_INSERT_INTO"
AS
-- declare local variables
LVN_LAST_VERSION INTEGER;
LVN_DELETE_VERSION INTEGER;
BEGIN
  -- calculate local variables: the next batch number and the oldest batch to keep
  SELECT MAX(BATCHVERSION) + 1  INTO LVN_LAST_VERSION   FROM TM1.DEMO_INSERT_INTO;
  SELECT MAX(BATCHVERSION) - 20 INTO LVN_DELETE_VERSION FROM TM1.DEMO_INSERT_INTO;

  -- main: keep only the last 20 batches
  DELETE FROM TM1.DEMO_INSERT_INTO WHERE BATCHVERSION < LVN_DELETE_VERSION;
  COMMIT;

  -- flag all existing rows as no longer current
  UPDATE TM1.DEMO_INSERT_INTO SET current_version_flg = '0';
  COMMIT;

  -- insert the new batch as the current version
  INSERT INTO TM1.DEMO_INSERT_INTO ( FIELD_DATETIME, FIELD_DATE, current_version_flg, BATCHVERSION )
  SELECT sysdate, trunc(sysdate), '1', LVN_LAST_VERSION FROM dual;
  COMMIT;
END;
/

A new version is created for each export, and the latest batch will always have current_version_flg = '1'.
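As a usage sketch (a query written for illustration against the table above), comparing the two most recent batches shows what changed between the last two exports:

select FIELD_DATE, FIELD_DATETIME, BATCHVERSION, current_version_flg
from TM1.DEMO_INSERT_INTO
where BATCHVERSION >= (select max(BATCHVERSION) - 1 from TM1.DEMO_INSERT_INTO)
order by BATCHVERSION, FIELD_DATE;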

#IBMChampion
