The strict three-layer approach is not necessary, although it should work, with caveats: I don't know exactly what you have done, and, from your description, it is at variance with how I mapped a database, which had a known solution in an FM model, to an analogous model in CA modelling.
I am not immediately able to envisage how you have implemented your model, and I would be interested to learn more. It is possible that you have performed actions which are not actually necessary. The same may be true of the relationship problems you report. I could perhaps provide assistance in a private thread.
I've been working, off and on, on a document about the mapping between modelling in FM and the CA modelling tool, so I think I need to take some time to finish it off and publish it Real Soon Now.
Here are some things from it. The style is still a bit rough.
Your approach to modelling should be governed by an understanding of what the modelling actions represent and why they exist, and less so by an exact mimicry of an FM model.
The query subjects which are imported can be understood as analogous to the query-layer objects.
The primary purpose of a model is to describe the data sources with a query plan in which any object used in a query has one predictable path to the other objects, so that the generated SQL is always the same and you always get predictable results.
You achieve that by creating a structure which is a virtual normalization of the metadata, with all the fact grains identified.
Here is a picture of a CA data module model of the Cognos Great Outdoors warehouse sample database. There is an analogous FM model.
What I have done is to identify my dimensions and any database tables which have role-playing. I have created an alias for the region table, as it is used in both the employee and retailer dimensions. You will notice that the dimensions only have relationships between the related tables of the dimension. The tables of each dimension have been put into a view. Each view is, in effect, a dimension table: where the dimension is a snowflake, the lookups are combined with the other tables of the dimension. Each view has relationships to the relevant fact tables.
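In plain SQL terms, an alias for a role-playing table is simply a second reference to the same physical table, so that each role has its own unambiguous join path. A rough sketch of the idea follows; the table and column names are invented for illustration and are not the actual Great Outdoors schema:

```sql
-- Hypothetical names for illustration only.
-- Role 1: region as part of the employee dimension.
CREATE VIEW employee_dim AS
SELECT e.employee_key, e.employee_name, r.region_name
FROM   employee e
JOIN   region   r  ON r.region_key  = e.region_key;

-- Role 2: the same region table playing a second role for retailers.
-- In the module this second reference is the alias.
CREATE VIEW retailer_dim AS
SELECT rt.retailer_key, rt.retailer_name, r2.region_name
FROM   retailer rt
JOIN   region   r2 ON r2.region_key = rt.region_key;
```

In a data module the alias is created in the UI rather than in SQL, but the effect on the generated query is the same: region participates in each dimension through exactly one path.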
The views have column dependency defined, which normalizes the grains of dimension detail.
You will see that, on the left side and along the bottom, there are series of tables for several dimensions.
On the right is a star of several fact tables, stacked on top of each other for clarity, surrounded by the views which are the dimensions of this model.
In this case, the sales fact table is selected, and the dimensions to which it can have a relationship (time, time ship date, order method, products, employee, promotions, retailers, and organization) are also highlighted, as are the relationships.
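The point of flattening each snowflake into a view is that a query against the sales fact then needs exactly one join per dimension. A simplified, hypothetical sketch of the kind of SQL the model is meant to produce (all names invented for illustration):

```sql
-- Each dimension view presents a flattened snowflake, so the fact
-- joins to one object per dimension with a single predictable path.
SELECT p.product_line,
       t.year,
       SUM(f.sale_total) AS sale_total
FROM   sales_fact  f
JOIN   product_dim p ON p.product_key = f.product_key
JOIN   time_dim    t ON t.day_key     = f.order_day_key
GROUP  BY p.product_line, t.year;
```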
When you model the query layer/business layer/presentation layer in FM, you are performing actions to accomplish the same objectives: to lay out a query plan which has no ambiguity about the relationships of any object in any role, and to normalize the metadata.
There is no presentation-layer functionality. I think it is one of the gaps between FM and MUI in the gaps doc. I am not in a position to give guidance about future functionality, but I would draw your attention to the statement about addressing gaps which Omar makes in this posting.
https://www.ibm.com/docs/en/cognos-analytics/11.1.0?topic=dmfm-framework-manager-features-not-supported-by-data-modules
https://community.ibm.com/community/user/businessanalytics/blogs/mohammed-omar-khan/2020/07/30/retail-calendar-and-fm-vs-data-module-gaps-whats-n
You can sort of fake it with copies in folders, with the rest of the module hidden from users, but keep in mind that a copy does not update itself when a change is made to the source object, and there is no way to refresh it. You would need to re-create the copy with the same identifier as the previous version of the copy.
For more information about identifiers, in addition to our documentation there is this:
Identifiers and their use
https://community.ibm.com/community/user/businessanalytics/blogs/ian-henderson1/2020/12/21/identifiers-and-their-use
Hope that helps
Ian
------------------------------
IAN HENDERSON
------------------------------
Original Message:
Sent: Tue September 28, 2021 12:18 PM
From: Jorge Moura
Subject: Data Module Limitations / Best Practices
Hi Buddhi, your recommendations are very useful.
I have some questions for you:
- We are using this Database Layer DM / Reporting Layer DM approach, with the first as the source of the second. But we need to rename columns to business names in the Database Layer because renaming is not available in the Reporting Layer. Is that correct? We are on version 11.1.2.
- Where is the best place to put calculations? Database or Reporting Layers?
- When we deploy the Reporting Layer DM from DEV to PROD, the links to all dashboards are broken and we need to relink them manually. Do you know if this has been solved in newer versions? To avoid this issue we are redoing all changes directly in PROD.
Tks
Jorge Nelson
------------------------------
Jorge Moura
Original Message:
Sent: Tue August 11, 2020 02:31 PM
From: Buddhi Gupta
Subject: Data Module Limitations / Best Practices
Some inputs which may help:
- You can add a table to an existing view by using a calculation and dragging fields from the other table.
- Prompt macros don't render in a Data Module; however, such queries can be embedded in a Data Module and used in reports.
- Use the Usage and Item List properties to define the optimum SQL to be sent to the database. If a table has many columns, select "Used" to send only the required columns. With Usage, define a bridge table to get fact-to-fact joins, the same as in FM. You can change it to Summary to reduce the data volume being returned.
- Set the Data Caching property for infrequently updated query subjects.
- For synonyms, you may try the native SQL approach to generate a query subject in the Data Module, provided access is granted.
- You can always use the Try It feature to troubleshoot unknown errors.
- Keep the Data Module size small (in prior versions we had performance issues with large Data Modules). You can use a two-layer approach: database-layer data modules, and reporting-layer data modules based on the database-layer data modules.
- Expose only the reporting/self-service DM to users and authors, with the required query subjects exposed. Use the database-layer data module to hold all native SQL, Excel, and data server tables, plus common filters.
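On the fact-to-fact point above: when two facts of different grains are queried through a conformed dimension, Cognos generates a stitched query rather than a direct fact-to-fact join. A simplified sketch of that pattern, with all table and column names invented for illustration:

```sql
-- Simplified stitch-query pattern (hypothetical names).
-- Each fact is aggregated to the shared conformed grain first,
-- then the two result sets are stitched on the conformed key.
SELECT COALESCE(s.product_key, i.product_key) AS product_key,
       s.sale_total,
       i.inventory_total
FROM  (SELECT product_key, SUM(sale_total) AS sale_total
       FROM   sales_fact
       GROUP  BY product_key) s
FULL OUTER JOIN
      (SELECT product_key, SUM(quantity)   AS inventory_total
       FROM   inventory_fact
       GROUP  BY product_key) i
ON     s.product_key = i.product_key;
```

Aggregating each fact to the conformed grain before stitching is what prevents the double-counting (fan-out) that a direct join between facts of different grains would cause.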
------------------------------
Buddhi Gupta
Original Message:
Sent: Thu August 06, 2020 05:58 PM
From: Jeremy Aude
Subject: Data Module Limitations / Best Practices
Hi,
In addition to the data modeling documentation, are there any recommended materials on best practices and limitations when using advanced features in data modules? Below are just a few examples I have run across recently that would have been helpful to know ahead of time.
For example,
- Cannot add or remove tables to a view
- Cannot import synonyms from Oracle
- When to consider modifying the item list properties of a table
- How to troubleshoot "unknown errors" before having to open a support ticket
I don't think the first two examples are unique to our organization, so it would be beneficial to know these things in advance.
Thank you.
------------------------------
Jeremy Aude
------------------------------
#CognosAnalyticswithWatson