Apptio One Global Configuration Standards (2022) 

Tue September 13, 2022 05:19 AM

Below are the configuration standards and best practices for the following 4 categories:
  • Data Organization
  • Data Transformation
  • In Product Documentation
  • Configuration Guardrails

Required Data Organization Global Standards

For complete details, please see Data Organization details.

DO:

  • Use the categorization structure detailed below 
  • Use US spelling of names
  • Name the table in a way that is meaningful to the customer

  • Follow the appropriate naming convention for transforms listed below

  • Use a pre-master dataset when you need to append multiple tables together and apply common logic

DO NOT:

  • Call any datasets "Master Data"
  • Use special characters, dashes, reserved words
  • Use the word RAW in the title of initial data uploads

Data Transformation Global Standards

For complete details, please see Data Transformation details.

DO:

  • Use map columns to map data to the OOTB master data sets
  • Use mapped columns rather than append steps to populate OOTB master data sets
  • Apply all transform steps prior to map columns

DO NOT:

  • Change any columns, pipeline steps, formulas, or anything else in the master data set
  • Use formulas or hardcode values in the “Map your columns” window of the append step

Required In Product Documentation Global Standards

For complete details, please see In Product Documentation details.

DO:

Dataset

  • Set/pick Source System & Upload Frequency in the Upload Step
  • Fill in Dataset Owner, Available By & Summary as a minimum in the Source Step

Allocation Rules

  • Fill in Allocation Description, Allocation Maturity (Good/Better/Best) and Agreed By/On as a minimum

DO NOT:

  • Leave the In Product Documentation BLANK
  • Forget to get sign off from the customer on the allocation decisions

Configuration Guardrails 

For complete details, please see Configuration Guardrails details.

DO:

  • Check in working blocks of functionality with a comment associated with each check in
  • Action Project Errors: column type errors, table errors, parsing errors, and date partition errors
  • Verify the data type in the import step and update where necessary
  • Group data early when possible and note that a Unique Identifier does not have to be unique
  • Configure Object Identifiers on all modeled objects; do not use the row() or CurrentDate() functions, and do not include dates or numbers
  • Utilize report collections
  • Customize reports in place when necessary

DO NOT:

  • Use more than one browser tab in DEVELOPMENT
  • Use more than 4 levels of Nested Ifs
  • Use more than 1 LookupEx, SplitEx, or Lookup_Wild function per transform
  • Have more than 5 branches open
  • Use “Send only remaining values” more than once per object; any use should always be approved by a Tech Lead or Principal 
  • Use more than 10 pickers, slicers, or filters on a report
  • Create drills that take longer than 10 seconds to calculate unless designated ok by the customer

Additional detail on each of the above standards is outlined below.

Data Organization

Data Categories

Applying a consistent categorization structure helps end users (TBMAs, Support, Customer Success) more easily find the data they need and understand where it is being used.

The categorization structure below places the most frequently updated datasets at the top of Project Explorer, making them easier to locate, and groups the data into categories most likely to be of collective interest to specific users (e.g. Finance files are all grouped together and appear at the top): 

010 - Finance Inputs 

020 - Finance Transforms 

030 - Finance Mapping Inputs 

040 - Finance Mapping Transforms 

050 - Infrastructure Inputs 

060 - Infrastructure Transforms 

070 - Infrastructure Mapping Inputs 

080 - Infrastructure Mapping Transforms 

090 - Applications and Services Inputs 

100 - Applications and Services Transforms 

110 - Applications and Services Mapping Inputs 

120 - Applications and Services Mapping Transforms 

130 - BU Inputs 

140 - BU Transforms 

150 - BU Mapping Inputs 

160 - BU Mapping Transforms 

Additional categories can be added following the same numbering pattern. For each conceptual layer of the model in Apptio, there are 4 categories:

  • Two Inputs categories (one for files from source systems, and one for additional mapping or reference files)
  • Two corresponding transform categories

Input Categories

  • These should only contain tables whose files are loaded into the system directly by the TBMA team (either manually or through Datalink). This allows the TBMA team to easily identify the files they need to update on a regular basis. The Input tables can have transform steps within their transform pipeline. 
  • Mapping tables are deemed to be any manually created or reference data that helps to augment data in the system or drive allocations, for example a Chart of Accounts, a manually created Labor to Tower mapping file, etc. 

Transform Categories

The Transform categories are for tables whose data is not loaded by the TBMA Team either manually or through Datalink, but instead are based off another existing table within the Project Explorer, such as for datasets that require additional formula steps that cannot be executed within the Input tables transform pipeline. Input data should never be loaded into the Transform categories. 

Each category can be broken down to further categorize the data by replacing the right side 0 with a number. Each subcategory can align to the OOTB object that it corresponds to or can be custom depending on how much data is being inputted and what will be meaningful to the customer (such as creating a category for ServiceNow data). For example, the 010 – Finance Inputs can be broken down further: 

011 - Cost Source Inputs 

012 - Fixed Assets Inputs

013 - Labor Inputs 

014 - *Custom Inputs  

If using subcategories for the Inputs, be consistent with the naming for any corresponding Transform categories. For example, for category 011 – Cost Source Inputs, the corresponding transform category is 021 – Cost Source Transforms.

You do not have to use all categories or subcategories on every project. Smaller, more streamlined projects may only require the Input categories with no subcategories. Larger, more complex projects may require Input and Transform categories with multiple subcategories. Work with the Tech Lead to define the level of granularity required for the categories within the project. 

Data Naming Convention

General Notes:

Do not call any datasets “Master Data”. This terminology is specific to Apptio out of the box datasets. It should not be used in implementation as it can cause confusion as to which datasets were provided by Apptio.

Avoid using:

  • Special characters in table names (e.g. @;#,.|* etc.) as they can cause unexpected errors (it is ok to use "_")
  • Dashes (i.e. -) as they can be interpreted as mathematical operators and will require curly braces ({}) to reference the table (see the example after this list)
  • Reserved words (OR, IN, AND, RANGE, IF, THEN, ELSE, END, SUB, TRUE, FALSE, EXCLUDES) either followed by a space or after a space, as the parser will view these words as part of a formula, which can cause unexpected UI issues
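
As an illustration of the dashes point (the table and column names here are hypothetical), a formula reference to a table named “SAP-OpEx Actuals” would need to be wrapped in curly braces, in the same way the allocation formula pattern later in this document braces its table.column references:

  {SAP-OpEx Actuals.Amount}

If the table is named without the dash (e.g. “SAP OpEx Actuals”), the braces are not required. Renaming the table to avoid the dash is the simpler option.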

Use USA spelling in table/column names, e.g. “center” or “labor”. This will match the out of the box components and avoid confusion when searching for documents.

Initial Data Load:

  • Name the table in a way that is meaningful to the customer. Actuals, Budget, and Forecast files should all use the Suffix “Actuals”, “Budget”, and “Forecast” respectively. For example:
    •  SAP OpEx Actuals
  • In the past it was commonplace to suffix the initial upload with the word RAW to enable you to validate the original data that was loaded. In R12, it is not necessary to create a separate RAW table, as the transform pipeline allows you to see what data has come in and the steps that have been taken to transform it.

Transforms:

The product does not currently support a way to tell how a table has been used and what its dependencies are. The below naming conventions are used to help an end user more easily understand how a table has been used and transformed, making for easier troubleshooting:

Transform caused by needing additional transform steps beyond what is allowed in the transform pipeline of the initial data load table:

  • The transform table should have the same name as the initial data load table, followed by the suffix “Transform” and a letter starting with “A”. Each additional transform based on that previous transform will have the next corresponding letter. This allows end users to more easily trace back how data was transformed, what its dependencies are, and what the source data is. For example, transforms of the SAP OpEx Actuals data:

- SAP OpEx Actuals Transform A

- SAP OpEx Actuals Transform B

Transform to create a new data set based off an existing table:

  • This type of transform is used to create a new data set out of the existing data from another dataset, usually through filtering and grouping. The name should reference the source data set and what the resulting data set is for followed by the suffix “Transform” and a letter starting with “A”.
  • For example, GL Actuals is transformed to create the Vendor List. Call the resulting table “GL Actuals to Vendor List Transform A”

Pre-Master Data Sets:

  • Use a pre-master dataset when you need to append multiple tables together and apply common logic before the data populates the OOTB Master Data set via mapped columns. This prevents you from having to customize the OOTB Master Data set with additional formulas or transform steps.
  • A common use case for this is Cost Source, where you will append OpEx Actuals, OpEx Budget, CapEx Actuals and CapEx Budget. It is likely these files will share common lookups such as Cost Center Owner, Account Group etc. In this instance, a pre-master will allow you to append the 4 files together and do the lookup once, rather than doing it 4 times in each individual dataset. Note that this will not apply to 100% of use cases, and so the extent to which common logic may be applied should be considered for each implementation based on its own merits.
  • Pre-master data sets should share the same column names as the OOTB Master Data set. In this instance, upload the column headers of the OOTB Master Data set (pulled from the mapped tables or upload steps, with no row data) as the base file, and then append in the other data sets. Only load the column headers that you plan to use with the Master Data set [Coming in product will be a drop down selection for Source Type where you will select the “Other” option. You will then be able to filter on other data types in the Project Explorer].
  • Name the table with the corresponding Master Table Prefix (usually the Object name) followed by the suffix “Pre Master”. For example, the pre-master dataset for Cost Source Master Data is “Cost Source Pre Master”

Data Transformation

Master Data Sets

General Notes:

  • Do not append to or change any columns or formulas in the out of the box master data sets. Instead, only use map columns.
  • Do not add any additional transform steps to the out of the box master data sets. This will cause problems during any future content upgrade of the corresponding Apptio component.

Map Columns

  • Mapped columns can only be used to map data to OOTB master data sets, except for Cloud, and only if the master data sets are on content version 104 or higher.
  • Mapped columns should be used over append steps to OOTB master data sets to streamline future content upgrades.
  • All transform steps should be applied to the data before applying the Map Columns step as no additional transforms should be applied to a master data set (the file name will be populated into the Source Table column of the master data set).
  • Additional columns from the source table that are not in the master data set can be mapped to the master data set via “Add Custom Column” on the Map Columns step.
  • More information on Mapped Columns can be found here.

Appended Tables

  • When using an Append step to add a new table to an existing data set, use only column to column mappings. Using anything other than column to column mappings hides information and can make it more difficult to troubleshoot problems and trace through the model. It will also make any re-mapping that may be necessary after a content upgrade more straightforward.
  • Avoid formulas in the “Map your columns” window of the Append step. Instead, create an additional column in the table being appended with the formula and then map the column as usual.
  • Do not hard code values in the “Map your columns” window of the Append step. Instead, create an additional column in the table being appended with that value and then map the column as usual (see the sketch after this list).
  • Be careful when removing columns from source tables that have been appended. Ensure that any mappings of those columns are removed first from the Append step on the target table before the column itself is removed from the source table. This will avoid errors showing on the Append step that reference the now non-existent column, errors which can be difficult to remove afterwards if not done in this order.
  • Add a "Source" column to the source table and use the syntax =GETINFO("table.transformName") to auto populate a source value. Append the Source column to the target table.
  • More information on appending data can be found here.
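
As a minimal sketch of the “create a column instead” pattern above (the column and table names are hypothetical, and the exact formula should be adapted to your data), add the logic or the literal as a custom column in the table being appended, then map that column in the Append step like any other column:

  • Instead of typing a formula in the “Map your columns” window, add a column such as Region with a formula along the lines of =If(Left(Cost Center,2)="EU","Europe","Rest of World") and map Region to the target column.
  • Instead of hard coding a value in the window, add a column such as Budget Version with a constant formula like ="Actuals" and map it.

This keeps all logic visible in the source table’s transform pipeline, which is what makes troubleshooting and any post-upgrade re-mapping easier.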

In Product Documentation

Datasets:

Typically a lot of datasets get loaded into Apptio, and not always by the same users/team. To ensure there is an audit trail of where the datasets came from and can be found, and to make sure everyone (TBMAs, Support, Customer Success) understands the purpose of each dataset, it is essential to populate the in product documentation below (ideally at the time of the initial upload, with help from the customer).

On the Source step in the Table description, include the below information:

Source System: Should match the source system on the Upload step (example: Oracle PeopleSoft)
Source File Location: Location where the source file is pulled from into Apptio (example: SharePoint Location X)
Dataset Owner/Point of Contact: Name of the owner/point of contact for the dataset (example: Joe Blog)
Dataset Owner Department: Name of the department the owner/point of contact belongs to (example: Storage Operational Team)
Available By: The time period the data should be available by, for upload into Apptio (example: BD+10)
Upload Frequency: How often the data should be uploaded into Apptio. Should match the Upload Step (example: Monthly)

Apptio Layer/Model: What Apptio Layer/Model the data is used for (example: Financial Layer/Cost)
Summary: Description of what the data is and what it is for (example: GL Data extracted from Oracle, provides the source of truth for the IT Expense Base as provided by IT Finance)

In the Upload step, ensure to pick the applicable data refresh cycle and select the source system from the options that appear as you begin typing in the “Source System” box.

The above documentation feeds directly into the OOTB Data Quality report: Summary, "Data Expiration" tab and allows for easy filtering and grouping of datasets by leveraging the above information.


Allocations:

Allocations are key to a successful model, and hence it is very important to name and document each allocation with a useful description of the "how" and "why". Recording the date at which the allocation rule was decided and agreed upon with the customer ensures there is an audit trail in place for when the allocation rules come up for revision.

In the “Notes” within the allocation, provide the below user friendly information:

Allocation Description: Short description of what the purpose of the allocation is, the "why" (example: Allocates the OpEx Monthly costs (IT expense base) from the Cost Source object (which contains the Cost Pool mapping) into the Other Cost Pools object. The latter object is leveraging the Dept to Tower mapping and therefore this allocation merely moves the cost from the Cost Source onto the distinct departments with the objective of onwards allocating them into the IT Towers and Subtowers)
Relationship set: Describes the Data Relationship (example: Cost Center is the only element used to link the 2 objects)
Weighting used: Describes the “Weight By” (example: Cost from the Public Cloud Provider Bill)
Allocation Maturity: Describes the current allocation maturity (Good/Better/Best). Pivotal for roadmap conversations, in terms of which allocations can/should be matured down the line.
Agreed By/On: The name of the person on the customer side who confirmed the allocation strategy and the date it was agreed upon (example: Joe Blog - 13/02/2019)

Note: Objects that have multiple allocations to the same destination object should have additional information in the allocation description to define how the allocations differ; such as the particular type of data that is being allocated from the source object to the destination.

Config Guardrails

Workspace Management

Browser Tabs

Only one browser tab should be used for the Development environment, as entering configuration steps in more than one browser tab may cause corruption of the user’s workspace. Multiple browser tabs can create multiple sessions that can lead to unexpected behaviors by one session overriding the other. However, Staging and Production browser tabs may be referred to whilst working in the Development tab without issues.

Check Outs/Ins

Organize check outs and check ins in a reasonably structured way. Working “blocks” of functionality should be checked in, particularly before any next step that is expected to be complex is attempted, as this avoids having to choose between losing a significant amount of completed work and ploughing on in the hope of eventually achieving a working config.

A check in will initiate both a dev-calc and a stg-calc. Particularly in larger projects, a dev-calc may take upwards of one hour, during which time the documents checked in may not be checked out again. Be aware of the calc times when timing your check ins to best align with the workflow of the project.

Component installations should be checked in on their own, without other configuration work, although multiple components can be installed and checked in together.

Comments should always be entered against a check in. The comment should give a short descriptive overview of the main aim of the check in – such as “installation of X, Y and Z components”, “configuration of Cost Source mapping files and initial population of values to Cost Source object”, “updates to X reference dataset”, or “fixes to achieve full allocation from X object”.

The easiest and safest way to undo configuration steps on a document that has been checked out is to revert it, and then re-do the failed configuration steps from scratch. The “undo” button is not particularly predictable in terms of exactly what it will undo, which is why revert is generally preferable. Use “Save” regularly in tables, metrics, and reports, as switching away from the tab can result in recent configuration steps being lost unless they have been saved.

When reverting a dataset that contains loaded data, revert the table upload (listed separately in the revert dialogue) at the same time as the revert of the main table.

Project Errors

Regularly use the Errors button on the Project tab to review all the errors in the current project. It is important to understand these and ensure that they are remediated. Pay attention to situations where you see parsing errors or implied conversions, as these may have unexpected knock-on effects.

For example, converting a column to numeric when it contains text will result in a parsing error, but things will appear to work correctly because there will be an implied conversion back to text. However, this will eventually break things when you later attempt to perform a content upgrade.

It should be noted that some errors are low level/information only errors and can be ignored.

Post 12.9.3, P&E is releasing a new error framework. Not only will the new framework change how errors are displayed within the UI, but it will also cut down on the types of errors displayed. P&E is working to eliminate the errors that can be ignored, and only show the ones that should be acted upon.

Currently, there is not a defined list of errors to ignore/action on. Examples of errors that should be actioned include:

  • Column type errors (e.g. the column's expected type was numeric, but it was uploaded as label) 
  • Table: Mapping column X does not exist in base table
  • Parsing errors, e.g. “was expecting one of: <EOF> <Additive_Operator> … <Multiplicative_operator> … &&”
  • Date Partition errors: Data partition references a column that no longer exists

Branching

Be sure to read the Best Practices: Branching Projects document.

  • The limit of branches that can be open is 5
  • If you have multiple branches open and they are large projects, you could see performance issues in your workspace - see the Manage performance in your development environment documentation
  • When merging, observe best practices and only merge from branches to the trunk - while it is technically possible to merge from the trunk to a branch or between branches you risk stranding check-ins when you do anything other than merge from a branch to the trunk
  • To manage data loads between trunk and branch, perform two individual data loads via datalink (1 to trunk, 1 to branch)

Data Loading

File Type Conversion

When uploading files into a table, check to make sure you are uploading a consistent file type, and try to use only the file type that the customer plans to use long-term when initially configuring the tables. Switching file types, such as from XLSX to CSV and vice versa, can create a scenario where the table shows parsing errors. Normally, this is seen when going from XLSX, where a sheet name is specified in the Upload step, to CSV/TSV. Sometimes versioning the table will allow the change in file type and sometimes it does not. The best way to work around the issue is to upload the new file with the different file type to a new table and append it to the original table.

Data Type Conversion

After the data is uploaded, it is important to verify the data type in the Import step. Any data type that is initially read as the wrong data type must be overridden to the correct data type in the initial data upload table. This is a common occurrence for Account and Cost Center IDs. It is especially important that data that will be used in a data relationship starts as a Label at the source table. If it is Numeric at the Import step in the initial data load table and later transformed to a Label, it will likely cause allocations that use it as a data relationship to show no values for certain table types. This can also prevent slicers from working.

Transform Pipeline

Grouping datasets and when to use

Group data as early as practical, to the required granularity of the objects that it is used to back. This helps to reduce the amount of calculation that needs to be performed. Data should be grouped only as granularly as required to:

  • Allocate by data-based relationships
  • Group by (report on)
  • Filter by (slice by)
  • Trend by
  • Pivot by
  • Slice by (filter by)

In many cases it is not required that every row be specifically unique. Group to a level that meets these requirements above.

Existing Columns/value overrides in the Formula step

Avoid editing existing columns to override the original value. Overriding the original values in an existing column makes a configuration difficult to follow and troubleshoot, since a column can change value between different transform steps, and it is time-consuming to work through every formula step of a transform to check whether or not the existing columns have been edited. An exception to this rule is simple value overrides; for example, fixing a data format issue such as replacing “-“ with “0” or “ ” in a numeric column.

Any changes required to column values should therefore be configured as a newly named custom column whenever possible. If value overrides cannot be avoided, they should be placed in the final formula step to make them more easily traceable.
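
As a minimal sketch of the custom-column approach (the column names are hypothetical and the formula should be adapted to your data), rather than overriding an existing Amount column in place, add a new column such as Amount Clean with a formula along the lines of =If(Amount = "-", 0, Amount) and use that column downstream. The original Amount column is left untouched, so anyone tracing the transform pipeline can still see the raw value alongside the cleaned one.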

Joins

The primary use case for joins is to enhance the number of columns for reporting capability without having to resort to multiple Lookups. If you find yourself having to configure many lookups (or one lookup to retrieve concatenated columns followed by numerous Trim & Split operations on the return) between the same two tables, only for the purposes of reporting, consider using the join step between the two tables using the key column used for lookups.

Functions

Nested Ifs

Nested Ifs should only be used for simple comparisons and mapping of values. Limit the level of nesting to 4 levels or less. Beyond this point, the logic becomes difficult to understand and debug. You may wish to consider a TableMatch or Boolean lookups to replace complex nested Ifs.
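
For instance (using hypothetical column names and values), a simple three-level mapping such as the following stays within the guidance, whereas anything deeper should usually be restructured as a TableMatch:

  =If(Cost Pool = "Hardware", "Infrastructure", If(Cost Pool = "Software", "Applications", If(Cost Pool = "Labor", "People", "Other")))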

Avoid function calls within an If statement unless the functions are simple and the logic is reasonably straightforward to read (e.g. =If(Left(X,1)=”Y”,”Yes”,”No”)). If more complicated functions are required for the If statement logic, create a column for that function, and then reference the column in the If statement. This especially applies to Lookup() functions, as having a lookup function in an If statement can be difficult to understand and can be taxing on performance.
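
A minimal sketch of that pattern, with hypothetical column names and the Lookup arguments left out because they depend on your tables: first add a column such as Mapped Tower whose formula holds the Lookup(...) call on its own, then keep the If simple, for example =If(Mapped Tower = "Compute", "Include", "Exclude"). The intermediate column makes the lookup result visible in the transform pipeline and keeps the conditional logic easy to read.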

TableMatch

If a TableMatch is to be used across multiple transforms, ensure all the transform tables include all the matching columns, even if they are blank, or you will generate project errors.

Remember that if something meets the criteria of multiple rules, TableMatch will return the first match. Therefore, the most specific rules should be at the top of your TableMatch table, and wildcard-based rules should appear lower down.
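
As an illustration of that ordering (the rule values and wildcard style here are hypothetical), a TableMatch rules table mapping vendors to categories would list its rows from most specific to least specific, so the catch-all rule only fires when nothing above it matches:

  Vendor (match column)    Category (return value)
  Microsoft Azure          Cloud
  Microsoft*               Software
  *                        Other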

 More information on the TableMatch function can be found here.

LookupEx/SplitEx/ Lookup_Wild

Only use one LookupEx, SplitEx, or Lookup_Wild function per transform table, regardless of the number of steps in the transform pipeline. If more than one of the “expand” functions is required, it should be configured in a follow-up transform table based on the existing table of the first transform.

Configuring multiple expand functions in separate formula steps that are separated by other transform steps (such as Grouping, Hide and Rename) was a working practice and had previously been encouraged. However, it is a risky practice, in particular if the transform steps used to separate the expand functions are empty. It is not recommended because:

  1. Future use of the empty transform step may be rendered impossible because of the position it has been placed in the transform pipeline;
  2. Removing or moving the empty transform step will cause the expand functions to be automatically merged into the same formula step without significant intervention.

$_

Use of “$_” to mean “this column” as shorthand still works. However, it is no longer documented in TBM Connect, and it now only appears in examples for applying such functions as Currency (e.g. “Currency($_)”). Therefore, if using it within formula steps of the transform pipeline, it is essential to explain the function of “$_” to the customer TBMA(s).

Allocation and Metric Practices

Object Identifiers

Always configure Object Identifiers on modeled documents, regardless of whether they are being used in a model or not. If one does not exist, create an identifier in the model step using the object identifier feature to select columns found on the modeled objects.

 Object Identifiers should be as granular as required to:

  • Allocate by data-based relationships
  • Group by (report on)
  • Filter by (slice by)
  • Trend by
  • Pivot by
  • Slice by (filter by)

Other best practices are:

  • Must be consistent across time for all columns that you group or filter on
  • Should never use the row() function.
  • Should never use the CurrentDate() function.
  • Should not include dates.
  • Should not include numerics (amounts, quantities, consumptions, cost).
  • Should, in almost all cases, not be unique

Model Allocations Types:

Weighted is highly preferred as the main allocation type. Consumption type should also give reasonably predictable allocations if documentation is studied. Formula allocations of the form “=SOURCE * ({destination object table.weighting column}/~{destination object table.weighting column})” are acceptable and known to be the preferred method for ensuring allocations containing negative values work correctly.
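
As a worked instance of the preferred formula form above (the object and column names are hypothetical), an allocation weighted by a server count held on the destination object would be written as:

  =SOURCE * ({Servers.Server Count}/~{Servers.Server Count})

Each destination row receives the source value multiplied by its share of the total weighting, which is why this form also behaves predictably when some weighting values are negative.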

Use of formula allocations containing anything other than the form described above should be considered with care; it is recommended that the tech lead at a minimum be consulted prior to completing the configuration. Functions such as LookupObjectUnitValue, LookupObjectTotalValue, LookupObjectUnitAllocated, and LookupObjectTotalAllocated (either in the allocation formula or in a metric used in the allocation formula) should only be used following approval from the designated Apptio Tech Lead.

Standard Value allocations should be used with care, as they can result in over-allocation in the model. Use a compensating allocation (NOT a remaining value allocation) to prevent the over-allocation. Consult the Tech Lead or Principal Consultant before attempting this type of allocation for the first time.

Model Allocation Steps:

In the From step, the allocation strategy “Send only remaining values (after the other rules run)” should not be used without approval from a Tech Lead or Principal Consultant. This type of allocation can only be used once per object. WARNING: The more this is used, the higher the risk of performance issues, as well as a less consumptive model.

Reporting Standards

Report Creation and Naming

  • Try not to create child reports, as these are difficult to find in the navigator
  • Report names should not contain special characters
  • The name of a report should explain what it conveys; this can be shortened, for ease of navigation, via the alias in the collection to which it is assigned

Collections/Navigation

A report collection allows you to:

  • Group reports together in Project Explorer
  • Allow navigation from a menu by the collection name (each new collection creates a link from the reports drop down to the group of reports, using the name of the collection)

Additional guidance:

  • Add each report to a collection to make it navigable without using TBM Studio. Collections can be hidden using “Hide in Report Collections menu” so they do not display in production.
  • If a report does not form part of the report menu, add the report to a collection (to organize it) but remember to uncheck “show in navigator” so it does not appear as a link at the top of the screen.

OOTB Reports

OOTB reports can be customized in place; there is no need to create a report copy. This is for ease of use by the customer.

  • Have a conversation with the customer to understand how important future content upgrades will be, and discuss the steps it will take in order to upgrade the content with report customizations
  • Utilize branching when doing a content upgrade with customized reports
  • How to check if a report has been changed: here

Presentation – look and feel

Reports should be consistent, so the user experience does not change between reports

  • When creating new reports, where possible extend the standard set by the OOTB reports by creating new master reports. Note:
  • Compact slicers are currently shown at the top of the page
  • We currently do not put tables or charts in boxes
  • Titles of components are not separate pieces of HTML, they are the name of the component (found in properties of the component, where you can make it visible)
  • When adding multiple components to a report, add them within a group so that the group can be copied and moved easily (e.g. a table and chart alongside each other). Note that OOTB reports are created as groups within groups.

Permissions/Visibility

Avoid the “Permissions” button to change which roles can access a report, as this can lead to role specific versions being created which you may not be able to remove. Instead, use the “Visibility” settings to control the components on the report, and hide them from users (i.e. hide the group that contains all the components). *This recommendation can be ignored if your version is 12.7.1 or later.

Master Reports

Whenever possible, create a master report that sets the default layout for all reports. This can include KPIs, groups, tables, slicers and logos.

Additional global slicers can be added or amended in the master reports.

Blueprints

Blueprints can be used but should not be updated or created.

Use the Modal blueprint as the standard for pop-up reports.

Calculated Fields (Inserted Columns)

Simple calculations can be handled easily in Apptio on the report surface.

Most tables should only be grouped by two columns. Any table grouped by more than two columns is calculated in real time. CE recommends concatenating columns into a Group ID and then hiding the Group ID column in the table. Additionally, avoid complex IF functions or lookups in calculated fields, or anywhere the row count will impede the user experience, because transform calculations are more efficient than those on the reporting surface. Inserted columns are calculated in real time when the report is accessed and are NOT pre-calculated. 
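
As a hedged sketch of the Group ID approach (the column names are hypothetical, and this assumes the Excel-style & concatenation used in Apptio formulas), concatenate the grouping columns into a single column in the transform, for example a Group ID column with a formula along the lines of =Cost Center & " | " & Vendor, group the report table by that single column instead of by multiple columns, and hide the Group ID column in the table as recommended above.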

Additionally, there are certain calculations that must be done on the reporting surface as the data for the calculation requires the model. These calculations include:

  • YTD
  • Annual

Perspectives

Fields that must be published to a perspective:

(i) The value is reported across multiple periods. Select “Show all rows in time-based query” in the Publish Field window when creating the perspective.
(ii) The field is used multiple times and the column heading is specific to the customer (this helps keep custom reports standard)
(iii) A calculated field is required in a KPI

All perspectives are within a category by default. Create new perspective categories for custom new fields as required.

New perspectives should be grouped within the perspective category (similar to OOTB), which will mitigate the number of perspective categories. Include in the note field where the perspective came from.

Calculated Metrics

Examples of when creating a calculated metric can be utilized:

  • Complex calculations combining different metrics vs. using a !NEWCOLUMN on the reporting surface to do the same
  • Referencing a column within a dataset to change formatting of the column referenced

With the exception of what is mentioned about Value Field Settings, if you create a column via the ribbon’s data tab in a report, then it calculates in real time. 

Drills

Drill reports are simple to create, but without understanding the drill path details they can create a bad user experience. If the drill report takes longer than 10 seconds to display in full, it must be investigated. 10 seconds is about the limit for keeping the user's attention focused on the dialogue (excerpt from “Usability Engineering”, Jakob Nielsen, 1993).

If a drill takes a significant time to load, things to analyze:

  • Take a look at the modeled objects behind the report: do you have many-to-many allocations, where 1 row goes to more than 1 row? One-to-many and many-to-many allocations sometimes have to happen, but the sparseness of those allocations is what really matters
  • Bad sparseness lower down in the model can impact drills higher up in the model. One-to-many and many-to-many allocations farther down the model create larger intermediary tables as you move up the model.

Ways to improve (reduce) the size of the .Drillto and improve drill time:

  • Group the source
  • Group the destination
  • Make the data relationships between the source and destination more 1:1

Pickers/Filters/Slicers

Limit the column pickers, filters, and slicers used on a table to what is required to action a particular use case. If the table is sufficiently large, consider splitting it into multiple tables rather than having one report do multiple things (drills, column pickers, etc.). Reports with too many pickers/slicers/filters/drills can cause instability, since not all combinations of the data can be precalculated. Limit to 10 or fewer per page where possible. When you see reports with a significant number of pickers/slicers/filters on them, the best course of action is to:

  • Determine the business case behind how the report is being utilized
  • Understand the expectations of the end users for this report
  • Look at how the report is utilized by users (what pickers/slicer/filter combinations are most commonly selected)

Things to analyze to come to a solution:

  • Does a column picker require a scroll bar? This speaks to the number of possible fields to select
  • Are the column pickers being grouped? This impacts real time performance due to changes to the !Groupby

#TBMStudio
