Upgrading to Cognos Analytics
Every process can be broken down into a methodology. After having helped many customers upgrade to Cognos Analytics, I have established a repeatable process. Anecdotal as it may sound, I will include some specific facts that may help you in your journey. The steps include:
- Planning the Upgrade
- Environment Setup
- Establishing a Baseline
- Content Migration
- Validation and Remediation
- Go Live
This session will cover the Validation phase.
Part of the migration is upgrading the Report Specifications. Hopefully that was completed as part of the migration phase. One step that should be done is report spec validation. I have never (yet) seen a report spec validation fail as a result of the migration; in other words, if the report spec validation was successful in the source environment, it will also be successful in the target environment. A recommendation, then, is to perform the report spec validation in the source environment and fix any failures there before the migration.
Generating Output for Comparison
Using the test cases from the Baseline, we can execute the same tests in the target environment. Some decisions need to be made depending on the type of report and what you intend to compare to determine a successful upgrade. If a visual rendering of the report is important, the best comparison currently is the PDF rendered output. PDF comparisons using the automated tools can detect differences in layout and flag reports that may require further analysis to determine success or failure; the tolerance for differences needs to be part of the decision. If only the data is important, then a data comparison can be performed. Lifecycle Manager will compare the data for each query in the report, and MotioCI can be configured to compare the query data or the visual container data.
If you are not able to use an automated testing tool for this, there are some options to aid a manual comparison process. I have used open source PDF comparison applications as well as command line scripts to compare CSV output for data comparisons. The manual process is not as scalable, so it is recommended to evaluate the automated tools in the planning phase.
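To make the manual route concrete, here is a minimal sketch of the kind of command line CSV comparison script mentioned above. The function name and the cell-by-cell strategy are my own illustration, not a specific vendor tool:

```python
import csv

def compare_csv(source_path, target_path):
    """Compare two CSV exports row by row and collect the differences."""
    with open(source_path, newline="") as s, open(target_path, newline="") as t:
        source_rows = list(csv.reader(s))
        target_rows = list(csv.reader(t))

    differences = []
    # Flag a row-count mismatch up front (extra or missing rows).
    if len(source_rows) != len(target_rows):
        differences.append(f"row count: {len(source_rows)} vs {len(target_rows)}")
    # Then compare the rows that line up positionally.
    for i, (src, tgt) in enumerate(zip(source_rows, target_rows)):
        if src != tgt:
            differences.append(f"row {i}: {src} vs {tgt}")
    return differences
```

An empty result list means the source and target outputs agree; anything else is a candidate for manual review.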
If the migration is simple, as defined in my planning blog, then the success rate, based on helping many customers with their migrations, is ~99%. There are, however, false negatives that you should be aware of when using the automated testing tools for comparison. The rate of false negatives averages 24% and can be categorized as Pixel Shift or Sorting Difference errors.
A Pixel Shift is when the data is consistent but the rendering is slightly offset, and the report is flagged by the automated comparison tools because they use an overlay bit comparison process. These images are examples of the overlay where the rendering is offset in either a text item or a simple list.
Here is a slightly more complex shift that results in the data wrapping differently and ultimately may result with a different number of pages.
Upon further analysis using a manual comparison, you could flag these reports as successful, or if the rendering itself is not critical, you might rely on the results of a data comparison.
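The overlay bit comparison idea can be sketched in a few lines. This is my own illustration of the concept, not how Lifecycle Manager or MotioCI work internally, and the 2% tolerance is an arbitrary placeholder for the project-level decision discussed above:

```python
def pixel_diff_ratio(page_a, page_b):
    """Overlay two rendered pages (as 2D grids of 0/1 ink bits) and
    return the fraction of positions where they differ."""
    total = diffs = 0
    for row_a, row_b in zip(page_a, page_b):
        for bit_a, bit_b in zip(row_a, row_b):
            total += 1
            diffs += bit_a != bit_b
    return diffs / total if total else 0.0

# The tolerance for difference is a project decision; 2% is illustrative.
def pages_match(page_a, page_b, tolerance=0.02):
    return pixel_diff_ratio(page_a, page_b) <= tolerance
```

Note that shifting identical text by a single pixel flips many bits at once, which is exactly why a shifted-but-identical report can exceed any reasonable tolerance and show up as a false negative.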
Sorting differences also occur and will be flagged when using the rendering comparison. This happens on reports where the sorting is not specified down to a unique order for the data. It is a result of the database optimizing the query path differently based on the SQL generated by the two versions of the Cognos Query Service, and in some cases returning the data to Cognos in a different order. These images are an example: on the left is the source report and on the right is the target report, where you can conclude with a manual comparison that this is a successful migration. Here too, if the rendering is not critical, you might rely on the results of a data comparison.
Data comparison is particularly useful when the data is of concern and the rendering is insignificant. Unfortunately, many reports are still being used for ETL or just to generate CSV output. Those instances are candidates for data comparison, and you should forego the rendering comparison to simplify the process.
Here are the results of a data comparison using Lifecycle Manager:
Here are the results of a data comparison using MotioCI:
Other factors may impact the comparison. You need to determine whether the data is dynamic or static and plan accordingly. Dynamic data can take two forms: either the database is being updated, or the report uses relative time periods. If the data is dynamic, then you need to execute the source and target reports within a time window where the data is consistent.
Additionally, if the reports include a “run date” or “run time”, these too could fail a comparison. MotioCI can be configured to ignore those types of data elements.
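If you are comparing by hand instead, you can get the same effect by dropping volatile columns before comparing. A minimal sketch, where the column names to ignore are illustrative assumptions:

```python
# Hypothetical column names; substitute whatever your reports actually emit.
VOLATILE_COLUMNS = {"Run Date", "Run Time"}

def strip_volatile(header, rows, volatile=VOLATILE_COLUMNS):
    """Remove columns such as run date/time that change on every
    execution, so they do not cause false comparison failures."""
    keep = [i for i, name in enumerate(header) if name not in volatile]
    new_header = [header[i] for i in keep]
    new_rows = [[row[i] for i in keep] for row in rows]
    return new_header, new_rows
```

Run both the source and target output through this before the data comparison, and the execution timestamps drop out of the result.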
Results and Remediation
Having executed the test cases in the target environment, you should have a good understanding of the success of your migration efforts. After reviewing the reports that are flagged as failed, you should also be able to categorize them either as false negatives, as described above, or as needing remediation. Hopefully at this point the number of reports that actually require remediation is low.
Here are the test results using Lifecycle Manager:
Here are the test results using MotioCI:
More on what to expect during a migration in a future blog, stay tuned.