Order Management & Fulfillment


Key Go-Live learnings during roll out of IBM Sterling OMS (Part II) – Data Migration

By Jagadesh Hulugundi posted Thu October 22, 2020 08:16 AM


This is the second part of the Key Go-Live learnings during roll out of IBM Sterling OMS blog series. This blog focuses on the use cases, challenges and considerations in data migration.

Here are some typical business and IT use cases which necessitate data migration:

  • Transition from an existing legacy order management solution to a modern IBM Sterling OMS platform
  • Transformation of a large monolith OMS solution into a micro services architecture (example: IV carved out of OMS+GIV monolith)
  • Movement from a traditional on-premises IBM Sterling OMS to a cloud deployment model
  • Upgrade from a previous version of IBM Sterling OMS to the latest one with near-zero downtime during upgrade and deployment
  • Enterprise focus on AI infusion through moving data from IBM Sterling OMS to Data Lake solutions

Looking at the above use case scenarios, we can infer that data migration falls into two distinct categories:

  1. In transition journeys, there are open orders, often termed in-flight orders, which start their journey (are born) on the existing as-is application and finish (close) on the new to-be application. This data is handled through custom data migration scripts, written in the context of the custom solution, its data models and the limitations of the other applications in the architectural landscape.
  2. Historical orders are those that are archived or have completed their life cycle on the existing as-is application and are now needed on the new to-be application. This data is often required for reference and analytics purposes and is managed through ETL (Extract, Transform, Load) processes.
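The ETL flow for historical orders can be sketched as below. This is a minimal illustration, not IBM Sterling OMS code: the legacy field names (ORD_NO, STAT, TOT_AMT), the status map and the in-memory stores are all hypothetical stand-ins for a real database and target schema.

```python
# Minimal ETL sketch for historical orders (hypothetical field names).

legacy_orders = [
    {"ORD_NO": "A-1001", "STAT": "9000", "TOT_AMT": "125.50"},
    {"ORD_NO": "A-1002", "STAT": "9000", "TOT_AMT": "89.99"},
]

STATUS_MAP = {"9000": "Closed"}  # legacy status code -> target status


def extract(source):
    # In a real migration this would query the legacy database.
    yield from source


def transform(row):
    # Rename attributes and normalize types to the target data model.
    return {
        "OrderNo": row["ORD_NO"],
        "Status": STATUS_MAP.get(row["STAT"], "Unknown"),
        "TotalAmount": float(row["TOT_AMT"]),
    }


def load(rows, target):
    # In a real migration this would write to the target OMS tables or APIs.
    target.extend(rows)


target_store = []
load((transform(r) for r in extract(legacy_orders)), target_store)
print(target_store[0])  # first migrated order in the target schema
```

Keeping extract, transform and load as separate steps makes it easier to re-run or audit any one stage when reconciliation later flags a discrepancy.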

Data migration can impact an enterprise in terms of the end-customer experience, inter-department coordination, IT upstream and downstream handshakes, and data consistency for the single source of truth.

Challenges in data migration: 

  • The applications involved in the architectural landscape need to understand the business and data transition and accommodate the changes required to handle in-flight orders. This is often the single most painful area in data migration: data quality is frequently compromised here, which directly impacts data consumption by the new application.
  • Data quality is affected by data sequencing issues, differing data nomenclatures, relational data models wrongly interpreted by middleware applications, and so on. This directly impacts the algorithms that process the data for internal logic, with a cascading effect on downstream applications.

  • Data validation is an expensive process that requires accurate reconciliation between the source and target applications. Many data migration solutions overlook this step and fail to appreciate its importance, which can result in an unsuccessful Go-Live. This step can make or break a roll-out, and any compromise can have serious consequences.

  • Data migration is time-consuming to execute in production, potentially needing large outage windows, and is hence expensive to the business. Though this is especially true for historical data, it also applies to in-flight data, which can arrive at a rate of several GBs per minute due to the live traffic generated by upstream and downstream applications.

  • Some data elements that need transformation may incur additional charges for the business. For example, consider a credit card authorization taken for an order in the legacy application that is awaiting payment release but got held for some reason. After data migration, the authorization may need to be reversed and a fresh authorization requested from IBM Sterling OMS, causing additional transaction charges each time the business hits the payment gateway.
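One common way to shrink the large outage windows mentioned above is to migrate in small batches, verifying each batch before moving on. The sketch below assumes a hypothetical batch size and order IDs; a real migration would replace the in-memory list with database reads and writes.

```python
# Sketch of batched extraction to keep each outage window short
# (hypothetical batch size and order IDs).

def batches(items, size):
    """Split the orders into fixed-size batches so each batch can be
    migrated and verified inside a short maintenance window."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


order_ids = [f"ORD-{n}" for n in range(1, 11)]  # 10 orders to migrate

migrated = []
for batch in batches(order_ids, size=4):
    # Each iteration would extract, transform, load and verify one batch,
    # then release the outage window before taking the next one.
    migrated.extend(batch)

print(len(list(batches(order_ids, 4))))  # 3 batches: 4 + 4 + 2
```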

The above challenges show that data quality and validation are of utmost importance. Identifying the key attributes mapped across applications, and developing scripts that automatically check their consistency and intended values before opening live data traffic into the new application, makes the Go-Live process smooth and error-free.
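Such a consistency-checking script can be sketched as follows. The key attributes (status, total) and the sample data are hypothetical; in practice the two dictionaries would be populated from the source and target applications.

```python
# Sketch of an automated reconciliation check between the source and
# target applications (hypothetical key attributes and sample data).

source = {
    "ORD-1": {"status": "Shipped", "total": 120.00},
    "ORD-2": {"status": "Created", "total": 75.25},
}
target = {
    "ORD-1": {"status": "Shipped", "total": 120.00},
    "ORD-2": {"status": "Created", "total": 75.20},  # deliberate mismatch
}


def reconcile(src, tgt, keys=("status", "total")):
    """Return discrepancies: orders missing in the target, and key
    attributes whose values differ between source and target."""
    issues = []
    for order_no, src_row in src.items():
        tgt_row = tgt.get(order_no)
        if tgt_row is None:
            issues.append((order_no, "missing in target"))
            continue
        for key in keys:
            if src_row[key] != tgt_row[key]:
                issues.append(
                    (order_no, f"{key}: {src_row[key]} != {tgt_row[key]}")
                )
    return issues


print(reconcile(source, target))  # flags the ORD-2 total mismatch
```

Running such a check per migrated batch, rather than once at the end, localizes any mismatch to a small, recently loaded data set.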


To conclude, appropriate data migration principles coupled with the right pilot model mitigate the Go-Live risks to a very large extent, making the migration successful. These two strategies must intersect well to prevent the new Sterling OMS from becoming a puppet dancing to the tunes of other third-party applications.

Stay tuned for the next edition to learn about another important technical area when you Go-Live.

Read Part I of the blog - Key Go-Live learnings during roll out of IBM Sterling OMS – Pilot Modelling