Business Automation Insights – Unleashing the power of Data

By Rachana Vishwanathula posted Thu May 19, 2022 03:53 AM

  

DBA, which stands for IBM Digital Business Automation, offers several automation controls and techniques to automate day-to-day business and technical operations. IBM DBA comprises several automation controls, categorised as core automation controls (Capture, Content, Decision, Workflow) and automation accelerators (Process Mining & Modelling, RPA & Digital Labour, Operational Intelligence & AI). Within each control there are tools that target a specific automation purpose. For example –

  1. Within Capture, there are tools like Automation Document Processing (ADP) and Datacap
  2. Within Content, there are tools like FileNet Content Manager, Enterprise Records, and Content Collector
  3. Within Decision, there are tools like Operational Decision Manager (ODM) and Automation Decision Services (ADS)
  4. Within Workflow, there are tools like Business Automation Workflow (BAW) and Automation Workstream Services (AWS)
  5. For automation accelerators, there are tools like IBM RPA, IBM Blueworks Live for Process Mining and Modelling, and Business Automation Insights (BAI) for Operational Intelligence

    Business Automation Insights is an IBM tool within the Cloud Pak for Business Automation suite. It focuses on leveraging the data generated while running business automation artifacts, whether core automation controls or automation accelerators, and provides no-code tools to generate dashboards with meaningful insights from the raw data, which are business automation events.

    The following flow illustrates how BAI operates –


    IBM Business Automation Insights (BAI) is a cloud capability that captures end-to-end business data (events) from Cloud Pak for Business Automation (CP4BA) platform components into an operational data store and a long-term store (data lake). BAI provides real-time operational visibility to business managers via custom or pre-built dashboards. Custom dashboards can be built by IT (using Kibana) or by business users (using Business Performance Centre). The data collected by BAI and stored in the data lake can be used to inject AI into the CP4BA platform, for example to make recommendations to business managers and knowledge workers. Business Performance Centre is a no-code monitoring application native to IBM Business Automation Insights. You can design and share dashboards in minutes that capture business data in near real time and provide awareness of important business activities and processes.

    Collect:

    For BAI to be functional, BAI emitters should be configured on every automation control from which data/events will be retrieved. These emitters are traditionally .war files that are installed alongside the automation tool from which the data is retrieved.
    For example, the following shows how a BAI emitter works for the Content automation control.


    A CaseEventEmitter.war is installed on the Content Platform Engine within WebSphere Application Server. This emitter collects the events from the content service and stores them in the Event Table within the Content Engine database.

    Similarly, there are other emitters such as the process emitter, ODM emitter, and content emitter for the Workflow, Decision and Capture services. These emitters are configured on the respective automation services and store the generated events in the respective databases in BAI’s data lake.

    BAI Data Ingestion & Processing:

    Once the events are captured, a fair amount of processing takes place before they are sent to their respective stores. The events are then stored on the Hadoop Distributed File System (HDFS) and queried using Elasticsearch. Apache Kafka is used for the event processing.
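    As a rough sketch of the ingestion side, the fragment below publishes one raw event as JSON to a Kafka topic with the kafka-python client. The broker address, topic name, and event fields are placeholders for illustration, not the actual names BAI uses.

```python
# Minimal sketch: publish a raw business event as JSON to a Kafka topic.
# Broker address, topic name, and event fields are hypothetical.
import json
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",                      # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

raw_event = {                                            # illustrative raw case event
    "type": "case",
    "timestamp": "2022-05-19T03:53:00Z",
    "caseIdentifier": "CASE-0001",
    "state": "working",
}

producer.send("bai-ingress", value=raw_event)            # hypothetical ingress topic
producer.flush()
```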

    There are two Kafka topics used during event processing, namely the Ingress topic and the Egress topic. The Ingress topic is used to stream the raw events, which arrive in a common JSON format from processes and cases. Apache Flink jobs are responsible for event processing and staging. After transformation, the events are divided into three types –

    1. Raw Events: Events on which no transformation can be performed and which are therefore passed as-is to the later stage. Raw events are deeply nested data structures and are not suitable for indexing or AI processing algorithms. The stateless operations in Flink for raw events include passing them to the next stage of event processing and delivering the raw events unchanged to the Egress topic of Kafka, through which they are delivered to HDFS.
    2. Time Series Events: Some raw events can be transformed using Flink jobs. Time series events are the flattened version of raw events. These flattened data structures are suitable for indexing and can therefore be used by data scientists for AI modelling. The stateless operations in Flink for time series events include parsing and transforming the raw data to time series on the fly and passing them to the next pipeline of event processing. The time series data is then delivered to HDFS using the Egress topic.
    3. Summary Transformation Events: These are the “time-aggregated” version of time series events. They are further classified into two types, Active and Completed. Active reflects the current state of a process, task or case (current state, current duration). Completed records the final state of a process, task or case (final state, final duration). The stateful operations in Flink include the following (a small sketch follows this list) -
    • Since raw events can be emitted out of order, the time series events are likely to arrive out of order, so a state is used for the process summary and activity summary to store events that cannot be aggregated yet (for example, activities or tasks that arrive before a process started event)
    • The current process/task state is recorded
    • The current process/task duration is recorded
    • The summary transformation events are then delivered to HDFS using the Egress topic.
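
    A plain-Python sketch (not actual Flink code) of this stateful aggregation might look like the following. It buffers time series events that arrive before the corresponding process started event and keeps a running state and duration per process; all field names are invented for the example.

```python
# Sketch of the stateful summary aggregation described above (not Flink code).
# Event shapes and field names are illustrative assumptions.
from collections import defaultdict

class ProcessSummaryAggregator:
    def __init__(self):
        self.summaries = {}                  # processId -> summary record
        self.pending = defaultdict(list)     # events seen before the "started" event

    def on_time_series_event(self, event):
        pid = event["processId"]
        if event["state"] == "started":
            self.summaries[pid] = {"state": "active", "duration": 0}
            # replay any events that arrived out of order
            for early in self.pending.pop(pid, []):
                self._update(pid, early)
        elif pid not in self.summaries:
            # out-of-order event: hold it in state until the process start arrives
            self.pending[pid].append(event)
        else:
            self._update(pid, event)
        return self.summaries.get(pid)       # Active or Completed summary

    def _update(self, pid, event):
        summary = self.summaries[pid]
        summary["state"] = "completed" if event["state"] == "completed" else "active"
        summary["duration"] = event.get("elapsed", summary["duration"])
```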

    Example of Event:

    The following shows a raw case event captured by the Case emitter –


    In the raw case event, the highlighted data field contains business data, which is retrieved by Flink jobs and stored as time series events on HDFS. The delivery to HDFS is done using the Egress topic of Kafka. While the events are classified into types and stored in HDFS, Elasticsearch is used to index and query the events.
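
    To illustrate that flattening step, the fragment below turns a hypothetical nested raw case event into a flat time series record of the kind that gets indexed. The event structure and field names are invented for the example and do not reproduce the actual BAI event schema.

```python
# Illustrative only: flatten a hypothetical nested raw case event into a
# time series record suitable for indexing. Field names are invented.
raw_case_event = {
    "type": "case",
    "timestamp": "2022-05-19T03:53:00Z",
    "case": {
        "identifier": "CASE-0001",
        "state": "working",
        "data": {"loanAmount": 25000, "riskScore": 0.42},   # business data field
    },
}

def flatten(event, parent_key="", sep="."):
    """Recursively flatten nested dictionaries into dotted keys."""
    items = {}
    for key, value in event.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

time_series_event = flatten(raw_case_event)
# {'type': 'case', 'timestamp': '...', 'case.identifier': 'CASE-0001',
#  'case.state': 'working', 'case.data.loanAmount': 25000, ...}
```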

    Visualize:

    All the data that is collected is used to create business dashboards. These dashboards are created in –

    1. Business Performance Centre (BPC): BPC provides out-of-the-box visualizations for the data indexed in Elasticsearch. This includes charts and dashboards for the supported runtimes. These dashboards can be categorized by business goals, and alerts can be configured for them.
    2. Kibana Dashboards: Kibana Process Dashboards, Case Dashboards, Decision Dashboards and Content Dashboards can be created based on the use case. These are custom dashboards rather than out-of-the-box visualizations.

     Some sample visualizations are shown below.

     The out-of-the-box dashboards that are generated are –


    Example: Out-of-the-box dashboards on BPC for Workflow Case Events

    The above dashboard covers widgets such as activities in progress, activities in each state, average age of in-progress activities (in seconds), average elapsed time of completed activities (in seconds), activity started statistics, activity completed statistics, average activity duration, and total number of activities.

    The above diagram shows one component of the overall dashboard, listing the activities in each state.
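
    As an illustration, a widget such as the average elapsed time of completed activities could be backed by an Elasticsearch aggregation along these lines. The endpoint, index name, and field names below are hypothetical, and the call assumes the Elasticsearch 8.x Python client rather than any actual BAI index schema.

```python
# Hypothetical query for indexed activity summary events; index and field
# names are placeholders, not the actual BAI schema.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elasticsearch:9200")          # illustrative endpoint

response = es.search(
    index="bai-activity-summaries",                       # hypothetical index
    size=0,
    query={"term": {"state": "completed"}},
    aggs={"avg_elapsed_seconds": {"avg": {"field": "elapsedTimeSeconds"}}},
)
print(response["aggregations"]["avg_elapsed_seconds"]["value"])
```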

    Learn and Guide:

    1. Several events are generated by business automation resources. For example, if a project uses decision services such as IBM Operational Decision Manager, detailed decision data can be captured, including the input received, the output given, the rules that are executed, the sequence in which rules are executed, execution errors (if any), and so on. Using machine learning, these data points can be modelled to build a use case like Decision Recommendation.
    2. Similarly, process event data, BPMN & BPEL event data, ODM & ADS event data, content event data and case event data can be captured to create ML models for use cases like Intelligent Task Prioritization, Decision Recommendation, Workflow Insights, and Next Best Task Business Value.

    Examples

    1. A data scientist uses a Jupyter Spark notebook to extract data from BAI and prepare it for ingestion by out-of-the-box ML models
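
    A minimal sketch of what such a notebook cell might look like, assuming the time series events have been landed on HDFS as JSON; the paths and column names are illustrative, not the actual BAI layout.

```python
# Illustrative notebook cell: read BAI time series events from HDFS and
# prepare a feature table. Paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bai-feature-prep").getOrCreate()

events = spark.read.json("hdfs:///bai/timeseries/")        # hypothetical path

features = (
    events
    .filter(F.col("type") == "activity")
    .select("activityName", "state", "elapsedTimeSeconds")
    .na.drop()
)
features.write.mode("overwrite").parquet("hdfs:///bai/features/")  # staging area
```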


    2. The data scientist uses a Jupyter Spark notebook to create an ML model and train it
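
    And a correspondingly hedged sketch of the training cell, fitting a simple Spark MLlib regressor on the prepared features; the column names and model choice are illustrative only.

```python
# Illustrative training cell: fit a regression model on the prepared features
# to predict activity duration. Column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml import Pipeline

spark = SparkSession.builder.appName("bai-train").getOrCreate()
features = spark.read.parquet("hdfs:///bai/features/")      # from the previous cell

pipeline = Pipeline(stages=[
    StringIndexer(inputCol="activityName", outputCol="activityIndex"),
    VectorAssembler(inputCols=["activityIndex"], outputCol="featuresVec"),
    RandomForestRegressor(featuresCol="featuresVec", labelCol="elapsedTimeSeconds"),
])

train, test = features.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
predictions = model.transform(test)
predictions.select("activityName", "elapsedTimeSeconds", "prediction").show(5)
```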


