
Managing Maximo Visual Inspection model lifecycle with MLOPs on AWS

By Ashish Patel posted Tue November 08, 2022 08:32 AM


The Maximo Application Suite provides intelligent asset management, monitoring, predictive maintenance, and reliability in a single platform. It is a distinct, integrated cloud-based platform that uses AI, IoT, and analytics to optimise performance, extend asset lifecycles, and reduce operational downtime and costs. Maximo Visual Inspection (MVI), part of the Maximo Application Suite, is a next-generation cognitive infrastructure platform that provides deep-learning-based image and video analytics services such as classification, object detection, and anomaly detection. It includes tools for annotating images and videos, which can be used to train a model, evaluate its performance, and publish an API with a single click, making model deployment simple.

In MVI today, both dataset and model versioning are manual operations that must be documented by the initiator. An API eases model deployment, but model tracking is not supported; to retrain an older model, we must create it as a new model. Model drift, fairness, and explainability are absent from the current version, although the model dashboard does detail the performance of the model's inference component. MLOps supplies these capabilities, making ML systems more streamlined, scalable, credible, and resilient.

Significance of MLOps for Maximo Visual Inspection on AWS

MLOps Life Cycle

Computer vision datasets should be versioned so that any modification can be traced. Since a variety of external factors can cause data to drift or introduce anomalies into a computer vision model, the tooling tracks and stores many versions of the model. Because of this, reverting to a prior version, or pinpointing precisely where a fault was addressed, is straightforward in the event of an issue.
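MVI does not expose dataset versioning itself, so the helper below is a hypothetical sketch of one way to make dataset changes traceable: fingerprinting a dataset directory with a content hash, so that any added, removed, or modified image produces a new version identifier that can be recorded alongside the model trained on it. The function name and directory layout are assumptions, not part of MVI.

```python
import hashlib
import pathlib

def dataset_version(root: str) -> str:
    """Hypothetical dataset fingerprint: a SHA-256 hash over every file
    (name and contents) under the dataset directory, in sorted order.
    Any change to the dataset yields a different version string."""
    digest = hashlib.sha256()
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode("utf-8"))
            digest.update(path.read_bytes())
    return digest.hexdigest()[:12]
```

Recording this short identifier with each training run gives the initiator an automatic, reproducible record of exactly which data produced which model, instead of relying on manual documentation.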

MLOps will be performed on AWS by managing the machine learning lifecycle with the open-source framework MLflow. This covers experimentation, reproducibility, deployment, and a centralised model registry. AWS Fargate hosts a serverless MLflow server, which uses Amazon Simple Storage Service (Amazon S3) and Amazon Relational Database Service (Amazon RDS) as the artifact and backend stores, respectively. Setting up the REST API, registering MVI-trained models in the MLflow Model Registry, and deploying an MLflow model onto an MVI endpoint are the steps required to track MVI experiments in MLflow.
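As a minimal sketch of the registration step, the snippet below builds a call against the MLflow REST API (the `registered-models/create` endpoint) to register an MVI-trained model in the Model Registry. The server URL and model name are placeholders, assuming an MLflow tracking server is already running on Fargate with its S3/RDS stores configured.

```python
import json
from urllib import request

# Placeholder URL for the MLflow server hosted on AWS Fargate
MLFLOW_URI = "http://mlflow.example.com:5000"

def mlflow_request(endpoint: str, body: dict) -> request.Request:
    """Build a POST request for an MLflow REST API 2.0 endpoint."""
    return request.Request(
        f"{MLFLOW_URI}/api/2.0/mlflow/{endpoint}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Register an MVI-trained model in the MLflow Model Registry
# ("mvi-defect-detector" is a hypothetical model name; in practice the
# request would be sent with urllib.request.urlopen(req))
req = mlflow_request("registered-models/create", {"name": "mvi-defect-detector"})
```

Once registered, subsequent MVI training runs can be logged as new versions of the same registered model, giving the traceability that MVI alone does not provide today.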

Hybrid cloud services are the next generation of emerging technology, making it easier for businesses to implement computer vision systems that are secure, durable, and scalable through MLOps engineering. This article has outlined a blueprint for future computer vision with MLOps, along with a description of how it looks today.