Figure 1. WMLz architecture
The blue parts on z/OS are the WMLz core components, including the Model Management UI, the Model and Deployment Management Services, the Online Scoring Service, the User Management Service, and the Spark Integration Service. IzODA is the key bundled component on z/OS: Spark provides the best-of-breed analytics engine for large-scale data processing, Anaconda supplies a wide range of popular Python packages for model training, and MDS connects those data processing engines to your enterprise data sources on Z, such as Db2®, IMS, VSAM, and SMF. The WMLz IDE is an optional part that provides the runtime environments and frameworks, such as Jupyter Notebook, RStudio, and SPSS Modeler, for model training and development. The WMLz IDE services run in a Kubernetes cluster on s390x or x86 Linux servers, which you can configure with either one or three Linux nodes.
The cluster features a web user interface (UI), an administration dashboard, and many APIs. The UI and dashboard simplify the machine learning workflow and enable project collaboration among key personas in your enterprise, including data scientists, data engineers, business analysts, application developers, machine learning engineers, and system administrators. For example, while system administrators can use the dashboard to configure systems, monitor services, and keep the UI up and running, data scientists can use the UI to build, train, and manage machine learning models, and application developers can then see and deploy those saved models. Deployed models are used to make predictions and can be optimized in real time as the data is ingested into the machine learning workflow in a feedback loop.
In short, either as a standalone machine learning solution or an AI-enabling infrastructure, WMLz provides the following unmatched capabilities:
- Secure IBM Z platform for running machine learning with data in place
- Full lifecycle management of models
- Enterprise grade performance and high availability
- Flexible choice of machine learning languages
- Intuitive self-guided model development
- Developer-friendly APIs for applications on the Z platform
With these capabilities, WMLz helps you maximize the value of your mission-critical data. By keeping your data on Z, WMLz significantly reduces the cost, security risk, and time required to turn your raw data into valuable insights. See the official WMLz documentation for more detail.
Why machine learning on z/OS?
Many data quality problems are introduced while data is in motion, for example, when it is moved off a platform for analytic processing. These problems can be avoided by moving analytics resources, such as machine learning, to where the data resides. This approach is known as “data gravity”.
Data gravity is at the core of analytics on z/OS. Simply stated, data gravity eliminates the need for large-scale data movement. Data movement increases the complexity, risk, and cost of managing the data. This is especially true for data that resides on the IBM Z platform, the world’s most secure and resilient environment; moving data off the Z system introduces security risks and operational costs. WMLz enables you to keep your sensitive business data in the secure Z environment while using industry-leading machine learning capabilities to extract actionable insights from that data. By keeping the data on your Z systems, WMLz significantly reduces the cost, security risk, and time needed to create, evaluate, and deploy machine learning models.
What ML algorithms and model types does WMLz support?
Algorithms
The integrated Jupyter Notebook Editor model development tool supports the following machine learning model algorithms:
- All classification and regression algorithms that the Apache Spark MLlib supports. See "z/OS Spark MLLib – Classification and regression 2.3 or 2.4" for a list of the supported classification and regression algorithms.
- All PySpark classification and regression algorithms that the Apache Spark MLlib supports (a minimal PySpark training sketch appears after this list).
- All clustering algorithms that the Apache Spark MLlib supports. See "z/OS Spark MLLib – Clustering 2.3 or 2.4" for a list of the supported clustering algorithms.
- All PySpark clustering algorithms that the Apache Spark MLlib supports.
- All Scikit-learn machine learning algorithms. See Scikit-learn machine learning algorithms for the list of supported Scikit-learn machine learning algorithms.
- All machine learning algorithms that the XGBoost Python API supports, with the exception of the GPU algorithms in XGBoost. See XGBoost Python Package for details of the supported XGBoost algorithms.
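To make the PySpark path concrete, here is a minimal sketch of an MLlib training flow of the kind the Jupyter Notebook Editor supports. The DataFrame contents, column names, and feature list are hypothetical, and the sketch assumes a WMLz notebook session where a SparkSession named spark is already available.

```python
# Minimal sketch of a Spark MLlib training flow (hypothetical data and column names).
# Assumes a WMLz Jupyter notebook where a SparkSession ("spark") is already available.
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Hypothetical training data; in practice this would be loaded from a z/OS data source.
df = spark.createDataFrame(
    [(5000.0, 2, 0.0), (120.0, 45, 1.0), (9800.0, 1, 0.0)],
    ["amount", "tx_per_hour", "label"],
)

assembler = VectorAssembler(inputCols=["amount", "tx_per_hour"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[assembler, lr])
model = pipeline.fit(df)          # train the pipeline
model.transform(df).show()        # score the training data back as a sanity check
```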
Data sources
Data source support in WMLz is determined by whether you use JDBC or MDS as the data access method:
- With JDBC, WMLz supports access to Db2 for z/OS in Scala and Python.
- With MDS, WMLz supports access to the following data sources in Scala and Python:
  - Db2 for z/OS
  - IMS
  - SMF
  - VSAM data sets
See The Data Service SQL solution and DS Studio Overview for more information about working with data sources through MDS.
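For illustration, here is a minimal PySpark sketch of reading a Db2 for z/OS table over JDBC into a Spark DataFrame. The JDBC URL, credentials, and table name are hypothetical placeholders; MDS access would instead go through the MDS driver described in the documentation linked above.

```python
# Minimal sketch of reading a Db2 for z/OS table over JDBC into a Spark DataFrame.
# The URL, credentials, and table name below are hypothetical placeholders.
# Assumes a WMLz notebook where a SparkSession ("spark") is already available.
db2_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://myhost.example.com:5021/DSNV12P1")  # hypothetical host/location
    .option("driver", "com.ibm.db2.jcc.DB2Driver")
    .option("dbtable", "MLZ.TRANSACTIONS")                         # hypothetical table
    .option("user", "mluser")
    .option("password", "********")
    .load()
)

db2_df.printSchema()
db2_df.show(5)
```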
Model types
The type of a machine learning model is determined by the scoring engine used for processing the model. WMLz supports the following model types:
- SparkML
- MLeap
- PMML
- Scikit-learn
- XGBoost
- ARIMA or Seasonal ARIMA
- ONNX
You can train, save, publish, and deploy MLeap, SparkML, Scikit-learn, XGBoost, ARIMA, and Seasonal ARIMA models in WMLz.
You can also use the WMLz PMML support to import a model that is developed on another platform. You can save and deploy the imported model as a PMML model type.
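To show how an application might consume a deployed model, here is a hedged Python sketch of an HTTP call to the WMLz online scoring service. The host, port, endpoint path, authentication handling, and payload layout are assumptions for illustration only; check the WMLz REST API documentation for the exact contract of your deployment.

```python
# Hedged sketch of calling the WMLz online scoring service for a deployed model.
# The host, port, endpoint path, token handling, and payload layout are assumptions
# for illustration only; consult the WMLz REST API documentation for the real contract.
import requests

SCORING_URL = "https://wmlz.example.com:11277/iml/v2/scoring/online/MyDeploymentId"  # hypothetical
AUTH_TOKEN = "<token obtained from the WMLz authentication service>"                 # placeholder

payload = {
    # Hypothetical feature record matching the model's expected input schema.
    "fields": ["amount", "tx_per_hour"],
    "values": [[5000.0, 2]],
}

resp = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    timeout=30,
    verify=False,  # only acceptable in a test environment with a self-signed certificate
)
resp.raise_for_status()
print(resp.json())
```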
In addition, WMLz support for these model types has certain limitations. See https://www.ibm.com/support/knowledgecenter/SS9PF4_2.2.0/src/tpc/mlz_algorithms.html for details.
zPET’s WMLz environment
As "IBM Z's first customer", one of our teams’ goals is to integrate the latest and greatest technologies and products into our test environment. To that end, we have installed, configured, and maintained the complete WMLz solution with the latest PTFs on both our sysplexes. We deployed the WMLz base on one of our z/OS V2R4 systems along with the WMLz IDE running on the 3-node x86 Linux servers installation mode serving our 15-way parallel sysplex which we call Plex1. We deployed the WMLz base on a second z/OS V2R4 system plus the WMLz IDE running on a 1 node s390x server running in a z15 Linux on Z LPAR serving our smaller Plex2 sysplex.
Since the GA of its first version, V1.1.0, we have installed and used WMLz to test various machine learning use cases. At the time of this writing, the latest available version, V2.2.0, is installed in our environment.
For a first-time installation, we found it necessary to spend extra time and care to fully read the installation roadmap here, which gives a high-level introduction to all the installation steps. In our experience, there were two areas that needed additional customization for our environment after installation, since they were not called out for consideration in the installation guide.
- zFS storage capacity: After the installation, we encountered multiple problems caused by “No space left on device” errors, because we initially had only one zFS filesystem mounted on $IML_HOME_DIR. Upon further investigation, we isolated most of this to files written to $IML_HOME_DIR/imlpython and $IML_HOME_DIR/tmp that would grow very large. Our solution was to mount dedicated zFS filesystems that can dynamically increase their size (for example, aggregates mounted with the AGGRGROW option) on the $IML_HOME_DIR/imlpython and $IML_HOME_DIR/tmp mountpoints.
- Choosing the IzODA environment: IzODA is required when you install WMLz. If your environment does not already have IzODA installed, you can install the bundled IzODA by following the official instructions directly. Our sysplex already had IzODA installed, so to separate the environment of previous IzODA applications from WMLz, we had to do some additional customization. For Spark, we created a dedicated Spark directory in $IML_HOME_DIR that contains the Spark configuration files for WMLz and pointed the WMLz SPARK_HOME environment variable to it. For Anaconda, we set up a fully dedicated Anaconda environment for WMLz and made it the ANACONDA_ROOT of WMLz.
After the initial installation, we continued to upgrade WMLz on a regular cadence. We have found that during upgrades, the WMLz base part needs only about 2 hours of downtime, while the Linux IDE part needs even less, about 1 to 2 hours, when everything goes smoothly. Upgrading WMLz typically requires upgrading IzODA first and then following the instructions here to upgrade the WMLz base part step by step. The Linux IDE upgrade is much simpler: you just execute a shell script on the Linux server.
We have begun to deploy sample machine learning use cases, as well as some that are specific to the types of data we generate in our z/OS environments. We have attempted to import external models into our WMLz environment and to develop, save, and deploy our own models using the Jupyter Notebook IDE. We configured scoring services both in a dedicated Liberty server and in a CICS region. Most recently, we implemented a machine learning use case that uses data from the z/OS SMF real-time interface to score against a model we built from historical SMF data. We will cover this in more detail in our next blog about WMLz.
References:
IBM Knowledge Center, “Overview of WML for z/OS”, https://www.ibm.com/support/knowledgecenter/SS9PF4_2.2.0/src/tpc/mlz_configurescoringservicecics.html
Kewei Wei, Guanjun Cai, “Bring Intelligence to Where Critical Transactions Run – An Update from Machine Learning for z/OS”, https://www.idug.org/p/bl/et/blogaid=763