AI on IBM Z & IBM LinuxONE

Leverage AI on IBM Z & LinuxONE to enable real-time AI decisions at scale, accelerating your time-to-value, while ensuring trust and compliance

Adding Machine Learning Scoring Capabilities to Your z/OS Liberty Web Application

By James Taylor posted Tue January 13, 2026 09:45 AM

  

Introduction

Did you know that with IBM’s Machine Learning for z/OS product, you can access predictive scoring directly from your web application—without relying on REST calls? By using the product’s native Java API, available as a Liberty feature, you can achieve sub-millisecond scoring instead of paying the cost of HTTP communication.

In this blog, I’ll walk you through how I implemented this integration.



Step 1: Create a Minimal Web Application


Before testing the API, I needed a simple web app that printed something to the browser when its endpoint was called. This served as the foundation for adding predictive scoring later.

Since we’re in the world of AI, I decided to use IBM’s Project Bob (marketed as your AI software development partner) to generate this minimal app for me.


 

Here is my prompt:

generate me a basic hello world web application project.

- it must have a maven build I can run to create a war I can directly put into Liberty

- the functionality will just be to print "hello world" to the browser window when the endpoint is called.

- generate me a Liberty server.xml too.


Bob’s self-generated to-do list:



Bob generated a README and build instructions



Before celebrating the fact it had finished…



Let’s explore the assets Bob has created...


Java code for the testing endpoint:

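The generated code isn’t reproduced in text here, but a minimal sketch of an equivalent servlet looks like the following (the package, class name, and use of the javax.servlet API are my own assumptions, not necessarily what Bob produced):

package com.example;

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloWorldServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Write a plain-text greeting back to the browser when the endpoint is called.
        response.setContentType("text/plain");
        final PrintWriter out = response.getWriter();
        out.println("hello world");
    }
}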

A web.xml application configuration file:

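Again only as a sketch, a minimal web.xml that maps the illustrative servlet above could look like this (the names and URL pattern are assumptions, not Bob’s exact output):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <servlet>
        <servlet-name>HelloWorldServlet</servlet-name>
        <servlet-class>com.example.HelloWorldServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>HelloWorldServlet</servlet-name>
        <url-pattern>/hello</url-pattern>
    </servlet-mapping>
</web-app>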

A maven build configuration pom file:
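A minimal pom that packages the app as a war might look roughly like this (the group ID, Java level, and dependency version are my assumptions; the finalName matches the MLTest.war used later):

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>MLTest</artifactId>
    <version>1.0.0</version>
    <packaging>war</packaging>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- Servlet API is provided by Liberty at runtime, so it is not packaged in the war. -->
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.1.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>MLTest</finalName>
    </build>
</project>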


A Readme:

 

A Liberty server.xml file to configure the war in a Liberty server:
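As a sketch, the essentials of such a server.xml are a servlet feature, an HTTP endpoint, and a webApplication element whose location points at the war (the ports and file-system path below are illustrative):

<server description="MLTest server">
    <featureManager>
        <feature>servlet-3.1</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080" httpsPort="9443"/>

    <!-- location must point at wherever the war ends up on the file system -->
    <webApplication id="MLTest" contextRoot="/MLTest" location="/u/myuser/MLTest.war"/>
</server>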


Building the web application with maven
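The build is the standard Maven package goal, run from the project root (assuming the pom’s finalName is MLTest, as in my sketch above, this produces target/MLTest.war ready to upload):

mvn clean package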


It has built correctly. 



Testing the standalone web application

Next, we upload the artifacts to z/OS and set up a Liberty server to verify that our minimal web application runs correctly.

First, we upload the server.xml and MLTest.war using FTP.



Next, we create a new Liberty server on z/OS.

In the image below, you can see that z/OS is correctly set up with:

  • WLP_INSTALL_DIR pointing to /usr/lpp/liberty_zos/current, the location of the Liberty installation available on z/OS.
  • The Liberty bin directory included in the system path.
  • The export command to set the variable that tells Liberty where to store its server definitions.
  • Finally, the server create command to define a new Liberty server that will host and test Bob’s generated web application (a sketch of the equivalent commands follows this list).
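A sketch of the equivalent shell commands (the WLP_INSTALL_DIR value comes from my system; the user directory and server name are illustrative):

# Liberty installation shipped on z/OS, with its bin directory on the path
export WLP_INSTALL_DIR=/usr/lpp/liberty_zos/current
export PATH=$WLP_INSTALL_DIR/bin:$PATH

# Tell Liberty where to store its server definitions
export WLP_USER_DIR=/u/myuser/wlp_user

# Define the new Liberty server that will host the web application
server create MLTestServer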

Here we can see the server create worked and we have a Liberty server to customize:


We copy the server.xml we uploaded over the top of the default one:

Next, I use vi to edit the path in server.xml so that it points to the location where I uploaded the war file.

Start the server


See if we can view it from a browser:

It worked!

That completes Step 1 of my journey: a straightforward task made error-free thanks to Bob’s assistance.




Step 2: Create a Scoring Service in the ML Portal

The Machine Learning portal allows you to define a scoring service, which automatically creates a Liberty server configured to provide RESTful access to deployed models. For demonstration purposes, we’ll use the Churn sample model, though any model can be used.

Here’s what I did:

1. From the portal menu, select Scoring Services.



2. Click Add standalone scoring service.



3. Provide a name, host, and port details for the new service, then press ‘Advanced’.

4. Under Advanced, reduce the memory from 16 GB to 4 GB (sufficient for the churn model and my LPAR limits).

5. Press ‘Add’.


My newly defined scoring service is now listed, but not started




6. Press the three dots to the right of my scoring service and choose ‘Start’.



7. Confirm that I wish it to start.



I got a progress spinner for a while:



After a short wait I could see confirmation that the server had started.




Deploy a model to the scoring service endpoint

1. Next, I pressed the title at the top to go back to the home page.



2. I pressed ‘Models’.



3. I pressed the three dots to the right of the churn model (which I had already uploaded to the server) and chose ‘Deploy’.



4. I gave the deployment a name; my convention is to use the name of the scoring service followed by the word ‘Deployment’.

I selected the scoring service I wanted to deploy it to, and then pressed the ‘Create’ button.


We can now see that it has been deployed.



Test the REST endpoint of the scoring service

1. I pressed the three dots to the right of the deployment entry and chose ‘Test API call’.




I was presented with a page that allowed me to enter the data to send for scoring.

2. I entered values that I know work with the churn model and pressed ‘Submit’.


After pressing ‘Submit’, we see the scoring service response:


Step 3: Adding the Hello World Endpoint to the Same Liberty Server

The portal actions I had taken had created a Liberty server under ML_HOME/usr/servers/<service-name>.
I now edit its server.xml to include my Hello World app. Since restarting from the portal would overwrite changes, I start the server manually from the shell after adjusting WLP_USER_DIR.


The steps I followed are shown below:

1. I return to the scoring services list.



2. I stop the service before making changes.


Here we can see the Liberty server the portal has created: ML_HOME/usr/servers/test1




3. I edit server.xml with vi and add the configuration for the hello world web application, for example:
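The addition itself is small; assuming the war was uploaded to the same place as before, it is a single webApplication element added to the portal-generated server.xml (the location below is illustrative):

<!-- Added so the scoring server also serves the hello world application -->
<webApplication id="MLTest" contextRoot="/MLTest" location="/u/myuser/MLTest.war"/>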



At this point, we can’t restart the server through the Machine Learning for IBM z/OS UI or the aln-services.sh script, as both methods would overwrite our custom settings.
I’ve requested an enhancement to add a “do not overwrite” option when defining a server. Until that becomes available, we’ll use a simple workaround and start the server directly from the UNIX command line.

Before doing so, we need to update the WLP_USER_DIR environment variable to point to the directory where the server resides.



4. I alter the WLP_USER_DIR environment variable to point at the wlp home of MLz:

5. I issue the server start command to start the server outside of MLz control (see the sketch below):
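Putting steps 4 and 5 together, the commands look roughly like this (ML_HOME is the MLz installation directory; test1 is the server name the portal created in my case):

# Point Liberty at the directory tree the ML portal created the server under
export WLP_USER_DIR=<ML_HOME>/usr

# Start the portal-created server outside of MLz control
server start test1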



6. I repeat the steps from the “Test the REST endpoint” section to verify that the scoring service still functions correctly alongside our newly added web application, and it does:






    7. Now I test to see if the hello world endpoint is there too

...and it is:

 


Step 4: Enhance the Web App to Call the ML Java API


The ML Java API class is part of Machine Learning for z/OS (MLz) and provides native scoring capabilities.

The ML Java API class is:

com.ibm.ml.scoring.online.service.api.NativeAPIs

 

Its key method:

doScore(String deploymentId, Map<String, Object> input)

What this method does:

  • Accepts a deployment ID and an input map of feature names to values.
  • Returns a map containing the scoring result.

About MLz and NativeAPI:

  • MLz (Machine Learning for z/OS) enables scoring of deployed machine learning models directly by other Java classes running in the same Liberty JVM.
  • The NativeAPIs class exposes the core scoring functionality.
  • The primary method, doScore, is the native API entry point for scoring requests.

Method Signature:

public Map<String, Object> doScore(String deploymentId, Map<String, Object> input)

  • deploymentId: The identifier of the deployed model.
  • input: A map of feature names and their corresponding values.
  • Returns: A map containing prediction results and associated metadata.

Steps to call the API:

  1. Build the input map.
  2. Use reflection to load NativeAPIs and its doScore method.
  3. Instantiate the class and invoke doScore.
  4. Extract the prediction from the returned map.

Below is the input map required to replicate the data we previously submitted through the TEST UI:

final Map<String, Object> input = new HashMap<>();
input.put("ACTIVITY", 1);
input.put("AGE", 30);
input.put("EDUCATION", 1);
input.put("INCOME", 100000);
input.put("NEGTWEETS", 8);
input.put("SEX", "F");
input.put("STATE", "CA");

Here is simplified code to call the endpoint:

final Class<?> mlzClass = Class.forName(ML_CLASS_NAME);
final Method mlMethod = mlzClass.getMethod("doScore", String.class, Map.class);

final Object mlObj = mlzClass.getDeclaredConstructor().newInstance();

Thread.currentThread().setContextClassLoader(mlzClass.getClassLoader());

final Map<String, Object> invocationResult =
         (Map<String, Object>) mlMethod.invoke(mlObj, deploymentId, input);

A complete Method for Integration into my Web Application:

private String doScore() {

    Map<String, Object> result = null;
    String toReturn = "empty";

    final Map<String, Object> input = new HashMap<>();
    input.put("ACTIVITY", 1);
    input.put("AGE", 30);
    input.put("EDUCATION", 1);
    input.put("INCOME", 100000);
    input.put("NEGTWEETS", 8);
    input.put("SEX", "F");
    input.put("STATE", "CA");

    final ClassLoader originalClassLoader = Thread.currentThread().getContextClassLoader();

    try {
        final Class<?> mlzClass = Class.forName(ML_CLASS_NAME);
        final Method mlMethod = mlzClass.getMethod("doScore", String.class, Map.class);
        final Object mlObj = mlzClass.getDeclaredConstructor().newInstance();

        // Set the context class loader to the ML class's class loader for any downstream resolution.
        Thread.currentThread().setContextClassLoader(mlzClass.getClassLoader());

        @SuppressWarnings("unchecked")
        final Map<String, Object> invocationResult =
                (Map<String, Object>) mlMethod.invoke(mlObj, SERVING_ID, input);
        result = invocationResult;

        if (result != null) {
            final Object prediction = result.get("prediction");
            if (prediction != null) {
                toReturn = prediction.toString();
            } else {
                toReturn = "No prediction field returned from ML service";
            }
        } else {
            toReturn = "No result returned from ML service";
        }

    } catch (ClassNotFoundException e) {
        return "ML Error: class not found: " + ML_CLASS_NAME;
    } catch (NoSuchMethodException e) {
        return "ML Error: expected method 'doScore(String, Map)' not found on " + ML_CLASS_NAME;
    } catch (ReflectiveOperationException e) {
        // Covers InstantiationException, IllegalAccessException, InvocationTargetException, etc.
        final String causeMsg = (e.getCause() != null && e.getCause().getMessage() != null)
                ? e.getCause().getMessage() : e.getMessage();
        return "ML Error: reflective invocation failed: " + causeMsg;
    } catch (Exception e) {
        return "ML Error: " + e.getMessage();
    } finally {
        // Always restore the original loader to avoid contaminating the thread for subsequent requests.
        Thread.currentThread().setContextClassLoader(originalClassLoader);
    }

    return toReturn;
}

Final Integrated Method in the Web Application:
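The integrated method isn’t reproduced in text here, but in essence it calls doScore() from the servlet’s GET handler and writes the result to the response. A sketch, reusing the illustrative HelloWorldServlet from Step 1:

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/plain");
    final PrintWriter out = response.getWriter();
    // Invoke the reflective scoring helper and show its result in the browser.
    out.println("Churn score: " + doScore());
}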

 

    



Rebuild the Application and Verify Integration

First we rebuild with maven:

And then reupload the war to the same place on the z/OS file system:

And then restart the Liberty server


Next, let’s visit the web application’s endpoint to verify that it can retrieve a score and display it:

Success, it has worked!

Next, I make a small adjustment to display the entire result instead of just the prediction:
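The change amounts to one line in doScore(): instead of extracting only the prediction field, return the whole map, for example:

// Show every field the scoring service returned, not only "prediction".
toReturn = result.toString();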

I stop the server, start it again, and reload the page to verify the update:


Summary

Here’s what we accomplished:

1. Built a minimal Liberty web app using AI assistance.
2. Created and deployed a scoring service in the ML portal.
3. Combined both into a single Liberty server.
4. Integrated the ML Java API for direct scoring, with no REST overhead.

This approach delivers fast, secure, and efficient scoring for z/OS-hosted applications, enabling real-time AI-driven decisions and reducing latency compared to REST-based calls.
