Cloud Pak for Data

How to deploy a function as a web service in Cloud Pak for Data 3.5 simulating AI model scoring

By Harris Yang posted Wed August 11, 2021 06:50 AM

  

IBM Cloud Pak™ for Data is a fully integrated data and AI platform that modernizes how businesses collect, organize, and analyze data and infuse AI throughout their organizations. Built on Red Hat® OpenShift® Container Platform, IBM Cloud Pak for Data integrates market-leading IBM Watson® AI technology with the IBM Hybrid Data Management Platform, DataOps, governance, and business analytics technologies. More and more enterprise data scientists are developing AI models in IBM Cloud Pak for Data for use cases across industries. In IBM Cloud Pak for Data, users can easily deploy trained AI models as web services that business applications invoke for scoring. Users can also write a function and deploy it as a web service that behaves like AI model scoring, which gives data scientists a way to wrap common-purpose utilities into RESTful APIs for those same business applications. In this blog, you will learn the procedure to deploy a function as a web service in IBM Cloud Pak for Data 3.5.
(Image: cpd35-function-aas.png)

1. Create a Jupyter notebook with the default Python 3.7 runtime

2. Define a function with a nested score function
The input variable payload is a JSON document with the following structure; the function parses it to extract the input data:
{"input_data": [{"fields": ["col1", "col2", ...], "values": [["value1", "value2", ...], ...]}]}
The nested score function returns JSON with the following structure; the invoker parses it to get the result:
{"predictions": [{"fields": ["col1", "col2", ...], "values": [["value1", "value2", ...], ...]}]}
#wml_python_function
def churn_function():
    
    def score( payload ):
        
        message_from_input_payload = payload.get("input_data")[0].get("values")[0][0]
        response_message = "Received message - {0}".format(message_from_input_payload)
       
        # Score using the pre-defined model
        score_response = {
            'predictions': [{'fields': ['Response_message_field'], 
                             'values': [[response_message]]
                            }]
        } 
        return score_response
    
    return score
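Before saving the function to Watson Machine Learning, it is worth sanity-checking the nested score function locally by calling it with a sample payload. This is a quick local check (not part of the original notebook) that exercises the same payload and response structures described above:

```python
# Local smoke test: call the nested score function directly,
# outside of any WML deployment.
def churn_function():

    def score(payload):
        # Extract the first value of the first row from the input payload
        message_from_input_payload = payload.get("input_data")[0].get("values")[0][0]
        response_message = "Received message - {0}".format(message_from_input_payload)

        # Build the response in the expected 'predictions' format
        score_response = {
            'predictions': [{'fields': ['Response_message_field'],
                             'values': [[response_message]]}]
        }
        return score_response

    return score

sample_payload = {"input_data": [{"fields": ["message"],
                                  "values": [["Hello Churn Analysis!"]]}]}
result = churn_function()(sample_payload)
print(result['predictions'][0]['values'][0][0])
# → Received message - Hello Churn Analysis!
```

If the printed message echoes the input, the function follows the payload contract and is ready to be stored in the repository.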


3. Initialize the WML client
from ibm_watson_machine_learning import APIClient

from project_lib.utils import environment
url = environment.get_common_api_url()

import sys,os,os.path
token = os.environ['USER_ACCESS_TOKEN']

wml_credentials = {
     "instance_id": "openshift",
     "token": token,
     "url": url,
     "version": "3.5"
}

client = APIClient(wml_credentials)


4. Create a deployment space
space_name = 'churn-analysis-space'
space_uid = ''

for space in client.spaces.get_details()['resources']:
    if space['entity']['name'] == space_name:
        space_uid = space['metadata']['id']

if space_uid == '':
    space_meta_data = {
        client.spaces.ConfigurationMetaNames.NAME : space_name
        }
    stored_space_details = client.spaces.store(space_meta_data)
    space_uid = stored_space_details['metadata']['id']

client.set.default_space(space_uid)


5. Save the function to the deployment space
# Function metadata
software_spec_uid = client.software_specifications.get_uid_by_name('default_py3.7')

meta_props={
    client.repository.FunctionMetaNames.NAME: "Churn-analysis-function",
    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}

function_artifact = client.repository.store_function(meta_props=meta_props, function=churn_function)
function_uid = client.repository.get_function_uid(function_artifact)
print("Function UID = " + function_uid)

function_details = client.repository.get_details(function_uid)
from pprint import pprint
pprint(function_details)


6. Deploy the function as a web service
deploy_meta = {
     client.deployments.ConfigurationMetaNames.NAME: "Churn-analysis-function",
     client.deployments.ConfigurationMetaNames.ONLINE: {}
}

deployment_details = client.deployments.create(function_uid, meta_props=deploy_meta)

deployment_uid = client.deployments.get_uid(deployment_details)
print("Deployment UID = " + deployment_uid)


You should be able to see the output like this:
#######################################################################################

Synchronous deployment creation for uid: 'cf7b3684-6212-4efe-84de-695d682a19c4' started

#######################################################################################


initializing.....
ready


------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='3e763294-31a0-4621-a811-6f0774818f65'
------------------------------------------------------------------------------------------------



7. Test the web service
job_payload = {"input_data": [{'fields': ['message'],
                               'values': [['Hello Churn Analysis!']]
                              }]
              }
pprint(job_payload)

job_details = client.deployments.score(deployment_uid, job_payload)
pprint(job_details['predictions'][0]['values'][0][0])
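Besides scoring through `client.deployments.score`, business applications typically call the deployment's REST endpoint directly. Below is a sketch of assembling such a request; the host, token, and deployment UID are placeholders, and the endpoint path and version date shown here are assumptions you should verify against the deployment details page in your own cluster:

```python
import json

# Placeholder values standing in for the results of the earlier steps
url = "https://cpd-cluster.example.com"
deployment_uid = "3e763294-31a0-4621-a811-6f0774818f65"
token = "YOUR_USER_ACCESS_TOKEN"

# Assumed v4 scoring endpoint for an online deployment; the version
# query parameter may differ in your environment
scoring_url = "{0}/ml/v4/deployments/{1}/predictions?version=2020-09-01".format(
    url, deployment_uid)

headers = {"Authorization": "Bearer " + token,
           "Content-Type": "application/json"}

# Same payload contract as client.deployments.score
body = json.dumps({"input_data": [{"fields": ["message"],
                                   "values": [["Hello Churn Analysis!"]]}]})

print(scoring_url)
# To invoke from an application:
#   requests.post(scoring_url, headers=headers, data=body)
```

The response body follows the same 'predictions' structure shown in step 2, so an application parses it exactly as the notebook does in this step.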

#CloudPakforDataGroup