OpenFaaS on RHOCP 4.x – Part 4: MAX Models

By Alexei Karve posted Mon August 09, 2021 04:53 PM

  

OpenFaaS Functions for MAX Models on OpenShift ppc64le

 

Introduction

 

The Model Asset eXchange (MAX) on IBM Developer is a place for developers to find and use free, open source, state-of-the-art deep learning models for common application domains, including audio, image, video, NLP, and weather, that can be consumed through multiple mechanisms. The MAX framework is a Python library that wraps deep learning models implemented in different deep learning frameworks and provides programming interfaces in a uniform style. It effectively enables developers to use deep learning models without needing to dive into the underlying frameworks. Each wrapped deep learning model runs in an isolated container. MAX exposes a standardized, framework-agnostic programming interface as RESTful APIs, and for each model the output is JSON following a standardized specification. MAX also integrates Swagger to automatically make a graphical user interface available for every wrapped model.

 

Time to value can be quite long if you need to train a model from scratch because of the data, labor, time, and resources required. Pretrained models can be ready to use right away, or they might take less time to train. The Model Asset eXchange provides pre-trained or custom-trainable, state-of-the-art deep learning models that have been reviewed and tested and that solve common business problems. All models in MAX are available under permissive open-source licenses, making it easier to use them for personal and commercial purposes and reducing the risk of legal liabilities. MAX models that support deployment are published as public images on Docker Hub; however, these are not available for ppc64le. The Docker image source is published on GitHub and can be downloaded and customized as needed.

 

In this blog, we look at the instructions and scripts needed to run machine learning prediction models from MAX on OpenFaaS. Specifically, the remaining sections show how to build, deploy, and invoke sample models such as the Object Detector, Audio Classifier, Optical Character Recognition, MAX Image Segmenter, and Human Pose Estimator as functions on OpenShift on Power ppc64le.

 

Changes to Code and Dockerfile for ppc64le

 

The images need to be augmented to include the of-watchdog built for ppc64le, which OpenFaaS uses as the entrypoint, and the base image needs to be changed. OpenFaaS functions provide the HTTP services that implement the prediction APIs. The of-watchdog acts as a reverse proxy for running functions. Changes are required to handle the additional "/function/modelname" path exposed by the OpenFaaS gateway, which is prepended to the MAX "/model/predict" endpoint. Further changes are required in the Dockerfile to install the required dependencies.
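
One way to handle the extra path prefix is a small WSGI wrapper around the MAX Flask app. The following is a minimal illustrative sketch in Python; the PrefixStripper class is a hypothetical name, and the actual changes in the MAX repositories may differ:

# Illustrative only: strip the "/function/<name>" prefix that the OpenFaaS
# gateway prepends, so the MAX "/model/predict" route still matches.
# PrefixStripper is a hypothetical helper, not the actual MAX code change.
from flask import Flask

class PrefixStripper:
    def __init__(self, wsgi_app):
        self.wsgi_app = wsgi_app

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path.startswith('/function/'):
            # "/function/max-ocr/model/predict" -> "/model/predict"
            parts = path.split('/', 3)
            environ['PATH_INFO'] = '/' + (parts[3] if len(parts) > 3 else '')
        return self.wsgi_app(environ, start_response)

app = Flask(__name__)
app.wsgi_app = PrefixStripper(app.wsgi_app)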

 

Some of the changes in the code include:

1. Installing TensorFlow 2 instead of v1 and switching the imports to the compat.v1 API

Old:

import tensorflow as tf

New:

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

 

2. Replacing flask-restplus with flask-restx in api/predict.py and core/model.py to import fields and abort

Old:

from flask_restplus import fields

New:

from flask_restx import fields

 

3. Importing tf_slim as a package because tensorflow.contrib exists only in TensorFlow 1.x.

Old:

slim = tf.contrib.slim

New:

import tf_slim as slim


We may use the Swagger UI or curl commands to test the invocation of the functions. Also, Jupyter notebooks (Object Detector, Optical Character Recognition, MAX Image Segmenter, Human Pose Estimator) and Web Apps (Object Detector Web App, Audio Classifier Web App, Image Segmenter Web App, Human Pose Estimator Yogait) provide a convenient way to invoke and visualize the output from the OpenFaaS functions. These were run on x86-64 and did not require any changes.

In the following sections, we continue to use the gateway-external route http://gateway-external-openfaas.apps.test-cluster.priv that we had exposed for OpenFaaS setup in Part 1. The scripts and Dockerfile also show how to use the outbound proxy for building and pushing the images. You can change them as per your environment.

 

Object Detection - Localize and identify multiple objects in an image

The max-object-detector.yml uses the Dockerfile to build the ppc64le image from the base image ibmcom/powerai:1.7.0-tensorflow-cpu-ubuntu18.04-py37-ppc64le, which is CPU only (not GPU). Minor changes were required to the original Python code to use the newer version of TensorFlow from the base image. The of-watchdog runs the fprocess, which starts app.py. The model recognizes the objects present in an image from the 80 high-level classes of objects in the COCO Dataset. The input to the model is an image, and the output is a list of estimated class probabilities for the objects detected in the image. The "/model/predict" endpoint takes an image as input and returns a list of objects that were detected in the image, along with bounding box coordinates that identify where each detected object is located. The "/model/labels" and "/model/metadata" endpoints provide information such as the objects that can be detected and the deep learning model used to derive the answer. Each endpoint accepts application-friendly inputs, such as an image in JPG, PNG, or GIF format, instead of a model-specific data structure, and generates application-friendly outputs as standardized JSON. When a web application invokes the prediction endpoint with a user-selected image, the image is uploaded and prepared for processing; the deep learning model identifies objects in the image, generates a response from the prediction results, and returns the result to the application, which renders it by drawing bounding boxes and labels.
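
The detection_box values in the response are fractions of the image height and width, so a client has to scale them before drawing. Below is a minimal Python sketch using requests and Pillow, assuming the gateway route from Part 1 and a local baby-bear.jpg; adjust the URL and file name for your environment:

# Illustrative client: post an image to max-object-detector and convert the
# normalized detection_box values [ymin, xmin, ymax, xmax] to pixel coordinates.
import requests
from PIL import Image

url = 'http://gateway-external-openfaas.apps.test-cluster.priv/function/max-object-detector/model/predict'
filename = 'baby-bear.jpg'

with open(filename, 'rb') as f:
    r = requests.post(url, files={'image': f}, params={'threshold': 0.7})
r.raise_for_status()

width, height = Image.open(filename).size
for p in r.json()['predictions']:
    ymin, xmin, ymax, xmax = p['detection_box']
    box = (int(xmin * width), int(ymin * height), int(xmax * width), int(ymax * height))
    print(p['label'], round(p['probability'], 3), box)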

 

Script to build the image and push it to OpenShift built-in container image registry

The script deletes the old function and builds the image. Next, we log in to the image registry and push the image to the local registry. Finally, we deploy the function using the OpenFaaS stack file max-object-detector.yml and look at the function logs. We could also have pushed the image to an external registry, but we used the local registry to avoid the "ERROR: toomanyrequests" rate limit from Docker Hub.

# Remove any previously deployed function
faas-cli delete max-object-detector
# Use the outbound proxy only for the docker build
PROXY_URL="//10.3.0.3:3128";export http_proxy="http:$PROXY_URL";export https_proxy="http:$PROXY_URL";export no_proxy=localhost,127.0.0.1,.test-cluster.priv,10.3.158.61
docker build --build-arg model=faster_rcnn_resnet101 -t max-object-detector .
if [ $? -eq 0 ]; then
 # Tag and push the image to the OpenShift built-in registry
 docker tag max-object-detector default-route-openshift-image-registry.apps.test-cluster.priv/openfaas-fn/max-object-detector
 unset http_proxy;unset https_proxy
 oc whoami -t > /tmp/oc_token
 docker login --tls-verify=false -u kubeadmin default-route-openshift-image-registry.apps.test-cluster.priv -p `cat /tmp/oc_token`
 docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas-fn/max-object-detector --tls-verify=false
 # Deploy the function and wait for the deployment to become ready
 faas-cli deploy -f ../max-object-detector.yml
 for i in {1..10}; do
  oc get deployment/max-object-detector -n openfaas-fn | grep "1/1"
  if [ $? -eq 0 ]; then
   break
  fi
  sleep 2
 done
 # Follow the function logs
 oc logs deployment/max-object-detector -n openfaas-fn -f
fi

Invoking the max-object-detector function

export OPENFAAS_URL=http://gateway-external-openfaas.apps.test-cluster.priv

for filename in baby-bear.jpg; do time curl -H 'accept: application/json' -H 'Content-Type: multipart/form-data' -F 'image=@'$filename "$OPENFAAS_URL/function/max-object-detector/model/predict?threshold=0.7"& done

 

Output

{"status": "ok", "predictions": [{"label_id": "1", "label": "person", "probability": 0.9993144273757935, "detection_box": [0.2448517382144928, 0.2695222795009613, 0.6507590413093567, 0.5651800632476807]}, {"label_id": "88", "label": "teddy bear", "probability": 0.996898889541626, "detection_box": [0.279220312833786, 0.5684117674827576, 0.6384284496307373, 0.8272980451583862]}]}

real   0m12.426s
user   0m0.006s
sys    0m0.011s

 

Testing max-object-detector with Jupyter notebook

Run the following in the MAX-Object-Detector directory

docker run -p 8888:8888 -v $(pwd):/home/jovyan/work jupyter/scipy-notebook

Go to the URL shown in the logs and select the notebook in the work directory.

Replace the url in the cell as shown below:

        url = 'http://gateway-external-openfaas.apps.test-cluster.priv/function/max-object-detector/model/predict'

We can run the notebook and invoke the function to see the objects within the bounding boxes with object type and probability for each.

 

Audio Classifier – Identify sounds in audio clips

The max-audio-classifier.yml uses the Dockerfile to build the ppc64le image from the base image ibmcom/powerai:1.7.0-tensorflow-cpu-ubuntu18.04-py37-ppc64le, which is CPU only (not GPU). Minor changes were required to the original Python code to use the newer version of TensorFlow from the base image. The of-watchdog runs the fprocess, which starts app.py. The "/model/predict" endpoint loads the wav audio file. The input is a 10 second signed 16-bit PCM wav audio file. Files longer than 10 seconds will be clipped so that only the first 10 seconds are used by the model. Files shorter than 10 seconds will be repeated to create a clip 10 seconds in length. The model takes the signed 16-bit PCM wav file as input, generates embeddings, applies PCA transformation/quantization, feeds the embeddings to a multi-attention classifier, and outputs the top 5 class predictions with probabilities. The model currently supports 527 classes, which are part of the Audioset Ontology. The script to deploy the max-audio-classifier function is similar to the one in the previous section.
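
The clip-or-repeat behavior described above can be pictured with a short Python sketch. This is only an illustration of the idea using the standard-library wave module and numpy (assuming a mono file), not the actual MAX preprocessing code:

# Illustrative sketch of the 10-second clip/repeat behaviour for a mono,
# signed 16-bit PCM wav file; the real MAX preprocessing may differ in detail.
import wave
import numpy as np

def load_10s_clip(path, target_seconds=10):
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16)
    target = rate * target_seconds
    if len(samples) >= target:
        samples = samples[:target]                    # clip to the first 10 seconds
    else:
        repeats = int(np.ceil(target / len(samples)))
        samples = np.tile(samples, repeats)[:target]  # repeat shorter clips
    return samples, rate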

 

Invoking the max-audio-classifier function

for filename in birds1.wav train.wav; do time curl -H 'accept: application/json' -H 'Content-Type: multipart/form-data' -F 'audio=@'"$filename;type=audio/wav" "$OPENFAAS_URL/function/max-audio-classifier/model/predict"& done

 

Output

{"status": "ok", "predictions": [{"label_id": "/m/015p6", "label": "Bird", "probability": 0.48041069507598877}, {"label_id": "/m/020bb7", "label": "Bird vocalization, bird call, bird song", "probability": 0.24115890264511108}, {"label_id": "/m/07pggtn", "label": "Chirp, tweet", "probability": 0.19373749196529388}, {"label_id": "/m/0jbk", "label": "Animal", "probability": 0.12905840575695038}, {"label_id": "/m/09xqv", "label": "Cricket", "probability": 0.12162928283214569}]} 

real   0m2.768s
user   0m0.007s
sys    0m0.016s 

{"status": "ok", "predictions": [{"label_id": "/m/07jdr", "label": "Train", "probability": 0.9008505940437317}, {"label_id": "/m/06d_3", "label": "Rail transport", "probability": 0.8276365399360657}, {"label_id": "/m/01g50p", "label": "Railroad car, train wagon", "probability": 0.8081554770469666}, {"label_id": "/m/07rwm0c", "label": "Clickety-clack", "probability": 0.5290097594261169}, {"label_id": "/m/07yv9", "label": "Vehicle", "probability": 0.4573462903499603}]} 

real   0m3.016s
user   0m0.007s
sys    0m0.018s

 

OCR – Predict Text from an Image

The max-ocr.yml uses the Dockerfile to build the ppc64le image from the base image ibmcom/powerai:1.7.0-tensorflow-cpu-ubuntu18.04-py37-ppc64le, which is CPU only (not GPU). Minor changes were required to the original Python code to use the newer version of TensorFlow from the base image. The of-watchdog runs the fprocess, which starts app.py. The model takes an image of text as input on the "/model/predict" endpoint and returns the predicted text. This model was trained on 20 samples of 94 characters from 8 different fonts and 4 attributes (regular, bold, italic, bold + italic) for a total of 60,160 training samples.
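
A client can flatten the nested "text" lists in the response (one inner list per detected text block) into plain lines. Below is a minimal Python sketch, assuming the gateway route from Part 1 and the sample text_with_numbers.png; adjust for your environment:

# Illustrative client: post an image to max-ocr and print the recognized lines.
import requests

url = 'http://gateway-external-openfaas.apps.test-cluster.priv/function/max-ocr/model/predict'
with open('text_with_numbers.png', 'rb') as f:
    r = requests.post(url, files={'image': f})
r.raise_for_status()

for block in r.json()['text']:
    for line in block:
        print(line)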

 

Invoking the max-ocr function

for filename in text_with_numbers.png; do time curl -H 'accept: application/json' -H 'Content-Type: multipart/form-data' -F 'image=@'$filename "$OPENFAAS_URL/function/max-ocr/model/predict"& done

 

Output

{"status": "ok", "text": [["Setting default log level to \"WARN\"."], ["To adjust logging level use sc.setLogLevel(newLevel)."], ["Spark context Web UI available at http://rally1.fyre.ibm.com:4040", "Spark context available as 'sc'"], ["(master = local[*], app id = local-1531752157593)."], ["Spark session available as 'spark'."]]}

real   0m0.975s
user   0m0.006s
sys    0m0.009s

 

Testing max-ocr with Jupyter notebook

Run the following in the MAX-OCR directory

docker run -p 8888:8888 -v $(pwd):/home/jovyan/work jupyter/scipy-notebook

Go to the URL shown in the logs and select the notebook in the work directory.

Replace the url in the cell as shown below:

        url = 'http://gateway-external-openfaas.apps.test-cluster.priv/function/max-ocr/model/predict'

We can run the notebook and invoke the function to see the text for the image.

 

MAX Image Segmenter - Get a segmentation map to crop objects from an image

The max-image-segmenter.yml uses the Dockerfile to build the ppc64le image from the base image ibmcom/powerai:1.7.0-tensorflow-cpu-ubuntu18.04-py37-ppc64le, which is CPU only (not GPU). Minor changes were required to the original Python code to use the newer version of TensorFlow from the base image. The of-watchdog runs the fprocess, which starts app.py. Most images that are shared online depict one or many objects, usually in some setting or against some kind of backdrop. When editing images, it can take considerable time and effort to crop these individual objects out, whether they are to be processed further elsewhere or used in some new composition. This model automates the process. The model takes an image file as input for the "/model/predict" endpoint. The image is resized, and the model returns a segmentation map containing a predicted class ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv') for each pixel in the resized image. The segmentation map contains, for each pixel, an integer between 0 and 20 that corresponds to one of these labels. The first nested array corresponds to the top row of pixels in the image, and the first element in that array corresponds to the pixel at the top left-hand corner of the image.
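
A client can use the returned seg_map, for example, to count the pixels per class or to build a mask for one class. Below is a minimal Python sketch with numpy, using the 0-20 label ordering listed above:

# Illustrative sketch: summarize the seg_map returned by max-image-segmenter
# and build a boolean mask for the 'person' class.
import numpy as np

labels = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
          'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',
          'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv']

def summarize(response_json):
    seg_map = np.array(response_json['seg_map'])   # shape: (height, width) of the resized image
    for class_id in np.unique(seg_map):
        print(labels[class_id], int((seg_map == class_id).sum()), 'pixels')
    return seg_map == labels.index('person')       # mask of 'person' pixels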

 

Invoking the max-image-segmenter function

for filename in stc.jpg; do time curl -H 'accept: application/json' -H 'Content-Type: multipart/form-data' -F 'image=@'$filename "$OPENFAAS_URL/function/max-image-segmenter/model/predict"& done

 

Output

{"status": "ok", "image_size": [513, 256], "seg_map": [[0, 0, ...

… 0, 0]]}

 

real   0m2.449s
user   0m0.006s
sys    0m0.015s

 

Testing max-image-segmenter with Jupyter notebook

Run the following in the MAX-Image-Segmenter directory

docker run -p 8888:8888 -v $(pwd):/home/jovyan/work jupyter/scipy-notebook

Go to the URL shown in the logs and select the notebook in the work directory.

Replace the url in the cell as shown below:

    r = requests.post(url='http://gateway-external-openfaas.apps.test-cluster.priv/function/max-image-segmenter/model/predict', files=file_form)

We can run the notebook and invoke the function to see the predicted segmentation map for the image.

 

Human Pose Estimator - Detect and visualize the human poses from an image

The max-human-pose-estimator.yml uses the Dockerfile to build the ppc64le image from the base image ibmcom/powerai:1.7.0-tensorflow-cpu-ubuntu18.04-py36-ppc64le, which is CPU only (not GPU). Minor changes were required to the original Python code to use the newer version of TensorFlow from the base image. The of-watchdog runs the fprocess, which starts app.py. The Human Pose Estimator model detects humans and their poses in each image provided to the "/model/predict" endpoint. The model first detects the humans in the input image and then identifies the body parts, including nose, neck, eyes, shoulders, elbows, wrists, hips, knees, and ankles. Next, each pair of associated body parts is connected by a "pose line"; for example, a line may connect the left eye to the nose, while another may connect the nose to the neck. Each pose line is represented by a list [x1, y1, x2, y2], where the first pair of coordinates (x1, y1) is the start point of the line for one body part and the second pair of coordinates (x2, y2) is the end point of the line for the other associated body part. The pose lines are assembled into full body poses for each of the humans detected in the image.
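
The pose_lines are already pixel coordinates, so drawing them is straightforward. Below is a minimal Python sketch with requests and OpenCV, assuming the gateway route from Part 1 and the sample Pilots.jpg; adjust the URL and file name for your environment:

# Illustrative client: post an image to max-human-pose-estimator and draw the
# returned pose lines onto a copy of the original image.
import cv2
import requests

url = 'http://gateway-external-openfaas.apps.test-cluster.priv/function/max-human-pose-estimator/model/predict'
filename = 'Pilots.jpg'

with open(filename, 'rb') as f:
    r = requests.post(url, files={'file': (filename, f, 'image/jpeg')})
r.raise_for_status()

image = cv2.imread(filename)
for human in r.json()['predictions']:
    for segment in human['pose_lines']:
        x1, y1, x2, y2 = segment['line']
        cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)   # green pose line
cv2.imwrite('Pilots-poses.jpg', image)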

 

Invoking the max-human-pose-estimator function

for filename in Pilots.jpg; do time curl -H 'accept: application/json' -H 'Content-Type: multipart/form-data' -F 'file=@'$filename';type=image/jpeg' "$OPENFAAS_URL/function/max-human-pose-estimator/model/predict"& done

 

Output

{"status": "ok", "predictions": [{"human_id": 0, "pose_lines": [{"line": [444, 269, 392, 269]}, {"line": [444, 269, 503, 274]}, {"line": [392, 269, 367, 330]}, {"line": [367, 330, 364, 392]}, {"line": [503, 274, 511, 348]}, {"line": [511, 348, 469, 399]}, {"line": [444, 269, 397, 410]}, {"line": [444, 269, 464, 410]}, {"line": [444, 269, 428, 205]}, {"line": [428, 205, 417, 197]}, {"line": [417, 197, 411, 202]}, {"line": [428, 205, 439, 195]}, {"line": [439, 195, 464, 197]}], "body_parts": [{"part_id": 0, "part_name": "Nose", "score": "0.83899", "x": 428, "y": 205}, {"part_id": 1, "part_name": "Neck", "score": "0.71769", "x": 444, "y": 269}, {"part_id": 2, "part_name": "RShoulder", "score": "0.75556", "x": 392, "y": 269}, {"part_id": 3, "part_name": "RElbow", "score": "0.56429", "x": 367, "y": 330}, {"part_id": 4, "part_name": "RWrist", "score": "0.51554", "x": 364, "y": 392}, {"part_id": 5, "part_name": "LShoulder", "score": "0.56893", "x": 503, "y": 274}, {"part_id": 6, "part_name": "LElbow", "score": "0.66824", "x": 511, "y": 348}, {"part_id": 7, "part_name": "LWrist", "score": "0.48784", "x": 469, "y": 399}, {"part_id": 8, "part_name": "RHip", "score": "0.25196", "x": 397, "y": 410}, {"part_id": 11, "part_name": "LHip", "score": "0.24573", "x": 464, "y": 410}, {"part_id": 14, "part_name": "REye", "score": "0.85231", "x": 417, "y": 197}, {"part_id": 15, "part_name": "LEye", "score": "0.88991", "x": 439, "y": 195}, {"part_id": 16, "part_name": "REar", "score": "0.21390", "x": 411, "y": 202}, {"part_id": 17, "part_name": "LEar", "score": "0.81776", "x": 464, "y": 197}]}, {"human_id": 1, "pose_lines": [{"line": [294, 174, 228, 177]}, {"line": [294, 174, 350, 179]}, {"line": [228, 177, 203, 246]}, {"line": [350, 179, 358, 253]}, {"line": [358, 253, 281, 297]}, {"line": [294, 174, 261, 320]}, {"line": [294, 174, 317, 312]}, {"line": [317, 312, 319, 392]}, {"line": [294, 174, 289, 113]}, {"line": [289, 113, 278, 102]}, {"line": [278, 102, 264, 110]}, {"line": [289, 113, 300, 102]}, {"line": [300, 102, 314, 105]}], "body_parts": [{"part_id": 0, "part_name": "Nose", "score": "0.81226", "x": 289, "y": 113}, {"part_id": 1, "part_name": "Neck", "score": "0.69899", "x": 294, "y": 174}, {"part_id": 2, "part_name": "RShoulder", "score": "0.71737", "x": 228, "y": 177}, {"part_id": 3, "part_name": "RElbow", "score": "0.20880", "x": 203, "y": 246}, {"part_id": 5, "part_name": "LShoulder", "score": "0.64237", "x": 350, "y": 179}, {"part_id": 6, "part_name": "LElbow", "score": "0.63078", "x": 358, "y": 253}, {"part_id": 7, "part_name": "LWrist", "score": "0.48116", "x": 281, "y": 297}, {"part_id": 8, "part_name": "RHip", "score": "0.19580", "x": 261, "y": 320}, {"part_id": 11, "part_name": "LHip", "score": "0.38278", "x": 317, "y": 312}, {"part_id": 12, "part_name": "LKnee", "score": "0.28024", "x": 319, "y": 392}, {"part_id": 14, "part_name": "REye", "score": "0.81515", "x": 278, "y": 102}, {"part_id": 15, "part_name": "LEye", "score": "0.76774", "x": 300, "y": 102}, {"part_id": 16, "part_name": "REar", "score": "0.78858", "x": 264, "y": 110}, {"part_id": 17, "part_name": "LEar", "score": "0.93217", "x": 314, "y": 105}]}, {"human_id": 2, "pose_lines": [{"line": [139, 259, 72, 271]}, {"line": [139, 259, 200, 251]}, {"line": [72, 271, 58, 346]}, {"line": [58, 346, 117, 410]}, {"line": [200, 251, 228, 330]}, {"line": [228, 330, 217, 407]}, {"line": [139, 259, 106, 407]}, {"line": [139, 259, 178, 402]}, {"line": [139, 259, 158, 197]}, {"line": [158, 197, 144, 187]}, {"line": [144, 187, 119, 197]}, {"line": 
[158, 197, 167, 187]}, {"line": [167, 187, 178, 195]}], "body_parts": [{"part_id": 0, "part_name": "Nose", "score": "0.85492", "x": 158, "y": 197}, {"part_id": 1, "part_name": "Neck", "score": "0.68328", "x": 139, "y": 259}, {"part_id": 2, "part_name": "RShoulder", "score": "0.55995", "x": 72, "y": 271}, {"part_id": 3, "part_name": "RElbow", "score": "0.57399", "x": 58, "y": 346}, {"part_id": 4, "part_name": "RWrist", "score": "0.56761", "x": 117, "y": 410}, {"part_id": 5, "part_name": "LShoulder", "score": "0.53250", "x": 200, "y": 251}, {"part_id": 6, "part_name": "LElbow", "score": "0.46755", "x": 228, "y": 330}, {"part_id": 7, "part_name": "LWrist", "score": "0.62695", "x": 217, "y": 407}, {"part_id": 8, "part_name": "RHip", "score": "0.42303", "x": 106, "y": 407}, {"part_id": 11, "part_name": "LHip", "score": "0.26686", "x": 178, "y": 402}, {"part_id": 14, "part_name": "REye", "score": "0.80411", "x": 144, "y": 187}, {"part_id": 15, "part_name": "LEye", "score": "0.88401", "x": 167, "y": 187}, {"part_id": 16, "part_name": "REar", "score": "0.74573", "x": 119, "y": 197}, {"part_id": 17, "part_name": "LEar", "score": "0.13491", "x": 178, "y": 195}]}]}

 

real   0m0.992s
user   0m0.006s
sys    0m0.009s

 

Testing human-pose-estimator with Jupyter notebook

Run the following in the samples directory

docker run -p 8888:8888 -v $(pwd):/home/jovyan/work jupyter/scipy-notebook

Go to the URL shown in the logs and select the notebook in the work directory.

Add a cell with the following to install cv2 (OpenCV):

!pip install opencv-python

When you run the notebook, replace the URL in the "Detect all human poses from the test image" cell. For example:

url = 'http://gateway-external-openfaas.apps.test-cluster.priv/function/max-human-pose-estimator/model/predict'

The notebook will invoke the function and finally show the original image, the detected poses, and the poses overlaid on the original image.

 

Prebuilt MAX OpenFaaS Images on Docker Hub for the ppc64le architecture

Prebuilt images for the five MAX models used in this article have been pushed to Docker Hub:

karve/max-object-detector:ppc64le

karve/max-audio-classifier:ppc64le

karve/max-ocr:ppc64le

karve/max-image-segmenter:ppc64le

karve/max-human-pose-estimator:ppc64le

 

We can run and test containers directly from the above images with podman on ppc64le. For example, for the MAX-OCR model:

cd MAX-OCR
podman run --rm -p 5000:5000 --name alexei-ocr -d karve/max-ocr:ppc64le
for filename in samples/text_with_numbers.png; do time curl -H 'accept: application/json' -H 'Content-Type: multipart/form-data' -F 'image=@'$filename http://localhost:5000/model/predict& done 

Output

{"status": "ok", "text": [["Setting default log level to \"WARN\"."], ["To adjust logging level use sc.setLogLevel(newLevel)."], ["Spark context Web UI available at http://rally1.fyre.ibm.com:4040", "Spark context available as 'sc'"], ["(master = local[*], app id = local-1531752157593)."], ["Spark session available as 'spark'."]]}

 

real    0m0.634s
user    0m0.007s
sys     0m0.001s

 

Conclusion

In this blog, we showed how to build, deploy, and test OpenFaaS functions for MAX models on Red Hat OCP 4 and with podman for IBM Power ppc64le. Specifically, this was tested on an IBM® Power® System E880 (9119-MHE) based on POWER8® processor technology with OpenShift version 4.6.23. We saw how easy it was to adapt the MAX code to be invoked from OpenFaaS with minor changes to the code and Dockerfile for ppc64le.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your serverless applications on OpenShift using OpenFaaS and whether you would like to see something covered in more detail.


References

Deploying OpenFaaS on Red Hat OpenShift Container Platform for IBM Power ppc64le https://community.ibm.com/community/user/publiccloud/blogs/alexei-karve/2021/07/06/openfaas-on-rhocp-1
OpenFaaS Function Custom Resource with HPA on OpenShift for IBM Power ppc64le https://community.ibm.com/community/user/publiccloud/blogs/alexei-karve/2021/07/06/openfaas-on-rhocp-2
OpenFaaS Asynchronous Functions and Function Chaining on OpenShift for IBM Power ppc64le https://community.ibm.com/community/user/publiccloud/blogs/alexei-karve/2021/07/12/openfaas-on-rhocp-3
An introduction to the internals of the Model Asset eXchange https://developer.ibm.com/blogs/an-introduction-to-the-internals-of-model-asset-exchange/


#Automation
#Cloud
#Edge
#Featured-area-1
#Featured-area-1-home
#ibmpower
#openfaas
#Openshift
#Python