Data and AI on Power


How to Run Batch Inferencing with OnnxRuntime on IBM Power10 Using a ResNet Model

By Daniel Schenker posted Wed February 28, 2024 03:46 PM


This blog is a follow-up to the previous OnnxRuntime ResNet blog. It details the steps required to run batch inferencing with OnnxRuntime on IBM Power10 systems using a ResNet model.

Prerequisites

This blog assumes the user already has conda installed. If needed, use the blog post by Sebastian Lehrig to get conda set up on Power.

Environment Setup

Create a new conda environment.

conda create --name your-env-name-here python=3.11

This creates a new environment with Python 3.11 and its required dependencies.

Activate the newly created environment.

conda activate your-env-name-here
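
Optionally, confirm that the environment is active and using the expected interpreter.

python --version

This should report Python 3.11.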

Once the environment is active, install openblas, onnxruntime, pillow, and their dependencies.

conda install libopenblas -c rocketce

conda install onnxruntime -c rocketce

conda install pillow -c rocketce

When using the conda install command with the -c argument, packages are installed from the specified channel. Packages installed via the rocketce channel include MMA optimizations.
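
As an optional sanity check (not part of the original setup steps), you can confirm that onnxruntime imports correctly and see which execution providers are available.

python3 -c "import onnxruntime; print(onnxruntime.__version__); print(onnxruntime.get_available_providers())"

On a CPU-only Power10 system this will typically list the CPUExecutionProvider.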

Project Setup

Navigate to a desired project directory and download the model from the ONNX Model Zoo.

wget https://github.com/onnx/models/raw/main/validated/vision/classification/resnet/model/resnet50-v1-12.onnx
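
Before writing the full script, you can optionally verify that the model loads and inspect its expected input. This is a minimal sketch; the input name and shape are read from the model itself rather than hard-coded.

import onnxruntime

# Load the downloaded model and print the name and shape of its first input
sess = onnxruntime.InferenceSession('resnet50-v1-12.onnx')
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)

The model expects a 4-dimensional NCHW input (batch, 3 channels, 224 x 224 pixels), which is why the preprocessing code below resizes and crops each image to 224 x 224.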

Download the ImageNet Labels.

wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
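
The labels file contains one class name per line. A quick way to confirm the download is to count the lines, which should come to 1000 for ImageNet.

python3 -c "print(sum(1 for _ in open('imagenet_classes.txt')))"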

Create a new Python script inside the project directory.

touch resnet_batch.py

Open the Python script with any text editor or IDE (vi, vim, nano, VS Code, etc.) and paste the following code.

import numpy as np
import onnxruntime
import os
import argparse
from PIL import Image

# Classify all images in a directory using a pretrained resnet model
def classifyImage(image_path, batch_size, threads, debug):
    # Ensure that the provided image path exists
    if not os.path.exists(image_path):
        print('Image path not found. Check the provided path.')
        exit()
    
    # Set default batch size if not provided
    if batch_size is None:
        batch_size = 1

    # Create session options
    opts = onnxruntime.SessionOptions()
    if threads is not None:
        if debug: print(f'Setting intra_op_num_threads and inter_op_num_threads to {threads}')
        opts.intra_op_num_threads = int(threads)
        opts.inter_op_num_threads = int(threads)

    # Create inference session
    if debug: print(f'Creating inferencing session.')
    sess = onnxruntime.InferenceSession('resnet50-v1-12.onnx', sess_options=opts)
    input_name=sess.get_inputs()[0].name
    input_tensor = createInputBatch(image_path, int(batch_size))
 
    # Read ImageNet class labels
    with open('imagenet_classes.txt') as f:
        categories = [s.strip() for s in f.readlines()]

    # Run inferencing
    if debug: print(f'Running inferencing session.')
    for i in range(len(input_tensor)):
        pred_onnx = sess.run([], {input_name: input_tensor[i]})

        # Extract and print results for each image in the batch
        for j in range(input_tensor[i].shape[0]):
            index = np.argmax(pred_onnx[0][j])
            print(categories[index])

# Combine images into a batch
def createInputBatch(image_path, batch_size):
    images = []
    batches = []
    # Check image_path for all image files
    for image in os.listdir(image_path):
        if (image.endswith(".png") or image.endswith(".jpg") or image.endswith(".jpeg")):
            images.append(image)
    # Run preprocessing on all images in provided directory
    processed_images = [preprocess(image_path, image) for image in images]
    # Calculate number of batches needed
    num_batches = len(processed_images) // batch_size
    # Create batches
    for i in range(num_batches):
        # Calculate the start and end indices for the current batch
        start_index = i * batch_size
        end_index = (i + 1) * batch_size
        # Grab images for the current batch
        batch_images = processed_images[start_index:end_index]
        # Add batch to list
        batches.append(np.concatenate(batch_images, axis=0))
    # If the total number of images is not divisible by the batch size, handle the remaining images
    remaining_images = len(processed_images) % batch_size
    if remaining_images > 0:
        start_index = num_batches * batch_size
        end_index = start_index + remaining_images
        remaining_batch = processed_images[start_index:end_index]
        batches.append(np.concatenate(remaining_batch, axis=0))
    return batches

# Preprocess image according to resnet guidelines
def preprocess(image_path, img):
    img = Image.open(os.path.join(image_path, img))
    img = img.resize((256, 256), Image.BILINEAR)
    img = np.array(img)
    img = img / 255.
    h, w = img.shape[0], img.shape[1]
    y0 = (h - 224) // 2
    x0 = (w - 224) // 2
    img = img[y0 : y0+224, x0 : x0+224, :]
    img = (img - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
    img = np.transpose(img, axes=[2, 0, 1])
    img = img.astype(np.float32)
    img = np.expand_dims(img, axis=0)
    return img

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--image_path', help='Absolute path to image directory', required=True)
    parser.add_argument('-b', '--batch_size', help='Size of input batches', required=False)
    parser.add_argument('-t', '--threads', help='Thread count for execution. Sets intra_op_num_threads and inter_op_num_threads args of session_options.', required=False)
    parser.add_argument('-d', '--debug', help='Enable debug mode', required=False, action='store_true')
    args = parser.parse_args()

    classifyImage(args.image_path, args.batch_size, args.threads, args.debug)

This script utilizes command line arguments to specify the location of images to classify, the batch size, and the number of threads to use during execution. The script parameters work as follows.

  • -i/--image_path is a required parameter and should be followed by the path to a directory containing the images to classify.
  • -b/--batch_size is an optional parameter and should be followed by the desired batch size. If this parameter is not provided, the batch size defaults to 1 (see the batching sketch after this list).
  • -t/--threads is an optional parameter and should be followed by the number of threads to use during execution. If this parameter is not provided, ONNX Runtime's default thread settings are used.
  • -d/--debug is an optional flag that enables debug mode to print out intermediate progress messages.
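
To illustrate how createInputBatch groups the preprocessed images, the following standalone sketch replaces real images with dummy arrays of the same shape that preprocess produces. With seven images and a batch size of three it yields two full batches and one remainder batch.

import numpy as np

# Seven dummy "preprocessed images", each shaped (1, 3, 224, 224) like the output of preprocess()
processed_images = [np.zeros((1, 3, 224, 224), dtype=np.float32) for _ in range(7)]
batch_size = 3

# Group into full batches plus a smaller remainder batch, mirroring createInputBatch()
batches = [np.concatenate(processed_images[i:i + batch_size], axis=0)
           for i in range(0, len(processed_images), batch_size)]
print([b.shape for b in batches])  # [(3, 3, 224, 224), (3, 3, 224, 224), (1, 3, 224, 224)]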

Execution

Once the script is complete, run the model and view the results.
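
The -i argument must point to a directory containing .png, .jpg, or .jpeg files. If you do not already have a set of test images, create a directory and copy a few images into it first (the source path below is a placeholder).

mkdir -p ./image_root

cp /path/to/your/images/*.jpg ./image_root/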

python3 resnet_batch.py -i ./image_root/ -b 1 -t 1 -d

The script will output the classification for each image based on the ImageNet labels.

Conclusion

This blog detailed the steps required to run batch inferencing with OnnxRuntime on IBM Power10 systems using a ResNet model. It improved upon the previous OnnxRuntime ResNet blog by implementing batch image processing to increase the overall efficiency of the script.
