BPM, Workflow, and Case

Benchmarking OpenShift environments for IBM Cloud Pak for Business Automation

By Stephan Volz


0) Introduction

When setting up an OpenShift environment for an IBM Cloud Pak for Business Automation installation, the question can arise: can my environment handle the required resources?
A simple benchmark is one option for getting a quick answer, and it also makes it possible to compare different environments. A number of benchmarks can be considered for this task; I will introduce a few that I think can be beneficial in many situations.

1) Some available benchmarks

The first thing that mattered to me was a quick and easy setup. Beyond that, you might have specific needs, e.g. you want to test your network speed or your disk performance as part of your OpenShift setup. We will therefore also look at some special benchmarks that focus on specific tuning areas. Note that many more benchmarks exist; the ones listed here are a small selection, which may be extended in the future. It is also possible to adjust these tests or to write completely custom test cases.

1.1) K-Bench

A benchmark that can be installed and executed easily is K-Bench. With only three commands it is possible to get a result:
  • Clone the Git repo
  • Run install.sh script (Installation)
  • Run run.sh script (Basic test execution)
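As a concrete sketch, the three steps above could look like this; the repository location shown is the upstream VMware K-Bench project, so if you work from a fork, adjust the URL accordingly:

```shell
# Clone the Git repo (upstream K-Bench project location)
git clone https://github.com/vmware-tanzu/k-bench.git
cd k-bench

# Installation
./install.sh

# Basic test execution
./run.sh
```

The benchmark runs against the cluster your client tooling is currently configured for, so make sure your kubeconfig context points at the OpenShift cluster you want to measure before starting.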

1.2) Network performance benchmark - ripsaw

If you want to measure your network performance, you can consider the ripsaw benchmark. The installation is described in the linked article; the main focus here is network performance.
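For orientation, and assuming ripsaw refers to the cloud-bulldozer benchmark-operator project, deploying it and starting a network test typically follows the pattern below. The repository location, the deploy target, and the sample custom-resource path are assumptions on my side, so treat the linked article as the authoritative source:

```shell
# Deploy the benchmark-operator (ripsaw) into the cluster
# (repository location and make target are assumptions)
git clone https://github.com/cloud-bulldozer/benchmark-operator.git
cd benchmark-operator
make deploy

# Start a network test by creating a benchmark custom resource;
# the sample file path here is illustrative only
oc apply -f config/samples/uperf/cr.yaml
```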

1.3) Disk benchmark tools - etcd performance with fio

You might already know the fio tool for disk performance measurements. In this example we measure the disk performance for etcd, for which good disk performance is a hard requirement. What is required to perform the measurement is described in the Red Hat support article.
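The authoritative parameters are in the Red Hat support article; a commonly used variant of this check, following the upstream etcd hardware guidance, looks roughly like this (the target directory name is illustrative):

```shell
# Run on the node, in a directory backed by the disk that holds the
# etcd data. Small (2300-byte) sequential writes with an fdatasync
# after every write approximate etcd's WAL write pattern.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-dir --size=22m --bs=2300 --name=etcd-perf
```

The number to watch in the output is the fdatasync latency distribution; per the upstream etcd guidance, the 99th percentile should stay below roughly 10 ms for etcd to perform reliably.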

2) Examples and comparison of the benchmark results

What do the results look like, and what can you learn from the collected data? We will look at some examples from the test procedures described earlier.

2.1) Example data for the K-Bench benchmark

With the easy-to-install K-Bench benchmark, performance data can be generated quickly. The following table compares the Service API latency of two systems:
Activity                 System1                           System2
                         median  min     max     99%       median  min     max     99%
update service latency   14.306  6.547   25.634  25.634    4.107   3.458   20.141  20.141
delete service latency   55.017  31.792  73.294  73.294    4.023   2.616   7.814   7.814
create service latency   74.069  74.069  115.575 115.575   8.389   6.714   11.684  11.684
list service latency     4.978   3.28    14.772  14.772    49.601  19.708  61.809  61.809
get service latency      4.383   3.161   6.467   6.467     57.478  16.765  90.711  90.711

Comparing the two systems, System1 shows larger latency values than System2 for the write-intensive activities (create, update, and delete service), while the picture is reversed for the read-intensive activities (list and get service). For each executed test the benchmark reports several statistics, such as the median, minimum, maximum, and 99th percentile.

2.2) Example ripsaw data

With the ripsaw benchmark shown earlier it is possible to measure the network performance. The network throughput numbers are shown in the screenshot below:

Ripsaw result data

2.3) Example etcd disk performance

As mentioned at the beginning, good disk performance is critical for etcd to work properly, which is why we looked at a test case based on the fio tool. Below is an extract of the result output; in this case the final output shows that the disk performance is sufficient for etcd. Without such checks it can be difficult to debug issues caused, for example, by slow disk performance.

fio testcase for etcd disk performance

3) Outlook

Going forward, we plan to provide more guidance on how to tune the system and on using the benchmark results to determine whether an environment can handle IBM Cloud Pak for Business Automation workloads.