Figure 3. Resource utilization – comparing Burstable QoS with BestEffort QoS
In this particular example, the throughput of this single pod increased by 70% as a result of switching from BestEffort to Burstable QoS.
The resource limits are easily configurable using the OpenShift Container Platform user interface.
Figure 4. Edit resource limits
For a pod to be placed in the Burstable QoS class, at least one container in the pod must have a CPU or memory request or limit, and the pod must not meet the criteria for Guaranteed QoS. In practice, set the Requests and Limits fields for CPU and memory with unequal values, and the containers qualify as Burstable. This QoS class has second-level priority.
In OpenShift Container Platform, Burstable QoS can be set as shown in Figure 5: the values for the Request and Limit fields are not equal for CPU and memory.
From this exercise, we found that Burstable was the QoS class that yielded the best throughput for this workload. It is worth trying different QoS options to see which is the best fit.
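The classification rules described above can be sketched as a small function (illustrative only; the container objects here are simplified stand-ins for the real Kubernetes resource specifications, not the actual API types):

```javascript
// Sketch of how Kubernetes assigns a QoS class from container specs.
// BestEffort: no container sets any request or limit.
// Guaranteed: every container has CPU and memory limits, and any
//             requests that are set equal the limits.
// Burstable:  everything else (e.g., request < limit, as in Figure 5).
function qosClass(containers) {
  const hasAny = (c, k) => (c.requests && c.requests[k]) || (c.limits && c.limits[k]);
  const anyResource = containers.some(c => hasAny(c, "cpu") || hasAny(c, "memory"));
  if (!anyResource) return "BestEffort";
  const guaranteed = containers.every(c =>
    ["cpu", "memory"].every(k =>
      c.limits && c.limits[k] &&
      (!c.requests || !c.requests[k] || c.requests[k] === c.limits[k])));
  return guaranteed ? "Guaranteed" : "Burstable";
}
```

For example, the resources in the sample values.yaml in the Appendix (memory request 512Mi, limit 1024Mi) classify as Burstable.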
Building or pushing the Docker images to the OpenShift Docker registry
OpenShift provides an integrated Docker registry to store Docker images and to provision new images as needed. Perform the following steps to build and push a new custom image into the OpenShift registry.
- Run the following commands on the OpenShift Container Platform master node.
$oc login -u admin
(Key in the password, for example, admin/admin. See https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html#basic-setup-and-login )
$oc new-project harsha
(“harsha” is the project name)
$oc project harsha
(Switch to project harsha)
$cd /home/harsha/CombinedImage/
(This folder contains the Dockerfile and the files and packages required to build the image)
$docker build -t vkh-mongodb-ppc64le:v6 .
$docker tag vkh-mongodb-ppc64le:v6 docker-registry.default.svc:5000/harsha/vkh-mongodb-ppc64le:v6
- Generate a token-based login from the web console.
$oc login https://p136n143.pbm.ihost.com:8443 --token=<TOKEN>
- Use the previous token to log in to the OpenShift registry.
$docker login -u admin -p "$(oc whoami -t)" docker-registry.default.svc:5000
- Push the image to the OpenShift private registry on the master node.
$docker push docker-registry.default.svc:5000/harsha/vkh-mongodb-ppc64le:v6
- Install the helm chart.
$helm install -n <release name> <chart path>
(In Helm 2, -n is shorthand for --name)
Deploying your application using helm and tiller on OpenShift Container Platform
Helm and tiller are used to perform packaging and deployment in OpenShift Container Platform. Operators could also be used for packaging and deployment, but the Operator Lifecycle Manager is in technology preview for OpenShift Container Platform 3.11 on IBM Power Systems, so we have used the helm approach in this study. Because OpenShift Container Platform does not spin up the tiller service pod that complements helm functionality, install tiller using the steps listed at: https://blog.openshift.com/getting-started-helm-openshift/ (Note: OpenShift 3.6 or later)
You can find the downloads specific to IBM ppc64le at: https://github.com/helm/helm/releases
Building or installing helm and tiller
In this study, the helm and tiller version 2.9.1 binaries were built on the system and tiller was started natively on the host. HELM_HOST is pointed to <ip_addr>:<port> where tiller is running. The following steps build the binaries natively on the system.
- Download the Go distribution (https://golang.org/doc/install).
- Run the following commands to build helm and tiller on the master node.
$mkdir -p $GOPATH/src/k8s.io
$cd $GOPATH/src/k8s.io
$git clone https://github.com/kubernetes/helm.git
$cd helm
$git checkout v2.9.1
$make bootstrap build
$cd bin
$export PATH=$PWD:$PATH
($GOPATH is the directory where the Go packages are installed.)
- Run the following commands to bootstrap helm and tiller.
$helm init
$nohup tiller 2>&1 &
$lsof -i:44134
(Verify that tiller is listening on its default port, 44134)
$export HELM_HOST=0.0.0.0:44134
Deploying cloud-native applications on IBM Power Systems
Refer to the following video to understand how simple it is to build and deploy a cloud-native application developed with Node.js and MongoDB. The video also shows how IBM Power Systems is tailor-made to deliver a cost-effective solution.
https://www.youtube.com/watch?v=OVzM05PF3sg
Conclusion
The contents of this study give a good perspective and reference on how to install OpenShift Container Platform on IBM Power. The report mainly covers the implementation, including the commands and scripts used to configure the workload on OpenShift Container Platform. The workload was run with the SMT=8 configuration to obtain the highest container density.
The study also describes important aspects such as using helm and tiller with OpenShift Container Platform, building and pushing Docker images to the OpenShift Docker registry, and optimizations at different stages that can be incorporated while configuring a workload running on OpenShift Container Platform on IBM Power Systems.
Appendix
This section provides additional material to help users better understand the study.
Dockerfile
Refer to the following Dockerfile that was used to build a Docker image on ppc64le.
#
# MongoDB Dockerfile
# -- modified by Bruce Semple for MongoDB Proofpoint Exercise on ICp
# -- modified by Krishna Harsha Voora for MongoDB Proofpoint Exercise on OCP
#
# https://github.com/dockerfile/mongodb
#
# FROM dockerfile/ubuntuaa
FROM ppc64le/centos
ENV NODE_ENV production
ENV PORT 3000
#
# Pre-Defined System Configuration..
#
COPY linux/limits.conf /etc/security/limits.conf
COPY linux/defrag /sys/kernel/mm/transparent_hugepage/defrag
# Install MongoDB Enterprise from RPM downloaded from
# https://repo.mongodb.com/yum/redhat/7/mongodb-enterprise/4.0/ppc64le/RPMS/
#
WORKDIR /install
ADD *.rpm /install/
ADD datasets/*.json /install/
ADD *.sh /install/
# pick up the library that Calvin built with the ATC
ADD *.so /install/
# Run as root #
USER root
#
# add a plugin to yum so that it will automatically download dependencies
#
RUN yum -y -v install yum-utils yum-plugin-ovl
#
# Set LD_PRELOAD environment Variable
#
RUN yum -y install /install/mongodb-enterprise-server-4.0.2-1.el7.ppc64le.rpm
RUN yum -y install /install/mongodb-enterprise-shell-4.0.2-1.el7.ppc64le.rpm
RUN yum -y install /install/mongodb-enterprise-tools-4.0.2-1.el7.ppc64le.rpm
# This might be the license file
#RUN yum -y install /install/mongodb-enterprise-4.0.2-1.el7.ppc64le.rpm
#
# **** Replace the default mongod.conf file with ours -- specifically to allow outside connections
#
ADD mongodb/mongod.conf /etc
#
# Adjust the ulimits for Mongo
#
ADD linux/30_adjustlimits.conf /etc/security/limits.d
RUN yum -y install wget
#
# Now Install Node -- pull Binary from nodejs.org
#
WORKDIR "/nodejs"
RUN wget https://nodejs.org/dist/v8.14.1/node-v8.14.1-linux-ppc64le.tar.xz
RUN tar -xvf node-v8.14.1-linux-ppc64le.tar.xz
ENV PATH=/nodejs/node-v8.14.1-linux-ppc64le/bin:$PATH
#
# Check if Node is working
RUN node --version
#
#
# Copy in Node Application
# ADD will pull the whole directory
# - Including package.json file (needed by npm package install)
# - Including creategeoindexes.js File
WORKDIR "/app"
ADD ./nodeAppV4 /app/
ADD nodeAppV4/createGeoIndexes.js /app/
# WORKDIR is already /app; remove any prebuilt node modules
RUN rm -rf /app/node_modules
#
# Run as user root
USER root
RUN yum -y install openssl-devel.ppc64le yum-plugin-ovl.noarch
RUN yum -y install gcc
#
# Now install the various NODE Modules
#
RUN npm install async --unsafe-perm
RUN npm install body-parser --unsafe-perm
RUN npm install ejs --unsafe-perm
RUN npm install passport --unsafe-perm
RUN npm install passport-http --unsafe-perm
RUN npm install router --unsafe-perm
RUN npm install express --unsafe-perm
RUN npm install mongodb --unsafe-perm
RUN npm install mongoose --unsafe-perm
# Create directories.
RUN mkdir -p /data/db
RUN chmod 777 /data/
RUN chmod 777 /data/db/
RUN chmod -R 777 /var/
VOLUME ["/data/db"]
RUN chmod +x /install/runit.sh
# Define default command.
USER mongod
ENTRYPOINT ["/bin/bash","-c","/install/runit.sh"]
# Set the LD_PRELOAD environment
ENV LD_PRELOAD=/install/libmass.so
#EXPOSE 27017
#EXPOSE 28017
EXPOSE 3000
Helm chart
Sample values.yaml file
replicaCount: 2
# platform: ppc64le
platform: amd64
image:
  repository: docker-registry.default.svc:5000/mithun1/mhr-mongodb-x86
  #repository: mycluster.icp:8500/default/bps-combined-ppc64le
  tag: v3
  pullPolicy: Always
service:
  externalPort: 3000
  internalPort: 3000
#ingress:
#  path: /ingP9/
#  rewrite: /
livenessProbe:
  initialDelaySeconds: 60
  periodSeconds: 60
  timeoutSeconds: 10
  failureThreshold: 5
resources:
  limits:
    cpu: 1
    memory: 1024Mi
  requests:
    cpu: 1
    memory: 512Mi
Runit.sh script
This file is used as the entry point in the Dockerfile.
#!/bin/bash
#
# Set some operating system configuration options prior to starting the MongoDB daemon
#
rm -rf /var/log/mongodb/mongod.log
#
# Start the MongoDB daemon
#
# Updated by Krishna Harsha Voora to see if the vkh-mongodb-ppc64le:v6 Docker image works
mongod --bind_ip_all &
#
# wait a few seconds for Mongo to start up
#
sleep 15s
#
# Import the datasets
#
mongoimport --uri mongodb://localhost:27017/proofpoint01 -c restaurants /install/restaurants_idFixed.json
mongoimport --uri mongodb://localhost:27017/proofpoint01 -c neighborhoods /install/neighborhoods_idFixed.json
mongoimport --uri mongodb://localhost:27017/proofpoint01 -c companies /install/companies_noID.json
mongoimport --uri mongodb://localhost:27017/proofpoint01 -c inspections /install/city_inspections_FixID.json
#
# Create the GeoIndexes
#
mongo proofpoint01 /app/createGeoIndexes.js
# Start up node via node package manager
#
npm start
#
CreateGeoIndexes.js
This is a JavaScript code snippet that is invoked by the runit.sh script to build the following geospatial indexes.
db.restaurants.createIndex({ location: "2dsphere" })
db.neighborhoods.createIndex({ geometry: "2dsphere" })
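With these two 2dsphere indexes in place, the point-in-polygon and radius lookups used by the microservices reduce to query documents like the following (a sketch built as plain objects; the field names follow the schemas in this appendix, and the queries are not run against a live database here):

```javascript
// GeoJSON point used throughout the microservices ([longitude, latitude]).
const point = { type: "Point", coordinates: [-73.93414657, 40.82302903] };

// Which neighborhood contains this point? (served by the geometry index)
const myNeighborhood = { geometry: { $geoIntersects: { $geometry: point } } };

// Restaurants within ~15 miles of the point (served by the location index).
// $centerSphere takes a radius in radians: miles / 3963.2 (Earth radius).
const radiusRadians = 15 / 3963.2;
const nearbyRestaurants = {
  location: { $geoWithin: { $centerSphere: [point.coordinates, radiusRadians] } }
};
```

Passing myNeighborhood to a find() on the neighborhoods collection, or nearbyRestaurants to a find() on the restaurants collection, exercises the indexes built above.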
Microservices used
The application program provided a variety of microservices with the following characteristics:
- Geospatial based
- Non-geospatial, with a simple data structure
- Non-geospatial, with a complex data structure (queries against the companies collection)
However, for this set of tests, only five of the geospatial-based microservices were used. These microservices used the geospatial capabilities of MongoDB and resulted in a processor-intensive workload.
The following five microservices were used for this test:
- /api/get10neighborhoodsV2Iterate
- /api/getMyNeighborhoodv2Iterate
- /api/getNeighborhoodsIntersectingTriangleIterate
- /api/getRestaurantRingIteration
- /api/getMyNeighborhoodRestaurantsV3Iterate
Refer to the “Microservices available in the driver” section in Appendix for more details.
Microservices available in the driver
The Node.js application supported the following Representational State Transfer (REST)-based microservices.
Note: You can notice a :loopnum parameter on many of the geospatial-related APIs. Remember that the purpose of this set of microservices is to generate processor load. If a transaction did not present enough load, or simply more load was needed, the :loopnum parameter could be incremented.
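The driver-side effect of that parameter can be sketched as follows (a hedged illustration; queryFn is a hypothetical stand-in for one database round trip, not a function from the actual application):

```javascript
// Repeat the same query loopnum times to multiply processor load,
// returning only the result of the final call -- the pattern the
// :loopnum parameter drives in the iterate-style microservices.
function iterate(loopnum, queryFn) {
  let result;
  for (let i = 0; i < loopnum; i++) {
    result = queryFn();
  }
  return result;
}
```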
- Health / PING related (all HTTP get requests)
/
This returns a JSON object with the message “First test of returning a JSON object”.
/xx
This returns a JSON object with the message text “command xxx”.
/yy
This returns a JSON object with the message text “command yyy”.
/health
This returns a JSON object with the message text “Ready”. This is used by OpenShift to check the health or readiness of the container.
- To Do List application support
(get) /api/activities
This returns a JSON object with a list of activities.
(post) /api/activities
This creates an activity.
(get) /api/activities/:activity_id
This returns an activity by activity ID.
(put) /api/activities/:activity_id
This updates an activity by activity ID.
(delete) /api/activities/:activity_id
This deletes an activity by activity ID.
(get) /api/activities/date/:date
This finds the activity by date.
(put) /api/activities/addtodate/:activity_id/:date
This updates the name and quantity of the activity identified by ID and date.
- Geospatial-related (operate on neighborhood and restaurants collections) (all HTTP get requests)
/api/oneNeighborhood
This returns a neighborhood JSON object.
/api/neighborhoodByName/:name
This finds a neighborhood by name and returns the neighborhood JSON object.
/api/neighborhoodByID/:id
This looks up a neighborhood by ID and returns the JSON neighborhood object.
/api/countRestaurants
This returns a JSON object with a count of restaurants in the Restaurants collection.
/api/countNeighborhoods
This returns a JSON object with a count of neighborhoods in the Neighborhood collection.
/api/restaurantsNearMe
This returns a JSON object with a list of restaurants within a 15-mile radius around the point [-73.93414657, 40.82302903] (longitude first, then latitude). This point is in the New York City area.
/api/restaurantsInRangeV2/:range
This returns a JSON object with a list of restaurants within the passed radius around the fixed point [-73.93414657, 40.82302903] (longitude, latitude).
/api/get10neighborhoods
This returns the list of 10 neighborhoods that MongoDB returned after it was told to skip the first 10.
/api/get10neighborhoodsV2Iterate/:loopnum
This microservice does the following tasks:
- Asks MongoDB for 10 neighborhoods after skipping the first 10.
- Asks MongoDB for 10 neighborhoods after skipping the first 50.
- Asks MongoDB for 10 neighborhoods after skipping the first 100.
- Asks MongoDB for 10 neighborhoods after skipping the first 150.
- Then returns a JSON object with a list of the 10 neighborhoods that MongoDB returned on the last call.
/api/getMyNeighborhood
This returns the neighborhood that contains the point [-73.93414657, 40.82302903] (longitude, latitude).
/api/getMyLocalRestaurantsV3Iterate/:loopnum
This microservice performs the following tasks:
- It first queries the neighborhood collection to find the neighborhood that contains this point [longitude: -73.93414657, latitude: 40.82302903].
- It then uses the polygon associated with the returned neighborhood to query the restaurant collection for the restaurants that fall within the identified neighborhood.
- A loop number parameter is then passed to cause the microservice to loop through the previous steps n times before returning.
- It returns a JSON object with the list of restaurants.
/api/getRestaurantsRingIteration/:long/:lat/:mindistance/:distance/:loopnum
This returns a JSON object with a list of restaurants that are located between an inner and an outer ring (donut shape) around the :long, :lat point passed in the call. The inner ring is defined by the :mindistance parameter and the outer ring is defined by the :distance parameter. As before, the microservice supports a :loopnum parameter to repeat the geospatial call a specific number of times before returning.
/api/getMyNeighborhoodV2Iterate/:long/:lat/:loopnum
This API accepts a longitude and latitude geospatial point and a loop counter. It looks up the neighborhood containing the point that was passed in. Optionally, it repeats the query n times, depending on the setting of :loopnum, before returning a JSON object with the name of the neighborhood containing the point.
/api/getNeighborhoodsIntersectingTriangleIterate/:long/:lat/:loopnum
The base of the triangle is fixed at the two points [-73.70, 40.50] and [-73.70, 40.9], and you pass in the apex. Varying the apex point varies the neighborhoods that the triangle intersects. In addition, you have the option of repeating the query n times before returning. The API returns a JSON object with a list of neighborhoods that intersect the triangle.
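The triangle lookup can be sketched as a GeoJSON polygon query (a hypothetical helper, not the application's actual code; only the two fixed base vertices come from the text, and GeoJSON rings must repeat the first vertex to close the polygon):

```javascript
// Build the $geoIntersects query document for the triangle microservice.
// The base vertices are fixed; the apex is caller-supplied ([long, lat]).
function triangleQuery(apexLong, apexLat) {
  const base1 = [-73.70, 40.50];
  const base2 = [-73.70, 40.9];
  return {
    geometry: {
      $geoIntersects: {
        $geometry: {
          type: "Polygon",
          // One linear ring: base1 -> base2 -> apex -> back to base1.
          coordinates: [[base1, base2, [apexLong, apexLat], base1]]
        }
      }
    }
  };
}
```

Passing this object to a find() on the neighborhoods collection (which has a 2dsphere index on geometry) returns the intersecting neighborhoods.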
- Companies and Inspections collections (all gets)
/api/getCompaniesByEmployeesWIteration/:ltnum/:number
This returns a JSON object with the count and a list of the companies that have more employees than the :ltnum value passed in the API call. The list of companies returned has the following fields: name, number of employees, year founded, and number of products. Similar to other APIs, an option is provided to repeat this query n times (:number) before returning.
/api/getInspectionsByZipCodeIteration/:gtzip/:ltzip/:number
This API returns a list of inspections that occurred in a range of zip code areas. If you want a single zip code area, set the :gtzip and :ltzip values to the same zip code. As before, you have the option of repeating the query n times (:number) before returning. The JSON object returned lists the business name, address zip code, certificate number, and pass/fail indication.
Data set or collection schemas
This section specifically describes the schemas (data structures) for each of the collections.
Neighborhood
This is the schema for the Neighborhood collection.
const neighborhoodSchema = new Schema({
  geometry: {
    coordinates: {
      type: [[[Number]]], // Array of arrays of arrays of numbers
      required: true
    },
    type: {
      type: String,
      enum: ['Polygon'],
      required: true
    }
  },
  name: String,
});
Restaurants
This is the schema for the Restaurants collection.
const restaurantSchema = new Schema({
  name: String,
  location: {
    type: {
      type: String,
      enum: ['Point'],
      required: false
    },
    coordinates: {
      type: [Number],
      required: false
    }
  }
});
Inspections
This is the schema for the Inspections collection.
const inspectionsSchema = new Schema({
  id: {
    type: 'String'
  },
  certificate_number: {
    type: 'Number'
  },
  business_name: {
    type: 'String'
  },
  date: {
    type: 'Date'
  },
  result: {
    type: 'String'
  },
  sector: {
    type: 'Date'
  },
  address: {
    city: {
      type: 'String'
    },
    zip: {
      type: 'Number'
    },
    street: {
      type: 'String'
    },
    number: {
      type: 'Number'
    }
  }
});
Companies
This is the schema for the Companies collection.
const companiesSchema = new Schema({
  name: {
    type: 'String'
  },
  permalink: {
    type: 'String'
  },
  crunchbase_url: {
    type: 'String'
  },
  homepage_url: {
    type: 'String'
  },
  blog_url: {
    type: 'String'
  },
  blog_feed_url: {
    type: 'String'
  },
  twitter_username: {
    type: 'String'
  },
  category_code: {
    type: 'String'
  },
  number_of_employees: {
    type: 'Number'
  },
  founded_year: {
    type: 'Number'
  },
  founded_month: {
    type: 'Number'
  },
  founded_day: {
    type: 'Number'
  },
  deadpooled_year: {
    type: 'Number'
  },
  tag_list: {
    type: 'String'
  },
  alias_list: {
    type: 'String'
  },
  email_address: {
    type: 'String'
  },
  phone_number: {
    type: 'String'
  },
  description: {
    type: 'String'
  },
  created_at: {
    $date: {
      type: 'Number'
    }
  },
  updated_at: {
    type: 'Date'
  },
  overview: {
    type: 'String'
  },
  image: {
    available_sizes: {
      type: [
        'Array'
      ]
    }
  },
  products: {
    type: [
      'Mixed'
    ]
  },
  relationships: {
    type: [
      'Mixed'
    ]
  },
  competitions: {
    type: [
      'Mixed'
    ]
  },
  providerships: {
    type: 'Array'
  },
  total_money_raised: {
    type: 'String'
  },
  funding_rounds: {
    type: [
      'Mixed'
    ]
  },
  investments: {
    type: 'Array'
  },
  acquisition: {
    price_amount: {
      type: 'Number'
    },
    price_currency_code: {
      type: 'String'
    },
    term_code: {
      type: 'String'
    },
    source_url: {
      type: 'String'
    },
    source_description: {
      type: 'String'
    },
    acquired_year: {
      type: 'Number'
    },
    acquired_month: {
      type: 'Number'
    },
    acquired_day: {
      type: 'Number'
    },
    acquiring_company: {
      name: {
        type: 'String'
      },
      permalink: {
        type: 'String'
      }
    }
  },
  acquisitions: {
    type: 'Array'
  },
  offices: {
    type: [
      'Mixed'
    ]
  },
  milestones: {
    type: [
      'Mixed'
    ]
  },
  video_embeds: {
    type: 'Array'
  },
  screenshots: {
    type: [
      'Mixed'
    ]
  },
  external_links: {
    type: [
      'Mixed'
    ]
  },
  partners: {
    type: 'Array'
  }
});
Activity schema (for to do list microservices)
const activitySchema = new Schema({
  activity_name: String,
  quantity: Number,
  date: {type: Date, default: Date.now}
})
Get more information
To learn more about IBM POWER9 processor-based servers, contact your IBM representative, IBM Business Partner, or visit the following website: https://www.ibm.com/it-infrastructure/power/power9