Containers, Kubernetes, OpenShift on Power


Running MongoDB and Node.js on Red Hat OpenShift container platform

By Mithun H R posted Fri December 13, 2024 07:38 AM

  

Co-authored by: Bruce Semple, Calvin Sze, Krishna Harsha Voora, Lilian Romero

Introduction

This tutorial chronicles our experiences configuring Red Hat® OpenShift® Container Platform on the IBM® Power® platform, building an application to deploy, and then deploying that application on OpenShift Container Platform. A Helm chart is used to define, install, and upgrade the geospatial workload application running on MongoDB on the OpenShift Container Platform cluster. Docker images for the geospatial application are built, pushed to the OpenShift Docker registry, and deployed using Helm. The tutorial also describes the tunings that were used to host the maximum number of containers with optimum performance on the OpenShift Container Platform cluster on the IBM Power platform.

For the workload, a compute-intensive geospatial workload was chosen, based on the MongoDB tutorial available at https://docs.mongodb.com/manual/tutorial/geospatial-tutorial/. This workload is used in a DevOps scenario to determine which platform can host the maximum number of containers at a given performance level. This tutorial provides the steps to configure the workload on OpenShift Container Platform running on IBM Power nodes.

The objective of this tutorial is to document our experiences and recommendations for configuring OpenShift Container Platform on the IBM Power architecture, along with the considerations to keep in mind while deploying Docker images on OpenShift Container Platform, in the hope that it saves the reader time during a first OpenShift Container Platform application installation.

Installation and configuration of OpenShift Container Platform on Power

IBM in partnership with Red Hat has made Red Hat OpenShift Container Platform available on IBM Power Systems.

Topology description

The infrastructure consisted of a master node, compute nodes, and a load generator. IBM Power System L922 servers are used as compute nodes, an IBM Power System S822LC server is used as the load generator, and an IBM RackSwitch model G8264-T switch is used to connect all the systems.

Figure 1. Server topology

IBM Power

The IBM Power System L922 (9008-22L) server used for the system under test (SUT) has two sockets with 10 cores per socket at a typical 2.9 GHz, two 388 GB solid-state drives (SSDs), two 10 Gb two-port network adapters, and 256 GB of memory. The Power server was divided into two logical partitions (LPARs) of equal size. Each LPAR was configured in dedicated mode and ran in SMT8 mode. The LPARs were used as compute nodes. The IBM Power System S822LC server used for the load generator has two sockets with 10 cores per socket, 256 GB of memory, and a 10 Gb two-port network adapter. The IBM Power System S822LC server used for the control plane has two sockets with 10 cores per socket, 256 GB of memory, and a 10 Gb two-port network adapter. All of the IBM Power systems ran Red Hat Enterprise Linux 7.6.

Network configuration

The switch used is an IBM RackSwitch model G8264-T. All the systems used a 10 Gb network. Each system had a 10 Gb adapter card with two ports that were bonded, for a nominal throughput of 20 Gbps.

Note: The actual maximum bandwidth from two link-aggregated 10 Gb network interfaces is about 18.8 Gbps.
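
You can confirm that both ports are active in the bond before running the workload. The following is a quick check, assuming the bond interface is named bond0 (the actual interface name may differ in your environment):

$cat /proc/net/bonding/bond0    # shows the bonding mode and the state of each slave port
$ethtool bond0 | grep Speed     # reports the aggregate link speed of the bonded interface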

Software stack

The software stack includes:

  • RHEL 7.6 (3.10.0-957.12.1.el7.ppc64le)
  • OpenShift v3.11.98
  • MongoDB Enterprise 4.0.2
  • Node.js v8.14.1 (REST APIs)
  • Node modules
    • express
    • mongoose
    • async
    • ejs
    • body-parser
    • passport
    • passport-http
    • router
    • mongodb (driver)

External storage

The compute-intensive workload used in this study did not require any significant amount of persistent storage. The testbed was able to use the local storage within the servers and no additional external storage subsystem was required.

System build and deployment on OpenShift

Each container instance included a MongoDB instance, the four collections, and the Node.js microservice application. The microservice application was configured to use at most 20 connections to the MongoDB instance in its container.

Figure 2. Docker image build and deploy

A single helm chart was built to configure and deploy the Docker images to their respective OpenShift compute nodes.

Configuration steps

As a prerequisite, ensure that the Red Hat Enterprise Linux 7.6 (or later) operating system is installed on the IBM Power systems. Perform the following steps to install and configure OpenShift on IBM Power Systems for this study, with the geospatial workload running on the MongoDB database.

  1. Configure passwordless SSH between the master and compute nodes in the cluster.
  2. Configure the OpenShift Enterprise repository containing packages for OpenShift installation. You can configure repositories using the subscription manager or point to a local repository containing the OpenShift packages.
  3. On the master node, install Ansible (preferably 2.6.x for OpenShift 3.11), available at: https://releases.ansible.com/ansible
    $yum install ansible-2.6.9-1.el7.ans.noarch.rpm -y
  4. The choice of container runtime engine for this study is Docker. Install the respective yum packages and follow the steps described at https://docs.openshift.com/container-platform/3.11/install/host_preparation.html on the master node.
    $ yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct -y
  5. Ensure that the OSE (OpenShift Enterprise) repository is configured and enabled on the master node, along with local-rhn-server-extras and local-rhn-server-epel. Then install openshift-ansible using yum.
    $yum install openshift-ansible -y
    In order to enable the OSE repository, follow the steps outlined at:
    https://docs.openshift.com/container-platform/3.11/install/host_preparation.html#host-registration
  6. After openshift-ansible is installed, use the sample host files located at /usr/share/doc/openshift-ansible-docs-3.11.92/docs/example-inventories/ as a reference for the configuration of hosts in the cluster.
  7. Use openshift_deployment_type='openshift-enterprise' in the hosts configuration file, as this is the deployment mode recommended for IBM Power Systems (ppc64le).
  8. Ensure that you have a valid Red Hat account (no subscription needs to be associated with it) in order to pull the ppc64le Docker images from the Red Hat registry. Validate the login using:
    $docker login https://registry.redhat.io
    In case the user does not have a Red Hat account, the user can register for a valid account at:
    https://www.redhat.com/wapps/ugc/register.html
  9. Copy the most suitable inventory file, stage it under /etc/ansible, and use it to install the prerequisites from the master node.
    $cd /usr/share/ansible/openshift-ansible/
    $ansible-playbook -i /etc/ansible/hosts.12 playbooks/prerequisites.yml
    Here, /etc/ansible/hosts.12 is the hosts file that is used.
  10. Deploy the cluster by running the following command on the master node:
    $ansible-playbook -i /etc/ansible/hosts.12 playbooks/deploy_cluster.yml

Verifying the installation

Verify that the master node in the cluster is started and that all the compute nodes are running and in the Ready state using the following command:

$oc get nodes

For example:
$ oc get nodes
NAME                                   STATUS                     ROLES          AGE       VERSION
p136n135.pbm.ihost.com   Ready                      compute        9d        v1.11.0+d4cacc0
p230n134.pbm.ihost.com   Ready                     infra,master   9d        v1.11.0+d4cacc0

The status should be Ready for all the nodes in the cluster.

Steps to access GUI

To verify and access the GUI console of OpenShift Container Platform, use the web console port number with the host name of the master node.

The console URL is https://<master>.openshift.com:8443/console.

Sample hosts file

You can use the following hosts file as a reference for cluster deployment.

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
#openshift_deployment_type='origin'
openshift_deployment_type='openshift-enterprise'

system_images_registry="registry.access.redhat.com/openshift3/"

ansible_user=root

openshift_disable_check=package_version,disk_availability,docker_storage,memory_availability

openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]
openshift_auth_type=allowall

oreg_auth_user=<username>
oreg_auth_password=<password>
#oreg_test_login=false

debug_level=5

[masters]
<IP address>

[etcd]
<IP address>

[nodes]
<IP address> openshift_node_group_name="node-config-master-infra"
<IP address> openshift_node_group_name="node-config-compute"
<IP address> openshift_node_group_name="node-config-compute"

Tuning OpenShift Container Platform/Linux on Power

The following parameters were used to tune the systems for optimal performance. Depending on your environment, other tuning options might be preferable. Unless specified otherwise, the tuning options are applied to all the systems involved.

Operating system tuning

Perform the following steps to tune the system for optimal performance:

  1. Stop and disable the irqbalance service on all the nodes of the cluster.
    $systemctl stop irqbalance.service
    $systemctl disable irqbalance.service
  2. Security-Enhanced Linux (SELinux) should be in enforcing mode. If it is not, change SELinux to enforcing and run the following command before rebooting the LPAR:
    $touch /.autorelabel
  3. On the compute nodes, set the simultaneous multithreading (SMT) snooze delay (a verification check follows this list):
    $ppc64_cpu --smt-snooze-delay=0
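
You can verify the SMT configuration on each node with the same utility. A quick check (ppc64_cpu is provided by the powerpc-utils package):

$ppc64_cpu --smt     # displays the current SMT mode, for example SMT=8
$ppc64_cpu --info    # displays the core and thread layout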

Socket tuning

In /etc/sysctl.conf, set the maximum number of backlogged sockets (net.core.somaxconn=32768) and use the full range of ports (net.ipv4.ip_local_port_range=1024 65535).

Set the socket for reuse using the following commands:

$echo "1" > /proc/sys/net/ipv4/tcp_tw_recycle
$echo "1" > /proc/sys/net/ipv4/tcp_tw_reuse

Network tuning

OpenShift Container Platform uses software-defined networking (SDN) for communication between the pods. The pod network is established and maintained by OpenShift SDN, which configures an overlay network using Open vSwitch (OVS).

Review the following important network considerations while installing and configuring OpenShift:

  1. Make sure that you have NM_CONTROLLED=yes in the /etc/sysconfig/network-scripts/ifcfg-ethxxx file.
  2. Ensure that you have the correct DNS entries for all the participating nodes in the cluster.
  3. Ensure that the firmware used in the network switches is at the latest supported level.
  4. Because OpenShift sets up dnsmasq and overwrites /etc/resolv.conf, add the following entries to /etc/sysconfig/network-scripts/ifcfg-ethxx. For example:
    DNS1=
    DNS2=
    DOMAIN=
  5. Ensure that NetworkManager is installed and enabled (a verification sketch follows this list).
    $yum install NetworkManager
    $systemctl enable NetworkManager
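
After these steps, a quick check that NetworkManager is active and that the expected DNS entries are visible (interface names will differ in your environment):

$systemctl status NetworkManager
$nmcli device show | grep IP4.DNS
$cat /etc/resolv.conf    # after the OpenShift installation, this points to the local dnsmasq instance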

Bonding was used for the 10 Gb network. The following table shows the adapter parameter settings.

System           Receive/Transmit ring (queue) size    Combined channels (receive/transmit)
Control plane    4078                                  25
Compute nodes    4078                                  20

For example:

$ethtool -G enP1p0s1 rx 4078 tx 4078
$ethtool -L enP1p0s1 combined 20
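
The current settings can be verified with the corresponding query options of ethtool:

$ethtool -g enP1p0s1    # displays the current receive/transmit ring sizes
$ethtool -l enP1p0s1    # displays the current channel (combined queue) counts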

The application uses math functions from the IBM Mathematical Acceleration Subsystem (MASS) libraries. Refer to: https://public.dhe.ibm.com/software/server/POWER/Linux/xl-compiler/eval/ppc64le/rhel7/ibm-xl-compiler-eval.repo
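
In the container image, the MASS library is made available to the MongoDB and Node.js processes through the LD_PRELOAD environment variable (see the Dockerfile in the Appendix). The equivalent manual step, assuming the shared library has been copied to /install, is:

$export LD_PRELOAD=/install/libmass.so    # processes started from this shell now load the MASS library first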

System-specific tunings

You can use the following system-specific settings to maximize the throughput:

  1. Set socket reuse on the workload generator server.
    $echo "1" > /proc/sys/net/ipv4/tcp_tw_recycle
    $echo "1" > /proc/sys/net/ipv4/tcp_tw_reuse
  2. Turn off the snooze delay option on all the nodes.
    $ppc64_cpu --smt-snooze-delay=0

Configure quality of service for pods

The team was surprised by how significant an effect changing the Resources stanza in the workload's deployment YAML had. The OpenShift documentation (https://docs.openshift.com/container-platform/3.11/admin_guide/overcommit.html#qos-classes) describes how these resource settings determine the quality of service (QoS), essentially the CPU allocation algorithm that is used to determine the amount of physical CPU capacity allocated to the pod.

Summarizing the content, OpenShift has established the following three tiers of service:

  • Guaranteed QoS
  • Burstable QoS
  • BestEffort QoS
QoS name      Priority       Resource stanza parameters    Description / use
Guaranteed    1 (highest)    limits = requests             High-priority, time-sensitive tasks
Burstable     2              limits > requests             Most common workloads; maximizes instantaneous access to vCPUs
BestEffort    3 (lowest)     Not set                       Low-priority tasks; first to be terminated if the system runs out of resources; background system housekeeping

An example (from a different test bed and workload) of the CPU resource management is shown in Figure 3. In the screen capture, the Y axis is CPU utilization and the X axis has one column for each vCPU; this particular server had 192 vCPUs. On the left side, a pod is running in Burstable QoS, while on the right side the pod is running in BestEffort QoS (no resource parameters specified). In BestEffort QoS (right side), the vCPU utilization is throttled by OpenShift Container Platform, presumably to ensure that all pods running with this QoS class are given access to the vCPU resources. After switching to Burstable (left side), OpenShift Container Platform did not throttle the pods, allowing them to use resources up to their limits.

Figure 3. Resource utilization – comparing Burstable QoS with BestEffort QoS

In this particular example, the throughput of this single pod increased by 70% as a result of switching from BestEffort to Burstable QoS.

The resource limits are easily configurable using the OpenShift Container Platform user interface.

Click Applications -> Deployments. Click the Actions drop-down menu and click Edit Resource Limits.

Figure 4. Edit resource limits

Burstable QoS

For a pod to be placed in the Burstable QoS class, the Request and Limit fields must be set for CPU and memory, but they do not need to be equal (if requests equaled limits for every container, the pod would be Guaranteed). Containers configured this way qualify for Burstable QoS, which has second-level priority.

Figure 5 shows how Burstable QoS can be set in OpenShift Container Platform: the values for the Request and Limit fields are not equal for CPU and memory.

Figure 5. Burstable QoS

From this exercise, we found that Burstable QoS yielded the best throughput for this workload. It is worth trying different QoS options to see which is the best fit; a command-line sketch follows.
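
The same change can also be made with the oc command-line client. The following is a minimal sketch, assuming a deployment configuration named geospatial-mongodb (substitute your own); keeping the requests lower than the limits places the pods in the Burstable class:

$oc set resources dc/geospatial-mongodb --requests=cpu=1,memory=512Mi --limits=cpu=2,memory=1024Mi
$oc get pod <pod name> -o jsonpath='{.status.qosClass}'    # reports Burstable after the pods are redeployed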

Building or pushing the Docker images to the OpenShift Docker registry

OpenShift provides an integrated Docker registry to store Docker images and to provision new images as needed. Perform the following steps to build and push a new custom image into the OpenShift registry.

  1. Run the following commands on the OpenShift Container Platform master node.

    $oc login -u admin
    (Enter the password, for example, admin/admin. See https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html#basic-setup-and-login)
    $oc new-project harsha
    ("harsha" is the project name)
    $oc project harsha
    (Switch to project harsha)
    $cd /home/harsha/CombinedImage/
    (This folder contains the Dockerfile and the files and packages required to build the image)
    $docker build -t vkh-mongodb-ppc64le:v6 .
    $docker tag vkh-mongodb-ppc64le:v6 docker-registry.default.svc:5000/harsha/vkh-mongodb-ppc64le:v6
  2. Generate a token-based login from the web console.
    $oc login https://p136n143.pbm.ihost.com:8443 --token=<TOKEN>

  3. Use the previous token to log in to the OpenShift registry.
    $docker login -u admin -p $(oc whoami -t) docker-registry.default.svc:5000
  4. Push the image to the OpenShift private registry on the master node.
    $docker push docker-registry.default.svc:5000/harsha/vkh-mongodb-ppc64le:v6
  5. Install the Helm chart (see the verification sketch after these steps).
    $helm install -n <release name> <chart path>
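
After the push, you can confirm that the image is available in the integrated registry before installing the chart. A sketch, assuming the chart directory is ./geospatial-chart and the release name is geospatial (substitute your own):

$oc get imagestreams -n harsha    # the vkh-mongodb-ppc64le image stream should be listed with tag v6
$helm install -n geospatial ./geospatial-chart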

Deploying your application using helm and tiller on OpenShift Container Platform

Helm and Tiller are used to perform packaging and deployment on OpenShift Container Platform. Operators could also be used for packaging and deployment, but the Operator Lifecycle Manager is in technology preview for OpenShift Container Platform 3.11 on IBM Power Systems, so the Helm approach was used in this study. Because OpenShift Container Platform does not spin up the Tiller service as a pod (which is needed to complement Helm), install Tiller using the steps listed at: https://blog.openshift.com/getting-started-helm-openshift/ (Note: OpenShift >= 3.6)

You can find the downloads specific to IBM ppc64le at: https://github.com/helm/helm/releases

Building or installing helm and tiller

In this study, the helm and tiller version 2.9.1 binaries were built on the system, and tiller is started natively on the host. HELM_HOST is pointed to <ip_addr:port> where tiller is running. The steps to build the binaries natively on the system are detailed below.

  1. Download the Go distribution (https://golang.org/doc/install).
  2. Run the following command to build helm and tiller on the master node.

    $mkdir -p $GOPATH/src/k8s.io
    $cd $GOPATH/src/k8s.io
    $git clone https://github.com/kubernetes/helm.git
    $cd helm
    $git checkout v2.9.1
    $make bootstrap build
    $cd bin
    $export PATH=$PWD:$PATH
    Here, $GOPATH is the directory where the Go packages are installed.
  3. Run the following commands to bootstrap helm and tiller.
    $helm init
    $nohup tiller 2>&1 &
    $lsof -i:44134
    $export HELM_HOST=0.0.0.0:44134
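
With Tiller running and HELM_HOST exported, a quick check that the Helm client can reach it:

$helm version    # reports both the client and the server (Tiller) as v2.9.1
$helm ls         # lists deployed releases; the output is empty on a fresh installation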

Deploying cloud-native applications on IBM Power Systems

Refer to the following video to see how simple it is to build and deploy a cloud-native application developed with Node.js and MongoDB, and how IBM Power Systems is tailor-made to deliver a cost-effective solution.

https://www.youtube.com/watch?v=OVzM05PF3sg

Conclusion

The contents of this study give a good perspective and reference on how to install OpenShift Container Platform on IBM Power. The report mainly explains the implementation, along with the commands and scripts used to configure the workload on OpenShift Container Platform. The workload is run in SMT8 mode in order to get the highest container density.

The study also describes important aspects such as using helm and tiller with OpenShift Container Platform, building and pushing Docker images to the OpenShift Docker registry, and the optimizations at different stages that can be incorporated while configuring a workload running on OpenShift Container Platform on IBM Power Systems.

Appendix

This section provides additional material to help users better understand the study.

Dockerfile

Refer to the following Dockerfile that was used to build a Docker image on ppc64le.

#
#  MongoDB Dockerfile
#   -- modified by Bruce Semple for MongoDB Proofpoint Exercise on ICp
#   -- modified by Krishna Harsha Voora for MongoDB Proofpoint Exercise on OCP
#
#  https://github.com/dockerfile/mongodb
#

# FROM dockerfile/ubuntuaa
FROM ppc64le/centos
ENV NODE_ENV production
ENV PORT 3000

#
# Pre-Defined System Configuration..
#
COPY linux/limits.conf  /etc/security/limits.conf
COPY linux/defrag      /sys/kernel/mm/transparent_hugepage/defrag
# Install MongoDB Enterprise from RPM downloaded from
# https://repo.mongodb.com/yum/redhat/7/mongodb-enterprise/4.0/ppc64le/RPMS/
#
WORKDIR /install
ADD  *.rpm  /install/
ADD datasets/*.json /install/
ADD *.sh /install/
# pick up the library that Calvin built with the ATC
ADD *.so /install/

# Run as root #
USER root
#
# add a plugin to yum so that it will automatically download dependencies
#
RUN yum -y -v  install yum-utils yum-plugin-ovl
#
# Install the MongoDB Enterprise server, shell, and tools RPMs
#
RUN  yum -y  install /install/mongodb-enterprise-server-4.0.2-1.el7.ppc64le.rpm
RUN  yum  -y install /install/mongodb-enterprise-shell-4.0.2-1.el7.ppc64le.rpm
RUN  yum -y  install /install/mongodb-enterprise-tools-4.0.2-1.el7.ppc64le.rpm

# This might be the license file
#RUN  yum -y  install /install/mongodb-enterprise-4.0.2-1.el7.ppc64le.rpma
#
# **** Replace the Default MONGOD.CONF file with ours -- specifically to allow outside connections
#
ADD mongodb/mongod.conf /etc
#
# Adjust the ulimits for Mongo
#
ADD linux/30_adjustlimits.conf /etc/security/limits.d
RUN yum -y install wget

#
# Now Install Node -- pull Binary from nodejs.org
#

WORKDIR "/nodejs"
RUN  wget  https://nodejs.org/dist/v8.14.1/node-v8.14.1-linux-ppc64le.tar.xz
RUN  tar -xvf node-v8.14.1-linux-ppc64le.tar.xz
ENV PATH=/nodejs/node-v8.14.1-linux-ppc64le/bin:$PATH

#
# Check if Node is working
RUN node --version
#

#
# Copy in Node Application
# ADD will pull the whole directory
#      - Including package.json file (needed by npm package  install)
#      - Including creategeoindexes.js File
WORKDIR "/app"
ADD ./nodeAppV4 /app/
ADD nodeAppV4/createGeoIndexes.js  /app/
RUN cd /app;
RUN rm -rf ./node_modules;
#
# Run as user root
USER root
RUN yum -y install openssl-devel.ppc64le yum-plugin-ovl.noarch
RUN yum -y install gcc
#
# Now install the various NODE Modules
#
RUN npm install   async --unsafe-perm
RUN npm install   body-parser --unsafe-perm
RUN npm install   ejs  --unsafe-perm
RUN npm install  passport --unsafe-perm
RUN npm install  passport-http   --unsafe-perm
RUN npm install   router --unsafe-perm
RUN npm install  express --unsafe-perm
RUN npm install   mongodb --unsafe-perm
RUN npm install  mongoose --unsafe-perm

# Create directories.
RUN mkdir -p /data/db
RUN chmod 777 /data/
RUN chmod 777 /data/db/
RUN chmod -R 777 /var/
VOLUME ["/data/db"]
RUN chmod +x /install/runit.sh

# Define default command.
USER mongod
ENTRYPOINT  ["/bin/bash","-c","/install/runit.sh"]

# Set the LD_PRELOAD environment
ENV LD_PRELOAD=/install/libmass.so


#EXPOSE 27017
#EXPOSE 28017
EXPOSE 3000

Helm chart

Sample values.yaml file

replicaCount: 2

#platform: ppc64le
# platform: ppc64le
platform: amd64

image:
  repository: docker-registry.default.svc:5000/mithun1/mhr-mongodb-x86
  #repository: mycluster.icp:8500/default/bps-combined-ppc64le
  tag: v3
  pullPolicy: Always
service:
  externalPort: 3000
  internalPort: 3000
#ingress:
#  path: /ingP9/
#  rewrite: /
livenessProbe:
  initialDelaySeconds: 60
  periodSeconds: 60
  timeoutSeconds: 10
  failureThreshold: 5
resources:
  limits:
    cpu: 1
    memory: 1024Mi
  requests:
    cpu: 1
    memory: 512Mi
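
Individual values can also be overridden at installation time instead of editing values.yaml. A sketch using the keys defined above (the release name and chart directory are placeholders):

$helm install -n geospatial ./geospatial-chart \
    --set replicaCount=4 \
    --set platform=ppc64le \
    --set image.repository=docker-registry.default.svc:5000/harsha/vkh-mongodb-ppc64le \
    --set image.tag=v6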

Runit.sh script

This file is used as an entry point file in Dockerfile.

#! /bin/bash

#
# Set some operating system configuration options prior to starting the MongoDB daemon
#
rm -rf /var/log/mongodb/mongod.log

#
# Start the MongoDB daemon
#

# Updated by Krishna Harsha Voora to see if the vkh-mongodb-ppc64le:v6 Docker image works
mongod --bind_ip_all &

#
# wait a few seconds for Mongo to start up
#
sleep 15s
#
# Import the datasets
#

mongoimport  --uri mongodb://localhost:27017/proofpoint01  -c restaurants    /install/restaurants_idFixed.json
mongoimport  --uri mongodb://localhost:27017/proofpoint01  -c neighborhoods /install/neighborhoods_idFixed.json
mongoimport  --uri mongodb://localhost:27017/proofpoint01  -c companies   /install/companies_noID.json
mongoimport  --uri mongodb://localhost:27017/proofpoint01  -c inspections    /install/city_inspections_FixID.json
#
# Create the GeoIndexes
#
mongo proofpoint01 /app/createGeoIndexes.js

# Start up node via node package manager
#
npm start
#

CreateGeoIndexes.js

This is a JavaScript code snippet that is invoked by the runit.sh script to build the following geospatial indexes.

  • db.restaurants.createIndex({ location: "2dsphere" })
  • db.neighborhoods.createIndex({ geometry: "2dsphere" })
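
You can confirm that the indexes exist from inside a running container by using the mongo shell; the database and collection names are taken from the runit.sh script above:

$oc rsh <pod name>
$mongo proofpoint01 --eval 'printjson(db.restaurants.getIndexes())'
$mongo proofpoint01 --eval 'printjson(db.neighborhoods.getIndexes())'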

Microservices used

The application program provided a variety of microservices with the following characteristics:

  • Geospatial based
  • Non-geospatial, with a simple data structure
  • Non-geospatial, with a complex data structure (queries against the companies’ collection)

    However, for this set of tests, only five of the geospatial-based microservices were used. These microservices used the geospatial capabilities of MongoDB and resulted in a processor-intensive workload.

The following five microservices were used for this test:

  • /api/get10neighborhoodsV2Iterate
  • /api/getMyNeighborhoodv2Iterate
  • /api/getNeighborhoodsIntersectingTriangleIterate
  • /api/getRestaurantRingIteration
  • /api/getMyNeighborhoodRestaurantsV3Iterate

Refer to the “Microservices available in the driver” section in Appendix for more details.

Microservices available in the driver

The Node.js application supported the following Representational State Transfer (REST)-based microservices.

Note: Many of the geospatial-related APIs take a :loopnum parameter. Because the purpose of this set of microservices is to generate processor load, the :loopnum parameter can be incremented when a transaction does not present enough load or when more load is needed. (A few sample invocations are sketched after this list.)

  • Health / PING related (all HTTP get requests)
    • / This returns a JSON object with the message “First test of returning a JSON object”.
    • /xx This returns a JSON object with the message text “command xxx”.
    • /yy This returns a JSON object with the message text “command yyy”.
    • /health This returns a JSON object with the message text “Ready”. This is used by OpenShift to check the health or readiness of the container.
  • To Do List application support
    • (get) /api/activities
      This returns a JSON object with a list of activities.
    • (post) /api/activities
      This creates an activity.
    • (get) /api/activities/:activity_id
      This returns an activity by activity ID.
    • (put) /api/activities/:activity_id
      This updates an activity by activity ID.
    • (delete) /api/activities/:activity_id
      This deletes an activity by activity ID.
    • (get) /api/activities/date/:date
      This finds the activity by date.
    • (put) /api/activities/addtodate/:activity_id/:date
      This updates the name and quantity of the activity identified by ID and date.
  • Geospatial-related (operate on neighborhood and restaurants collections) (all HTTP get requests)
    • /api/oneNeighborhood
      This returns a neighborhood JSON object.
    • /api/neighborhoodByName/:name
      This finds a neighborhood by name and returns the neighborhood JSON object.
    • /api/neighborhoodByID/:id
      This looks up a neighborhood by ID and returns the JSON neighborhood object.
    • /api/countRestaurants This returns a JSON object with a count of restaurants in the Restaurants collection.
    • /api/countNeighborhoods
      This returns a JSON object with a count of neighborhoods in the Neighborhood collection.
    • /api/restaurantsNearMe
      This returns a JSON object with a list of restaurants within a 15-mile radius around the point [-73.93414657, 40.82302903] (longitude first, then latitude). This point is in the New York City area.
    • /api/restaurantsInRangeV2/:range
      This returns a JSON object with a list of restaurants within the passed radius around the fixed point [-73.93414657, 40.82302903] (longitude, latitude)
    • /api/get10neighborhoods
      This returns the list of 10 neighborhoods that MongoDB returned after it was told to skip the first 10.
    • /api/get10neighborhoodsV2Iterate/:loopnum
      This microservice does the following tasks:
      • Asks MongoDB for 10 neighborhoods after skipping the first 10.
      • Asks MongoDB for 10 neighborhoods after skipping the first 50.
      • Asks MongoDB for 10 neighborhoods after skipping the first 100.
      • Asks MongoDB for 10 neighborhoods after skipping the first 150.
      • Then returns a JSON object with a list of the 10 neighborhoods that MongoDB returned on the last call.
    • /api/getMyNeighborhood
      This returns the neighborhood that contains the point [-73.93414657, 40.82302903] (longitude, latitude).
    • /api/getMyLocalRestaurantsV3Iterate/:loopnum
      This microservice performs the following tasks:
      1. It first queries the neighborhood collection to find the neighborhood that contains this point [longitude: -73.93414657, latitude: 40.82302903].
      2. It then uses the polygon associated with the returned neighborhood to locate from the restaurant collection by looking for those restaurants that are in the identified neighborhood.
      3. A loop number parameter is then passed to cause the microservice to loop through the previous steps n times before returning.
      4. It returns a JSON object with the list of restaurants.
    • /api/getRestaurantsRingIteration/:long/:lat/:mindistance/:distance/:loopnum
      This returns a JSON object with a list of restaurants that are located between an inner and outer ring (donut shape) around the :long, :lat point passed in the call. The inner ring is defined by the :mindistance parameter while the outer ring is defined by the :distance parameter. As before, the microservice supports a :loopnum parameter to support repeating the geospatial call for a specific number of times before returning.
    • /api/getMyNeighborhoodV2Iterate/:long/:lat/:loopnum
      This API accepts a longitude and latitude geospatial point and a loop counter. It looks up the neighborhood containing the point that was passed in. Optionally, it will repeat the query n times depending on the setting of :loopnum before returning a JSON object with the name of the neighborhood containing the point that was passed in.
    • /api/getNeighborhoodsIntersectingTriangleIterate/:long/:lat/:loopnum
      The base of the triangle was fixed at these two points [-73.70,40.50], [-73.70, 40.9]. You passed in the apex of the triangle. Varying the apex point would vary the neighborhoods that the triangle intersected. In addition, you had the option of repeating the query n times before returning. The API would return a JSON object with a list of neighborhoods that intersected the triangle.
  • Companies and Inspections collections (all gets)
    • /api/getCompaniesByEmployeesWIteration/:ltnum/:number
      This returns a JSON object with the count and a list of the companies that have more employees than the :ltnum value passed in the API call. The list of companies returned has the following fields: name, number of employees, year founded, and number of products. Similar to other APIs, an option is provided to repeat this query n times (:number) before returning.
  • /api/getInspectionsByZipCodeIteration/:gtzip/:ltzip/:number
    This API returns a list of inspections that occurred in a range of zip code areas. If you want a single zip code area, set the :gtzip and :ltzip values to the same zip code. As before, you have the option of repeating the query n times (:number) before returning. The JSON object returned lists the business name, address zip code, certificate number, and pass/fail indication.
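
A few sample invocations against a running instance are sketched here, assuming the Node.js service is reachable on port 3000 at <host> (for example, through an OpenShift route or the pod IP):

$curl http://<host>:3000/health
$curl http://<host>:3000/api/countRestaurants
$curl http://<host>:3000/api/get10neighborhoodsV2Iterate/5
$curl http://<host>:3000/api/getMyNeighborhoodV2Iterate/-73.93414657/40.82302903/5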

Data set or collection schemas

This section specifically describes the schemas (data structures) for each of the collections.

Neighborhood

This is the schema for the Neighborhood collection.

const neighborhoodSchema = new Schema({
    geometry: {
        coordinates: {
            type: [[[Number]]], // Array of arrays of arrays of numbers
            required: true
        },
        type: {
            type: String,
            enum: ['Polygon'],
            required: true
        }
    },
    name: String
})

Restaurants

This is the schema for the Restaurants collection.

const restaurantSchema = new Schema({
    name: String,
    location: {
        type: {
            type: String,
            enum: ['Point'],
            required: false
        },
        coordinates: {
            type: [Number],
            required: false
        }
    }
})

Inspections

This is the schema for the Inspections collection.

const inspectionsSchema = new Schema({
    id: { type: 'String' },
    certificate_number: { type: 'Number' },
    business_name: { type: 'String' },
    date: { type: 'Date' },
    result: { type: 'String' },
    sector: { type: 'Date' },
    address: {
        city: { type: 'String' },
        zip: { type: 'Number' },
        street: { type: 'String' },
        number: { type: 'Number' }
    }
})

Companies

This is the schema for the Companies collection.

const companiesSchema = new Schema({
  name: {
  type: 'String'
  },
  permalink: {
  type: 'String'
  },
  crunchbase_url: {
  type: 'String'
  },
  homepage_url: {
  type: 'String'
  },
  blog_url: {
  type: 'String'
  },
  blog_feed_url: {
  type: 'String'
  },
  twitter_username: {
  type: 'String'
  },
  category_code: {
  type: 'String'
  },
  number_of_employees: {
  type: 'Number'
  },
  founded_year: {
  type: 'Number'
  },
  founded_month: {
  type: 'Number'
  },
  founded_day: {
  type: 'Number'
  },
  deadpooled_year: {
  type: 'Number'
  },
  tag_list: {
  type: 'String'
  },
  alias_list: {
  type: 'String'
  },
  email_address: {
  type: 'String'
  },
  phone_number: {
  type: 'String'
  },
  description: {
  type: 'String'
  },
  created_at: {
  $date: {
  type: 'Number'
  }
  },
  updated_at: {
  type: 'Date'
  },
  overview: {
  type: 'String'
  },
  image: {
  available_sizes: {
  type: [
  'Array'
  ]
  }
  },
  products: {
  type: [
  'Mixed'
  ]
  },
  relationships: {
  type: [
  'Mixed'
  ]
  },
  competitions: {
  type: [
  'Mixed'
  ]
  },
  providerships: {
  type: 'Array'
  },
  total_money_raised: {
  type: 'String'
  },
  funding_rounds: {
  type: [
  'Mixed'
  ]
  },
  investments: {
  type: 'Array'
  },
  acquisition: {
  price_amount: {
  type: 'Number'
  },
  price_currency_code: {
  type: 'String'
  },
  term_code: {
  type: 'String'
  },
  source_url: {
  type: 'String'
  },
  source_description: {
  type: 'String'
  },
  acquired_year: {
  type: 'Number'
  },
  acquired_month: {
  type: 'Number'
  },
  acquired_day: {
  type: 'Number'
  },
  acquiring_company: {
  name: {
  type: 'String'
  },
  permalink: {
  type: 'String'
  }
  }
  },
  acquisitions: {
  type: 'Array'
  },
  offices: {
  type: [
  'Mixed'
  ]
  },
  milestones: {
  type: [
  'Mixed'
  ]
  },
  video_embeds: {
  type: 'Array'
  },
  screenshots: {
  type: [
  'Mixed'
  ]
  },
  external_links: {
  type: [
  'Mixed'
  ]
  },
  partners: {
  type: 'Array'
  }
})

Activity schema (for to do list microservices)

const activitySchema = new Schema({
    activity_name: String,
    quantity: Number,
    date: {type: Date, default: Date.now}
})

Get more information

To learn more about IBM POWER9 processor-based servers, contact your IBM representative, IBM Business Partner, or visit the following website: https://www.ibm.com/it-infrastructure/power/power9
