Cloud Native Apps

OpenFaaS on RHOCP 4.x – Part 1: Install

By Alexei Karve posted Wed July 07, 2021 08:42 AM

  

Deploying OpenFaaS on Red Hat OpenShift Container Platform for IBM Power ppc64le

Introduction

OpenFaaS is an open-source Function-as-a-Service (FaaS) framework that provides the boilerplate components for setting up a FaaS architecture on top of an OpenShift Kubernetes cluster. OpenFaaS makes it simple to deploy existing code, event-driven functions, and microservices without repetitive boilerplate coding.

This recipe describes the OpenFaaS architecture and components, then shows how to build, deploy, and test OpenFaaS on Red Hat OCP 4 for IBM Power ppc64le. Specifically, this was tested on an IBM® Power® System E880 (9119-MHE) based on POWER8® processor-based technology, with OpenShift version 4.6.23.

Definitions

Serverless is a cloud-computing code execution model in which the cloud provider takes responsibility for running servers and managing compute resources. Physical or virtual servers are automatically deployed in the cloud by the third-party vendor. Serverless abstracts infrastructure concerns like provisioning servers and allocating resources away from developers and gives them to a platform (like Red Hat OpenShift), so developers can focus on writing code and delivering business value.

Kubernetes (also known as k8s) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications. Kubernetes is a core component of OpenShift. OpenShift adds stricter security policies; for example, running a container as root is forbidden. Red Hat OpenShift is designed to help IT development and operations teams work together to deliver and manage microservices-based applications.

Microservices architecture is an approach to building an application out of a set of modular components. In contrast to monolithic architecture, in which all the code is interwoven into one large system, dividing an application into microservices allows developers to create and modify small pieces of code independently. With large monolithic systems, even a minor change to the application requires a substantial deployment process. FaaS eliminates this complexity. Developers are looking for solutions that support building serverless microservices and stateless containers, and applications can be composed of many functions.

Function-as-a-Service (FaaS) is a kind of cloud computing service that allows developers to build, run, and manage application packages as functions without having to maintain their own infrastructure. It is a serverless way to execute modular pieces of code, including on the edge. FaaS lets developers write and update a piece of code on the fly, which can then be executed in response to an event, and a FaaS framework makes it easy to bundle and manage such functions. Using serverless code, developers can focus on writing application code while the serverless provider takes care of server allocation, backend services, and scaling concerns. This typically means a much faster development turnaround.

Functions are event handlers, where the event could be an HTTP call or an asynchronous event triggered from a broker or source. Functions are a unit of compute, like virtual machines and containers. They are easily scalable because they are event-driven rather than resource-driven, and that scalability allows for increased efficiency and value.

ppc64le is the pure little-endian mode of the 64-bit PowerPC architecture. It was introduced with POWER8 as the prime target for technologies provided by the OpenPOWER Foundation, aiming to enable porting of x86 Linux-based software with minimal effort.

OpenFaaS Architecture

In the OpenShift Kubernetes world, OpenFaaS functions are deployments. Each function is built into a container image; when the image is deployed through the OpenFaaS API, it creates Deployment and Service API objects, which in turn create pods. The functions thus run in Kubernetes pods. Invocations of the functions are routed by a gateway, defined by the FaaS system, or by an OpenShift Route. Load-balancing of requests across function instances is handled by the Kubernetes Service, which distributes requests randomly among a function's pods. The API gateway allows functions to be monitored and scaled.

The OpenFaaS Operator comes with an extension to the Kubernetes API that allows you to manage OpenFaaS functions in a declarative manner. The operator implements a control loop that tries to match the desired state of the OpenFaaS functions, defined as a collection of custom resources, with the actual state of the cluster. The CRD format is only needed when using the kubectl/oc command to create functions. The use of Kubernetes primitives means we can use kubectl/oc to check logs, debug, and monitor OpenFaaS functions in the same way as any other Kubernetes resource. We can still use the existing YAML with the faas-cli, and functions created by the faas-cli can still be managed through kubectl/oc commands. There are a few differences between the YAML used by the faas-cli and the YAML used by OpenShift Kubernetes.
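As a sketch of that declarative format (the function name and image below are placeholders, and the apiVersion can vary between faas-netes releases), a Function custom resource looks roughly like this and is applied with oc apply -f:

```yaml
# Hypothetical Function custom resource; name and image are placeholders.
apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: nodeinfo
  namespace: openfaas-fn
spec:
  name: nodeinfo
  image: ghcr.io/openfaas/nodeinfo:latest
```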

The OpenFaaS CLI has a template store with pre-made function templates, most of which are multi-architecture. The templates contain Dockerfiles and boilerplate code that doesn't have to be repeated for each function. You can create your own function in Python, Node.js, Golang, Perl, etc. The code is overlaid with a template to produce a container image, which is then pushed into a container registry. When the function is deployed, the target cluster schedules a pod onto a node and pulls the container image from the registry. The container images are decomposed into layers, making them efficient to push and pull.

OpenFaaS defines a rule in the Prometheus AlertManager that is responsible for scaling the function instances. The AlertManager monitors the Prometheus data-store that collects runtime metrics, and fires scaling events according to the collected metrics. If Prometheus triggers an alert, OpenFaaS scales the function and thus launches additional pods. We can observe the latency of the gateway, the execution time of the functions, and the number of replicas. OpenFaaS may instead use the Horizontal Pod Autoscaler (HPA) of OpenShift. Custom scaling mechanisms can be implemented too, such as auto-scaling based on the number of concurrent in-flight requests.
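If the HPA route is chosen instead, a minimal sketch of an autoscaler targeting a function's Deployment in openfaas-fn might look like the following; the function name pi-function is a placeholder, and autoscaling/v2beta2 matches the Kubernetes level underlying OpenShift 4.6:

```yaml
# Hypothetical HPA for a function Deployment; scales on CPU utilization.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: pi-function
  namespace: openfaas-fn
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pi-function
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```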

OpenFaaS Components

When we deploy OpenFaaS on OpenShift, we get a number of components installed on the cluster. The core components are:

  • Gateway - The API Gateway provides an external route into our functions and collects Cloud Native metrics through Prometheus. The gateway has a built-in UI which can be used to deploy our own functions or functions from the OpenFaaS Function Store, and then invoke them. The gateway scales functions according to demand by altering the replica count in the OpenShift deployment. Custom alerts generated by AlertManager are received on the /system/alert endpoint.
  • Basic Auth Plugin - The API Gateway will delegate authentication of the /system/ routes to a microservice. By default, the basic auth plugin is used.
  • Queue worker - Dequeues asynchronous requests from NATS and executes them.

The following CNCF projects are installed as dependencies:

  • Prometheus provides metrics and enables auto-scaling through AlertManager. It is used to monitor Rate, Error and Duration metrics for each function and for the core services.
  • AlertManager is used to monitor functions for high rates of invocation, and then triggers auto-scaling. When any alert is resolved, the function will scale back to its original minimum replica count.
  • NATS is the Cloud Native Messaging System that provides asynchronous execution and queuing, giving functions a way to execute in the background. OpenFaaS enables long-running tasks or function invocations to run in the background through the use of NATS Streaming. This decouples the HTTP transaction between the caller and the function. The queue-worker processes asynchronous function invocation requests.
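The synchronous and asynchronous invocation paths differ only in the URL prefix: /function/<name> invokes the function directly, while /async-function/<name> queues the request on NATS and the gateway answers immediately with HTTP 202 Accepted. A sketch using the gateway route from this setup and a hypothetical function name:

```shell
# Build the two invocation URLs; gateway host and function name are placeholders.
GATEWAY=http://gateway-external-openfaas.apps.test-cluster.priv
FUNCTION=pi-karve-ppc64le
echo "sync:  $GATEWAY/function/$FUNCTION"
echo "async: $GATEWAY/async-function/$FUNCTION"
# Against a live cluster, an async call would be:
#   curl -i -d "" "$GATEWAY/async-function/$FUNCTION"   # gateway replies 202 Accepted
```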

Configuration items such as timeouts, concurrency levels, retries, feature flags, and log verbosity can be altered through the helm chart or through arkade.

The Function Watchdog, embedded in every function container, allows any container to become serverless. It acts as an entry point that forwards HTTP requests to the target process via standard input (STDIN) and returns the response to the caller from whatever the application writes to standard output (STDOUT). In effect, the watchdog redirects web traffic to your function.
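The STDIN/STDOUT contract can be illustrated with a plain shell pipe, without the watchdog itself: the request body is written to the process's STDIN, and whatever it prints to STDOUT becomes the HTTP response body.

```shell
# Conceptual stand-in for the watchdog's fprocess handling: the "request body"
# goes in on STDIN and the "response body" comes out on STDOUT.
echo "hello openfaas" | tr '[:lower:]' '[:upper:]'
# → HELLO OPENFAAS
```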

The faas-cli is used to build and deploy functions to OpenFaaS. We build OpenFaaS functions from a set of language templates such as CSharp, Node.js, Python, Ruby, and Go: we write the handler, and the CLI creates the container image. The container registry holds each immutable artifact, which can be deployed on OpenFaaS via the API.

Build and Deploy OpenFaaS for Power ppc64le

These instructions assume you have the OpenShift 4.x cluster installed. The steps for deploying and using OpenFaaS include:

1. Install OpenFaaS CLI

2. Build required images and binaries

3. Deploy OpenFaaS via arkade or helm

4. Find your OpenFaaS gateway address

5. Retrieve your gateway credentials

6. Log in, deploy a function, and try out the UI

Building the images and binaries

Expose the OpenShift Container Platform registry manually so that we can log in from outside the cluster using the route address, then tag and push images using podman/docker commands. We may use an external repository like Docker Hub for these images. These instructions use the built-in container image registry default-route-openshift-image-registry.apps.test-cluster.priv as shown below.

oc get routes -n openshift-image-registry

NAME            HOST/PORT                                                  PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   default-route-openshift-image-registry.apps.test-cluster.priv          image-registry   <all>   reencrypt     None

Within OpenShift, these images are accessible at image-registry.openshift-image-registry.svc:5000.
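The same image therefore has two names: the external route is used when logging in and pushing from a build machine, while the internal service address is what deployments inside the cluster reference. A small sketch of the naming, using the gateway image as an example:

```shell
# External route (push from outside) vs. internal service (pull from inside);
# both refer to the same image in the integrated registry.
REGISTRY=default-route-openshift-image-registry.apps.test-cluster.priv
INTERNAL=image-registry.openshift-image-registry.svc:5000
echo "push to: $REGISTRY/openfaas/gateway:latest-dev"
echo "pull from: $INTERNAL/openfaas/gateway:latest-dev"
```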

The images that we will use for the OpenFaaS install on ppc64le are:

  • alertmanager image: prom/alertmanager-linux-ppc64le
  • prometheus image: prom/prometheus-linux-ppc64le
  • basic-auth-plugin image: image-registry.openshift-image-registry.svc:5000/openfaas/basic-auth:latest-dev
  • gateway image: image-registry.openshift-image-registry.svc:5000/openfaas/gateway:latest-dev and image: image-registry.openshift-image-registry.svc:5000/openfaas/faas-netes:latest-dev
  • nats image: image-registry.openshift-image-registry.svc:5000/openfaas/nats-streaming:latest-dev
  • queue-worker image: image-registry.openshift-image-registry.svc:5000/openfaas/nats-queue-worker:latest-dev

When using podman to build the images, you will need to prepend the alpine and golang images with docker.io/library in the FROM lines of the Dockerfiles. You may also need to install golang, and use docker login to log in to your local registry so you can push the images. The alertmanager and prometheus images are available for ppc64le at https://hub.docker.com/r/prom/alertmanager-linux-ppc64le and https://hub.docker.com/r/prom/prometheus-linux-ppc64le respectively. You can skip to the next paragraph, or optionally build them from source as follows:

# install golang
wget https://golang.org/dl/go1.17.linux-ppc64le.tar.gz
tar -C /usr/local -xzf go1.17.linux-ppc64le.tar.gz
export PATH=$PATH:/usr/local/go/bin
oc create ns openfaas # Our images will be pushed to this namespace
docker login default-route-openshift-image-registry.apps.test-cluster.priv # Use the kubeadmin and the token from `oc whoami -t`


# image: image-registry.openshift-image-registry.svc:5000/openfaas/alertmanager:latest-dev

git clone https://github.com/prometheus/alertmanager.git
cd alertmanager
make
mkdir -p .build/linux-ppc64le
cp alertmanager .build/linux-ppc64le/alertmanager
cp amtool .build/linux-ppc64le/amtool
docker build --build-arg ARCH=ppc64le --build-arg OS=linux --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/alertmanager:latest-dev .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/alertmanager:latest-dev --tls-verify=false
cd ..

# image: image-registry.openshift-image-registry.svc:5000/openfaas/prometheus:latest-dev
git clone https://github.com/prometheus/prometheus.git
cd prometheus
#apt-get -y install yarn
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
dnf -y install nodejs
# install npm
make build

make npm_licenses
mkdir -p .build/linux-ppc64le
cp prometheus .build/linux-ppc64le/prometheus
cp promtool .build/linux-ppc64le/promtool
docker build --build-arg ARCH=ppc64le --build-arg OS=linux --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/prometheus:latest-dev .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/prometheus:latest-dev --tls-verify=false
cd ..

Note that in the prometheus Dockerfile, you may need to add the ARG ARCH and OS lines right before the COPY commands:

ARG ARCH="ppc64le"
ARG OS="linux"
COPY .build/${OS}-${ARCH}/prometheus        /bin/prometheus
COPY .build/${OS}-${ARCH}/promtool          /bin/promtool

Build the gateway and basic-auth images from the openfaas/faas github repository.

git clone https://github.com/openfaas/faas.git
cd faas

# image: image-registry.openshift-image-registry.svc:5000/openfaas/gateway:latest-dev
cd gateway
docker build --build-arg BUILDPLATFORM=linux/ppc64le --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/gateway:latest-dev .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/gateway:latest-dev --tls-verify=false

# image: image-registry.openshift-image-registry.svc:5000/openfaas/basic-auth:latest-dev
cd ../auth/basic-auth
docker build --build-arg BUILDPLATFORM=linux/ppc64le --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/basic-auth:latest-dev .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/basic-auth:latest-dev --tls-verify=false
cd ../..

Build the faas-netes from the openfaas/faas-netes github repository.

# image: image-registry.openshift-image-registry.svc:5000/openfaas/faas-netes:latest-dev
git clone https://github.com/openfaas/faas-netes.git
cd faas-netes
docker build --build-arg BUILDPLATFORM=linux/ppc64le --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/faas-netes:latest-dev .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/faas-netes:latest-dev --tls-verify=false
cd ..

Build the nats-streaming image from the nats-io/nats-streaming-server github repository. This requires using a docker/Dockerfile and docker-entrypoint.sh to build the image.

# image: image-registry.openshift-image-registry.svc:5000/openfaas/nats-streaming:latest-dev
git clone https://github.com/nats-io/nats-streaming-server.git
cd nats-streaming-server
mkdir docker

# Create docker/Dockerfile and docker-entrypoint.sh
docker build --build-arg TARGETPLATFORM=linux/ppc64le --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/nats-streaming:latest-dev -f docker/Dockerfile .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/nats-streaming:latest-dev --tls-verify=false
cd ..

Build the nats-queue-worker image from the openfaas/nats-queue-worker github repository.

# image: image-registry.openshift-image-registry.svc:5000/openfaas/nats-queue-worker:latest-dev
git clone https://github.com/openfaas/nats-queue-worker.git
# Comment out the following line in the Dockerfile if present; it causes an error on ppc64le because /scratch-tmp is empty

#COPY --from=golang --chown=app:app /scratch-tmp /tmp
docker build --build-arg http_proxy=http://10.3.0.3:3128 --build-arg https_proxy=http://10.3.0.3:3128 -t default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/nats-queue-worker:latest-dev .
docker push default-route-openshift-image-registry.apps.test-cluster.priv/openfaas/nats-queue-worker:latest-dev --tls-verify=false
cd ..

For installation of OpenFaaS, we can use helm or optionally the arkade installer. If you want to install using arkade, you need to build arkade for ppc64le.

git clone https://github.com/alexellis/arkade.git
cd arkade
# modify Makefile under dist, remove the rest of the builds
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build -ldflags $(LDFLAGS) -a -installsuffix cgo -o bin/arkade
make
# Creates bin/arkade
sudo mv bin/arkade /usr/local/bin
cd ..

Build the faas-cli for ppc64le and copy it to /usr/local/bin:

git clone https://github.com/openfaas/faas-cli.git
cd faas-cli
# You may need to change Dockerfile and optionally Dockerfile.redist
-FROM teamserverless/license-check:0.3.9 as license-check
+FROM teamserverless/license-check:latest-ppc64le as license-check

# modify build.sh to add --build-arg BUILDPLATFORM=linux/ppc64le in both docker build lines
#./build.sh
# or
docker build --build-arg BUILDPLATFORM=linux/ppc64le --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy --target release -t openfaas/faas-cli:latest-dev .
docker create --name faas-cli openfaas/faas-cli:latest-dev && docker cp faas-cli:/usr/bin/faas-cli . && docker rm -f faas-cli
sudo mv faas-cli /usr/local/bin
cd ..

Build the watchdog: The watchdog is used in the Dockerfiles of functions. You do not need to build this; instead, you can use docker.io/powerlinux/classic-watchdog:latest-dev-ppc64le as the watchdog. However, if you build it, you can copy bin/fwatchdog-ppc64le into the template folder alongside the Dockerfile for functions. This will be explained in a later section when we build functions.

git clone https://github.com/openfaas/classic-watchdog.git
cd classic-watchdog
# modify Makefile under dist
GOARCH=ppc64le CGO_ENABLED=0 GOOS=linux go build -mod=vendor -a -ldflags $(LDFLAGS) -installsuffix cgo -o bin/fwatchdog-ppc64le
make
ls bin/fwatchdog-ppc64le
cd ..

git clone https://github.com/openfaas/of-watchdog.git
cd of-watchdog
# modify Makefile under dist
GOARCH=ppc64le CGO_ENABLED=0 GOOS=linux go build -mod=vendor -a -ldflags "-s -w -X main.Version=0.8.4-1-g989ac5f-dirty-1623851326 -X main.GitCommit=989ac5f0d2b4560d7b1d9f18d0231449527cc47c" -installsuffix cgo -o bin/fwatchdog-ppc64le
make
ls bin/fwatchdog-ppc64le
cd ..

The classic watchdog has historically been used for all of the official OpenFaaS templates, but the of-watchdog, which provides alternatives to STDIO for communication between the watchdog and the function, is now becoming more popular.

We can deploy OpenFaaS to OpenShift using the CLI installer arkade or with the standard helm chart. Both mechanisms are described in the following sections. This will be followed by the Examples section, where we will build and deploy functions. We follow the recommended install of faas-netes, which means that OpenFaaS is deployed into two namespaces:

  1. openfaas for the core components (ui, gateway, etc)
  2. openfaas-fn for the function deployments

To allow working with Function CRDs, we deploy the alternative controller to faas-netes, named the OpenFaaS Operator, which offers tighter integration with Kubernetes through CustomResourceDefinitions.

Install using arkade

Create the secret for Docker Hub - You may want to create a secret for Docker Hub to prevent the "Dockerhub limit reached" error. You will need to retrieve the token from https://hub.docker.com/settings/security (Account Settings - Security - New Access Token).

oc create secret docker-registry docker --docker-server=docker.io --docker-username=$user --docker-password=$token --docker-email=$email -n openfaas

The following install-with-arkade.sh may be used to install OpenFaaS. The arkade binary installs OpenFaaS to OpenShift Kubernetes using its official helm chart, and is the easiest and quickest way to get up and running. The TIMEOUT in the script should be set to the maximum time you expect the functions to take to service requests; I have set it to 600 seconds. The clusterRole may be set to true to work with multiple namespaces as described at https://docs.openfaas.com/reference/namespaces/#configure-openfaas-with-additional-permissions. The memory request for alertmanager is set to a higher value to prevent the error caused by tight container limits: "read init-p: connection reset by peer". The --operator flag installs OpenFaaS with the Operator; the OpenFaaS Operator runs as a sidecar in the gateway pod. Linking the secrets to the service accounts is required if you get the "Dockerhub limit reached" error pulling images from Docker Hub. The openfaas-controller serviceaccount is created for the default faas-netes controller, while openfaas-operator is for the Operator.

install-with-arkade.sh

#!/bin/bash
export TIMEOUT=600s # Increase this if your functions take more time
# To use the prebuilt Docker Hub images instead of the locally built
# prometheus/alertmanager images, replace the corresponding --set lines below with:
#   --set prometheus.image=prom/prometheus-linux-ppc64le \
#   --set alertmanager.image=prom/alertmanager-linux-ppc64le \
# Add --set clusterRole=true to work with multiple namespaces.
arkade install openfaas --set securityContext=false --set gateway.upstreamTimeout=$TIMEOUT \
 --set gateway.writeTimeout=$TIMEOUT \
 --set gateway.readTimeout=$TIMEOUT \
 --set faasnetes.writeTimeout=$TIMEOUT \
 --set faasnetes.readTimeout=$TIMEOUT \
 --set queueWorker.ackWait=$TIMEOUT \
 --operator \
 --set gateway.directFunctions=false \
 --set nats.image=image-registry.openshift-image-registry.svc:5000/openfaas/nats-streaming:latest-dev \
 --set gateway.image=image-registry.openshift-image-registry.svc:5000/openfaas/gateway:latest-dev \
 --set basicAuthPlugin.image=image-registry.openshift-image-registry.svc:5000/openfaas/basic-auth:latest-dev \
 --set faasnetes.image=image-registry.openshift-image-registry.svc:5000/openfaas/faas-netes:latest-dev \
 --set operator.image=image-registry.openshift-image-registry.svc:5000/openfaas/faas-netes:latest-dev \
 --set queueWorker.image=image-registry.openshift-image-registry.svc:5000/openfaas/nats-queue-worker:latest-dev \
 --set prometheus.image=image-registry.openshift-image-registry.svc:5000/openfaas/prometheus:latest-dev \
 --set alertmanager.image=image-registry.openshift-image-registry.svc:5000/openfaas/alertmanager:latest-dev \
 --set alertmanager.resources.requests.memory=250Mi \
 --set alertmanager.resources.limits.memory=500Mi
sleep 3
oc get serviceaccounts -n openfaas | grep openfaas
# Ignore any "serviceaccounts not found" errors below
oc secrets link openfaas-prometheus docker --for=pull -n openfaas
oc secrets link openfaas-operator docker --for=pull -n openfaas
oc secrets link openfaas-controller docker --for=pull -n openfaas
oc secrets link default docker --for=pull -n openfaas

Run the script install-with-arkade.sh

chmod +x install-with-arkade.sh
./install-with-arkade.sh

If arkade uses the wrong platform for helm, download the ppc64le version into the .arkade/bin directory and run the install script again.

wget https://get.helm.sh/helm-v3.6.3-linux-ppc64le.tar.gz
tar -zxvf helm-v3.6.3-linux-ppc64le.tar.gz
mv linux-ppc64le/helm /root/.arkade/bin/helm

Run the following command to print the commands for retrieving the password. Then expose the route and get the OpenFaaS URL.

arkade info openfaas
oc expose service/gateway-external -n openfaas # Expose the route
oc get routes -n openfaas

oc edit role openfaas-operator-rw -n openfaas-fn # Edit the role as mentioned in Installation Issue 1 below


Install using helm

Export the PASSWORD to a suitable value. It will be used to create a secret named basic-auth, and will be required with the faas-cli to connect to OpenFaaS later. Modify the file chart/openfaas/templates/operator-rbac.yaml to add the rule mentioned in the Installation Issues section. Use the values.yaml for detailed configuration. Package the chart and install it using the helm upgrade command. Setting operator.create=true deploys OpenFaaS with the Operator. Parameters are specified using the "--set key=value[,key=value]" argument to helm install.

git clone https://github.com/openfaas/faas-netes.git
export PASSWORD=$(head -c 12 /dev/urandom | sha1sum | cut -d' ' -f1)
cd chart && helm package openfaas/ && helm package kafka-connector/ && helm package cron-connector/ && helm package nats-connector/ && helm package mqtt-connector/
cd ..
oc create ns openfaas
oc create ns openfaas-fn
oc -n openfaas create secret generic basic-auth --from-literal=basic-auth-user=admin --from-literal=basic-auth-password="$PASSWORD"
helm upgrade --install openfaas chart/openfaas-7.3.0.tgz --namespace openfaas --set gateway.replicas=1 --set queueWorker.replicas=1 --set serviceType=NodePort --set openfaasImagePullPolicy=IfNotPresent --set faasnetes.imagePullPolicy=Always --set basicAuthPlugin.replicas=1 --set basic_auth=true --set clusterRole=false --set gateway.directFunctions=false --set ingressOperator.create=false --set queueWorker.maxInflight=1 --set securityContext=false --set operator.create=true

oc get serviceaccounts -n openfaas | grep openfaas
oc secrets link openfaas-prometheus docker --for=pull -n openfaas
oc secrets link openfaas-operator docker --for=pull -n openfaas
oc secrets link openfaas-controller docker --for=pull -n openfaas
oc secrets link default docker --for=pull -n openfaas

oc -n openfaas get deployments -l "release=openfaas, app=openfaas" -w
oc get events -n openfaas-fn --sort-by=.metadata.creationTimestamp

If there are any ImagePullBackOff errors on the pods, fix the problem with the image not being available or accessible by adding and linking the secrets, adding the required proxy, or fixing the namespace where the image is pushed. Then delete the pods; they will get recreated. After installation is complete, look at the CRDs functions.openfaas.com and profiles.openfaas.com.

oc get customresourcedefinitions.apiextensions.k8s.io | grep faas

You can retrieve the password from the secret:

PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)


Installation Issues

1. Forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on (https://github.com/openfaas/faas-netes/issues/807). Edit the openfaas-operator-rw role in the openfaas-fn namespace (if installed with clusterRole=false) or the openfaas-operator-controller clusterrole (if installed with clusterRole=true). Add the rule below:

  - apiGroups:
    - openfaas.com
    resources:
    - '*'
    verbs:
    - update

If you deployed the functions before fixing this problem, the "faas-cli list" command will show Replicas 0 for all the functions, which indicates that you need to look at the gateway log for this problem.

oc logs deployment/gateway -c operator -n openfaas

After editing the role, you will need to delete the functions and deploy them again.


 
2. Error in the name of component openfaas-operator

component: openaas-operator should be changed to component: openfaas-operator in the template https://github.com/openfaas/faas-netes/blob/master/chart/openfaas/templates/operator-rbac.yaml#L113

3. Error: unexpected EOF - Larger timeouts
Annotate the gateway-external route with a larger timeout; we set it to 600 seconds as follows:
    oc annotate route gateway-external --overwrite haproxy.router.openshift.io/timeout=600s -n openfaas

Edit /etc/haproxy/haproxy.cfg and update the client, server and queue timeouts to a larger value. We set them to 10m as follows to test long-running functions:

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           10m
    timeout connect         10s
    timeout client          10m
    timeout server          10m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000


Uninstalling OpenFaaS

You can continue to test with the examples below, or use a helm command to delete OpenFaaS for both of the above installation methods, arkade and helm:

    helm delete openfaas --namespace openfaas

All control plane components can be cleaned up with the above helm command. Other associated objects may be deleted by deleting the namespaces as follows:

oc delete namespace openfaas openfaas-fn

Prebuilt Images

Prebuilt images that may be used for installation are available for ppc64le as follows:
- docker.io/karve/faas-cli:latest-dev-ppc64le
- docker.io/karve/nats-queue-worker:latest-dev-ppc64le
- docker.io/karve/nats-streaming:latest-dev-ppc64le
- docker.io/karve/faas-netes:latest-dev-ppc64le
- docker.io/karve/basic-auth:latest-dev-ppc64le
- docker.io/karve/gateway:latest-dev-ppc64le
- docker.io/karve/prometheus:latest-dev-ppc64le
- docker.io/karve/alertmanager:latest-dev-ppc64le

Build and Deploy Serverless Functions with OpenFaaS - Examples

There are three ways to create a function for OpenFaaS:

1. Using an OpenFaaS stack.yml file https://docs.openfaas.com/reference/yaml/
2. CLI deployment without any YAML files
3. Using a Function Custom Resource (CR), when OpenFaaS has been installed with its Function Custom Resource Definition (CRD) and the faas-netes controller uses the Operator mode.

We will complete this recipe by showing the first two methods. In Part 2, we will cover the third method.

We can expose the gateway-external route for openfaas. In my setup, the route is http://gateway-external-openfaas.apps.test-cluster.priv as shown below.

oc get svc -n openfaas gateway-external
oc get routes -n openfaas

NAME               HOST/PORT                                     PATH   SERVICES           PORT   TERMINATION   WILDCARD
gateway-external   gateway-external-openfaas.apps.test-cluster.priv          gateway-external   http                 None

Export the OPENFAAS_URL so that we don't have to specify the --gateway parameter on every faas-cli command, then log in and list the functions.

export OPENFAAS_URL=http://gateway-external-openfaas.apps.test-cluster.priv
faas-cli login --password $PASSWORD
faas-cli list


The following figure shows where the OpenFaaS components and functions are installed.

Using an OpenFaaS stack.yml file

Print the value of Pi to a fixed accuracy of 100 digits

This section shows the first method of creating a function, using an OpenFaaS stack.yml file, to print the value of Pi. The dockerfile-ppc64le function template contains a Dockerfile based on the alpine:3.12 image, which will be updated to install perl. The ENV is set as follows to compute the value of Pi to a fixed accuracy of 100 digits:

ENV fprocess='perl -Mbignum=bpi -wle print(bpi(100))'

The Dockerfile in dockerfile-ppc64le refers to the classic-watchdog ppc64le binary, which can either be compiled locally or pulled from docker.io/powerlinux:

#FROM ghcr.io/openfaas/classic-watchdog:0.1.5 as watchdog
FROM docker.io/powerlinux/classic-watchdog:latest-dev-ppc64le as watchdog

If you build the watchdog binary and copy it to the local directory, you can use the following line:

COPY fwatchdog-ppc64le /usr/bin/fwatchdog

If you use the powerlinux/classic-watchdog, you can use

COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
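Putting these pieces together, the template's Dockerfile ends up roughly as follows. This is a hedged sketch, not the exact template contents; the apk packages and final instructions are assumptions based on the steps in this recipe:

```dockerfile
# Sketch of the dockerfile-ppc64le template Dockerfile (approximate).
FROM docker.io/powerlinux/classic-watchdog:latest-dev-ppc64le as watchdog

FROM alpine:3.12
# Copy the watchdog binary from the builder stage
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
# perl is appended to the template's "apk add" line as described below
RUN apk add --no-cache perl
# The watchdog forks this process for each request
ENV fprocess='perl -Mbignum=bpi -wle print(bpi(100))'
EXPOSE 8080
CMD ["fwatchdog"]
```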

The following snippet shows the commands to build, deploy, and invoke this function on OpenFaaS. After a new function is created from the template, the Dockerfile is modified to append perl to the "RUN apk add" line, and the lang field in the function's YAML is changed to dockerfile.

export OPENFAAS_PREFIX=karve
export OPENFAAS_URL=http://gateway-external-openfaas.apps.test-cluster.priv
faas-cli new pi-$OPENFAAS_PREFIX-ppc64le --lang dockerfile-ppc64le # --prefix=$OPENFAAS_PREFIX
vi pi-$OPENFAAS_PREFIX-ppc64le/Dockerfile # Append perl to apk add. It will install perl 5.30.3-r0
sed -i "s/lang: dockerfile-ppc64le/lang: dockerfile/" pi-$OPENFAAS_PREFIX-ppc64le.yml # Change lang: dockerfile
#vi pi-$OPENFAAS_PREFIX-ppc64le.yml # Change lang: dockerfile
faas-cli build -f pi-$OPENFAAS_PREFIX-ppc64le.yml
docker login -u $OPENFAAS_PREFIX
docker push $OPENFAAS_PREFIX/pi-$OPENFAAS_PREFIX-ppc64le
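After these edits, the generated stack file looks roughly like this. The values shown are examples assuming OPENFAAS_PREFIX=karve; the actual file is produced by faas-cli new above:

```yaml
# Approximate contents of pi-karve-ppc64le.yml (sketch, values assumed)
version: 1.0
provider:
  name: openfaas
  gateway: http://gateway-external-openfaas.apps.test-cluster.priv
functions:
  pi-karve-ppc64le:
    lang: dockerfile
    handler: ./pi-karve-ppc64le
    image: karve/pi-karve-ppc64le:latest
```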

Test locally on docker

docker run --rm -p 8081:8080 -d --name test-this $OPENFAAS_PREFIX/pi-$OPENFAAS_PREFIX-ppc64le
curl http://127.0.0.1:8081

Output

3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068

Stop and delete the container

    docker stop test-this # It will get deleted because it was started with --rm

Deploy and invoke the function on OpenShift.

# Deploy the function using faas-cli yaml
faas-cli deploy -f ./pi-$OPENFAAS_PREFIX-ppc64le.yml
faas-cli list
# Invoke the function using faas-cli
echo "" | faas-cli invoke pi-$OPENFAAS_PREFIX-ppc64le --gateway $OPENFAAS_URL
# Invoke the function using curl
curl $OPENFAAS_URL/function/pi-$OPENFAAS_PREFIX-ppc64le
# Delete the function
faas-cli delete pi-$OPENFAAS_PREFIX-ppc64le
# This may take a few seconds to get deleted


Print the value of Euler's number to a fixed accuracy of 100 digits

Let's modify the function to print Euler's number instead of Pi.

cp pi-$OPENFAAS_PREFIX-ppc64le.yml exp-$OPENFAAS_PREFIX-ppc64le.yml
cp -r pi-$OPENFAAS_PREFIX-ppc64le exp-$OPENFAAS_PREFIX-ppc64le
sed -i "s/pi-$OPENFAAS_PREFIX-ppc64le/exp-$OPENFAAS_PREFIX-ppc64le/g" exp-$OPENFAAS_PREFIX-ppc64le.yml

Replace bpi(100) in the ENV line of exp-$OPENFAAS_PREFIX-ppc64le/Dockerfile with bexp(1,100), which computes e raised to the power 1 (Euler's number) to 100 digits:

ENV fprocess='perl -Mbignum=bexp -wle print(bexp(1,100))'

Build and deploy again

faas-cli build -f exp-$OPENFAAS_PREFIX-ppc64le.yml
#docker push $OPENFAAS_PREFIX/exp-$OPENFAAS_PREFIX-ppc64le
faas-cli registry-login -u $OPENFAAS_PREFIX --password $DOCKERHUB_PASSWORD
faas-cli push -f exp-$OPENFAAS_PREFIX-ppc64le.yml
# Deploy the function using faas-cli yaml
faas-cli deploy -f ./exp-$OPENFAAS_PREFIX-ppc64le.yml
faas-cli list

Invoke functions with faas-cli and curl

# Invoke the function using faas-cli
echo "" | faas-cli invoke exp-$OPENFAAS_PREFIX-ppc64le --gateway $OPENFAAS_URL
# Invoke the function using curl
curl http://gateway-external-openfaas.apps.test-cluster.priv/function/exp-$OPENFAAS_PREFIX-ppc64le -d "Euler's number"

Output

2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427


Delete the function

    faas-cli delete exp-$OPENFAAS_PREFIX-ppc64le
    # This may take a few seconds to get deleted


Auto scaling

We can test autoscaling by invoking the function multiple times in a loop. Note that the body of the inner for loop runs in the background. Replace bpi(100) with bpi(2000) in the ENV line (rebuilding and pushing the image) to increase the load. You can experiment with different accuracy values and loop counts below.

    faas-cli deploy -f ./pi-$OPENFAAS_PREFIX-ppc64le.yml --label com.openfaas.scale.max=10 --label com.openfaas.scale.min=1
    faas-cli describe pi-$OPENFAAS_PREFIX-ppc64le --gateway $OPENFAAS_URL
    for i in {0..100}; do for j in {0..20}; do echo "" | faas-cli invoke pi-$OPENFAAS_PREFIX-ppc64le --gateway $OPENFAAS_URL && echo& done;sleep 2;done
    watch "faas-cli describe pi-$OPENFAAS_PREFIX-ppc64le --gateway $OPENFAAS_URL;oc get pods -n openfaas-fn"
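The nested loop above can be wrapped in a small reusable helper. run_load below is a hypothetical function (not part of faas-cli) that fires batches of background invocations with a pause between bursts:

```shell
# Hypothetical load-generation helper: runs the given command BATCH_SIZE times
# per batch, BATCHES batches in total, each invocation in the background.
run_load() {
  batches=$1
  batch_size=$2
  shift 2
  i=0
  while [ "$i" -lt "$batches" ]; do
    j=0
    while [ "$j" -lt "$batch_size" ]; do
      "$@" &            # fire one invocation in the background
      j=$((j + 1))
    done
    wait                # let the batch finish before the next burst
    sleep 1             # pause between bursts (cf. the "sleep 2" above)
    i=$((i + 1))
  done
}

# Example substituting your function invocation:
# run_load 100 20 sh -c 'echo "" | faas-cli invoke pi-...-ppc64le --gateway $OPENFAAS_URL'
run_load 2 3 echo invoked   # harmless demo: prints "invoked" six times
```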

Prometheus is a time-series database used by OpenFaaS to track the requests per second being sent to an individual function along with the success and failure of those requests and their latency. This can be referred to as rate, error and duration (RED) metrics. Prometheus does not come with any kind of authentication, so it is not exposed. Forward the Prometheus port 9090 to your local computer and browse to http://localhost:9090

kubectl port-forward -n openfaas svc/prometheus 9090:9090

We can enter an expression in the Prometheus query language PromQL to explore the time-series and what data has been recorded. In Prometheus, graph the following:

    rate(gateway_function_invocation_total{code="200"} [20s])

Instead of the bash for loop to generate load, we can use the tool hey. The -c option simulates 20 concurrent users, -z runs the test for 5m and then stops, and -q rate-limits each worker to 100 queries per second (QPS).

hey -z=5m -q 100 -c 20 -m POST -d=Test http://gateway-external-openfaas.apps.test-cluster.priv/function/pi-$OPENFAAS_PREFIX-ppc64le

You will see the function scale up to 10 replicas, as set by the com.openfaas.scale.max=10 label.

Name:                pi-karve-ppc64le
Status:              Ready
Replicas:            10
Available replicas:  10
Invocations:         21497
Image:               karve/pi-karve-ppc64le:latest
Function process:
URL:                 http://gateway-external-openfaas.apps.test-cluster.priv/function/pi-karve-ppc64le
Async URL:           http://gateway-external-openfaas.apps.test-cluster.priv/async-function/pi-karve-ppc64le
Labels:              com.openfaas.scale.min : 1
                     com.openfaas.scale.max : 10
Annotations:

NAME                               READY   STATUS    RESTARTS   AGE
pi-karve-ppc64le-dfb9999f5-hgsst   1/1     Running   0          3m13s
pi-karve-ppc64le-dfb9999f5-kmbzj   1/1     Running   0          3m53s
pi-karve-ppc64le-dfb9999f5-n96xn   1/1     Running   0          4m33s
pi-karve-ppc64le-dfb9999f5-rbqbn   1/1     Running   0          4m32s
pi-karve-ppc64le-dfb9999f5-rfvp6   1/1     Running   0          2m33s
pi-karve-ppc64le-dfb9999f5-rhcbm   1/1     Running   0          113s
pi-karve-ppc64le-dfb9999f5-sxfsc   1/1     Running   0          3m13s
pi-karve-ppc64le-dfb9999f5-v2dxx   1/1     Running   0          2m33s
pi-karve-ppc64le-dfb9999f5-xz8j9   1/1     Running   0          3m53s
pi-karve-ppc64le-dfb9999f5-zfnqh   1/1     Running   0          16m

The function will scale back down to 1 replica within about 5 minutes after hey finishes executing the load test.

Let's delete this function to prepare for deployment without a YAML file.

    faas-cli delete pi-$OPENFAAS_PREFIX-ppc64le

CLI deployment without any YAML files

Deploy the function pi-ppc64le using the image created in the earlier section to compute Pi. Note that the function is now named pi-ppc64le, so the invocation URL changes accordingly.

    faas-cli deploy --name pi-ppc64le --image $OPENFAAS_PREFIX/pi-$OPENFAAS_PREFIX-ppc64le:latest --gateway http://gateway-external-openfaas.apps.test-cluster.priv
    curl http://gateway-external-openfaas.apps.test-cluster.priv/function/pi-ppc64le -d "Pi"

Output

Pi 3.141592653589793238462643383279502884197

Delete the function

faas-cli delete pi-ppc64le --gateway http://gateway-external-openfaas.apps.test-cluster.priv

Conclusion

In this recipe, we built and deployed OpenFaaS on ppc64le and showed how to deploy and invoke functions in Perl that compute Pi or Euler's number. We also showed autoscaling with AlertManager and Prometheus. In Part 2, we will create functions with the Function Custom Resource and use the HPA for autoscaling.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your serverless applications on OpenShift using OpenFaaS and if you would like to see something covered in more detail.
