Modern applications that are built and run using containers offer significant benefits in development and operational agility. However, developers who are new to this space can find building containerized applications daunting compared to more familiar techniques, because there is a range of new considerations to take into account.
This tutorial provides a detailed worked example of how to build a Golang application container from first principles that connects to an IBM MQ queue manager running in the Cloud Pak for Integration (on OpenShift), using the IBM MQ “Golang JMS” client to provide a simple, developer-friendly programming interface to IBM MQ.
The goal of the tutorial is to illustrate the key steps for building a Golang application container that talks to IBM MQ, and to describe why each step is necessary, so that you can apply the same knowledge when building your own applications in other scenarios. For example, the same steps can also be used to build an application that uses the native MQ Golang MQI-style programming interface. At the end of each step I will link to a GitHub commit that provides a working example up to that point, so you can see how the application builds up over time.
The motivation for this tutorial came directly from a customer question, so I hope that publishing a worked example here will be beneficial to others as well!
TL;DR?
This tutorial is a set of guided steps that work through the “what” and the “why” of building a container-based Golang application to talk to IBM MQ.
If you want to skip to the end game and just use a pre-built template that handles everything for you, then you can find that in the short instructions for the IBM MQ Golang JMS OpenShift application sample that accompanies this tutorial!
Overview of the tutorial steps
This tutorial is broken up into a series of discrete steps that work through the sequence of actions we need to take in order to build a working IBM MQ Golang application that runs in OpenShift with the Cloud Pak for Integration.
The specific sequence of actions that we will work through is as follows;
- Build a Hello World Golang application that runs in a local Docker container
- Add the IBM MQ Golang JMS client libraries
- Run the application container in an OpenShift cluster on IBM Cloud with the “anyuid” SCC (security profile)
- Modify the container so that it runs in the most secure OpenShift “restricted” SCC
- Update the application so that it consumes variables such as queue manager name, username and password from an OpenShift ConfigMap and Secret
- Change the configuration so that the application connects to an IBM MQ queue manager running in the same OpenShift cluster as part of the Cloud Pak for Integration
Step 1: Build a Hello World Golang application that runs in a local Docker container
There are a number of useful references for building a basic Docker container application, starting with the list of Golang images and instructions on DockerHub, but I have found that one of the best-explained resources is a blog post on the Semaphore CI community called How To Deploy a Go Web Application with Docker.
Using the Semaphore blog post as inspiration, it’s relatively straightforward to build up a working project structure for a simple Hello World application that we can run locally in Docker, starting with the following key content;
openshift-app-sample
├── Dockerfile
└── src
└── main.go
At this point our “main.go” file is a literal Hello World example;
package main

import "fmt"

func main() {
    fmt.Println("Hello World!!!")
}
and the Dockerfile defines a multi-stage build, where the first stage compiles the Golang application into a binary, and the second stage packages the compiled binary into a slimmed-down container image that contains only what we need to run it;
FROM golang:1.15 as builder
. . .
COPY src/ .
RUN go build -o openshift-app-sample
. . .
FROM golang:1.15
. . .
COPY --chown=0:0 --from=builder $APP_HOME/openshift-app-sample $APP_HOME
CMD ["./openshift-app-sample"]
(abridged Dockerfile)
To build and run the container locally we execute the following commands;
cd openshift-app-sample
docker build -t openshift-app-sample -f Dockerfile .
docker run -it openshift-app-sample
Hello World!!!
Complete code samples containing the files at the end of this step can be found in github.com commit b2af4bd.
Step 2: Add the IBM MQ Golang JMS client libraries
The IBM MQ Golang libraries allow us to write applications that send and receive messages to an IBM MQ queue manager. The libraries are based on the IBM MQ native C client and come in two flavors;
- MQ Golang – a traditional MQI-style library that provides access to the full set of features of the IBM MQ client API, for users who are comfortable with the C-style MQI interface
- MQ Golang JMS – an abstraction layer that presents a JMS 2.0-style programming interface, which is simpler to use than the MQI interface and will be familiar to anyone with Java JMS programming experience
The instructions in this tutorial work equally well for either programming style, but for simplicity we will use the Golang JMS style interface as our demonstration.
Our first step is to update the main.go application so that it will try to create a connection to a queue manager. For now we will hard code some placeholder values for the parameters such as hostname and port, which we’ll fill in properly later.
package main

import (
    "fmt"
    "log"

    "github.com/ibm-messaging/mq-golang-jms20/mqjms"
)

func main() {

    fmt.Println("Beginning world!!!")

    cf := mqjms.ConnectionFactoryImpl{
        QMName:      "QM1",
        Hostname:    "myhostname.com",
        PortNumber:  1414,
        ChannelName: "SYSTEM.DEF.SVRCONN",
        UserName:    "username",
        Password:    "password",
    }

    context, errCtx := cf.CreateContext()
    if context != nil {
        defer context.Close()
    }

    if errCtx != nil {
        log.Fatal(errCtx)
    }

    fmt.Println("Ending world!!!")
}
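Nothing more than CreateContext is needed for the rest of this tutorial, but for reference, once the context has been created successfully the JMS-style API can be used to send and receive messages along the following lines. This is a hedged sketch based on the mq-golang-jms20 samples – the queue name DEV.QUEUE.1 is an assumption and must exist on your queue manager – and the lines would slot into main() after the CreateContext call;
// Sketch only: assumes the context above was created successfully and that
// a queue called DEV.QUEUE.1 is defined on the queue manager.
queue := context.CreateQueue("DEV.QUEUE.1")

// Send a text message to the queue
errSend := context.CreateProducer().SendString(queue, "Hello from Golang!")
if errSend != nil {
    log.Fatal(errSend)
}

// Create a consumer for the same queue, remembering to clean it up afterwards
consumer, errCons := context.CreateConsumer(queue)
if consumer != nil {
    defer consumer.Close()
}
if errCons != nil {
    log.Fatal(errCons)
}

// Receive the message back again (a nil body means no message was available)
rcvBody, errRcv := consumer.ReceiveStringBodyNoWait()
if errRcv != nil {
    log.Fatal(errRcv)
}
if rcvBody != nil {
    fmt.Println("Received message:", *rcvBody)
}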
Then we want to create the Go module descriptors go.mod and go.sum to reflect the new dependency on the mq-golang-jms20 package;
cd openshift-app-sample/src
go mod init
go build
(this causes go.mod and go.sum to be created using our local Golang build environment)
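For reference, the generated go.mod ends up looking roughly like the following. This is a sketch rather than the exact file from the sample repository – the module name depends on what go mod init chooses (or what you pass to it), and the version numbers depend on when you run the commands;
module openshift-app-sample

go 1.15

require github.com/ibm-messaging/mq-golang-jms20 v1.2.0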
Now if we try to build the container image again with the additional references to mq-golang-jms20, we’ll find that it fails as follows;
Step 9/18 : RUN go build -o openshift-app-sample
---> Running in 2a596bcf9420
go: downloading github.com/ibm-messaging/mq-golang-jms20 v1.2.0
go: downloading github.com/ibm-messaging/mq-golang/v5 v5.0.0
# github.com/ibm-messaging/mq-golang/v5/ibmmq
/go/pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.0.0/ibmmq/mqi.go:54:10: fatal error: cmqc.h: No such file or directory
#include <cmqc.h>
^~~~~~~~
compilation terminated.
The command '/bin/sh -c go build -o openshift-app-sample' returned a non-zero code: 2
This is because the MQ Golang libraries depend on the native C client (including “cmqc.h”) so the next step is to update our “builder” container to install those libraries. To do that we will take inspiration from the Dockerfile of the mq-golang library which downloads the MQ Redistributable Client and installs a subset of packages for our use.
# Location of the downloadable MQ client package
ENV RDURL="https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqdev/redist" \
    RDTAR="IBM-MQC-Redist-LinuxX64.tar.gz" \
    VRMF=9.2.0.1

# Install the MQ client from the Redistributable package. This also contains the
# header files we need to compile against. Set up the subset of the package
# we are going to keep - the genmqpkg.sh script removes unneeded parts
ENV genmqpkg_incnls=1 \
    genmqpkg_incsdk=1 \
    genmqpkg_inctls=1

RUN cd /opt/mqm \
    && curl -LO "$RDURL/$VRMF-$RDTAR" \
    && tar -zxf ./*.tar.gz \
    && rm -f ./*.tar.gz \
    && bin/genmqpkg.sh -b /opt/mqm
We will then copy the output from that builder container into our runnable image;
COPY --chown=0:0 --from=builder /opt/mqm /opt/mqm
Now when we run our container locally in Docker we get the following output, indicating that the calls to the MQ client library were invoked successfully but, as we expect, the connection couldn’t be created because our placeholder “myhostname.com” doesn’t exist!
Beginning world!!!
2021/02/21 20:44:24 {errorCode=2538, reason=MQRC_HOST_NOT_AVAILABLE, linkedErr=MQCONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_HOST_NOT_AVAILABLE [2538]}
Complete code samples containing the full set of files at the end of this step can be found in github.com commit 41a5cee.
Step 3: Run the application container in an OpenShift cluster on IBM Cloud with the “anyuid” SCC (security profile)
Now that we’ve got an application container running locally in Docker it’s time to move up to using a real OpenShift cluster. For the purposes of this tutorial I’m going to use a cluster running on IBM Cloud in the UK South (London) region which I have created outside of these instructions.
To start with we’ll push our Docker image to the IBM Cloud Container Registry from which it can be loaded onto our OpenShift cluster;
# Follow the usual IBM Cloud login process
ibmcloud login
# Log in to the Container Registry and create a new namespace
ibmcloud cr login
ibmcloud cr region-set uk-south
ibmcloud cr namespace-add golang-sample
# Tag the image locally and push to the IBM Cloud Container Registry
docker tag golang-app uk.icr.io/golang-sample/golang-app:1.0
docker push uk.icr.io/golang-sample/golang-app:1.0
# Check that the image is uploaded as expected
ibmcloud cr image-list --restrict golang-sample
Listing images..
Repository Tag Digest Namespace Created Size
uk.icr.io/golang-sample/golang-app 1.0 b9fedf6601fe golang-sample 1 day ago 355 M
OK
Now we can deploy the container image onto our OpenShift cluster. First we must authenticate to the cluster;
- In a cluster deployed using IBM Cloud we can click on the “OpenShift web console” button in the cluster details page
- Then open the user menu in the top right of the OpenShift console and click “Copy Login Command”
- Click the “Display Token” link
oc login --token=<TOKEN> --server=https://<clusterid>.eu-gb.containers.cloud.ibm.com:<PORT>
We will assume that you have already installed the Cloud Pak for Integration to a project called “cp4i” on this cluster. In IBM Cloud you might choose to do this using the tile in the IBM Cloud software catalog as described at Getting Started with IBM Cloud Pak for Integration.
# Switch to the “cp4i” project (namespace) where Cloud Pak for Integration is installed
oc project cp4i
Before we can deploy our container we have to add a pull secret to the “cp4i” namespace so that the cluster can authenticate successfully to the IBM Cloud Container Registry. There is an existing secret in the “default” namespace called “all-icr-io” that does this, so we need to copy it into our own namespace using the instructions on Copying an existing image pull secret.
# Check the secret exists in the default namespace
oc get secrets -n default | grep icr-io
all-icr-io kubernetes.io/dockerconfigjson
# Copy the secret into the cp4i namespace
oc get secret all-icr-io -n default -o yaml | sed 's/default/cp4i/g' | oc create -n cp4i -f -
secret/all-icr-io created
# Check that it has been created successfully in the cp4i namespace
oc get secrets -n cp4i | grep icr.io
all-icr-io kubernetes.io/dockerconfigjson
To deploy our container image we need to build a pod configuration YAML file that refers to our private image and also to the pull secret that should be used to load that image - a good example of which can be found in Referring to the image pull secret in your pod deployment. We’re also going to add a RestartPolicy so that the pod won’t get continually restarted when it has run to completion.
apiVersion: v1
kind: Pod
metadata:
  name: golang-app
spec:
  containers:
    - name: golang-app
      image: uk.icr.io/golang-sample/golang-app:1.0
  restartPolicy: OnFailure
  imagePullSecrets:
    - name: all-icr-io
Then we can deploy the pod to the OpenShift cluster as follows;
oc apply -f ./yaml/pod-sample.yaml
pod/golang-app created
We then need to wait a minute or so for the container image to be loaded onto the cluster worker, and then we can check the log output as follows;
oc logs golang-app
Beginning world!!!
As you can see, in this case we haven’t yet seen the final println statement “Ending world!!!”. If we wait a few more minutes we’ll see that the connection request fails in the same way as it did locally, because we have not set real connection details such as the hostname – it just takes a little while for that error to appear;
oc logs golang-app
Beginning world!!!
2021/02/22 11:40:24 {errorCode=2538, reason=MQRC_HOST_NOT_AVAILABLE, linkedErr=MQCONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_HOST_NOT_AVAILABLE [2538]}
This looks like everything is working nicely (apart from needing to configure the correct hostname and other details to point to a real queue manager). However, if we check the Security Context Constraint (SCC) under which the pod is running we’ll see that it is “anyuid”, because the userID I used to do the deployment has a lot of privileges, including the ability to select the “anyuid” SCC.
oc describe pod golang-app | grep scc
openshift.io/scc: anyuid
Using the “anyuid” SCC isn’t necessarily a bad thing, but we’d like our application to run in the most restrictive SCC possible, and to be deployable using a less privileged userID – which we’ll do in the next section!
Complete code samples containing the full set of files at the end of this step can be found in github.com commit b38e992.
Step 4: Modify the container so that it runs in the most secure OpenShift “restricted” SCC
Adapting the container to run in the restricted SCC is conceptually one of the more difficult steps because it requires a good understanding of how OpenShift works – in particular the idea that the UID used to run the container is selected seemingly at random when the container is started, rather than being a fixed username or UID as we might traditionally be used to. Some useful references for understanding this area are Managing SCCs in OpenShift and A Guide to OpenShift and UIDs.
The first step is to configure a Service Account that has suitable privileges to launch a pod using the Restricted SCC, which is illustrated here and uses a single file to create a ServiceAccount, Role and RoleBinding to join the two together;
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-interactions
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec"]
    verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test
subjects:
  - kind: User
    name: my-service-account
roleRef:
  kind: Role
  name: pod-interactions
  apiGroup: rbac.authorization.k8s.io
As usual, we “oc apply” that file to create the ServiceAccount, Role and RoleBinding in the cluster, and then we can deploy the pod again using the new service account (once we have deleted the existing instance).
# Create the new service account
oc apply -f ./yaml/sa-pod-deployer.yaml
# Delete the existing instance of the pod from the previous section
oc delete pod golang-app
# Deploy the pod again, this time using the new service account
oc apply -f ./yaml/pod-sample.yaml --as=my-service-account
# By default the service account only has access to the
# restricted SCC, so that is what the pod is deployed under
oc describe pod golang-app | grep scc
openshift.io/scc: restricted
Now if we look at the logs from the pod we’ll see that moving to the restricted SCC has given us a new problem symptom;
oc logs golang-app
Beginning world!!!
AMQ6300E: Directory '/IBM' could not be created: 'P?cA'.
2021/02/22 14:48:36 {errorCode=2009, reason=MQRC_CONNECTION_BROKEN, linkedErr=MQCONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_CONNECTION_BROKEN [2009]}
The AMQ6300E message about the ‘/IBM’ directory not being created appears because the MQ client library wants to create that directory as part of its start-up processing. When we ran the container under the anyuid SCC, or locally in Docker on our laptop, this wasn’t a problem because the security permissions allowed the application process to create that directory on the fly, but in the restricted SCC that isn’t permitted. We therefore need to create the directory (in fact a chain of sub-directories) as part of building the container image, and make it writeable by the application;
# Create the directories the client expects to be present
RUN mkdir -p /IBM/MQ/data/errors \
    && mkdir -p /.mqm \
    && chmod -R 777 /IBM \
    && chmod -R 777 /.mqm
Having modified the container definition we now need to rebuild and re-upload the image so that we can try it out in the cluster.
DON’T FORGET!
When re-building the container image it’s important to remember to increment the tag version, otherwise when you come to deploy the new image the cluster will probably re-deploy the old cached version of the container rather than the one you just rebuilt – and you’ll be confused about why your changes don’t seem to be taking effect!
There are three places where you need to make that change;
- The “docker tag” command
- The “docker push” command
- The image identifier in the pod-sample.yaml when deploying the container
# First delete the existing deployed pod
oc delete pod golang-app
# Rebuild the container image locally
docker build -t golang-app -f Dockerfile .
# Tag the new image (WITH A FRESH VERSION LABEL!)
docker tag golang-app uk.icr.io/golang-sample/golang-app:1.2
# Push the updated image to the IBM Cloud Container Registry
docker push uk.icr.io/golang-sample/golang-app:1.2
# UPDATE THE VERSION LABEL IN THE pod-sample.yaml TO MATCH ABOVE
# Apply the pod-sample YAML to deploy the new pod
oc apply -f ./yaml/pod-sample.yaml --as=my-service-account
# Wait a minute or two for the image to download and the pod to start
# Then check the logs again;
oc logs golang-app
Beginning world!!!
2021/02/22 15:05:06 {errorCode=2538, reason=MQRC_HOST_NOT_AVAILABLE, linkedErr=MQCONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_HOST_NOT_AVAILABLE [2538]}
So, we have resolved the restricted SCC problems and we’re back to the familiar error that the hardcoded hostname cannot be reached!
Complete code samples containing the full set of files at the end of this step can be found in github.com commit debb021.
Step 5: Update the application so that it consumes variables such as queue manager name, username and password from an OpenShift ConfigMap and Secret
Now that the application is running in OpenShift in the same way as we saw locally in Docker, we are ready to switch out the hardcoded queue manager information so that it can be configured dynamically in the cluster. This is good practice so that we don’t include sensitive information (in particular the username and password) in the container image, and it also means that we can promote the same container image from Development through to Test and Production by changing only the dynamic configuration values.
To achieve this, we will create an OpenShift ConfigMap that contains the non-confidential properties such as the hostname, port, queue manager name and channel. We will then create an OpenShift Secret for the confidential credentials like the username and password. The values from these configuration objects will then be injected into the container at runtime as environment variables, where they can be read by the application code.
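Since every one of these values is required for a successful connection, a common pattern is to fail fast when a variable has not been injected. The following is a minimal, self-contained sketch of such a helper – the name mustGetenv is illustrative and is not part of the tutorial sample;
package main

import (
    "log"
    "os"
)

// mustGetenv returns the value of the named environment variable, or
// terminates the program with a clear message if it is not set.
// (Illustrative helper only - not part of the tutorial sample code.)
func mustGetenv(name string) string {
    value, ok := os.LookupEnv(name)
    if !ok || value == "" {
        log.Fatalf("required environment variable %s is not set", name)
    }
    return value
}

func main() {
    // For example, fail fast if the queue manager name has not been injected
    qmName := mustGetenv("QMNAME")
    log.Println("Connecting to queue manager", qmName)
}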
To make it clear that we are reading the configuration at runtime, we will temporarily print out the environment variables in our application before we make use of them, so our main.go file now looks like the following;
package main

import (
    "fmt"
    "log"
    "os"
    "strconv"

    "github.com/ibm-messaging/mq-golang-jms20/mqjms"
)

func main() {

    fmt.Println("Beginning world!!!")

    fmt.Println("host: ", os.Getenv("HOSTNAME"))
    fmt.Println("port: ", os.Getenv("PORT"))
    fmt.Println("qm: ", os.Getenv("QMNAME"))
    fmt.Println("channel: ", os.Getenv("CHANNELNAME"))
    fmt.Println("user: ", os.Getenv("USERNAME"))
    fmt.Println("pw: ", os.Getenv("PASSWORD"))

    // Note that the error from Atoi is ignored here for brevity
    portNum, _ := strconv.Atoi(os.Getenv("PORT"))

    // Initialise the attributes of the CF in whatever way you like
    cf := mqjms.ConnectionFactoryImpl{
        QMName:      os.Getenv("QMNAME"),
        Hostname:    os.Getenv("HOSTNAME"),
        PortNumber:  portNum,
        ChannelName: os.Getenv("CHANNELNAME"),
        UserName:    os.Getenv("USERNAME"),
        Password:    os.Getenv("PASSWORD"),
    }
    ...
To inject the environment variables from the ConfigMap and Secret we enhance the pod-sample.yaml to define the relationship between the two as described in Configure all key-value pairs in a ConfigMap as container environment variables and Configure all key-value pairs in Secret as container environment variables;
containers:
  - name: golang-app
    image: uk.icr.io/golang-sample/golang-app:1.2
    envFrom:
      - configMapRef:
          name: qmgr-details
      - secretRef:
          name: qmgr-credentials
Let’s now create the matching ConfigMap and Secret;
oc create configmap qmgr-details \
--from-literal=HOSTNAME=mydynamichostname \
--from-literal=PORT=34567 \
--from-literal=QMNAME=QM100 \
--from-literal=CHANNELNAME=SYSTEM.DEF.SVRCONN
oc create secret generic qmgr-credentials \
--from-literal=USERNAME=appuser100 \
--from-literal=PASSWORD='password100'
And then we need to rebuild and redeploy the application in order to pick up our changes to the application code and the container configuration (remembering to increment the version tag to avoid accidentally redeploying the old image).
# First delete the existing deployed pod
oc delete pod golang-app
# Rebuild the container image locally
docker build -t golang-app -f Dockerfile .
# Tag the new image (WITH A FRESH VERSION LABEL!)
docker tag golang-app uk.icr.io/golang-sample/golang-app:1.3
# Push the updated image to the IBM Cloud Container Registry
docker push uk.icr.io/golang-sample/golang-app:1.3
# UPDATE THE VERSION LABEL IN THE pod-sample.yaml TO MATCH ABOVE
# Apply the pod-sample YAML to deploy the new pod
oc apply -f ./yaml/pod-sample.yaml --as=my-service-account
# Wait a minute or two for the image to download and the pod to start
# Then check the logs again;
oc logs golang-app
Beginning world!!!
host: mydynamichostname
port: 34567
qm: QM100
channel: SYSTEM.DEF.SVRCONN
user: appuser100
pw: password100
2021/02/22 15:50:03 {errorCode=2538, reason=MQRC_HOST_NOT_AVAILABLE, linkedErr=MQCONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_HOST_NOT_AVAILABLE [2538]}
We can see from the println statements above that the values we specified in the ConfigMap and Secret are being read successfully by the application code – but the connection request is still failing because they don’t point to a valid queue manager, which brings us to the last piece of the puzzle!
Complete code samples containing the full set of files at the end of this step can be found in github.com commit f3107e3.
Step 6: Change the configuration so that the application connects to an IBM MQ queue manager running in the same OpenShift cluster as part of the Cloud Pak for Integration
In this final step we will create an IBM MQ queue manager as part of the Cloud Pak for Integration running in the same cluster and modify the configuration variables so that our application can connect to it successfully.
As before we’ll assume that you already have the Cloud Pak for Integration operators deployed in the “cp4i” namespace that we’ve been using throughout this tutorial.
Often, we would deploy a queue manager using the Platform Navigator, but for the purposes of this tutorial we will use the queue-manager.yaml file we have supplied, which starts like this;
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: sample-mq
  namespace: cp4i
spec:
  license:
    accept: true
    license: L-RJON-BQPGWD
    use: NonProduction
  queueManager:
    name: MYQM
  ...
Then all we have to do to create the queue manager is apply the YAML file as follows;
oc apply -f ./yaml/queue-manager.yaml
queuemanager.mq.ibm.com/sample-mq created
and after a little while we will see the queue manager is running;
oc get pods | grep sample-mq
sample-mq-ibm-mq-0 1/1 Running
To get the connection details of the queue manager we look at the Services that have been defined as part of creating our queue manager;
oc get services | grep sample-mq
sample-mq-ibm-mq ClusterIP 172.21.196.150 <none> 9443/TCP,1414/TCP 2m22s
sample-mq-ibm-mq-metrics ClusterIP 172.21.14.79 <none> 9157/TCP 2m22s
In particular we can see that the service name is “sample-mq-ibm-mq” and the TCP port number is the default MQ port number of 1414 (9443 is for the MQ Web Console so not of interest to our application).
We already know the queue manager name is “MYQM”, so that just leaves the channel name to be confirmed. We can do that with the following command, which connects into the container and executes a runmqsc snippet to list all the channels, confirming that the default channel name “SYSTEM.DEF.SVRCONN” does exist;
oc exec sample-mq-ibm-mq-0 -- /bin/bash -c "echo 'DISPLAY CHANNEL(*)' | runmqsc"
5724-H72 (C) Copyright IBM Corp. 1994, 2020.
Starting MQSC for queue manager MYQM.
1 : DISPLAY CHANNEL(*)
...
AMQ8414I: Display Channel details.
CHANNEL(SYSTEM.DEF.SVRCONN) CHLTYPE(SVRCONN)
So we can now update the ConfigMap with the correct details for our queue manager.
oc delete configmap qmgr-details
oc create configmap qmgr-details \
--from-literal=HOSTNAME=sample-mq-ibm-mq \
--from-literal=PORT=1414 \
--from-literal=QMNAME=MYQM \
--from-literal=CHANNELNAME=SYSTEM.DEF.SVRCONN
Now let’s delete and redeploy the pod to trigger it to run again;
oc delete pod golang-app
oc apply -f ./yaml/pod-sample.yaml --as=my-service-account
oc logs golang-app
Beginning world!!!
host: sample-mq-ibm-mq
port: 1414
qm: MYQM
channel: SYSTEM.DEF.SVRCONN
user: appuser100
pw: password100
2021/02/22 16:34:04 {errorCode=2035, reason=MQRC_NOT_AUTHORIZED, linkedErr=MQCONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_NOT_AUTHORIZED [2035]}
With the correct queue manager details, notice that the error message has changed from MQRC_HOST_NOT_AVAILABLE to MQRC_NOT_AUTHORIZED, which means we made it successfully to the queue manager, but the security rules prohibited us from connecting!
If we want to look into the details of why the queue manager blocked the connection we can do so by connecting to the queue manager pod as follows;
oc exec -it sample-mq-ibm-mq-0 -- /bin/bash
cd /var/mqm/qmgrs/MYQM/errors
tail -n 100 AMQERR01.LOG
----- amqrmrsa.c : 961 --------------------------------------------------------
02/22/21 16:55:55 - Process(388.567) User(1000670000) Program(amqrmppa)
Host(sample-mq-ibm-mq-0) Installation(Installation1)
VRMF(9.1.5.0) QMgr(MYQM)
Time(2021-02-22T16:55:55.834Z)
RemoteHost(171.30.50.5)
CommentInsert1(SYSTEM.DEF.SVRCONN)
CommentInsert2(171.30.50.5)
CommentInsert3(MCAUSER(1000670000) CLNTUSER(1000670000))
AMQ9776E: Channel was blocked by userid
EXPLANATION:
The inbound channel 'SYSTEM.DEF.SVRCONN' was blocked from address '171.30.50.5'
because the active values of the channel were mapped to a userid which should
be blocked. The active values of the channel were 'MCAUSER(1000670000)
CLNTUSER(1000670000)'.
ACTION:
Contact the systems administrator, who should examine the channel
authentication records to ensure that the correct settings have been
configured. The ALTER QMGR CHLAUTH switch is used to control whether channel
authentication records are used. The command DISPLAY CHLAUTH can be used to
query the channel authentication records.
A detailed exploration of configuring MQ security is outside the scope of this tutorial so for the time being we will add a new rule that allows anyone to connect using the default channel;
oc exec sample-mq-ibm-mq-0 -- /bin/bash -c "echo 'SET CHLAUTH(SYSTEM.DEF.SVRCONN) TYPE(BLOCKUSER) USERLIST(nobody) WARN(NO) ACTION(ADD)' | runmqsc"
5724-H72 (C) Copyright IBM Corp. 1994, 2020.
Starting MQSC for queue manager MYQM.
1 : SET CHLAUTH(SYSTEM.DEF.SVRCONN) TYPE(BLOCKUSER) USERLIST(nobody) WARN(NO) ACTION(ADD)
AMQ8877I: IBM MQ channel authentication record set.
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
After that we can restart the application pod, and we will finally connect successfully to the queue manager!
oc delete pod golang-app
oc apply -f ./yaml/pod-sample.yaml --as=my-service-account
oc logs golang-app
Beginning world!!!
host: sample-mq-ibm-mq
port: 1414
qm: MYQM
channel: SYSTEM.DEF.SVRCONN
user: appuser100
pw: password100
-- Connection successful <----------- Success here!!!
Ending world!!!
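For reference, the “-- Connection successful” line above is printed by the final version of the sample application once CreateContext returns without an error – roughly along the lines of the following sketch (see the repository for the exact code);
context, errCtx := cf.CreateContext()
if context != nil {
    defer context.Close()
}

if errCtx != nil {
    log.Fatal(errCtx)
} else {
    fmt.Println("-- Connection successful")
}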
Congratulations - you have succeeded in your quest!
You’ve made it through to the end of this tutorial, and hopefully now have a much better understanding of how to build and run an OpenShift container application that connects to an IBM MQ queue manager in the Cloud Pak for Integration!
As I described at the beginning, the goal of this tutorial is to explain the “what” and the “why” of the various steps so that you can adjust them for your own particular needs in future.
However, you don’t need to follow all of these instructions manually if you don’t want to – you can simply use the end-state of the sample application that we have built up by going to the mq-golang-jms20/openshift-app-sample directory and following the instructions in the readme there, which describe how to run the final application!
I hope that you have found this tutorial interesting and informative, and I look forward to hearing your experiences of building your own OpenShift applications for IBM MQ!
Matt Roberts
Senior Technical Staff Member, IBM Cloud Pak for Integration