
Integration Development to Micro Services Principles on OpenShift – Part 1


Published on March 24, 2020 / Updated on April 8, 2020

Introduction

Modern platforms, DevOps tooling and agile approaches have accelerated the rate at which organizations can bring new applications and business functions to bear. At IBM we see no reason why integration developers cannot enjoy many of the same benefits as application developers by leveraging DevOps tools on modern, container-based platforms to deliver agile integration.

For a full description of IBM's position on Agile Integration, please take a look at the following article, which discusses Agile Integration and introduces the IBM Cloud Pak for Integration (ICP4i), based on Red Hat OpenShift Container Platform.

 

    Integration Modernization the Journey to Agile Integration – A view from the field

 

This is Part 1 of a series of three articles. This article, along with its associated collateral, explores a cloud-native, "componentized" approach that demonstrates how to deliver integration with the App Connect Enterprise (and MQ) products to microservices principles, using Red Hat OpenShift out-of-the-box capabilities for build, deploy and run.

 

The second part of this series covers the build, deploy and run of IBM middleware using Tekton on Red Hat OpenShift and can be found at:

 

    Integration Development to Micro Services Principles on OpenShift – Part 2

 

The third part of this series covers testing and stubbing of IBM middleware with IBM Rational Integration Tester and can be found at:

 

    Integration Development to Micro Services Principles on OpenShift – Part 3

Delivering ACE and MQ services on OpenShift - Overview

Figure 1. ACE and MQ on RHOS Integration Micro Services – Environment

We will leverage a base App Connect Enterprise (ACE) v11.0.0.n image that has a fixed configuration and deliver integration microservices from that image. These integration microservices each perform a single integration function, are immutable and are individually scalable. ACE Microservices 1 and 2 are simplistic integration microservices, but they are designed to work together as a simple microservices application. ACE Microservice 3 offers a REST interface to a client-connected IBM MQ queue manager. The MQ queue manager is based on the IBM MQ v9.1 base image but has a custom layer that includes fixed MQ configuration (queues, SVRCONN channels, and server-side TLS KeyStore files and certs) so that "off-cluster" tools and applications can connect to the queue manager on RHOS via TLS.

This article is in an early form and I plan to update it with a greater level of detail.

The scenarios

In terms of tools and approaches this article and associated collateral will explore:

  1. RedHat OpenShift Out of the Box capabilities for automated build, deploy, run and test of IBM MQ
  2. RedHat OpenShift Out of the Box capabilities for automated build, deploy, run and test with ACE MicroService 2
  3. RedHat OpenShift Out of the Box capabilities for automated build, deploy, run and test with ACE MicroService 1
  4. Connecting ACE MicroService 1 and ACE MicroService 2 for end-to-end testing

Covered in Part 2 – https://Integration-Development-to-Micro-Services-Principles-on-RHOS-Part-2 (to be posted)

  5. RedHat OpenShift Out of the Box capabilities for automated build, deploy, run and test with ACE MicroService 3
  6. Tekton Pipelines (Tech Preview on RHOS 4.2) to build, deploy and run ACE microservices

Covered in Part 3 – https://Integration-Development-to-Micro-Services-Principles-on-RHOS-Part-3 (to be posted)

  7. IBM Rational Integration Tester to test ACE/MQ integration microservices
  8. IBM Rational Test Virtualization Server to mock/stub ACE/MQ integration microservices

The Persona – A standalone/disconnected integration developer

In exploring ACE and MQ on Red Hat OpenShift we will assume the persona of the disconnected developer: a developer working in a standalone unit-test environment, developing integration logic, building it into images, deploying the resultant containers to run on RHOS and testing their work.

The Environment

Figure 2. Disconnected (standalone) Developer

Introducing the ACE Integration microservices

Integration Micro Service 1 exposes a RESTful (API) interface. This service calls Integration Micro Service 2 via its RESTful (API) interface.

Integration Micro Service 1

RESTInput(HTTP) -> Mapping Node -> RESTRequest (call Integration Microservice 2) -> RESTReply(HTTP)
Figure 3. Message Flow for ACE MS1
Figure 4. REST Interface to ACE MS1

Integration Micro Service 2

RESTInput(HTTP) -> Mapping Node Payload+"Hello from Integration Microservice 2" -> RESTReply(HTTP)

Integration Micro Service 2 simply returns a “Hello World” style message. Integration Micro Service 2 can be called directly or through Integration Micro Service 1.

Figure 5. Message Flow for ACE MS2

Figure 6. REST Interface to ACE MS2

Figure 7. Mapping for ACE MS2
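Purely as an illustration of the behaviour (the exact response shape is determined by the map shown in Figure 7, so treat the field names here as assumptions), an exchange with ACE MS2 might look like:

POST http://<ms2-route-host>/microservice2/v1/message
{"Messages":["hello ms2 from client"]}

Illustrative response:
{"Messages":["hello ms2 from client"],"Item":"Hello from Integration Microservice 2"}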

The ACE Liveliness Probe

Our ACE Liveliness Probe service is another RESTful service that we deploy into the ACE standard operating environment (SOE) image, the base image from which the Integration Micro Service 1 and Integration Micro Service 2 images are built, so it appears in all ACE integration microservice containers. (This is not the liveness capability baked into the IBM Cloud Paks; we turn that off to demonstrate having a customer-centric base standard operating environment image.)

Figure 8. REST Interface ACE Liveliness

Figure 9. Message Flow for ACE Liveliness

The Liveliness Probe uses ESQL in a compute node to return the execution group (integration server) label plus a current timestamp to the calling application:

Set OutputRoot.JSON.Data.Messages[1].item[1] = 'DA1:'||ExecutionGroupLabel||':'
||CAST(CURRENT_TIMESTAMP AS CHARACTER FORMAT 'yyyyMMdd-HHmmss');

There is more detail on how the ACE integration microservices and the ACE Liveliness Probe are built in the supporting materials and documentation on GitHub from an earlier article, written for the deployment of ACE on IBM Cloud Private: https://github.com/DAVEXACOM/ACEonICPIntSupportingMaterial.

The ACE Toolkit Source projects for all the ACE services can be found at: https://github.com/DAVEXACOM/tekton-ace-example/tree/master/ACEMicroservicesSrcProject.

Introducing the MQ Queue Manager Service

The custom MQ image build and deployment include fixed configuration for MQ object definitions and also enable secure connections for off-cluster tools and applications.

      The fixed configuration includes:
    • MQSC files containing user queue definitions
    • SVRCONN (non-TLS) and SVRCONN (TLS) channels
    • A server-side TLS KeyStore with certs
    • A matching client-side TLS KeyStore with certs, available for use with the fixed-config image
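As an illustrative sketch only (the real definitions live in the MQSC files in the build repository; the NOTLS channel name is an assumption based on the NOTLS.mqsc file name), the fixed configuration is of this general form:

* illustrative MQSC - the queue and TLS channel names are taken from later steps in this article
DEFINE QLOCAL(MY.LOCAL.Q1) REPLACE
DEFINE CHANNEL(NOTLS.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
DEFINE CHANNEL(TLSPRQM1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) SSLCIPH(ANY_TLS12) REPLACE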

Full instructions for building custom MQ images that are enabled for TLS connectivity are available from this GitHub repo:

https://github.com/DAVEXACOM/Exploring-ICP4i-RHOS/MQTLS and CustomLayer Image-311and42

Step by Step Guide for Scenarios 1,2,3 and 4

Scenario 1. RedHat OpenShift OOTB capabilities for deploy, run and test MQ custom images


Build a custom MQ Image from an MQ Image with fixed configuration. Deploy and test your custom image from tools that are “off-cluster”.

Note: The following instructions, complete with visual screen captures of all the steps, are available in document form at:

      • Git Repo: https://github.com/DAVEXACOM/tekton-ace-example/doc
      • Document: 1.Developer Experience for ACEMQ with RHOS Tools and Tekton v1.1.docx from page 41

Collateral – Published Repositories, registries and collateral to clone or copy

First time set up of the MQ build and deployment artifacts on RHOS

Log into your OpenShift, CodeReady or ICP4i instance and create or switch to your project (namespace)
oc login --token=WM4a6Lj9Vj67U626gJ8Z897Cijr-OnjG-KcVxPhPk1Q --server=https://api.cloudpak.ocp4.cloudnativekube.com:6443
oc project da-build-project 

Create an ImageStream for the MQ base image that you will build your custom image from
oc create -f c:\openshift\data\build-mq-custom.yaml

Example of build-mq-custom.yaml:

apiVersion: v1
kind: ImageStream
metadata:
  name: ibm-mqadvanced-server-integration
spec:
  tags:
  - name: "9.1.3.0-r4-amd64"
    from:
      kind: DockerImage
      name: davexacom/ibm-mqadvanced-server-integration:9.1.3.0-r4-amd64


Set the security context
oc adm policy add-scc-to-user anyuid -z default

Create a new build (ImageStream and BuildConfig on RHOS) using the Dockerfile in the GitHub repo, which builds FROM the MQ base image on Docker Hub. Dockerfile content:

FROM ibm-mqadvanced-server-integration:9.1.3.0-r4-amd64
USER mqm
COPY TLSPRQM1.mqsc /etc/mqm/
COPY NOTLS.mqsc /etc/mqm/
COPY MYDEFS.mqsc /etc/mqm
COPY TLSPRQM1.kdb /etc/mqm/
COPY TLSPRQM1.crt /etc/mqm/
COPY TLSPRQM1.rdb /etc/mqm/
COPY TLSPRQM1.sth /etc/mqm/

oc new-build https://github.com/DAVEXACOM/ibm-mqadvanced-server-tls-build.git

Create a new application (MQ Queue Manager container) and run it up on RHOS
oc new-app ibm-mqadvanced-server-tls-build --env LICENSE=accept


Note: You could use --env MQ_QMGR_NAME=MyAPPQMGR to set a queue manager name explicitly. If you don't, a queue manager name such as ibmmqadvancedservertlsbuildnxnxn will be generated.
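For example, a variant of the command above that names the queue manager explicitly (MyAPPQMGR is just an illustrative name):

oc new-app ibm-mqadvanced-server-tls-build --env LICENSE=accept --env MQ_QMGR_NAME=MyAPPQMGR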


Check the deployment config in the RH Openshift Console

RH Openshift console->Workloads->Deployment config->ibm-mqadvanced-server-tls-build. You will see the ENV values reflected here.


Expose the Queue Manager as a service
oc expose svc/ibm-mqadvanced-server-tls-build


Create a route such that the queue manager is accessible from tools that are “off-cluster”
oc create -f c:\openshift\data\create-mq-route.yaml

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: tls-tlsprqm1p
  namespace: da-build-project
  labels:
    app: ibm-mqadvanced-server-tls-build
spec:
  host: tlsprqm12e-svrconn.chl.mq.ibm.com
  subdomain: ''
  to:
    kind: Service
    name: ibm-mqadvanced-server-tls-build
    weight: 100
  port:
    targetPort: 1414-tcp
  tls:
    termination: passthrough
  wildcardPolicy: None

Check your route to find your MQ CONNAME
oc get routes

NAME: ibm-mqadvanced-server-tls-build
HOST/PORT: ibm-mqadvanced-server-tls-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com
SERVICES: ibm-mqadvanced-server-tls-build
PORT: 1414-tcp
WILDCARD: None

 

Get the host from the route above; it is used as the CONNAME in the CLNTCONN channel definition

 
If you did not use --env MQ_QMGR_NAME=MyAPPQMGR to set a queue manager name explicitly on the oc new-app command, get the queue manager name from the logs of the pod that is running the new MQ container: RH OpenShift Console->workloads->pods->ibm-mqadvanced-server-tls-build1-xnxnxn->Logs, or via the oc command line.
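From the command line, something like the following will surface the generated name from the pod log (the pod name comes from oc get pods, and the exact log wording may vary between MQ image versions):

oc get pods
oc logs ibm-mqadvanced-server-tls-build1-xnxnxn | findstr /i /C:"queue manager"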

 

The following CLNTCONN definition matches a TLS-enabled SVRCONN on the MQ custom image that was just built.
CLNTCONN Channel definition:


DEFINE CHANNEL(TLSPRQM1.SVRCONN) +
CHLTYPE(CLNTCONN) +
TRPTYPE(TCP) +
CONNAME('ibm-mqadvanced-server-tls-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com(443)') +
CERTLABL('ibmmqarnold') +
QMNAME('ibmmqadvancedservertlsbuild147zgm') +
SSLCIPH(ANY_TLS12) +
REPLACE

Place the CLNTCONN in a file called CUSTOMIMAGE.mqsc (for example):

runmqsc -n < CUSTOMIMAGE.mqsc

Observe the result of the runmqsc command in the AMQCLCHL.TAB
type c:\programdata\ibm\mq\AMQCLCHL.TAB
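If the client tools do not pick the channel table up from its default location, the standard MQ client environment variables can point at it explicitly (the path here is the default Windows data directory used above):

set MQCHLLIB=C:\ProgramData\IBM\MQ
set MQCHLTAB=AMQCLCHL.TAB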

The MQ custom image you have built used davexacom/ibm-mqadvanced-server-integration:9.1.3.0-r4-amd64 as its base. This base image has a KeyStore DB pre-populated with a set of server-side certs. You can obtain the matching client-side KeyStore DB and certs from:
Client side TLS KeyStore DB and certs

Copy the KeyStore and Cert files to a directory such as: C:\Program Files\IBM\MQ\arnold and then set the MQ environment variable to point to them.

set MQSSLKEYR=C:\Program Files\IBM\MQ\arnold

Use client connected runmqsc to connect to your queue manager

runmqsc -c ibmmqadvancedservertlsbuild147zgm

5724-H72 (C) Copyright IBM Corp. 1994, 2019.
Starting MQSC for queue manager ibmmqadvancedservertlsbuild147zgm.

DIS QL(M*)
AMQ8409I: Display Queue details.
QUEUE(MY.LOCAL.Q1) TYPE(QLOCAL)

end

Adding a webhook for the MQ custom image build

Note: The following instructions, complete with visual screen captures of the steps, are available in the "1.Developer Experience for ACEMQ with RH OpenShift Tools and Tekton v1.1.docx" from page 43

      • Go to the build configs: RH OpenShift Console->Build Configs->ibm-mqadvanced-server-tls-build
      • Select ibm-mqadvanced-server-tls-build and page down to Webhooks - copy the URL with secret
      • Go to GitHub - your clone of the repo https://github.com/DAVEXACOM/ibm-mqadvanced-server-tls-build
      • Select Settings->Webhooks and paste the URL into the Payload URL field
      • Then set the content type and disable SSL verification
      • Hit the Add webhook button at the bottom and then check for the tick
      • Make a change to the MYDEFS.MQSC file in your clone of the GitHub repo https://github.com/DAVEXACOM/ibm-mqadvanced-server-tls-build. Add some user queues, for example DEFINE QL(TO.BACKEND.Q) and DEFINE QL(FROM.BACKEND.Q) (see the snippet after this list)
      • Commit and push the changed MYDEFS.MQSC file to GitHub
      • Observe the triggered build in RH OpenShift Console->Builds->ibm-mqadvanced-server-tls-build2
      • Check the deployment config RH OpenShift Console->Deployment Configs->ibm-mqadvanced-server-tls-build for Strategy: Rolling and Trigger: on image change
      • Observe that the pod has updated based on the new build: RH OpenShift Console->workloads->pods->ibm-mqadvanced-server-tls-build2-xnxnxn
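For example, the lines added to MYDEFS.MQSC might look like this (queue names taken from the step above; any attributes beyond the defaults are up to you):

DEFINE QLOCAL(TO.BACKEND.Q) REPLACE
DEFINE QLOCAL(FROM.BACKEND.Q) REPLACE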

Test the new MQ deployment for new queues

      • If you did not use --env MQ_QMGR_NAME=MyAPPQMGR to set a queue manager name explicitly, obtain the queue manager name from the latest build's pod log: RH OpenShift Console->workloads->pods->ibm-mqadvanced-server-tls-build2-xnxnxn->Logs
      • It will look something like this: ibmmqadvancedservertlsbuild2q6v7z
      • Open a Windows command prompt as administrator
      • Update the CLNTCONN definition with the queue manager name from the pod logs (Workloads->Pods)

CLNTCONN Channel definition:

DEFINE CHANNEL(TLSPRQM1.SVRCONN) +
CHLTYPE(CLNTCONN) +
TRPTYPE(TCP) +
CONNAME('ibm-mqadvanced-server-tls-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com(443)') +
CERTLABL('ibmmqarnold') +
QMNAME('ibmmqadvancedservertlsbuild2q6v7z') +
SSLCIPH(ANY_TLS12) +
REPLACE

      • runmqsc -n < CUSTOMIMAGE.mqsc
      • Check the AMQCLCHL.TAB

type c:\programdata\ibm\mq\AMQCLCHL.TAB

      • If this is a new command window, set MQSSLKEYR again:

set MQSSLKEYR=C:\Program Files\IBM\MQ\arnold

      • Connect to the latest version of your queue manager using runmqsc in client mode and observe the new queues TO.BACKEND.Q and FROM.BACKEND.Q

runmqsc -c ibmmqadvancedservertlsbuild2q6v7z
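Inside the client-connected runmqsc session you should now be able to display the queues added by the rebuilt image, for example:

DIS QL(TO.BACKEND.Q)
DIS QL(FROM.BACKEND.Q)
end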

Scenario 2. RedHat OpenShift OOTB capabilities for deploy, run and test with ACE Micro Service 2

Build a custom ACE Image adding your integration microservice from an ACE Image with fixed configuration. Deploy and test your custom image using a REST client.

Note: The following instructions, complete with visual screen captures of all the steps, are available in document form at:

  • Git Repo: https://github.com/DAVEXACOM/tekton-ace-example/doc
  • Document: 1.Developer Experience for ACEMQ with RHOS Tools and Tekton v1.1.docx from page 58

Published Repositories, registries and collateral you can clone or copy

First time set up of the ACE MicroService 2 build and deployment artifacts on RHOS

Note: We will work with ACE MS2 before ACE MS1, as ACE MS2 can be called and tested standalone, whereas ACE MS1 requires ACE MS2.

oc login --token=WM4a6Lj9Vj67U626gJ8Z897Cijr-OnjG-KcVxPhPk1Q --server=https://api.cloudpak.ocp4.cloudnativekube.com:6443

oc project da-build-project

Note: You don't really need the oc create for the ImageStream for the input ACE base SoE image; the oc new-build seems to take care of that via the FROM in the Dockerfile in the Git repo.


Example ImageStream create file:

apiVersion: v1
kind: ImageStream
metadata:
  name: ibm-ace-mqc-soe
spec:
  tags:
  - name: "latest"
    from:
      kind: DockerImage
      name: davexacom/ace11002mqc91soe:latest

oc create -f c:\openshift\data\build-ace-custom.yaml

oc new-build https://github.com/DAVEXACOM/ibm-ace-mqc-soe-ms2-build.git
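For reference, the build works the same way as the MQ example: the Dockerfile in the Git repo builds FROM the ACE SoE base image and adds the BAR file for the microservice. A minimal sketch of such a Dockerfile (an illustration only - the real one is in the ibm-ace-mqc-soe-ms2-build repo, and it assumes the SoE base image deploys any BAR file placed under its initial-config directory at start-up):

# Sketch only; the FROM reference and BAR deployment path are assumptions based on the SoE image described above
FROM davexacom/ace11002mqc91soe:latest
COPY Microservice2.bar /home/aceuser/initial-config/bars/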


Review the image streams in RH Openshift Console->Builds->ImageStreams

Review builds in RH Openshift Console->Builds->Builds

Review build configs in RH Openshift Console->Builds->Build Configs

Spin up a runtime pod with the new ACE MS2 container

oc new-app ibm-ace-mqc-soe-ms2-build --env LICENSE=accept


Review service in RH Openshift Console->Networking->Services

Review deployment config in RH Openshift Console->workloads->Deployment Config and note the Strategy is "Rolling" and Trigger on Image change

Create a route so that ACE MS2 can be accessed from outside the cluster. Here is an example of the route:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: ibm-ace-mqc-soe-ms2-build
  namespace: da-build-project
  labels:
    app: ibm-ace-mqc-soe-ms2-build
spec:
  host: >-
    ibm-ace-mqc-soe-ms2-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com
  subdomain: ''
  to:
    kind: Service
    name: ibm-ace-mqc-soe-ms2-build
    weight: 100
  port:
    targetPort: 7800-tcp
  wildcardPolicy: None

Use the route information when calling the ACE MS2
oc get routes


Test the ACE MS2 microservice on RHOS

Test the liveliness probe in the MS2 container:
POST
http://ibm-ace-mqc-soe-ms2-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/livelinessProbe/v1/message
With Data:
{"Messages":["test"]}

Figure 10. Rest Client calls ACE Liveliness Probe


Test ACE microservice 2 in the MS2 container:
POST
http://ibm-ace-mqc-soe-ms2-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/microservice2/v1/message
With Data:
{"Messages":["hello ms2 from client"]}

Figure 11. Rest Client calls ACE MS2
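If you prefer a command-line client to a GUI REST client, the same test can be driven with curl (the escaped quotes below are Windows cmd quoting; on Linux or macOS wrap the JSON body in single quotes instead):

curl -X POST -H "Content-Type: application/json" -d "{\"Messages\":[\"hello ms2 from client\"]}" http://ibm-ace-mqc-soe-ms2-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/microservice2/v1/message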


Adding a webhook for the ACE MS2 image build

Note: The following instructions, complete with visual screen captures of the steps, are available in the "1.Developer Experience for ACEMQ with RH OpenShift Tools and Tekton v1.1.docx" from page 64

  • Obtain the webhook URL from the RH Openshift Console->Builds->buildconfigs/ibm-ace-mqc-soe-ms2-build
    There is a copy-to-clipboard option for the webhook URL with secret - for example:

    https://api.cloudpak.ocp4.cloudnativekube.com:6443/apis/build.openshift.io/v1/namespaces/da-build-project/buildconfigs/ibm-ace-mqc-soe-ms2-build/webhooks/nuDTe2L6Wv_XpQB4Jd1v/github

  • Create the webhook in your clone of the ACE MicroService 2 build repo on GitHub https://github.com/DAVEXACOM/ibm-ace-mqc-soe-ms2-build by going to Settings->Webhooks. Use the same procedure as with MQ earlier.
  • Make a change to the ACE MicroService 2 source in the ACE Toolkit, save the BAR file Microservice2.bar and push it to your GitHub repository.
    An easy change to make is in Microservice2->Resources->Maps->add_Mapping.map->localMap: select the Assign to the "Item" element in the target, then change the value in the Properties->General tab below.
  • Observe the new build in RH Openshift Console->Build->Builds->ibm-ace-mqc-soe-ms2-build-2->logs
  • When the build completes, check the pods: RH OpenShift Console->workloads->pods->ibm-ace-mqc-soe-ms2-build-2-nxnxnx->logs
  • Retest ACE microservice 2 in the MS2 container to observe that the change you made is reflected.

POST:
http://ibm-ace-mqc-soe-ms2-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/microservice2/v1/message
With Data:
{"Messages":["hello ms2 from client"]}

Scenario 3. RedHat OpenShift Out of the Box capabilities for deploy, run and test with ACE Micro Service 1

Build a custom ACE Image adding your integration microservice from an ACE Image with fixed configuration. Deploy and test your custom image using a REST client.

Note: The following instructions, complete with visual screen captures of all the steps, are available in document form at:

  • Git Repo: https://github.com/DAVEXACOM/tekton-ace-example/doc
  • Document: 1.Developer Experience for ACEMQ with RH OpenShift Tools and Tekton v1.1.docx from page 49

Collateral - Published Repositories, registries and collateral to clone or copy

First time set up of ACE MicroService 1 build and deployment artifacts on RH Openshift

oc login --token=WM4a6Lj9Vj67U626gJ8Z897Cijr-OnjG-KcVxPhPk1Q --server=https://api.cloudpak.ocp4.cloudnativekube.com:6443

oc project da-build-project

Create the ImageStream for the base Standard Operating Environment (SoE) image, which is basically a base starting image of ACE. My example has a custom Liveliness Probe RESTful API service deployed into it as part of the base.

apiVersion: v1
kind: ImageStream
metadata:
  name: ibm-ace-mqc-soe
spec:
  tags:
  - name: "latest"
    from:
      kind: DockerImage
      name: davexacom/ace11002mqc91soe:latest

oc create -f c:\openshift\data\build-ace-custom.yaml

Observe the image stream in RH Openshift Console->Builds->ImageStreams

Create a build config and ImageStream leveraging a Dockerfile in a git repository
oc new-build https://github.com/DAVEXACOM/ibm-ace-mqc-soe-ms1-build.git

Observe the results in RH Openshift Console->build->ImageStreams and Build Configs

Create a new ACE application
oc new-app ibm-ace-mqc-soe-ms1-build --env LICENSE=accept

Note: If you use oc expose you will get a route (URL) that exposes port 7600 (the Web UI) rather than port 7800 for HTTP connections to the deployed services, so we recommend creating the route with oc create.

oc create -f c:\openshift\data\create-ace-route.yaml

Example of create-ace-route.yaml:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: ibm-ace-mqc-soe-ms1-build
  namespace: da-build-project
  labels:
    app: ibm-ace-mqc-soe-ms1-build
spec:
  host: >-
    ibm-ace-mqc-soe-ms1-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com
  subdomain: ''
  to:
    kind: Service
    name: ibm-ace-mqc-soe-ms1-build
    weight: 100
  port:
    targetPort: 7800-tcp
  wildcardPolicy: None

Observe the new route that is created
oc get routes

Test the liveliness probe service - using your route details:
POST
http://ibm-ace-mqc-soe-ms1-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/livelinessProbe/v1/message
with Data
{"Messages":["test"]}

Adding a webhook for the ACE MS1 image build

Note: The following instructions, complete with visual screen captures of the steps, are available in the "1.Developer Experience for ACEMQ with RH OpenShift Tools and Tekton v1.1.docx" from page 55

  • Obtain the webhook URL from the RH Openshift Console->Builds->buildconfigs/ibm-ace-mqc-soe-ms1-build
    There is a copy-to-clipboard option for the webhook URL with secret - for example:

    https://api.cloudpak.ocp4.cloudnativekube.com:6443/apis/build.openshift.io/v1/namespaces/da-build-project/buildconfigs/ibm-ace-mqc-soe-ms1-build/webhooks/nuDTe2L6Wv_XpQB4Jd1v/github

  • Create the webhook in your clone of the ACE MicroService 1 build repo on GitHub https://github.com/DAVEXACOM/ibm-ace-mqc-soe-ms1-build by going to Settings->Webhooks. Use the same procedure as with MQ earlier.
  • Make a change to the ACE MicroService 1 source in the ACE Toolkit, save the BAR file Microservice1.bar and push it to your GitHub repository
  • Observe the new build in RHOS Console->Build->Builds->ibm-ace-mqc-soe-ms1-build-2->logs
  • When the build completes, check the pods RH Openshift Console->workloads->pods->ibm-ace-mqc-soe-ms1-build-2-nxnxnx->logs

You can test your changes to MicroService 1 - but it is designed to work with ACE MS2 and we haven't connected the two yet so the test will fail.

POST from a Rest Client
http://ibm-ace-mqc-soe-ms1-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/microservice1/v1/message
With Data:
{"Messages":["test"]}

So just retest the Liveliness probe for now to ensure the rebuild worked
POST from a Rest Client
http://ibm-ace-mqc-soe-ms1-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/livelinessProbe/v1/message
With Data:
{"Messages":["test"]}

Scenario 4. Connecting ACE Microservice 1 and ACE Microservice 2 in the Openshift Cluster

Update an ACE Microservice and rebuild the custom ACE Image adding your updated integration microservice to connect to a second, already deployed ACE microservice. Deploy and test your custom image using a REST client.

Note: The following instructions, complete with visual screen captures of all the steps, are available in document form at:

  • Git Repo: https://github.com/DAVEXACOM/tekton-ace-example/doc
  • Document: 1.Developer Experience for ACEMQ with RH OpenShift Tools and Tekton v1.1.docx from page 73

Obtain the networking information you need for your ACE microservices.

Do this by observation of the services in the RH Openshift Console->Networking->Services
and of the routes in the RH Openshift Console->Networking->Routes

Or via the command line as follows:

oc get svc

oc describe svc ibm-ace-mqc-soe-ms1-build

oc describe svc ibm-ace-mqc-soe-ms2-build

 

You should be able to derive the following information from the above:

 

Cluster IPs for ACE microservices

Cluster IP for ibm-ace-mqc-soe-ms1-build = 172.30.197.126:7800
Cluster IP for ibm-ace-mqc-soe-ms2-build = 172.30.158.128:7800

 

K8s DNS names for ACE microservices

For intra-cluster calls, use the service name (Kubernetes DNS resolves it to the service):
DNS = ibm-ace-mqc-soe-ms1-build
DNS = ibm-ace-mqc-soe-ms2-build

 

From the above (your version of it), the public URL for calling ACE Microservice 1 as a REST request is:

 

POST to:

http://ibm-ace-mqc-soe-ms1-build-da-build-project.apps.cloudpak.ocp4.cloudnativekube.com/microservice1/v1/message

with data:

{"Messages":["Hello From Test Client"]}

 

From the above, the intra-cluster DNS URL for calling ACE MS2 from ACE MS1 is:

http://ibm-ace-mqc-soe-ms2-build:7800/microservice2/v1
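You can sanity-check this intra-cluster address before changing ACE MS1 by opening a remote shell in the MS1 pod and calling ACE MS2 from there (a quick sketch; it assumes a shell and the curl command are available inside the ACE container image):

oc rsh dc/ibm-ace-mqc-soe-ms1-build
# now inside the MS1 container - call MS2 via its service name on port 7800
curl -X POST -H "Content-Type: application/json" -d '{"Messages":["hello ms2 from the ms1 pod"]}' http://ibm-ace-mqc-soe-ms2-build:7800/microservice2/v1/message
exit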

In the IBM ACE Toolkit you have two choices for changing the behavior of ACE MS1 so that it calls ACE MS2 at this address:

 

  • Source code change - Microservice1->resources->subflows->add.subflow->RestRequest->base URL override = http://ibm-ace-mqc-soe-ms2-build:7800/microservice2/v1
  • BAR deployment descriptor override - Microservice1.bar Manage Tab->resources->subflows->add.subflow->RestRequest->base URL override = http://ibm-ace-mqc-soe-ms2-build:7800/microservice2/v1

   

Figure 12. ACE MS1 BAR Override of Base URL