In this article we will demonstrate how to:
- Create your own Docker image
- Publish the Docker image to the local ICP repository
- Load a Helm chart in an ICP catalog for the Docker image
- Create and assign an auto-scaling policy to the deployment
Here, we illustrate the steps for an ACE image, but a similar procedure would apply to IIB as well.
- Download the Docker package (zip file) for ACE from GitHub:
https://github.com/ot4i/ace-docker
- After unpacking the zip file, you will find the Dockerfile in the directory structure as follows:
…../ace-docker-master/11.0.0.0/ace/ubuntu-1604/base
- Suppose you want to customize this Docker image by including certain BAR files and having them deployed when the Docker container starts. Copy the BAR files into the above directory and edit the Dockerfile as shown in the example below: add a COPY command to copy the BAR files to a temporary location, and a RUN command that uses mqsibar to deploy them to the Integration Server work directory.
Example:
ENV BAR1=LargeXMLProcessing.bar
# Copy in the bar file to a temporary directory
COPY --chown=aceuser $BAR1 /tmp
# Unzip the BAR file; need to use bash to make the profile work
RUN bash -c 'mqsibar -w /home/aceuser/ace-server -a /tmp/$BAR1 -c'
# Set entrypoint to run management script
CMD ["/bin/bash", "-c", "/usr/local/bin/ace_license_check.sh && IntegrationServer -w /home/aceuser/ace-server --console-log"]
- Now, build the Docker image.
$ docker build -t ace:11.0.0.0 .
- Upon successful building of the Docker image, you should see the message:
Successfully built 514fe6a4fcc3
Successfully tagged ace:11.0.0.0
- You can verify the newly created Docker image by running it locally.
# docker run --name myAceBar -e LICENSE=accept -P ace:11.0.0.0
Sourcing profile
2018-07-23 06:40:12.773669: …..2018-07-23 06:40:13.057042: Integration server 'ace-server' starting initialization; version '11.0.0.0' (64-bit)
……………………………….2018-07-23 06:40:18.783118: About to 'Initialize' the deployed resource 'Transformation_Map' of type 'Application'.
2018-07-23 06:40:21.181476: About to 'Start' the deployed resource 'Transformation_Map' of type 'Application'.
An http endpoint was registered on port '7800', path '/Transformation_Map'.
2018-07-23 06:40:21.232764: The HTTP Listener has started listening on port '7800' for 'http' connections.
2018-07-23 06:40:21.233108: Listening on HTTP URL '/Transformation_Map'.
Started native listener for HTTP input node on port 7800 for URL /Transformation_Map
..
2018-07-23 06:40:22.051954: Integration server has finished initialization.
2018-07-23 06:40:22.054202: The HTTP Listener has started listening on port '7600' for 'http' connections.
- Access your Integration Server using the Admin Console. To do so, first get the port mapping information for the container.
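For example, the published port mappings can be listed with docker port, using the container name myAceBar from the docker run command above:
# docker port myAceBar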
We see that the admin port 7600 is mapped to 32769, so we can now access the ACE Admin Console using:
http://<host IP address>:32769/
- In order to be able to push this Docker image to IBM Cloud Private (ICP), tag your image with the ICP cluster registry information.
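For example, using the image ace:11.0.0.0 built above and the registry host, namespace and repository name from the push command below:
# docker tag ace:11.0.0.0 mycluster.icp:8500/default/ace_bar:11.0.0.0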
- Push the image to the ICP repository
# docker login mycluster.icp:8500
# docker push mycluster.icp:8500/default/ace_bar:11.0.0.0
- Navigate to the ICP admin console and find the Docker image at ICP -> Catalog -> Images
- In the ICP catalog, find the Helm package that you published as shown in the section above.
- Click the 'Configure' button and fill in the details. The image repository and image tag will be pre-filled, as they come from the values.yaml that we updated in the section above.
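For reference, the relevant values.yaml entries would look something like the sketch below; the exact key names depend on how your chart templates reference the image, so image.repository and image.tag are shown here as common Helm conventions rather than your chart's actual fields:
image:
  repository: mycluster.icp:8500/default/ace_bar
  tag: "11.0.0.0"
  pullPolicy: IfNotPresent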
- Click Install. The deployment process begins.
- Navigate to Workloads -> Helm releases. From the list, click on the Helm release that you just deployed in the step above. You will be able to see the Service, Deployment and Pod details.
- Click on the Service name to see further information such as the Web UI and Integration Server listener ports.
- The console log of your Pod can be viewed by navigating to Deployments -> Pod -> Logs. In the image below, we can see that the Integration Server has started successfully.
When the load on your Integration Server increases due to a higher volume of messages, you will typically see higher CPU utilization and a drop in throughput. In such cases you may want to scale your integration flows horizontally to handle the additional load, keeping CPU utilization within limits and improving the message throughput rate. Likewise, when the peak is over and message volumes drop, you will want to scale down the number of Integration Servers to save CPU and memory resources. In short, an auto-scaling policy is required to scale the number of Integration Servers up or down based on certain parameters. Currently, the ICP platform provides auto-scaling based on the percentage CPU utilization of a given deployment.
- To define a scaling policy, go to Configuration -> Scaling Policies in the ICP admin console menu.
- Click ‘Create Policy’. It will open up a dialog box. Enter the details as shown below.
- Provide a Name to your policy.
- Select the Namespace that you want this to be applied to.
- Under Scale target, provide the name of the Deployment to which you want to apply this policy. You can find the exact name of your Deployment under Workloads -> Deployments in the left-hand panel of the ICP admin console.
- Set a value for Minimum replications, which is the minimum number of copies of your Pod that you want running.
- Set a value for Maximum replications, which is the maximum number of copies of your Pod that you want to scale to when CPU utilization exceeds the threshold.
- Define the value for Target CPU, which is the % CPU utilization threshold for the Deployment above which the auto-scaling policy is triggered to spawn additional Pods, up to the defined maximum.
- When all the values have been entered, click Create. The policy is created and is associated with the Deployment.
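Under the covers, an ICP scaling policy corresponds to a Kubernetes HorizontalPodAutoscaler resource. A roughly equivalent manifest for the policy above is sketched here, using the replica and CPU values from the example later in this article (minimum 1, maximum 3, target CPU 10%); the policy and deployment names are hypothetical, so substitute your own:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ace-scaling-policy
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: acebar-ibm-ace    # hypothetical; use the name shown under Workloads -> Deployments
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 10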
- To verify that your auto-scaling policy works, process a large load through your integration flows so that it drives CPU utilization higher, for example as sketched below.
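One simple way to generate load is to loop HTTP requests against the flow's endpoint registered earlier on port 7800; in the sketch below the worker node address, the NodePort (30080) and the request payload are hypothetical, so substitute the values exposed by your Service and a message your flow expects:
# Hypothetical node address and NodePort for the Integration Server HTTP listener (7800)
while true; do curl -s http://<worker-node-IP>:30080/Transformation_Map -d '<test/>' > /dev/null; done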
- Under the Deployments section, navigate to your deployment and click the Events tab.
This tab shows when the replicas have scaled up and scaled down. In our example, as shown below, we ran a load test for our integration flow. The replicas scaled up from 1 to 3 when CPU utilization increased beyond the defined threshold of 10%, and scaled back down to the initial value of 1 after the load test was over.