Using the App Connect Public API with AWS CodeDeploy

By Sher Chowdhury posted Wed October 18, 2023 05:07 AM

  

The new App Connect Public API has opened up all sorts of possibilities for performing tasks in IBM App Connect Enterprise as a Service by using an API rather than App Connect’s UI. This is especially useful if you want to use a CI/CD tool (e.g. Jenkins, AWS CodeDeploy, GitHub Actions, Travis CI, etc.) to issue API calls to your App Connect instance. In this post, I’m going to walk through a common CI/CD pipeline scenario. As a starting point for our scenario, let’s say there’s a GitHub repo that contains your deployable resources, i.e. the files that will make up your BAR file. Now let’s say the main branch of this repo gets updated, e.g. by a user approving and merging a PR. When that happens, the following chain of events occurs:

  1. When the PR is merged, a GitHub webhook is triggered that causes the CI/CD tool to perform a build. In this example, we’ll be using AWS CodeDeploy as our CI/CD tool, which will be detailed below.

  2. The CI/CD tool pulls down the latest code in the selected branch and constructs a BAR file.

  3. The CI/CD tool uploads the BAR file to a storage location. In this example, we’ll be using AWS S3, which will be detailed below.

  4. The CI/CD tool makes three API calls to the App Connect public API to deploy the BAR file for use with an integration runtime. Those calls are to:

    1. Request a JWT by using the /api/v1/tokens POST endpoint. This API call returns an access token. This access token must be provided when making API calls to all the other endpoints.

    2. Upload a BAR file to your App Connect instance by using the /api/v1/bar-files/{bar-file} PUT endpoint.

    3. Deploy the BAR file to an integration runtime by using the /api/v1/integration-runtimes/{integration-runtime} PUT endpoint.

Your pipeline build could be doing a variety of other tasks but these are omitted for now to keep things simple.

In this example, I’ll be using AWS CodeDeploy as my CI/CD tool. There are a number of different CodeDeploy setup options available, but I’ve opted for the EC2 option because most users are familiar with how EC2s work. Therefore, in this walkthrough I’ll outline the steps needed to:

  1. Set up an EC2 instance to act as my CodeDeploy agent.

  2. Instruct this agent to download my BAR file from an AWS S3 bucket.

  3. Use the App Connect public API to upload it to my App Connect instance.

Here are the steps to implement this CodeDeploy setup.

Step 1 - Create the IAM service role for CodeDeploy

Create the CodeDeploy service IAM role, as described in the AWS documentation.

Attach the AWSCodeDeployRole policy to this new role.

This IAM service role will be used later, when we come to create the deployment group in step 7.
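
If you prefer the AWS CLI, a role along these lines should be equivalent to the console steps above (the role name is a hypothetical placeholder):

# Trust policy that lets the CodeDeploy service assume the role.
aws iam create-role \
  --role-name CodeDeployServiceRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the AWS managed AWSCodeDeployRole policy.
aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole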

Step 2 - Create an IAM instance profile role for the EC2 CodeDeploy agent

Create an IAM instance profile role and attach the following policies to it:

  • AmazonEC2RoleforAWSCodeDeployLimited

  • AmazonS3FullAccess

  • AWSCodeDeployRole

I assigned some fairly broad permission policies here, but in practice you should attach only the minimum permissions required. The AWS documentation covers in more detail how to tailor these permissions to meet your own needs.
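
For reference, here's a hedged AWS CLI sketch of the same setup (the role and instance profile names are placeholders, and it's worth double-checking the exact policy ARNs in the IAM console):

# Trust policy that lets EC2 assume the role.
aws iam create-role \
  --role-name CodeDeployAgentEC2Role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the three managed policies listed above.
aws iam attach-role-policy --role-name CodeDeployAgentEC2Role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2RoleforAWSCodeDeployLimited
aws iam attach-role-policy --role-name CodeDeployAgentEC2Role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name CodeDeployAgentEC2Role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

# Wrap the role in an instance profile so it can be attached to the EC2 instance.
aws iam create-instance-profile --instance-profile-name CodeDeployAgentEC2Profile
aws iam add-role-to-instance-profile \
  --instance-profile-name CodeDeployAgentEC2Profile \
  --role-name CodeDeployAgentEC2Role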

Step 3 - Create an S3 bucket

Create an S3 bucket to store the BAR file. This BAR file needs to be packaged up into a CodeDeploy artifact before it gets uploaded to the S3 bucket. I'll cover more about this in step 8. 
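
For example, with the AWS CLI (the bucket name is a placeholder and must be globally unique):

aws s3 mb s3://my-appconnect-codedeploy-artifacts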

Step 4 - Set up an EC2-based CodeDeploy agent

Create an EC2 instance with the following characteristics:

  • Use the official Amazon Linux Amazon Machine Image (AMI). This AMI already comes with a couple of required tools preinstalled, namely cURL and jq.

  • Ensure that networking is set up so that the EC2 instance has internet access.

  • Set the IAM instance profile to the instance profile that you created in step 2. 

  • Set a key pair so that you can use SSH to access the EC2 instance after it has been provisioned.

  • Set some tags to make this EC2 instance uniquely identifiable.

Once the EC2 instance is up and running, use SSH to access the instance and install the CodeDeploy agent on it.  
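
The AWS documentation covers the agent install in detail; on Amazon Linux it looks roughly like the following (the region in the CodeDeploy resource bucket URL is an assumption, so substitute your own):

sudo yum update -y
sudo yum install -y ruby wget
cd /home/ec2-user
# Download the agent installer from the region-specific CodeDeploy bucket.
wget https://aws-codedeploy-eu-west-2.s3.eu-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
# Confirm that the agent is running.
sudo systemctl status codedeploy-agent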

Step 5 - Set App Connect public API credentials

While still connected to the CodeDeploy Agent EC2 instance by SSH, create the following file:

$ cat /root/creds.sh
export hostname=https://api.xxx.appconnect.automation.ibm.com
export clientId='xxx'
export clientSecret='xxx'
export instanceId='xxx'
export apiKey='xxx'
export barFileName='CustomerDatabaseV1.bar'

This file contains the credentials and environment information needed to authenticate and interact with the App Connect public API. If you’re unsure about where to find your credentials, see the Introducing the App Connect public API blog post. Make sure that you keep this information safe and concealed.
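
At a minimum, it's worth making sure that only root can read this file, for example:

chmod 600 /root/creds.sh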

Step 6 - Create a CodeDeploy application

Create a CodeDeploy application. Make sure to choose EC2/On-premises from the "Compute platform" dropdown list. 
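
With the AWS CLI, the equivalent is a single call, where "Server" is the value that corresponds to EC2/On-premises (the application name is a placeholder):

aws deploy create-application \
  --application-name appconnect-bar-deploy \
  --compute-platform Server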

Step 7 - Create a deployment group

Under the new application, create a deployment group with the following settings (an equivalent AWS CLI command is shown after the list):

  1. For the "Service role", enter the name of the service role that you created in step 1.

  2. For "Deployment Type" choose "In place", because this only relates to the CodeDeploy agent itself, and we only have a single EC2 instance. 

  3. For "Environment configuration", tick the "Amazon EC2 instances" check box, because, to keep things simple, we’re not using Auto Scaling groups in this example. Set tag names so that the CodeDeploy agent EC2 instance is the only match.

  4. For "Deployment settings", select "CodeDeployDefault.AllAtOnce" from the dropdown list, because there is only one CodeDeploy agent. Also leave the "Enable load balancing" checkbox unchecked, because load balancing here relates only to the single EC2 instance that runs the CodeDeploy agent, not to the ACE integration runtime itself.
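
For reference, here's a hedged sketch of that AWS CLI call (the names are placeholders, the account ID is elided, and the tag filter must match the tags you set on the EC2 instance in step 4):

aws deploy create-deployment-group \
  --application-name appconnect-bar-deploy \
  --deployment-group-name appconnect-bar-deploy-group \
  --service-role-arn arn:aws:iam::<account-id>:role/CodeDeployServiceRole \
  --deployment-config-name CodeDeployDefault.AllAtOnce \
  --ec2-tag-filters Key=Name,Value=codedeploy-agent,Type=KEY_AND_VALUE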



Step 8 - Create a CodeDeploy artifact and upload it to S3

Create a .zip file that contains your BAR file and upload it to the AWS S3 bucket that you created in step 3. In my example, I created a .zip file called codeDeployHelloWorld.zip. This .zip file contains the BAR file that I want to deploy, called CustomerDatabaseV1.bar, along with the following content:

$ tree codeDeployHelloWorld
codeDeployHelloWorld
├── CustomerDatabaseV1.bar
├── appspec.yml
├── IntegrationRuntime.json
└── scripts
    └── deployBar.sh

1 directory, 4 files
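
One way to package and upload this artifact from the command line looks like this (the bucket name from step 3 is a placeholder):

cd codeDeployHelloWorld
# appspec.yml must sit at the root of the archive.
zip -r ../codeDeployHelloWorld.zip .
cd ..
aws s3 cp codeDeployHelloWorld.zip s3://my-appconnect-codedeploy-artifacts/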

Here we have a CodeDeploy-specific config file called appspec.yml. This YAML file's content is:

version: 0.0
os: linux
files:
  - source: /CustomerDatabaseV1.bar
    destination: /root/
  - source: /IntegrationRuntime.json
    destination: /root/
hooks:
  AfterInstall:
    - location: ./scripts/deployBar.sh
      timeout: 60
      runas: root

This file instructs the CodeDeploy agent to copy CustomerDatabaseV1.bar and IntegrationRuntime.json to the CodeDeploy agent's /root directory. After that's done, it instructs the CodeDeploy agent to run the deployBar.sh script.  Here's what this script looks like:

#!/usr/bin/env bash

set -ex
echo '' > /root/deployBar.sh.log
echo $(date -u) "##### INFO: Start running deployBar.sh bash script" >> /root/deployBar.sh.log
echo $(date -u) "##### INFO: Current directory is: $(pwd)" >> /root/deployBar.sh.log
echo $(date -u) "##### INFO: Current directory contains the following content:" >> /root/deployBar.sh.log
ls -lart >> /root/deployBar.sh.log

source /root/creds.sh

echo $(date -u) "##### INFO: Have set the following variables:" >> /root/deployBar.sh.log
echo "hostname:     ${hostname}" >> /root/deployBar.sh.log
echo "clientId:     ${clientId}" >> /root/deployBar.sh.log
echo "clientSecret: ${clientSecret}" >> /root/deployBar.sh.log
echo "instanceId:   ${instanceId}" >> /root/deployBar.sh.log
echo "apiKey:       ${apiKey}" >> /root/deployBar.sh.log
echo "barFileName:  ${barFileName}" >> /root/deployBar.sh.log


##
## Create token
##
curl -s --request POST \
  --url "${hostname}/api/v1/tokens" \
  --header 'Content-Type: application/json' \
  --header "X-IBM-Client-Id: ${clientId}" \
  --header "X-IBM-Client-Secret: ${clientSecret}" \
  --header "x-ibm-instance-id: ${instanceId}" \
  --data "{
  \"apiKey\": \"${apiKey}\"
}" | jq -r '.access_token' > /tmp/appconnect_token.txt

export appConnToken=$(cat /tmp/appconnect_token.txt)

echo $(date -u) "##### INFO: Generated App Connect token" >> /root/deployBar.sh.log


##
## Upload Bar file
##
barFileNameWithoutExtension=$(basename "${barFileName}" .bar)
export bar_url=$(curl -s --request PUT \
  --url "${hostname}/api/v1/bar-files/${barFileNameWithoutExtension}-$(date +'%Y%m%d_%H%M%S')" \
  --header "Authorization: Bearer ${appConnToken}" \
  --header 'Content-Type: application/octet-stream' \
  --header "X-IBM-Client-Id: ${clientId}" \
  --data-binary "@/root/${barFileName}" | jq -r '.url')

echo $(date -u) "##### INFO: Uploaded bar file, its URL is ${bar_url}" >> /root/deployBar.sh.log


##
## Deploy Bar file to an Integration Runtime
##
irCrFilePath="/root/IntegrationRuntime.json"

export irCr=$(jq --arg barUrl "$bar_url" '.spec.barURL[0] = $barUrl' "$irCrFilePath")
export irName=$(jq -r '.metadata.name' "$irCrFilePath")


curl --request PUT \
  --url "${hostname}/api/v1/integration-runtimes/${irName}" \
  --header "Authorization: Bearer ${appConnToken}" \
  --header 'Content-Type: application/json' \
  --header "X-IBM-Client-Id: ${clientId}" \
  --data "$irCr"


echo $(date -u) "##### INFO: Bar file deployed to integration runtime" >> /root/deployBar.sh.log


exit 0

This shell script is where the actual App Connect public API calls are made. It's an example that illustrates the three main API calls required to deploy a BAR file:

  1. Request a JWT - The script sources the /root/creds.sh script to load in the information that’s required when making a token request. The returned token has a 12-hour lifespan, and is supplied in the next two API calls.

  2. Upload a BAR file - Notice that I attached a timestamp to the BAR file's name. That's to guarantee that we get a unique barURL value returned to us.

  3. Deploy the BAR file to an integration runtime - This is where this shell script uses the IntegrationRuntime.json file. This JSON file is just the integration runtime specification, which I'm using as a starting point to create and update my integration runtime. 

You can make your version of the IntegrationRuntime.json file by calling the /api/v1/integration-runtimes/{integration-runtime} GET endpoint. For reference, here’s the content of my IntegrationRuntime.json file:

{
    "metadata": {
      "name": "ir-01-codedeploy",
      "annotations": {},
      "labels": {
        "appconnect.ibm.com/designerapiflow": "false",
        "appconnect.ibm.com/designereventflow": "false",
        "appconnect.ibm.com/toolkitflow": "true",
        "component-name": "appconnect"
      }
    },
    "spec": {
      "barURL": [
        ""
      ],
      "configurations": [],
      "forceFlowBasicAuth": {
        "enabled": true
      },
      "replicas": 1,
      "template": {
        "spec": {
          "containers": [
            {
              "name": "runtime",
              "resources": {
                "limits": {
                  "cpu": "500m",
                  "memory": "500Mi"
                }
              }
            },
            {
              "name": "designerflows",
              "resources": {
                "limits": {
                  "cpu": "500m",
                  "memory": "500Mi"
                }
              }
            },
            {
              "name": "designereventflows",
              "resources": {
                "limits": {
                  "cpu": "500m",
                  "memory": "500Mi"
                }
              }
            },
            {
              "name": "proxy",
              "resources": {
                "limits": {
                  "cpu": "500m",
                  "memory": "500Mi"
                }
              }
            }
          ]
        }
      },
      "version": "12.0"
    }
  }
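
As mentioned above, you can fetch a starting point for this file with the GET endpoint. Here's a hedged sketch that reuses the token and headers from deployBar.sh and the runtime name used in this walkthrough (you may need to trim any read-only or status fields from the response before reusing it):

curl -s --request GET \
  --url "${hostname}/api/v1/integration-runtimes/ir-01-codedeploy" \
  --header "Authorization: Bearer ${appConnToken}" \
  --header "X-IBM-Client-Id: ${clientId}" \
  --header 'Accept: application/json' | jq '.' > IntegrationRuntime.json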

Note: The App Connect public API has rate limits in place, so it’s best to avoid making any unnecessary API calls. This sample shell script is quite basic, and it requests a JWT every time it’s executed. However, it would be better to add some logic in the shell script so that it checks if there’s already a JWT that is less than 12 hours old and use that instead. I’ve omitted doing this to keep things simple.
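
If you do want to add that, a minimal sketch might look like the following, assuming the modification time of the token file is a good enough proxy for the token's age:

tokenFile=/tmp/appconnect_token.txt
# Reuse the cached token if the file is non-empty and less than ~11 hours old (39600 seconds).
if [[ -s "${tokenFile}" && $(( $(date +%s) - $(stat -c %Y "${tokenFile}") )) -lt 39600 ]]; then
  appConnToken=$(cat "${tokenFile}")
else
  # Otherwise request a new token via /api/v1/tokens, exactly as in deployBar.sh,
  # and write it to ${tokenFile} before reading it back.
  :
fi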

A useful feature of this shell script is that it is fairly generic and not specific to AWS CodeDeploy, which means that it is quite easy to adapt this script for your CI/CD tool of choice. In any case, this shell script shouldn’t be treated as one-size-fits-all, and it’s likely you’ll need to modify this script to meet your needs.

Step 9 - Create a deployment

Under the deployment group that you created in step 7, create a CodeDeploy "Deployment". This is where you specify the S3 address for your CodeDeploy artifact. As soon as the deployment is created, it triggers the actual deployment, which in turn runs the deployBar.sh script. 
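
The equivalent AWS CLI call looks roughly like this (using the placeholder names from the earlier steps):

aws deploy create-deployment \
  --application-name appconnect-bar-deploy \
  --deployment-group-name appconnect-bar-deploy-group \
  --s3-location bucket=my-appconnect-codedeploy-artifacts,key=codeDeployHelloWorld.zip,bundleType=zip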

Within a couple of minutes, the AWS console should show that the deployment has been successful.

[Screenshot: a list of green ticks showing a successful CodeDeploy deployment]

With this pipeline in place, you can now automate the deployment of your BAR files by simply making changes to your GitHub repo.

Troubleshooting

If you’re not getting the rows of "Succeeded" statuses shown in the screenshot above, there are a few things that you can do to investigate why your attempts are failing.

  1. Use SSH to access your CodeDeploy agent EC2 instance and look at your CodeDeploy agent logs, which you should find at /var/log/aws/codedeploy-agent/codedeploy-agent.log. This log, amongst other things, can help highlight permission problems, e.g. it should show permission-denied errors if the EC2 instance doesn’t have permission to download files from the S3 bucket.

  2. The deployBar.sh script records its own logs at /root/deployBar.sh.log. This log file contains information about the credentials it used to make API calls. Using this info, try an API testing tool to replicate the same API calls that the deployBar.sh script should be making. This will help to identify whether any of the credentials are incorrect. Example commands for inspecting both logs are shown after this list.
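
For example, while connected to the instance over SSH:

# Check that the CodeDeploy agent service is running.
sudo systemctl status codedeploy-agent
# Inspect the most recent entries in both logs.
sudo tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log
sudo tail -n 100 /root/deployBar.sh.log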

Closing comments

I’ve mainly focused this walkthrough on the actual CI/CD tool and the App Connect linkup. I’ve therefore had to skip some AWS best practices to keep things short and to the point. For example, a better way to create an EC2 instance is by using an Auto Scaling group along with an Elastic Load Balancer. Also, secrets should be stored in a secrets vault rather than directly on an EC2 instance. If you want to learn more about these best practices, check out the AWS Well-Architected Framework.

References:
AWS (retrieved Oct 18, 2023, from https://aws.amazon.com/). Screenshots by Sher Chowdhury
