Setting up a Blue-Green Deployment of IBM Cloud Pak for Data and watsonx Assistant on AWS

By Jens Müller posted Mon September 30, 2024 06:00 AM

  

Introduction

Cloud Pak for Data is IBM's data and AI platform and consists of a modular set of integrated software components. These components are responsible for data analysis, organization, and management. It is available as a managed service on IBM Cloud or for self-hosting (i.e., using a managed or self-hosted Red Hat OpenShift cluster). One of the integrated software components is IBM watsonx Assistant. watsonx Assistant is a market-leading, conversational artificial intelligence platform for building virtual assistants.

A deployment of Cloud Pak for Data and watsonx Assistant in a self-hosted environment necessitates recurring upgrades of the involved software components, including the OpenShift Container Platform, Cloud Pak for Data, and watsonx Assistant. One way to avoid downtime during these upgrades is an application release model called blue-green deployment, which keeps the service continuously available and provides a rollback capability.

In this article, I will describe how to set up a blue-green deployment of Cloud Pak for Data and watsonx Assistant on AWS using an application load balancer.

Blue-Green Deployments

A blue-green deployment consists of two identical or similar production environments (e.g., clusters), called blue and green, each running a different release of an application's software stack.

The blue environment runs the previous version of the software stack, whereas the green one runs the current version. When you publish a new release, traffic is gradually shifted from the blue environment to the green one. The application may require session affinity, meaning that requests from the same client are routed to the same target (e.g., for maintaining state information) until a timeout is reached. In this case, it may take a while until no more traffic is routed to the blue environment. Once all traffic is routed to the green environment, the blue environment is no longer needed. In the next release cycle, the green environment becomes the blue environment, and you set up a new green environment.

You may implement a blue-green deployment of Cloud Pak for Data on AWS that supports session affinity by using an application load balancer. The figure below shows the AWS resources required for this solution.

Blue-Green Deployment – Solution Overview

In the following, I will walk you through the various steps for implementing this solution. The code examples in this article are shell commands (Linux/macOS) using the AWS Command Line Interface and the JSON processor jq.

Setting up OpenShift Clusters on AWS

You can create OpenShift clusters using a command-line installation program named openshift-install, which supports various target environments. OpenShift clusters used for a blue-green deployment on AWS need to share the same virtual private cloud (VPC). You could let the installation program create a VPC and other network resources when creating the first cluster. However, I strongly discourage you from doing so: if several clusters share the same VPC, deleting the cluster that owns the VPC would fail. Thus, you should create the shared VPC in a separate step as described here.
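
If you prefer the AWS CLI for this step, the following is a minimal sketch of the first part of creating the shared VPC; the CIDR block is an example value, and the subnets, gateways, and route tables required by OpenShift still need to be created as described in the referenced documentation. Note that OpenShift requires the VPC attributes enableDnsSupport and enableDnsHostnames to be enabled.

$ export REGION=… # VPC region (e.g., 'us-east-1')

$ export VPC_ID=$(aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --output text \
    --query 'Vpc.VpcId' \
    --region ${REGION})

$ aws ec2 modify-vpc-attribute \
    --enable-dns-support '{"Value":true}' \
    --region ${REGION} \
    --vpc-id ${VPC_ID}

$ aws ec2 modify-vpc-attribute \
    --enable-dns-hostnames '{"Value":true}' \
    --region ${REGION} \
    --vpc-id ${VPC_ID}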

To create an OpenShift cluster on AWS using a pre-existing VPC, you must determine the identifiers of the subnets associated with the VPC:

$ export REGION=… # VPC region (e.g., 'us-east-1')
$ export VPC_ID=… # VPC ID

$ aws ec2 describe-subnets \
    --filter "Name=vpc-id,Values=${VPC_ID}" \
    --no-cli-pager \
    --output text \
    --query "Subnets[*].[SubnetId]" \
    --region ${REGION}

Before creating the clusters on AWS as described here, you must specify these subnet identifiers in the install-config.yaml. You can create this file by running openshift-install create install-config:

platform:
  aws:
    …
    subnets:
      - subnet-…
      - subnet-…
      - subnet-…
      …

Then, you can create the clusters using openshift-install create cluster. Afterwards, you must tag the VPC resource to state which clusters use it. Otherwise, setting up the application load balancer in a later step will fail. For this purpose, you must determine the cluster ID of each cluster. The installation program stores the cluster ID in a JSON file named metadata.json (key: infraID). This file resides in the assets directory that the installation program uses to store metadata. Finally, you can tag the VPC resource as follows:

$ export CLUSTER_ID_BL=… # extracted from metadata.json: infraID
$ export CLUSTER_ID_GN=… # extracted from metadata.json: infraID

$ aws ec2 create-tags \
    --region ${REGION} \
    --resources ${VPC_ID} \
    --tags Key=kubernetes.io/cluster/${CLUSTER_ID_BL},Value=shared

$ aws ec2 create-tags \
    --region ${REGION} \
    --resources ${VPC_ID} \
    --tags Key=kubernetes.io/cluster/${CLUSTER_ID_GN},Value=shared
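
Instead of copying the value manually, you can read the cluster ID from metadata.json with jq. The following sketch assumes that the assets directory of the blue cluster is stored in a (hypothetical) variable named ASSETS_DIR_BL:

$ export ASSETS_DIR_BL=… # assets directory passed to openshift-install via --dir

$ export CLUSTER_ID_BL=$(jq --raw-output '.infraID' ${ASSETS_DIR_BL}/metadata.json)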

After creating the clusters, install Cloud Pak for Data and watsonx Assistant. The installation of these components is beyond the scope of this article. Here you can find the Cloud Pak for Data documentation, which includes installation instructions.

Creating an Application Load Balancer

The central component of a blue-green deployment of Cloud Pak for Data on AWS is an application load balancer. In the following, the term load balancer refers to an application load balancer. This resource operates on the request level and allows routing HTTP(S) traffic to targets based on the content of the request. An example of targets would be EC2 instances of an OpenShift cluster. An application load balancer also supports session affinity if you enable the sticky session feature. This feature is required by Cloud Pak for Data (e.g., for authentication) and watsonx Assistant (e.g., for stateful sessions).

A load balancer requires at least one security group for controlling incoming and outgoing traffic. The EC2 instances created by the OpenShift installation program only accept arbitrary traffic from AWS resources that are associated with specific security groups. Thus, you must associate the application load balancer with one of these security groups as well. The most suitable candidate is the security group of a cluster's classic load balancer, which is one of the three load balancers created by the OpenShift installation program. This security group handles ingress cluster traffic and only allows HTTP, HTTPS, and a subset of ICMP traffic. You can get the security groups associated with the classic load balancers of both clusters as follows:

$ export CLUSTER_DOMAIN_BL=… # extracted from metadata.json: aws.clusterDomain

$ export CLUSTER_DOMAIN_HOSTED_ZONE_ID_BL=$(aws route53 list-hosted-zones \
    --output text \
    --query "HostedZones[?Name=='${CLUSTER_DOMAIN_BL}.'].Id")

$ export ELB_DNS_NAME_BL=$(aws route53 list-resource-record-sets \
    --hosted-zone-id ${CLUSTER_DOMAIN_HOSTED_ZONE_ID_BL} \
    | jq --raw-output " \
        .ResourceRecordSets[] | \
        select(.Name == \"\\\052.apps.${CLUSTER_DOMAIN_BL}.\") | \
        .AliasTarget.DNSName | \
        sub(\"\\\.$\"; \"\")") # strip trailing dot (not supported by --query)

$ export ELB_SECURITY_GROUP_ID_BL=$(aws elb describe-load-balancers \
    --output text \
    --region ${REGION} \
    --query \
      "LoadBalancerDescriptions[?DNSName=='${ELB_DNS_NAME_BL}'] \
      .SecurityGroups[0]")

$ export CLUSTER_DOMAIN_GN=… # extracted from metadata.json: aws.clusterDomain

…

Use this information to create the load balancer and determine its DNS name as well as its hosted zone ID, which you need in the next step:

$ export VPC_PUBLIC_SUBNETS=($(aws ec2 describe-subnets \
    --filter "Name=vpc-id,Values=${VPC_ID}" \
      "Name=map-public-ip-on-launch,Values=true" \
    --output text \
    --query "Subnets[*].SubnetId" \
    --region ${REGION})) # public subnets only (one per availability zone); assumes they auto-assign public IPs

$ export ALB_ARN=$(aws elbv2 create-load-balancer \
    --name cpd-blue-green \
    --output text \
    --query 'LoadBalancers[0].LoadBalancerArn' \
    --region ${REGION} \
    --security-groups \
      ${ELB_SECURITY_GROUP_ID_BL} ${ELB_SECURITY_GROUP_ID_GN} \
    --subnets ${VPC_PUBLIC_SUBNETS[@]}) # [@] expands array elements

$ export ALB_DNS_NAME=$(aws elbv2 describe-load-balancers \
    --load-balancer-arns ${ALB_ARN} \
    --output text \
    --region ${REGION} \
    --query 'LoadBalancers[0].DNSName')

$ export ALB_HOSTED_ZONE_ID=$(aws elbv2 describe-load-balancers \
    --load-balancer-arns ${ALB_ARN} \
    --output text \
    --region ${REGION} \
    --query 'LoadBalancers[0].CanonicalHostedZoneId')

You may want to access the application load balancer using a friendly DNS name instead of the generated name. In this case, you need to create an extra record in your DNS service (e.g., an alias record in Amazon Route 53). The following shell commands assume that the load balancer is accessible using a subdomain of a domain managed in Amazon Route 53:

$ export DOMAIN=… # e.g., 'example.com'
$ export DOMAIN_HOSTED_ZONE_ID=$(aws route53 list-hosted-zones \
    --output text \
    --query "HostedZones[?Name=='${DOMAIN}.'].Id")

$ export ALB_ALIAS_SUBDOMAIN=… # e.g., 'cpd-blue-green'
$ export ALB_ALIAS_DNS_NAME="${ALB_ALIAS_SUBDOMAIN}.${DOMAIN}"

$ export CHANGE_BATCH=$(jq \
    --arg ALB_ALIAS_DNS_NAME ${ALB_ALIAS_DNS_NAME} \
    --arg ALB_DNS_NAME ${ALB_DNS_NAME} \
    --arg ALB_HOSTED_ZONE_ID ${ALB_HOSTED_ZONE_ID} \
    --null-input '{
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "AliasTarget": {
                    "DNSName": ("dualstack." + $ALB_DNS_NAME),
                    "EvaluateTargetHealth": false,
                    "HostedZoneId": $ALB_HOSTED_ZONE_ID
                },
                "Name": $ALB_ALIAS_DNS_NAME,
                "Type": "A"
            }}]}')

$ aws route53 change-resource-record-sets \
    --change-batch "${CHANGE_BATCH}" \
    --hosted-zone-id ${DOMAIN_HOSTED_ZONE_ID} \
    --no-cli-pager

The application load balancer is the front door for all incoming traffic and requires a valid certificate for HTTPS traffic. You can use AWS Certificate Manager for requesting public SSL/TLS certificates. For example, you can request a wildcard certificate for protecting several sites in the same domain (e.g., *.example.com) as follows:

$ export CERTIFICATE_ARN=$(aws acm request-certificate \
    --domain-name "*.${DOMAIN}" \
    --output text \
    --query 'CertificateArn' \
    --region ${REGION} \
    --validation-method DNS)
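
Note that with DNS validation, AWS Certificate Manager only issues the certificate after you have created the requested CNAME record in your DNS service. A minimal sketch for retrieving this record, assuming a single domain name on the certificate:

$ aws acm describe-certificate \
    --certificate-arn ${CERTIFICATE_ARN} \
    --no-cli-pager \
    --output text \
    --query 'Certificate.DomainValidationOptions[0].ResourceRecord' \
    --region ${REGION}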

You can also use AWS Certificate Manager for importing certificates that you have obtained outside of AWS.
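
For example, a certificate obtained from another certificate authority could be imported as follows (the file names are placeholders):

$ export CERTIFICATE_ARN=$(aws acm import-certificate \
    --certificate fileb://certificate.pem \
    --certificate-chain fileb://certificate-chain.pem \
    --output text \
    --private-key fileb://private-key.pem \
    --query 'CertificateArn' \
    --region ${REGION})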

Then, create two target groups that route traffic to the EC2 instances of the respective cluster:

$ export TARGET_GROUP_ARN_BL=$(aws elbv2 create-target-group \
    --health-check-path '/diag' \
    --name cpd-blue \
    --output text \
    --port 443 \
    --protocol HTTPS \
    --query 'TargetGroups[0].TargetGroupArn' \
    --region ${REGION} \
    --vpc-id ${VPC_ID})

$ export TARGET_GROUP_ARN_GN=…

Here, the web server (NGINX) hosting the Cloud Pak for Data web client exposes the health check path /diag. The application load balancer only forwards a request to an EC2 instance if a previous health check has succeeded. A health check succeeds if an HTTP request using the GET method returns with the HTTP 200 OK success status response code.
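
To check that the endpoint behaves as expected before any targets are registered, you can send a GET request to the /diag path through an existing Cloud Pak for Data route of the blue cluster; the route host below is a placeholder:

$ export CPD_ROUTE_HOST_BL=… # e.g., 'cpd-<project>.apps.<blue cluster domain>'

$ curl \
    --insecure \
    --output /dev/null \
    --silent \
    --write-out '%{http_code}\n' \
    https://${CPD_ROUTE_HOST_BL}/diag # expected output: 200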

Finally, create an HTTPS listener for the application load balancer, which checks for HTTPS connection requests and routes traffic to the created target groups with session affinity enabled. The following shell commands create such a listener. Initially, the application load balancer routes all traffic to the blue target group and thus to the blue cluster. You can control how traffic is distributed by setting the Weight of each target group to a value between 0 and 999; a target group receives a share of the traffic proportional to its weight relative to the sum of all weights. The stickiness duration specifies the time frame in which requests from a client are guaranteed to be routed to the same target group. This guarantee remains in effect even if you reconfigure the listener to route all traffic to another target group within that time frame. The stickiness duration is a value between 1 and 604,800 seconds (= seven days).

$ export STICKINESS_DURATION=… # e.g., 86400 (= 24 hours)
$ export DEFAULT_ACTIONS=$(jq \
    --argjson STICKINESS_DURATION ${STICKINESS_DURATION} \
    --argjson TARGET_GROUP_ARN_BL \"${TARGET_GROUP_ARN_BL}\" \
    --argjson TARGET_GROUP_ARN_GN \"${TARGET_GROUP_ARN_GN}\" \
    --null-input '[{
        "ForwardConfig": {
            "TargetGroups": [
                { "TargetGroupArn": $TARGET_GROUP_ARN_BL, "Weight": 1 },
                { "TargetGroupArn": $TARGET_GROUP_ARN_GN, "Weight": 0 }
            ],
            "TargetGroupStickinessConfig": {
                "DurationSeconds": $STICKINESS_DURATION,
                "Enabled": true
            }
        },
        "Type": "forward"
    }]')

$ export ALB_LISTENER_ARN=$(aws elbv2 create-listener \
    --certificates CertificateArn=${CERTIFICATE_ARN} \
    --default-actions "${DEFAULT_ACTIONS}" \
    --load-balancer-arn ${ALB_ARN} \
    --output text \
    --port 443 \
    --protocol HTTPS \
    --query 'Listeners[0].ListenerArn' \
    --region ${REGION} \
    --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06)

Now that you have created all required AWS resources, you need to configure the OpenShift clusters, including Cloud Pak for Data and watsonx Assistant.

Configuring OpenShift Clusters

In the last section, we created the application load balancer, empty target groups, and an HTTPS listener. Before you configure the target groups, Cloud Pak for Data requires a configuration change. The web server used by Cloud Pak for Data is NGINX. By default, NGINX rejects HTTP(S) requests that contain invalid external routes, which prevents host header injection attacks. The only valid external route is the one created during the installation of Cloud Pak for Data:

cpd-${PROJECT_CPD_INST_OPERANDS}.apps.${CLUSTER_DOMAIN}

However, the application load balancer and the cluster have different domain names. The application load balancer forwards incoming HTTP(S) requests to one of the targets without modifying the host header; the host header remains set to the DNS name of the application load balancer. In addition, the application load balancer performs health checks of a target by sending HTTP GET requests whose host header is set to the IP address of the target followed by a colon and the health check port number. NGINX rejects both kinds of requests because it considers them invalid external routes. Thus, you must disable the host header injection check of Cloud Pak for Data as follows:

$ export PROJECT_CPD_INST_OPERANDS=… # control plane and services project

$ oc patch configmap product-configmap \
    --namespace ${PROJECT_CPD_INST_OPERANDS} \
    --patch '{"data":{"HOST_INJECTION_CHECK_ENABLED":"false"}}' \
    --type merge

$ oc rollout restart deployment/ibm-nginx \
    --namespace ${PROJECT_CPD_INST_OPERANDS}

There are alternatives to the host header injection check of Cloud Pak for Data. For example, you can use a listener rule or a web application firewall like AWS Web Application Firewall. You can use either of them to reject HTTP(S) requests with a host header that is not set to the DNS name of the application load balancer. Here, the AWS documentation describes how to enable AWS Web Application Firewall.
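
As a sketch of the listener rule variant (shown here for illustration and not used in the remainder of this article): change the default action of the HTTPS listener to a fixed 403 response and add a rule that only forwards requests whose host header matches the friendly DNS name of the application load balancer. With this approach, you would later shift traffic by modifying the rule (aws elbv2 modify-rule) instead of the listener's default action.

$ aws elbv2 modify-listener \
    --default-actions 'Type=fixed-response,FixedResponseConfig={StatusCode=403}' \
    --listener-arn ${ALB_LISTENER_ARN} \
    --no-cli-pager \
    --region ${REGION}

$ aws elbv2 create-rule \
    --actions "${DEFAULT_ACTIONS}" \
    --conditions Field=host-header,Values=${ALB_ALIAS_DNS_NAME} \
    --listener-arn ${ALB_LISTENER_ARN} \
    --no-cli-pager \
    --priority 1 \
    --region ${REGION}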

In one of the next steps, you will use the AWS Load Balancer Controller to automatically add the worker nodes of the clusters as registered targets to specific target groups. The following shell commands install the AWS Load Balancer Operator and the AWS Load Balancer Controller on each cluster. You can also perform the installation using the OpenShift console as described here. Note that one of the commands specifies the version and the API group of the Subscription resource when obtaining the name of the InstallPlan resource. This explicit specification avoids confusion with Subscription resources provided by Knative. Knative is installed as a prerequisite of watsonx Assistant.

$ export AWS_LOAD_BALANCER_OPERATOR_PROJECT=…

$ oc new-project ${AWS_LOAD_BALANCER_OPERATOR_PROJECT}

$ cat <<EOF | oc create --filename -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  generateName: aws-load-balancer-operator-
  namespace: ${AWS_LOAD_BALANCER_OPERATOR_PROJECT}
EOF

$ cat <<EOF | oc create --filename -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: ${AWS_LOAD_BALANCER_OPERATOR_PROJECT}
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

$ oc wait \
    --for jsonpath={.status.installplan.name} \
    --namespace ${AWS_LOAD_BALANCER_OPERATOR_PROJECT} \
    subscription.v1alpha1.operators.coreos.com/aws-load-balancer-operator

$ export INSTALL_PLAN=$(oc get subscription.v1alpha1.operators.coreos.com \
  aws-load-balancer-operator \
  --namespace ${AWS_LOAD_BALANCER_OPERATOR_PROJECT} \
  --template='{{.status.installplan.name}}{{"\n"}}')

$ oc wait \
    --for jsonpath={.status.phase}=Complete \
    --namespace ${AWS_LOAD_BALANCER_OPERATOR_PROJECT} \
    installplan/${INSTALL_PLAN}

$ export NUM_WORKER_NODES=$(oc get nodes \
    --output json \
    --selector 'node-role.kubernetes.io/worker' \
    | jq '.items | length')

$ cat <<EOF | oc create --filename -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  config:
    replicas: ${NUM_WORKER_NODES}
  subnetTagging: Auto
EOF

$ oc wait \
    --for condition=Available=true \
    --namespace ${AWS_LOAD_BALANCER_OPERATOR_PROJECT} \
    deployment/aws-load-balancer-operator-controller-manager

The AWS Load Balancer Operator provides a custom resource definition named TargetGroupBinding. A custom resource of this type references both a Service resource and a target group on AWS. When you create such a resource, the AWS Load Balancer Controller adds the worker nodes of the cluster as registered targets to the target group. By default, AWS creates target groups as instance-based target groups, which allow adding EC2 instances based on their IDs. Instance-based target groups require that the Service resource referenced by a TargetGroupBinding resource be of type NodePort.
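
You can confirm the target type of a target group as follows (the command should print instance):

$ aws elbv2 describe-target-groups \
    --output text \
    --query 'TargetGroups[0].TargetType' \
    --region ${REGION} \
    --target-group-arns ${TARGET_GROUP_ARN_BL}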

The default ingress controller used by OpenShift is HAProxy. If a matching Route resource exists, HAProxy forwards ingress cluster traffic to the associated service. For incoming traffic destined for the Cloud Pak for Data control plane or one of its services, HAProxy forwards the received data to NGINX, which then forwards it to its destination. However, it is not possible to use an application load balancer in combination with HAProxy. Thus, you must configure the application load balancer to route traffic to NGINX directly. As the NGINX Service resource is of type ClusterIP, an extra NodePort service is required:

$ cat <<EOF | oc create --filename -
apiVersion: v1
kind: Service
metadata:
  name: ibm-nginx-svc-nodeport
  namespace: ${PROJECT_CPD_INST_OPERANDS}
spec:
  ports:
    - name: ibm-nginx-https-port
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app: 0030-gateway
    app.kubernetes.io/component: ibm-nginx
    app.kubernetes.io/instance: 0030-gateway
    app.kubernetes.io/managed-by: 0030-gateway
    app.kubernetes.io/name: 0030-gateway
    component: ibm-nginx
    release: 0030-gateway
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: NodePort
EOF

Finally, create a TargetGroupBinding resource referencing the NodePort service and the target group:

$ cat <<EOF | oc create --filename -
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: cpd-blue
  namespace: ${PROJECT_CPD_INST_OPERANDS}
spec:
  serviceRef:
    name: ibm-nginx-svc-nodeport
    port: 443
  targetGroupARN: ${TARGET_GROUP_ARN_BL}
EOF

Afterwards, the AWS Load Balancer Controller adds the worker nodes of the cluster as registered targets to the target group. The registered targets use the instance identifiers of the worker nodes and the node port.
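
You can verify the registration and the health status of the targets as follows:

$ aws elbv2 describe-target-health \
    --no-cli-pager \
    --output text \
    --query 'TargetHealthDescriptions[*].[Target.Id,Target.Port,TargetHealth.State]' \
    --region ${REGION} \
    --target-group-arn ${TARGET_GROUP_ARN_BL}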

Cloud Pak for Data Authentication

Regarding Cloud Pak for Data authentication, users must be able to log in to both clusters using the same credentials. IBM recommends using an enterprise-grade solution for this purpose, such as SAML SSO or an LDAP provider for password management.

For testing the application load balancer, execute the following curl command to obtain a bearer token from the Cloud Pak for Data REST API. Note that the URL contains the DNS name of the application load balancer. The HTTP request should return with an HTTP 200 OK success status response code:

$ export CPD_USERNAME=… # e.g., cpadmin
$ export CPD_PASSWORD=…

$ export DATA=$(jq \
    --arg CPD_PASSWORD "${CPD_PASSWORD}" \
    --arg CPD_USERNAME "${CPD_USERNAME}" \
    --null-input '{
        "password": $CPD_PASSWORD,
        "username": $CPD_USERNAME
    }')

$ curl \
    --data "${DATA}" \
    --header 'Content-Type: application/json' \
    https://${ALB_ALIAS_DNS_NAME}/icp4d-api/v1/authorize

When using the watsonx Assistant REST API, you need to pass the bearer token with each HTTP request. For performance reasons, you should not create a new bearer token for each request; only obtain a new token when a request fails with an HTTP 401 Unauthorized error (e.g., because the token has expired).
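
For example, you can store the bearer token in an environment variable and reuse it for subsequent requests. The following sketch assumes that the authorize response returns the token in a field named token:

$ export CPD_TOKEN=$(curl \
    --data "${DATA}" \
    --header 'Content-Type: application/json' \
    --silent \
    https://${ALB_ALIAS_DNS_NAME}/icp4d-api/v1/authorize \
    | jq --raw-output '.token')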

Cluster Switching Process

Before shifting traffic from the blue environment to the green environment, the green environment needs to be set up. This process involves installing the desired software stack and ensuring the availability of the same data on both clusters.

One way to implement this process is by backing up and restoring Cloud Pak for Data as described here. You can create an online backup of Cloud Pak for Data with IBM Storage Fusion or NetApp Astra Control Center. In both cases, the green cluster must be on the same OpenShift version as the blue cluster. Thus, if you want to update or upgrade the OpenShift version on the green cluster, you may only perform the update or upgrade after restoring Cloud Pak for Data. Afterwards, you can upgrade the Cloud Pak for Data installation. To preserve data consistency and prevent data loss, you must prohibit the modification of user data before triggering the backup process. You may only change user data once the application load balancer shifts traffic to the green environment. For example, with watsonx Assistant, you need to avoid creating or changing service instances.

Depending on the Cloud Pak for Data service, an alternative may be to recreate the service instances on the green cluster using a CI/CD process, with the required data residing in an external data source. However, you cannot specify an ID when creating a service instance, nor can you specify identifiers when creating watsonx Assistant resources within this instance; these identifiers are generated. This is an issue if you recreate on the green cluster a watsonx Assistant service instance that already exists on the blue cluster: the service instance and the watsonx Assistant resources it contains end up with different identifiers. There is a solution, though. When creating a watsonx Assistant service instance as described here, you can specify a unique service instance name, and you can use this name instead of the service instance ID in the URL of a watsonx Assistant REST API endpoint. In addition, watsonx Assistant 4.8.4 and later support the deterministic generation of resource identifiers based on the unique resource name, as documented here. You can enable this feature as follows:

$ export UUID=… # arbitrary UUID (https://www.uuidgenerator.net)
$ export PATCH=$(jq \
    --arg UUID ${UUID} \
    --null-input '{
        "configOverrides":{
            "store":{
                "extra_vars":{
                    "ACTIVE_ACTIVE_ENABLED":"true",
                    "ACTIVE_ACTIVE_SEED":$UUID}}}}')

$ oc patch wa wa \
    --namespace ${PROJECT_CPD_INST_OPERANDS} \
    --patch "${PATCH}" \
    --type merge

Now, you can access watsonx Assistant service instances and resources on both clusters using the same service instance names and resource identifiers.

Once you have installed the desired software stack on the green cluster and the same data is available on both clusters, you can update the HTTPS listener of the application load balancer. The following configuration change shifts all traffic from new clients to the green environment:

$ export DEFAULT_ACTIONS=$(jq \
    --argjson STICKINESS_DURATION ${STICKINESS_DURATION} \
    --argjson TARGET_GROUP_ARN_BL \"${TARGET_GROUP_ARN_BL}\" \
    --argjson TARGET_GROUP_ARN_GN \"${TARGET_GROUP_ARN_GN}\" \
    --null-input '[{
        "ForwardConfig": {
            "TargetGroups": [
                { "TargetGroupArn": $TARGET_GROUP_ARN_BL, "Weight": 0 },
                { "TargetGroupArn": $TARGET_GROUP_ARN_GN, "Weight": 1 }
            ],
            "TargetGroupStickinessConfig": {
                "DurationSeconds": $STICKINESS_DURATION,
                "Enabled": true
            }
        },
        "Type": "forward"
    }]')

$ aws elbv2 modify-listener \
    --default-actions "${DEFAULT_ACTIONS}" \
    --listener-arn ${ALB_LISTENER_ARN} \
    --no-cli-pager \
    --region ${REGION}

Due to the enabled sticky session feature, traffic from existing clients is still routed to the blue environment. Thus, the blue environment is still needed until the stickiness duration has elapsed.
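
One way to determine whether the blue target group still receives traffic is to inspect the RequestCountPerTarget metric in Amazon CloudWatch. The following sketch derives the required dimension values from the ARNs used above; the start and end times are placeholders:

$ export METRICS_START_TIME=… # e.g., '2024-09-29T12:00:00Z'
$ export METRICS_END_TIME=…   # e.g., '2024-09-30T12:00:00Z'

$ export TARGET_GROUP_DIMENSION_BL=${TARGET_GROUP_ARN_BL##*:} # targetgroup/cpd-blue/…
$ export ALB_DIMENSION=${ALB_ARN##*loadbalancer/}             # app/cpd-blue-green/…

$ aws cloudwatch get-metric-statistics \
    --dimensions \
      Name=TargetGroup,Value=${TARGET_GROUP_DIMENSION_BL} \
      Name=LoadBalancer,Value=${ALB_DIMENSION} \
    --end-time ${METRICS_END_TIME} \
    --metric-name RequestCountPerTarget \
    --namespace AWS/ApplicationELB \
    --no-cli-pager \
    --period 3600 \
    --region ${REGION} \
    --start-time ${METRICS_START_TIME} \
    --statistics Sum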

Blue-Green Deployment – Request Routing After Updating the HTTPS Listener

Accessing watsonx Assistant

From now on, you can access the watsonx Assistant REST APIs using the custom domain name of the application load balancer as follows:

https://${ALB_ALIAS_DNS_NAME}/assistant/${DEPLOYMENT_ID}/instances/${SERVICE_INSTANCE_NAME}/api/v2/assistants/${ENVIRONMENT_ID}/…

The first response from the application load balancer contains two cookies named AWSALBTG and AWSALBTGCORS. You must include them in subsequent requests to ensure session affinity (see the AWS documentation).
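
For example, the following curl sketch creates a session via the watsonx Assistant v2 REST API while storing and resending these cookies with a cookie jar. It reuses the bearer token obtained earlier (CPD_TOKEN); the version date is an example value, and the remaining variables follow the URL pattern above:

$ export DEPLOYMENT_ID=…
$ export SERVICE_INSTANCE_NAME=…
$ export ENVIRONMENT_ID=…

$ curl \
    --cookie cookies.txt \
    --cookie-jar cookies.txt \
    --header "Authorization: Bearer ${CPD_TOKEN}" \
    --request POST \
    "https://${ALB_ALIAS_DNS_NAME}/assistant/${DEPLOYMENT_ID}/instances/${SERVICE_INSTANCE_NAME}/api/v2/assistants/${ENVIRONMENT_ID}/sessions?version=2021-11-27"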
