A common scenario for many IBM API Connect customers is that they have multiple gateway services configured, use the API Connect Developer Portal to socialise their published API endpoints, and want to make changes to the API gateways without any disruption or outage for the existing subscribed apps/users of the published APIs. Some examples of changes that can disrupt an API's availability are:
- Moving an API (product) from one gateway service to another.
- Performing maintenance on the DataPower Gateway.
- Migrating from one DataPower Gateway form factor to another.
- Deploying the gateway service in a different cloud.
- Updating the certificate for the gateway service's API invocation endpoint.
When multiple gateway services are configured for a catalog and an API is published to it, that API has one endpoint per gateway service, and is socialised in the Developer Portal with those endpoints, as shown below:
In a scenario like the one shown above, the subscribed applications invoke the APIs using one of the socialised endpoints. If you need to perform maintenance on the gateways behind one of the gateway services, it disrupts the apps using that gateway service's endpoint.
In this article I will show one way to achieve zero downtime for the apps by using an external load balancer and the vanity endpoint feature. In summary, the solution demonstrated here comprises the following elements, as shown in the schematic below:
- Configure catalog with two Gateway Services.
- Configure a load balancer (external to API Connect) with those two gateway services as backends.
- Use API Connect vanity endpoint feature for the published API endpoints.
Now let's go through the steps to achieve this.
Steps
Catalog with two Gateway Services
Configure the catalog (demo-catalog in this example) with two gateway services, gateway-service-1 and gateway-service-2. Note that for the scenario described in this article, you must ensure that both gateway services have the same customisations, such as policies, user-defined extensions, and gateway extensions.
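As a quick sanity check, you can list the gateway services configured for the catalog with the API Connect toolkit CLI. This is a sketch only: sanjay is the provider org used in this post, <mgmt-server> is a placeholder for your management endpoint, and the exact flags can vary by toolkit version (see apic configured-gateway-services:list --help):
$ apic login --server <mgmt-server> --realm provider/default-idp-2 --username <user> --password <password>
$ apic configured-gateway-services:list --server <mgmt-server> --org sanjay --catalog demo-catalog --scope catalog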
Load Balancer
You can use other types of load balancers depending on your organisation's requirements and performance/capacity needs, but in this example I will simply use HAProxy on a Kubernetes cluster as the load balancer. I deploy HAProxy with its backends configured to be the API invocation endpoints of the two gateway services, using round-robin balancing.
Here is the sample HAProxy configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-config
  namespace: <namespace>
data:
  haproxy.cfg: |
    global
      log stdout format raw local0 info
    defaults
      mode http
      timeout client 10s
      timeout connect 5s
      timeout server 10s
      timeout http-request 10s
      option httplog
    frontend myfrontend
      log global
      option httplog
      bind :80
      default_backend gateways
    backend gateways
      mode http
      balance roundrobin
      http-send-name-header Host
      # replace path if desired
      http-request replace-path (.*) \1
      # gateway 1
      server rgw.sanjay-3-master.fyre.ibm.com rgw.sanjay-3-master.fyre.ibm.com:443 ssl verify none
      # gateway 2
      server rgw2.sanjay-3-master.fyre.ibm.com rgw2.sanjay-3-master.fyre.ibm.com:443 ssl verify none
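Before applying it, you can validate the HAProxy configuration syntax locally. A minimal sketch, assuming Docker is available and the haproxy.cfg content above is saved to a local file:
$ docker run --rm -v "$(pwd)/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:2.3 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg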
Note that in this sample, for simplicity, SSL certificates and verification are not configured; administrators should ensure they configure SSL such that the end-to-end solution meets both their organisation's business and security needs.
Next, the following manifests create a Deployment, Service, and Ingress for HAProxy. Apply the configuration using kubectl:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: haproxy
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app.kubernetes.io/name: haproxy
    spec:
      containers:
        - name: haproxy
          args:
            - -f
            - /usr/local/etc/haproxy/haproxy.cfg
          image: haproxy:2.3
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - mountPath: /usr/local/etc/haproxy
              name: haproxy-config
              readOnly: true
      volumes:
        - name: haproxy-config
          configMap:
            defaultMode: 420
            name: haproxy-config
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-svc
  namespace: <namespace>
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: haproxy
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy
  namespace: <namespace>
spec:
  ingressClassName: nginx
  rules:
    - host: haproxy-lb.sanjay-3-master.fyre.ibm.com
      http:
        paths:
          - backend:
              service:
                name: haproxy-svc
                port:
                  name: http
            path: /
            pathType: Prefix
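For example, save the manifests to files and apply them, then confirm the rollout completed and the pods are running; the file names haproxy-config.yaml and haproxy-lb.yaml are just placeholders for wherever you saved them:
$ kubectl -n <namespace> apply -f haproxy-config.yaml -f haproxy-lb.yaml
$ kubectl -n <namespace> rollout status deployment haproxy
$ kubectl -n <namespace> get pods -l app.kubernetes.io/name=haproxy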
Now you should have an Ingress with the HAProxy endpoint:
$ kubectl -n <namespace> get ingress | grep haproxy
haproxy nginx haproxy-lb.sanjay-3-master.fyre.ibm.com 10.11.69.179,10.11.69.248,10.11.70.198,10.11.70.89 80 12d
Configure Vanity Endpoint for Catalog
Now configure the vanity endpoint for demo-catalog to point to the HAProxy load balancer endpoint, as shown in the image below. Note that you can include the org and catalog name in the base endpoint. Keep in mind that any vanity endpoint update applies only to products published after that update.
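As an aside, if you prefer the toolkit CLI over the UI, the vanity endpoint configuration lives in the catalog settings and can be inspected there. A sketch, assuming you are logged in with the apic toolkit; flag names may differ by version, so consult apic catalog-settings:get --help:
$ apic catalog-settings:get --server <mgmt-server> --org sanjay --catalog demo-catalog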
Publish Product to both Gateway Services
Now publish the product containing the API(s) to both gateway services, gateway-service-1 and gateway-service-2. After that, going to API Manager UI -> Manage Catalogs -> Products and choosing the product's Manage APIs -> View Endpoints option from the overflow menu will show the API published to the vanity endpoint.
Applications can subscribe to the API using the single vanity-endpoint-based URL as socialised in the Developer Portal, as shown here.
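For an API secured with a client ID (the sample routes API used in this post is left unsecured for simplicity), a subscribed application would pass its credentials on the same vanity URL, typically in the X-IBM-Client-Id header; the client ID below is a placeholder:
$ curl -k https://haproxy-lb.sanjay-3-master.fyre.ibm.com/sanjay/demo-catalog/test/ip -H "X-IBM-Client-Id: <client-id>"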
Check that the published API is accessible via the vanity endpoint through the HAProxy load balancer:
$ curl -k https://haproxy-lb.sanjay-3-master.fyre.ibm.com/sanjay/demo-catalog/test/ip
{
"origin": "127.0.0.1, 129.41.86.4"
}
Edit Load-Balancer to remove one of the gateway services for maintenance
To perform maintenance on the gateways behind a gateway service, we can remove that gateway service from the load balancer configuration by editing the haproxy-config configMap and doing a rollout restart of the haproxy deployment.
Simulate an application by calling the published API endpoint in a loop, using a shell while loop with sleep in this example:
$ while true; do date; curl -k https://haproxy-lb.sanjay-3-master.fyre.ibm.com/sanjay/demo-catalog/test/ip; sleep 1; done
Tail the logs of the haproxy pods in order to see the backend that is being called:
$ kubectl -n <namespace> logs -f -l app.kubernetes.io/name=haproxy
Edit the haproxy-config configMap and disable the desired backend gateway service:
$ kubectl -n <namespace> edit configmap haproxy-config
--> Change one of the gateway server lines to include `disabled`, as follows:
server rgw.sanjay-3-master.fyre.ibm.com rgw.sanjay-3-master.fyre.ibm.com:443 ssl verify none disabled
Rollout-restart the haproxy deployment:
$ kubectl -n <namespace> rollout restart deployment haproxy
The API calls from the loop continue uninterrupted, and gateway-service-1 is now available for maintenance or downtime without affecting the published APIs, which continue to be served by gateway-service-2 via the vanity endpoint configured on the load balancer.
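As an alternative to editing the configMap and restarting the pods, HAProxy's runtime API can drain a backend server on the fly. This is a sketch only: it assumes you add a stats socket to the global section of haproxy.cfg (for example, stats socket /var/run/haproxy.sock mode 660 level admin), that socat is available in the pod (it is not in the stock haproxy image), and that the command is issued to each replica:
$ kubectl -n <namespace> exec <haproxy-pod> -- sh -c 'echo "disable server gateways/rgw.sanjay-3-master.fyre.ibm.com" | socat stdio /var/run/haproxy.sock'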
In conclusion, this example demonstrates how to use multiple gateway services behind a load balancer and take a gateway service out for maintenance without affecting the applications using the published APIs, with the vanity endpoint feature tying it together. In this example I used HAProxy on a Kubernetes stack as the load balancer, but any load balancer with similar features should work; production deployments should of course use a high-capacity, highly available load balancer.
References
Here are some references for the material in this post.
Vanity Endpoint
- Documentation link.
- Blog post.
Sample API
This is the routes API used in this post.
swagger: '2.0'
info:
  title: Routes
  version: 1.0.0
  x-ibm-name: routes
  description: Routes
host: $(catalog.host)
basePath: /test
consumes:
  - application/json
produces:
  - application/json
x-ibm-configuration:
  type: rest
  phase: realized
  enforced: true
  testable: true
  gateway: datapower-api-gateway
  cors:
    enabled: true
  assembly:
    execute:
      - invoke:
          version: 2.0.0
          target-url: https://httpbin.org/ip
paths:
  /ip:
    get:
      responses:
        '200':
          description: 200 OK
definitions: {}
schemes:
  - http