API Gateway cluster configurations and behaviors

Thu November 22, 2018 01:37 AM

webMethods API Gateway tutorial

This article explains the configuration steps for an API Gateway cluster and the expected behavior of API Gateway when different servers go down.

Supported Version: 9.12 onwards

About the Author
Shivaraj Hegde
Email: shivaraj.hegde@softwareag.com

The architecture of a clustered API Gateway setup is shown below.

Configurations:

  • Create a Terracotta Server Array ACTIVE-PASSIVE cluster.
  • Cluster the Integration Servers on which the API Gateways are running. This forms the API Gateway cluster.
  • Create the Event Data Store cluster used by the API Gateway servers.
  • Configure the Event Data Store details in the "gateway-es-store.xml" file of every API Gateway.
  • Make sure that all the Event Data Store servers are master-eligible nodes and at least two of them act as data nodes.
  • Create EG ports with the native service host on all the API Gateway servers and confirm that the native service host is accessible using both the API Gateway host and the EG port.
  • Configure a load balancer for the API Gateway external ports.
  • Configure the load balancer host and port in the Administration settings of any one API Gateway.
  • Create/import an API hosted on the “Native Service Host” on any API Gateway.
  • Apply the Log Invocation policy to the created API on API Gateway.
  • Activate the API and verify that the Gateway endpoint URL uses the load balancer host and port.

1) Create a Terracotta Server Array ACTIVE-PASSIVE cluster.

  1. All the Terracotta (TC) servers that will be part of the cluster should be down.
  2. Create a new tc-config.xml file with all the required parameter values, based on the tc-config-reference.xml template present in “C:\<Installation Directory>\Terracotta\config-samples”, and place it in the “C:\<Installation Directory>\Terracotta\server\bin\” directory (a sample is shown after this list).
  3. The server that starts first acts as ACTIVE-COORDINATOR; the TC servers started later act as PASSIVE-UNINITIALIZED, and the message “NodeID[VMNAME] joined the cluster” appears in the logs.
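
A minimal tc-config.xml sketch for a two-node ACTIVE-PASSIVE array is given below. The host names, ports and the C:\SoftwareAG paths are placeholders for this example, not prescribed values; take the actual values from tc-config-reference.xml and your environment.

<?xml version="1.0" encoding="UTF-8"?>
<tc:tc-config xmlns:tc="http://www.terracotta.org/config">
  <servers>
    <!-- The server that is started first becomes ACTIVE-COORDINATOR -->
    <server host="TC1HOSTNAME" name="TC1">
      <data>C:\SoftwareAG\Terracotta\server\data</data>
      <logs>C:\SoftwareAG\Terracotta\server\logs</logs>
      <tsa-port>9510</tsa-port>
      <tsa-group-port>9530</tsa-group-port>
    </server>
    <!-- The server started later joins as PASSIVE-UNINITIALIZED -->
    <server host="TC2HOSTNAME" name="TC2">
      <data>C:\SoftwareAG\Terracotta\server\data</data>
      <logs>C:\SoftwareAG\Terracotta\server\logs</logs>
      <tsa-port>9510</tsa-port>
      <tsa-group-port>9530</tsa-group-port>
    </server>
  </servers>
</tc:tc-config>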

2) Cluster the Integration Servers on which all the API Gateways are running. This forms the API Gateway cluster.

  1. All the Integration Servers should share the same database.
  2. Start all the servers.
  3. Configure the Terracotta server URLs on the IS Admin > Clustering page (see the example after this list).
  4. Copy the Terracotta license file into the <InstallDir>/common/conf folder.
  5. Restart all Integration Servers.
  6. Validate that all the servers are listed on the Integration Server Clustering page.
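
As an illustration, the Terracotta server URL field on the Clustering page typically takes a comma-separated list of host:tsa-port entries; the host names below are placeholders:

TC1HOSTNAME:9510,TC2HOSTNAME:9510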

3) Create the Event Data Store cluster used by the API Gateway servers.

Each API Gateway cluster node comes with an Event Data Store instance for storing runtime assets and configuration items. An Event Data Store instance is a non-clustered Elasticsearch node. For an API Gateway cluster configuration, the Event Data Store instances must also be clustered by modifying the <InstallDir>/EventDataStore/config/elasticsearch.yml file on each instance, using the standard Elasticsearch clustering properties given below.

EVS 1 Configurations:

cluster.name: APIG_EventDataStore
node.name: <EVS1HOSTNAME>
path.logs: ../../EventDataStore/logs
network.host: 0.0.0.0
http.port: 9240
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["EVS1HOSTNAME:9340", "EVS2HOSTNAME:9340", "EVS3HOSTNAME:9340"]
transport.tcp.port: 9340
path.repo: ['C:\<InstallDir>\EventDataStore\archives']
discovery.zen.minimum_master_nodes: 2

EVS 2 Configurations:

cluster.name: APIG_EventDataStore
node.name: <EVS2HOSTNAME>
path.logs: ../../EventDataStore/logs
network.host: 0.0.0.0
http.port: 9240
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["EVS1HOSTNAME:9340", "EVS2HOSTNAME:9340", "EVS3HOSTNAME:9340"]
transport.tcp.port: 9340
path.repo: ['C:\<InstallDir>\EventDataStore\archives']
discovery.zen.minimum_master_nodes: 2

EVS 3 Configurations:

cluster.name: APIG_EventDataStore
node.name: <EVS3HOSTNAME>
path.logs: ../../EventDataStore/logs
network.host: 0.0.0.0
http.port: 9240
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["EVS1HOSTNAME:9340", "EVS2HOSTNAME:9340", "EVS3HOSTNAME:9340"]
transport.tcp.port: 9340
path.repo: ['C:\<InstallDir>\EventDataStore\archives']
discovery.zen.minimum_master_nodes: 2

The health of the Elasticsearch cluster can be checked using the following URL: http://<Elastic_Search>:9240/_cluster/health?pretty=true
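
For example, the check can be run with curl from any machine that can reach one of the Event Data Store nodes (the host name is a placeholder):

curl "http://EVS1HOSTNAME:9240/_cluster/health?pretty=true"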

The cluster health response looks as follows:

{
"cluster_name" : "APIG_EventDataStore",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 20,
"active_shards" : 40,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}


4) Configure the Event Data Store details in the "gateway-es-store.xml" file of every API Gateway.

Update the "gateway-es-store.xml" file on all the API Gateway servers as shown below. The file is available at <InstallDir>\IntegrationServer\instances\default\packages\WmAPIGateway\config\resources\beans\:

<bean class="com.softwareag.apigateway.core.datastore.ElasticsearchClientImpl" id="elasticSearchClient">
  <constructor-arg index="0">
    <list>
      <value>EVS1HOSTNAME:9240</value>
      <value>EVS2HOSTNAME:9240</value>
      <value>EVS3HOSTNAME:9240</value>
    </list>
  </constructor-arg>
  ...
</bean>

Also update the cluster name in the constructor properties of the same bean:

<prop key="cluster.name">APIG_EventDataStore</prop>

 Case 1: Verify API invocation through Gateway endpoint URL when all the servers are up and running.

  • Invoke the API using the Gateway endpoint URL from an external client (a sample request is shown after this list).
  • The response should be available and API Analytics should show the invocation logs.
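
A sketch of such an invocation with curl, assuming a REST API and a hypothetical load balancer host, port and resource path (use the Gateway endpoint URL shown on the API details page for the actual values):

curl "http://LBHOSTNAME:LBPORT/gateway/SampleAPI/1.0/resource"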

Case 2: Verify API invocation through Gateway endpoint URL when “Native Service Host” is DOWN or Unreachable.

  • Invoke the API using the Gateway endpoint URL from an external client.
  • The response should not be available as the native service host is not accessible, and the error “Downtime exception: Connection refused” is returned. API Analytics should still show the invocation logs.

Case 3: API Gateway 1 goes DOWN or Unreachable in between and comes up again.

  • The response should be available and the API Analytics page should show the invocation logs; since the request goes through the load balancer, API Gateway 2 passes the request to the native server.
  • There should not be any message loss when one API Gateway goes down in the cluster. All the transaction details should be available in the Analytics of the API.
  • Bring up API Gateway 1 again in the cluster and verify that the response is available without any errors or message loss.
  • All the transaction logs should be present across all API Gateway nodes and should be in sync.

Case 4: Event Data Store of API Gateway 1 goes DOWN or Unreachable in between and comes up again.

  • The response should be available and the API Analytics pages of both API Gateways should show the invocation logs, since the EVS instances of the other API Gateway servers are still running.
  • There should not be any message loss when the EVS of API Gateway 1 goes down in the cluster. All the transaction details should be available in the Analytics of the API.
  • Bring up the EVS of API Gateway 1 again in the cluster and verify that the response is available without any errors or message loss.
  • All the transaction logs should be present across both API Gateway nodes and should be in sync, and no errors should be logged in the log files (Integration Server, API Gateway and EVS server logs).

Case 5: ACTIVE Terracotta goes DOWN or Unreachable in between and comes up again.

  • There should not be any failures in the API Gateway runtime.
  • The response should be available and the API Analytics pages of both API Gateways should show the invocation logs, since the other Terracotta node is still running and takes over as the ACTIVE node.
  • There should not be any message loss when Terracotta goes down in the cluster, and all the transaction details should be available in the Analytics of the API.
  • Bring up Terracotta again in the cluster and verify that the response is available without any errors or message loss.
  • All the transaction logs should be present across both API Gateway nodes and should be in sync.

Case 6: API Gateway server behavior when both Terracotta servers in the TC server array go down.

  • API Gateway login works successfully when the entire TC array is down.
  • When the TC array is down, the response is still returned, but some policy modules will not work.
  • When an I&A or Throttling policy is present, error events are generated due to TC unavailability, and these are shown in Analytics.
  • APIs cannot be updated.
  • Updating Admin Settings should show an error.

Case 7: Event Data Stores of all API Gateways go DOWN or Unreachable in between and come up again.

  • API Gateway login fails as the data stores are down.
  • API invocation should fail and analytics will not be available.
  • When any of the nodes comes back up, login should work and the cluster health should show status yellow.

Case 8: PASSIVE Terracotta goes DOWN or Unreachable in between and comes up again.

  • There should not be any failures in the API Gateway runtime.
  • The response should be available and the API Analytics pages of both API Gateways should show the invocation logs, since the ACTIVE Terracotta node is still running.
  • There should not be any message loss when Terracotta goes down in the cluster, and all the transaction details should be available in the Analytics of the API.
  • Bring up the PASSIVE Terracotta again in the cluster, verify that the response is available without any errors or message loss, and that the Terracotta server properly rejoins the cluster.
  • All the transaction logs should be present across both API Gateway nodes and should be in sync.

#webMethods
#API-Gateway
#event-data-store-cluster
#apigateway-cluster
#cluster-behaviors
#wiki
#groci
#cluster