API Connect


Business engagement platform

By Chris Dudley posted Sat December 14, 2024 04:52 AM


IBM API Connect contains a lot of important information about your API estate. Until now, that information was surfaced through extensive REST APIs that could be polled to retrieve whatever you needed. Polling, however, is not event-driven: in the modern market it is common to offer a variety of API styles so that integrating applications are triggered immediately when events happen, rather than discovering them by polling after the fact.

Introduced in API Connect v10.0.9 (the latest continuous delivery release) is a new feature called Engagement. This capability allows external endpoints to be triggered when events happen inside APIC. It is far more than simple webhooks: it supports scheduling, time bucketing, aggregation, filtering and numerical operations.

Engagement allows you to create Rules; when a rule's criteria are met, a Task is sent to the specified Destination. A destination can be any HTTP or HTTPS endpoint. You can define custom HTTP headers for the message, select the HTTP method, and define the payload. Through the use of a Kafka HTTP bridge, notifications can be sent on to Kafka topics too.
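
Because a destination is just an HTTP(S) endpoint, anything that can accept a request can receive tasks. As a minimal sketch (the port, path handling and JSON payload here are illustrative assumptions, not a documented task format), a destination could be as simple as:

```python
# Minimal HTTP endpoint that could act as an Engagement destination.
# Listens for POSTed tasks; the JSON payload shape is whatever you
# define in the rule's action - this sketch just prints it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TaskReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        task = json.loads(self.rfile.read(length))
        print(f"Received task: {task}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), TaskReceiver).serve_forever()
```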

Use cases

The possible use cases for this capability are virtually endless: it allows integration with any HTTP endpoint when the specified criteria are met. The only limitations are which datasources are available and whether you can express your target filter criteria.

The target destination could be an IBM AppConnect flow that triggers an alert in Slack, creates an incident in PagerDuty, or interacts with Salesforce. It could equally be a custom microservice containing business-specific logic to do whatever the enterprise needs, or any existing HTTP(S) endpoint. If you are using the new AI Gateway functionality, you could create a rule that monitors the sum of AI tokens used in the last 24 hours and, should the total exceed your AI subscription threshold, triggers an API call to the AI vendor (e.g. watsonx.ai) to upgrade the backend AI plan subscription.
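
As a sketch of that last use case, the destination could be a small service that reacts to the incoming task by calling the vendor's plan API. Everything here (the payload field, threshold, vendor URL and token placeholder) is a hypothetical stand-in for whatever your vendor actually exposes:

```python
# Hypothetical handler for a "token budget exceeded" task.
# The payload field and vendor endpoint are illustrative assumptions.
import requests

def handle_token_budget_task(task: dict) -> None:
    total_tokens = task["value"]  # assumed field carrying the rule's summed metric
    if total_tokens > 1_000_000:  # the same threshold the rule was built around
        requests.post(
            "https://api.example-ai-vendor.com/v1/plan/upgrade",  # hypothetical endpoint
            headers={"Authorization": "Bearer <vendor-token>"},
            json={"plan": "enterprise"},
            timeout=30,
        )
```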

The diagram below shows how multiple destinations can be used to enable very different event-driven integrations. Here, target destinations are configured for a custom microservice, an AppConnect flow HTTP input endpoint, a Kafka HTTP bridge, and the REST API of the customer's security team. These destinations could be used in different rules across a variety of scenarios: the custom microservice could handle API latency monitoring, the AppConnect endpoint could be the operations team's alerting flow, the Kafka topic might feed a custom billing platform that watches for excessive API usage, and the security team's API might be part of a rule checking for API calls from embargoed countries.

[Diagram: Engagement rules fanning out to multiple destination types]

How engagement works

As the API provider, you can create engagement rules at either provider organization or catalog scope in API Manager. The functionality is powered by APIC Analytics; the intention is for it to eventually operate on far more data than just API events, but for this initial release API event data is the sole datasource. This means you need an analytics service in order to use Engagement. If you have multiple analytics services, each has its own engagement rules and configuration, operating on the data available to that analytics service.

Let's look at the user interface in API Manager and how rules are created.

Destinations

A destination is the target endpoint you would like your rule to send data to. The same destination can be used by multiple rules. It can include custom HTTP headers, which can be used to provide security settings such as tokens, basic auth credentials or certificates.
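
Before wiring a destination into a rule, it can be handy to smoke-test the endpoint with the same headers you plan to configure. This sketch simulates a delivery using Python's requests library; the URL, header names and payload are assumptions chosen for the example:

```python
# Simulate a task delivery to a destination, including custom headers.
# URL, header names and payload shape are illustrative assumptions.
import requests

response = requests.post(
    "https://hooks.example.com/apic-tasks",           # hypothetical destination
    headers={
        "Authorization": "Bearer <token>",            # custom header carrying auth
        "X-Team": "api-platform",                     # arbitrary custom header
    },
    json={"rule": "high-error-rate", "severity": 1},  # payload defined in the action
    timeout=30,
)
response.raise_for_status()
```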


Creating a rule

There is a wizard built into the UI to guide you through the process of creating an engagement rule.

On the first step, you give your new rule a name, and the time schedule options let you decide how often it should run: either at an interval (run every X minutes) or via a cron string targeting a specific regular time of day or week.
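
If cron strings are unfamiliar, this snippet illustrates plain cron semantics using the third-party croniter library; it is generic cron behaviour, not part of API Connect:

```python
# What does the cron string "0 9 * * 1" schedule? (pip install croniter)
# Generic cron semantics for illustration, not APIC code.
from datetime import datetime
from croniter import croniter

schedule = croniter("0 9 * * 1", datetime(2024, 12, 14))  # 09:00 every Monday
for _ in range(3):
    print(schedule.get_next(datetime))
# 2024-12-16 09:00:00
# 2024-12-23 09:00:00
# 2024-12-30 09:00:00
```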


Data selection

Next you need to decide what data the rule targets. First select a datasource, then choose the metric you care about. The default is simply a count of records (how many records in the database match your criteria), but you can also use numerical metrics, such as the average latency of an API call, the sum of AI tokens used, or the maximum request payload size. You can then add multiple filter fields to narrow down the data included, for example targeting a specific product or API, a specific HTTP response code, or only API events where the request payload was greater than a certain value. Finally, data can be grouped by a specific field if desired: this is the difference between triggering a task when there were HTTP 500 errors in a 24-hour period irrespective of which API they were for, and requiring that the HTTP 500 errors in that period all belong to the same API.
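
The grouping distinction is easiest to see on data. This sketch mimics the rule semantics on a handful of made-up API events using pandas; it is an illustration of the behaviour, not a query against APIC:

```python
# Contrast "count of HTTP 500s overall" with "count of HTTP 500s per API"
# on fabricated events; mimics the rule semantics, not the APIC API.
import pandas as pd

events = pd.DataFrame({
    "api":    ["orders", "orders", "billing", "billing", "billing"],
    "status": [500, 500, 500, 200, 500],
})
errors = events[events["status"] == 500]  # the rule's filter criteria

print(len(errors))                   # 4 -> ungrouped: one combined count
print(errors.groupby("api").size())  # orders: 2, billing: 2 -> per-API counts
```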


Triggers and actions

Once you have defined the criteria you are interested in, you specify what should happen when they are met. You can add multiple triggers, and each trigger can have multiple actions. Each trigger has its own severity and condition criteria. For example, we could trigger a severity 3 task if there were 5 HTTP 500 errors, but have a separate trigger in the same rule that raises a severity 1 task if there were 20 HTTP 500 errors. Each action has a specified destination, and you can define the payload sent to the destination in each action, in whatever content type you like (e.g. plain text, CSV, JSON, YAML).
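
To make the multi-trigger idea concrete, here is a minimal sketch of the evaluation logic such a rule implies. The thresholds come from the example above; the function is illustrative, and whether several triggers fire for the same evaluation is a rule design choice, not something this sketch asserts about APIC:

```python
# Sketch of a rule with two triggers, each with its own severity threshold.
# Fires every trigger whose condition is met; illustrative only.
def evaluate_triggers(error_count: int) -> list[int]:
    """Return the severities of the triggers whose conditions are met."""
    triggers = [(20, 1), (5, 3)]  # (threshold of HTTP 500 errors, severity)
    return [sev for threshold, sev in triggers if error_count >= threshold]

print(evaluate_triggers(7))   # [3]    -> one severity 3 task
print(evaluate_triggers(25))  # [1, 3] -> both triggers fire
```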


To avoid triggering tasks repeatedly for the same criteria, you can enable throttling so that a task is triggered once when the criteria are first met and is then silenced for a period of time before it is allowed to trigger again.
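
A minimal sketch of that throttling behaviour (illustrative only; in APIC the silence window is configured in the rule, not written as code):

```python
# Illustrative throttle: fire at most once per silence window.
import time

class Throttle:
    def __init__(self, silence_seconds: float):
        self.silence_seconds = silence_seconds
        self.last_fired = float("-inf")  # so the first check always fires

    def should_fire(self) -> bool:
        now = time.monotonic()
        if now - self.last_fired >= self.silence_seconds:
            self.last_fired = now
            return True
        return False  # still inside the silence window

throttle = Throttle(silence_seconds=3600)  # silence repeats for an hour
if throttle.should_fire():
    print("send task to destination")
```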

Once you click Finish, the rule is created. The analytics subsystem then monitors for the criteria to be met, and once they are, a task is created to send your specified payload to the configured destination. The Tasks tab in the Engagement UI lists all the tasks, grouped by trigger. Clicking on a task shows more information about it, including its history and any errors or messages returned when sending the message to the destination.

Engagement is a major new capability in the API Connect platform that opens up all sorts of integration possibilities. In APIC 10.0.9 the available datasource is analytics API event data, i.e. the transactional logs of API events that happened on the gateway. We intend to expand the number of datasources available in coming releases, along with adding further capabilities to the Engagement platform. If you have specific requirements or use cases in mind, please do get in touch.

#IBMAPIConnect #analytics #APIConnect

