The Instana Connector enables seamless integration between an Instana instance and AIOps, allowing the collection and processing of both metric and event data. In this article, we will focus on metric collection and the improvements introduced in Releases 4.11.0 and 4.11.1.
The connector interacts with Instana APIs to retrieve metric data for selected technologies. However, since Instana enforces API rate limits for each token, exceeding those limits could previously lead to errors and delays in metric collection until the quota reset.
Releases 4.11.0 and 4.11.1 introduce significant enhancements designed to make metric ingestion more flexible, efficient, and scalable.
- Release 4.11.0 - Dynamic Rate Limit Throttling
- Release 4.11.1 - Option to add multiple API tokens
Dynamic Rate Limit Throttling
The dynamic rate limiter continuously monitors API rate limits after each call and adjusts the request throttling strategy based on the remaining quota and reset time. This throttling approach is applied specifically to the APIs responsible for fetching infrastructure metrics, ensuring smooth and uninterrupted data collection without overwhelming the API endpoints.
How It Works
- The rate limiter continuously monitors API quotas after each call.
- Based on the remaining quota and reset time, it adjusts the request throttling strategy dynamically.
- If the quota is high, requests proceed at normal speed.
- If the quota is running low, requests are spread out evenly to avoid hitting the ceiling.
- If the quota is nearly exhausted, aggressive throttling ensures uninterrupted data collection until the quota resets, as shown in the sketch below.
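The snippet below is a minimal sketch of how such quota-aware throttling could work; it is not the connector's actual implementation. The header names (`X-RateLimit-Remaining`, `X-RateLimit-Limit`, `X-RateLimit-Reset`) and the watermark thresholds are assumptions for illustration only.

```python
import time
import requests  # any HTTP client that exposes response headers would work


class DynamicThrottler:
    """Illustrative quota-aware throttler (assumed logic, not the connector's code)."""

    def __init__(self, high_watermark=0.5, low_watermark=0.1):
        self.high_watermark = high_watermark  # above this fraction of quota: full speed
        self.low_watermark = low_watermark    # below this fraction: aggressive throttling
        self.delay = 0.0                      # seconds to wait before the next request

    def update(self, remaining, limit, reset_epoch):
        """Recompute the delay from the remaining quota and the time until reset."""
        fraction_left = remaining / max(limit, 1)
        seconds_to_reset = max(reset_epoch - time.time(), 1.0)

        if fraction_left > self.high_watermark:
            self.delay = 0.0                          # plenty of quota: no delay
        elif remaining > 0:
            self.delay = seconds_to_reset / remaining  # spread remaining calls evenly
        else:
            self.delay = seconds_to_reset              # exhausted: wait for the reset

    def throttled_get(self, session, url, **kwargs):
        """Sleep according to the current strategy, call the API, then re-read the quota."""
        if self.delay:
            time.sleep(self.delay)
        response = session.get(url, **kwargs)

        # Hypothetical header names; substitute whatever the API actually returns.
        headers = response.headers
        if "X-RateLimit-Remaining" in headers:
            self.update(
                remaining=int(headers["X-RateLimit-Remaining"]),
                limit=int(headers.get("X-RateLimit-Limit", 1)),
                reset_epoch=float(headers.get("X-RateLimit-Reset", time.time() + 60)),
            )
        return response
```

In this sketch, each response feeds its rate-limit statistics back into the throttler, so the delay before the next metric request always reflects the most recent quota reading.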
Rate Limit Handling with Multiple API Tokens
Starting with Release 4.11.1, the Instana Connector integration brings a valuable enhancement: support for multiple API tokens during setup. Because Instana enforces rate limits per API token, the token manager algorithm in the connector addresses this challenge by intelligently rotating tokens, ensuring uninterrupted access while staying within the API's usage boundaries.
The token manager continuously monitors API usage statistics for each token. When a token nears its rate limit, the algorithm rotates to another available token that can continue making requests. By distributing requests across multiple tokens, it effectively mitigates the risk of hitting rate limits, making your integration more resilient and reliable.
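The following is a simplified sketch of such a token rotation strategy, under assumed bookkeeping and a hypothetical `rotate_threshold` parameter; the connector's real token manager may differ.

```python
import time


class TokenManager:
    """Illustrative token rotation sketch (assumed behavior, not the connector's code)."""

    def __init__(self, tokens, rotate_threshold=0.1):
        # Per-token bookkeeping: remaining calls, limit, and quota reset time (epoch seconds).
        self.state = {t: {"remaining": None, "limit": None, "reset": 0.0} for t in tokens}
        self.rotate_threshold = rotate_threshold
        self.active = tokens[0]

    def record_usage(self, token, remaining, limit, reset_epoch):
        """Update usage statistics for `token` after each API call."""
        self.state[token] = {"remaining": remaining, "limit": limit, "reset": reset_epoch}

    def _fraction_left(self, token):
        s = self.state[token]
        if s["remaining"] is None:      # never used yet: assume full quota
            return 1.0
        if time.time() >= s["reset"]:   # quota window has already reset
            return 1.0
        return s["remaining"] / max(s["limit"], 1)

    def current_token(self):
        """Return the active token, rotating to another one if it nears its limit."""
        if self._fraction_left(self.active) <= self.rotate_threshold:
            # Pick the token with the most headroom, if any token still has quota.
            candidate = max(self.state, key=self._fraction_left)
            if self._fraction_left(candidate) > self.rotate_threshold:
                self.active = candidate
        return self.active
```

A caller would invoke `current_token()` before each request and `record_usage()` after each response, feeding in whatever rate-limit statistics the API exposes, so requests naturally spread across the configured tokens as individual quotas run down.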