Shift-Left Performance Workflow for Modern API Development

By Akshat Saxena

Performance validation is often introduced late in the development cycle, typically after functional features are considered complete. Adopting a shift-left mindset changes this by bringing performance analysis closer to development, allowing issues to be discovered and addressed while code is still fresh and easier to optimize. To support this shift-left workflow for performance, we designed a lightweight and containerized setup using Locust, InfluxDB and Grafana, enabling developers to execute quick performance runs locally and observe real-time metrics throughout the development process.


Understanding the Shift-Left Methodology

Shift-left is the practice of integrating testing and validation activities earlier in the software lifecycle. While frequently applied to functional validation, shift-left principles are equally valuable for performance. Evaluating performance characteristics such as latency, throughput and resource behavior early helps avoid late-stage surprises that are more costly and complex to remediate.

Tools like Locust (load generation), InfluxDB (metric storage) and Grafana (visualization) make it practical to embed performance feedback directly into everyday development workflows. Developers can run quick load tests, view results on live dashboards and detect regressions long before code reaches staging or pre-production environments.


Overview of Locust, InfluxDB and Grafana

Locust

Locust is a Python-based load generation framework that allows developers to simulate realistic user behavior through simple scripts. Its lightweight design and ability to run on local machines make it ideal for shift-left performance testing, where frequent, iterative checks are preferred.

InfluxDB

InfluxDB is a time-series database optimized for high write throughput. It serves as the storage backend for performance test results, enabling developers to track how response times and throughput evolve throughout the development cycle.

Grafana

Grafana provides a visualization layer on top of InfluxDB. It renders real-time dashboards that display key performance trends, helping developers quickly understand the behavior of their APIs under load and identify potential bottlenecks early.


Why Locust Works Well for Shift-Left

Locust’s simplicity, flexibility and Python-driven approach align naturally with developer workflows. It supports:

  • Readable, maintainable test scripts
  • Lightweight local execution
  • Distributed load via master–worker mode
  • Easy integration with InfluxDB and Grafana
  • Fast iteration cycles, ideal for development environments

This makes Locust a strong fit for early-stage performance validation.


Architecture Overview

Locust generates load, InfluxDB stores time-series performance data, and Grafana provides real-time visualization using pre-configured dashboards. All components run through Docker Compose to provide a consistent, reproducible environment suitable for developer machines.

Developer-Friendly (Default) Setup

In the default setup, a single Locust instance generates load against the target API and writes per-request metrics to InfluxDB, while Grafana reads from InfluxDB to render live dashboards. All three services run in one Docker Compose stack on the developer's machine.

Optional: Distributed Execution with Master–Worker

When higher load is required or multi-core scaling is beneficial, Locust supports distributed mode: one master process coordinates the test and serves the web UI, while worker processes generate the load and connect to the master using --master-host locust-master.

The default setup in this post keeps things simple with a single Locust instance, but master–worker mode can be added as an extension; a minimal compose sketch is shown below.
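
For reference, a minimal extension of the compose file (added under services:) might look like the following sketch. The service names locust-master and locust-worker are illustrative and not part of the default repository setup.

  locust-master:
    build: .
    ports:
      - "8089:8089"
    volumes:
      - ./locust:/app
    command: >
      locust -f /app/locustfile.py --master

  locust-worker:
    build: .
    volumes:
      - ./locust:/app
    command: >
      locust -f /app/locustfile.py --worker --master-host locust-master
    depends_on:
      - locust-master

Additional workers can then be started with docker compose up -d --scale locust-worker=4 (omit container_name on scaled services so Compose can name the replicas).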


Docker Compose Setup

The environment consists of:

  • locust
  • influxdb
  • grafana

The goal is to keep the setup simple and developer-friendly. The following docker-compose.yml represents the final structure included in the repository:

version: '3.8'

services:
  influxdb:
    image: influxdb:1.8
    container_name: influxdb
    ports:
      - "8086:8086"
    volumes:
      - ./volumes/influxdb:/var/lib/influxdb
    environment:
      INFLUXDB_DB: locust
    entrypoint: >
      /bin/bash -c "
      influxd &
      sleep 5 &&
      influx -execute 'CREATE DATABASE locust' &&
      influx -execute 'CREATE DATABASE telegraf' &&
      tail -f /dev/null
      "

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - ./volumes/grafana:/var/lib/grafana
      - ./grafana/dashboards:/var/lib/grafana/dashboards
      - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
      - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
    environment:
      GF_AUTH_ANONYMOUS_ENABLED: "true"
      GF_AUTH_ANONYMOUS_ORG_ROLE: "Admin"
      GF_SECURITY_ADMIN_PASSWORD: "admin"

  locust:
    build: .
    container_name: locust
    ports:
      - "8089:8089"
    volumes:
      - ./locust:/app
    command: >
      locust -f /app/locustfile.py
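
With these files in place, the whole stack can be brought up locally. A typical usage sketch (ports follow the compose file above):

# build the Locust image and start all three services
docker compose up -d --build

# Locust web UI:  http://localhost:8089
# Grafana:        http://localhost:3000
# InfluxDB API:   http://localhost:8086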

Project Structure Overview

A recommended directory layout:

project-root/
│
├── locust/
│   ├── locustfile.py
│   ├── influx_hooks.py
│   └── user_flows/
│
├── grafana/
│   ├── dashboards/
│   └── provisioning/
│
├── volumes/
│   ├── influxdb/
│   └── grafana/
│
├── logs/
│   ├── locust/
│   └── system/
│
├── docker-compose.yml
└── Dockerfile

Note:
  • Locust logs and results can be stored in the logs/locust directory.
  • InfluxDB and Grafana state persists under volumes/.

Example Locustfile

from locust import HttpUser, task, between
import influx_hooks  # registers metric write hooks

class UserBehavior(HttpUser):
    wait_time = between(1, 2)

    @task
    def get_products(self):
        self.client.get("/api/products")

    @task
    def search(self):
        self.client.get("/api/search?q=shoes")

Integrating Locust With InfluxDB Using Python Hooks

Metrics are exported to InfluxDB through Locust’s event hooks:

from locust import events
from influxdb import InfluxDBClient
import time
import os

INFLUX_HOST = os.getenv("INFLUXDB_HOST", "influxdb")
INFLUX_PORT = int(os.getenv("INFLUXDB_PORT", 8086))
INFLUX_DB = os.getenv("INFLUXDB_DB", "locust")

influx = InfluxDBClient(host=INFLUX_HOST, port=INFLUX_PORT, database=INFLUX_DB)

@events.request.add_listener
def on_request(request_type, name, response_time, response_length, **kwargs):
    json_body = [
        {
            "measurement": "locust_requests",
            "tags": {
                "request_type": request_type,
                "name": name
            },
            "fields": {
                "response_time": float(response_time),
                "response_length": int(response_length)
            },
            "time": int(time.time() * 1e9)
        }
    ]
    try:
        influx.write_points(json_body)
    except Exception:
        pass

Note: This implementation writes metrics synchronously. For this developer-focused shift-left setup, this is acceptable. For higher load or production-grade performance work, consider using a background thread, batching writes, or the locust-plugins library to avoid blocking during high request throughput.

Dockerfile for Locust With InfluxDB Client Installed

FROM python:3.10-slim

WORKDIR /app

COPY locust/ /app/
RUN pip install --no-cache-dir locust influxdb

EXPOSE 8089

CMD ["locust", "-f", "locustfile.py"]

Provisioning Grafana DataSource for InfluxDB

Place the following file at grafana/provisioning/datasources/datasource.yaml:

apiVersion: 1

datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: locust
    isDefault: true
    basicAuth: false
    jsonData:
      httpMode: GET
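
Dashboard provisioning works the same way. A minimal provider file at grafana/provisioning/dashboards/dashboards.yaml (the file name and folder label here are illustrative) could look like this, pointing at the dashboards directory mounted in the compose file:

apiVersion: 1

providers:
  - name: Locust Dashboards
    folder: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    options:
      path: /var/lib/grafana/dashboards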

Optional: Adding System Metrics with Telegraf

Telegraf is a lightweight agent that collects system and container-level metrics and writes them to InfluxDB. Adding Telegraf gives visibility into CPU, memory, disk and network behavior, which helps correlate infrastructure conditions with API response patterns during a performance run.

Example minimal configuration (telegraf.conf):

# inputs
[[inputs.cpu]]
  percpu = true
  totalcpu = true
[[inputs.mem]]
[[inputs.disk]]
  ignore_fs = ["devtmpfs", "tmpfs"]
[[inputs.net]]
  interfaces = ["eth0"]

# output to InfluxDB 1.8
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "telegraf"

Note: This integration is optional for shift-left workflows but useful when deeper system visibility is required.


Grafana Dashboards

Grafana dashboards provide real-time visualization of:

  • Requests per second
  • Response time distributions
  • Percentiles
  • Error rates
  • Concurrency levels
  • API latency trends

These dashboards help developers quickly understand system behavior during development, reinforcing the shift-left philosophy.
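
As an example of what a panel query looks like against the locust_requests measurement written by the hook above, a requests-per-second panel and an average-latency panel could use InfluxQL along these lines (a sketch; the exact panels depend on the dashboards shipped in the repository):

-- requests per second, per endpoint
SELECT count("response_time")
FROM "locust_requests"
WHERE $timeFilter
GROUP BY time(1s), "name"

-- average response time, per endpoint
SELECT mean("response_time")
FROM "locust_requests"
WHERE $timeFilter
GROUP BY time(10s), "name"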


Limitations and Workarounds

1. CPU and Memory Metrics Are Not Provided Natively

Locust does not collect system-level metrics like CPU, memory or disk usage. These require:

  • VM-level monitoring
  • Cloud-native monitoring tools
  • Node/system exporters connected to Grafana
  • The optional Telegraf integration described above
  • Or manual correlation with infrastructure metrics

2. Synchronous InfluxDB Writes

As noted earlier, synchronous write calls add overhead under high load. They are acceptable for this developer-focused shift-left setup, but for heavy or production-grade testing an asynchronous or batched approach is preferable; a minimal batching sketch follows.
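
The sketch below buffers points in memory and flushes them from a daemon thread, so the request path never blocks on InfluxDB. It assumes the same client, database and measurement used in influx_hooks.py above and is meant as a starting point rather than a drop-in replacement.

import threading
import time

from influxdb import InfluxDBClient
from locust import events

influx = InfluxDBClient(host="influxdb", port=8086, database="locust")

_buffer = []              # points waiting to be written
_lock = threading.Lock()  # protects the buffer across greenlets/threads

@events.request.add_listener
def on_request(request_type, name, response_time, response_length, **kwargs):
    point = {
        "measurement": "locust_requests",
        "tags": {"request_type": request_type, "name": name},
        "fields": {
            "response_time": float(response_time or 0),
            "response_length": int(response_length or 0),
        },
        "time": int(time.time() * 1e9),
    }
    with _lock:
        _buffer.append(point)  # cheap in-memory append; no network call here

def _flush_loop(interval=5):
    # Drain the buffer and write one batched request per interval.
    while True:
        time.sleep(interval)
        with _lock:
            batch = list(_buffer)
            _buffer.clear()
        if batch:
            try:
                influx.write_points(batch)
            except Exception:
                pass  # never let metric export break the load test

threading.Thread(target=_flush_loop, daemon=True).start()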


Enabling Shift-Left in Practice

This setup enables developers to:

  • Run lightweight load tests during feature development
  • Detect latency regressions early
  • Validate API behavior continuously
  • Establish baseline performance expectations

Shift-left performance builds confidence in the stability of each increment, reducing the burden on late-cycle performance phases.


Practical Outcomes

Implementing this shift-left performance workflow results in:

  • Early detection of slow endpoints, regressions and scalability issues
  • Improved stability and predictability of API performance across sprints
  • Faster root-cause analysis through combined request, system and trend visibility
  • Shorter feedback loops that help developers optimize performance continuously
  • Faster release readiness due to early validation of performance requirements

Conclusion

This Locust, InfluxDB and Grafana setup demonstrates how performance engineering can be integrated into development through a shift-left methodology. By making performance insights available early and enabling lightweight local execution, teams can deliver more robust, predictable and efficient systems.
