Security Global Forum


How I Reduced Automation Execution Time From 6 Hours to 20 Minutes

By Abhishek Sharma posted 2 days ago


If you’ve ever watched your automation suite run for hours only to fail at the last moment, you’re not alone. A few years ago, my team was dealing with a regression cycle that took 6 hours to complete. It slowed down releases, frustrated developers, and made our CI/CD pipeline practically useless.

Today, the same suite finishes in 20 minutes — with higher stability and broader coverage.

This is the story of how we made that transformation: the techniques, failures, trade-offs, and the mindset shifts that changed everything.

The Problem: Thousands of Tests, Slow Pipeline, Zero Confidence

Our situation had all the classic symptoms of a legacy automation suite:

  • A bloated UI-driven test pack (70%+ of tests were UI)
  • Sequential execution on a single machine
  • Tests that depended on each other
  • Hard-coded waits and sleeps everywhere
  • Infrastructure bottlenecks
  • No test tagging or risk-based prioritization

Every pipeline run felt like walking into a minefield. The business wanted faster releases, but we couldn’t guarantee stability.

We didn’t need “more tests.”
We needed smarter, faster, and more reliable tests.

The 6-Step Strategy That Took Us From 6 Hours → 20 Minutes

1. Broke the UI Dependency (Moved 50–60% of tests to API layer)

The biggest win came from shifting tests away from UI.

How we decided what to move:

  • Anything validating backend logic → moved to API tests
  • CRUD workflows → API
  • Data validation → API
  • Setup & teardown → API
  • Only business-critical user flows → stayed on UI

Impact:

  • UI test count dropped dramatically.
  • The moved checks, which had taken about 4 hours through the UI, now ran in 5 minutes as API tests.
  • Stability increased significantly.

Lesson:
👉 If the UI isn’t what you’re testing, don’t test through the UI.

2. Introduced True Parallel Execution

Our initial setup ran everything sequentially on a single agent.

We redesigned the framework for parallel test execution on Selenium Grid. That first meant eliminating the inter-test dependencies, so any test could run on any node in any order.

Result:

  • What used to take 2–3 hours in UI tests now took 10–12 minutes.

Lesson:
👉 Parallelism gives you multiplicative speed gains, not incremental ones.
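The arithmetic behind that jump can be sketched with standard-library threads standing in for Grid nodes. The timings here are simulated, not our real suite:

```python
# Simulated comparison of sequential vs. parallel execution.
# Each "test" sleeps briefly to stand in for real browser work.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_ui_test(_):
    time.sleep(0.05)  # stands in for a real UI test
    return True

NUM_TESTS = 20

# Sequential: roughly NUM_TESTS * 0.05 seconds.
start = time.monotonic()
for i in range(NUM_TESTS):
    fake_ui_test(i)
sequential = time.monotonic() - start

# Parallel with 8 workers: roughly ceil(NUM_TESTS / 8) * 0.05 seconds.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(fake_ui_test, range(NUM_TESTS)))
parallel = time.monotonic() - start

print(f"sequential: {sequential:.2f}s, parallel: {parallel:.2f}s")
```

In practice the workers were Selenium Grid nodes rather than threads, but the shape of the win is the same: wall-clock time divides by the number of nodes you can keep busy.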

3. Rebuilt the Framework for Performance

We revisited our entire framework and made several architectural changes:

  • Removed static sleeps → replaced with smart waits
  • Implemented lazy loading of drivers
  • Avoided full browser restarts unless necessary
  • Optimized heavy functions (like search or login) using API pre-setup

Example:

Instead of logging in via UI on every test, we generated a token via API and injected it into browser storage.
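A minimal sketch of that shortcut, assuming a token-based app that reads an `auth_token` key from localStorage. The login endpoint and the storage key are illustrative, not our actual application:

```python
def authenticated_driver(base_url: str, username: str, password: str):
    """Return a browser session that is already logged in via an API token."""
    # Imported lazily so the helper can be defined without a browser stack
    # installed (the same lazy-loading idea we applied to drivers).
    import requests
    from selenium import webdriver

    # 1. One fast API call replaces the multi-step UI login.
    resp = requests.post(
        f"{base_url}/api/login",  # hypothetical login endpoint
        json={"username": username, "password": password},
        timeout=10,
    )
    resp.raise_for_status()
    token = resp.json()["token"]

    # 2. Open the app and inject the token before any UI interaction.
    driver = webdriver.Chrome()
    driver.get(base_url)
    driver.execute_script(
        "window.localStorage.setItem('auth_token', arguments[0]);", token
    )
    driver.refresh()  # the app reads the token on load and skips the login page
    return driver
```

Every UI test then starts on an authenticated page instead of replaying the login form.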

Result:

Large chunks of repeated UI work simply disappeared.

Lesson:
👉 A fast framework beats fast hardware.

4. Implemented Test Tagging & Tiering

Instead of running the full suite every time, we introduced categories:

  • Smoke (5–7 minutes)
  • Critical Regression (20 minutes)
  • Full Regression (Nightly)
  • Component/API Tests (On every commit)

This allowed developers to get feedback immediately, and heavy tests ran only when needed.

Lesson:
👉 Not all tests deserve to run on every pipeline.
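With pytest, the tiers can be expressed as markers. The marker names here are illustrative; they must also be registered under `[pytest]` `markers` in `pytest.ini` to avoid unknown-marker warnings:

```python
# Sketch of the tiering scheme as pytest markers.
import pytest

@pytest.mark.smoke
def test_login_page_loads():
    ...  # fast, shallow check

@pytest.mark.critical
def test_checkout_flow():
    ...  # deeper business-critical journey

@pytest.mark.component
def test_price_calculation_api():
    ...  # API-level test, cheap enough to run on every commit
```

CI then selects a tier per trigger: something like `pytest -m smoke` on pull requests, `pytest -m "smoke or critical"` on merge, and the unfiltered suite nightly.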

5. Used Test Data Virtualization & Pre-Seeding

Half of our test slowdown came from building data inside the tests.

We fixed it by:

  • Creating pre-seeded datasets in the DB
  • Using API calls to set up data instantly
  • Cleaning test data via batch jobs
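That API-driven setup can be sketched as a pytest fixture. The endpoint, payload, and cleanup tag below are hypothetical:

```python
import pytest

@pytest.fixture
def seeded_order():
    """Create the record a UI test needs in one API call."""
    import requests  # lazy import so the sketch defines cleanly on its own

    resp = requests.post(
        "https://api.example.com/orders",  # hypothetical endpoint
        json={"sku": "TEST-001", "qty": 1, "tag": "automation-cleanup"},
        timeout=10,
    )
    resp.raise_for_status()
    yield resp.json()  # the UI test starts at the interesting step
    # No per-test teardown: a nightly batch job deletes tagged records.
```

Because cleanup is deferred to the batch job, tests pay neither the setup cost in the browser nor a teardown cost at the end.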

Result:

UI tests became 70–80% faster because they started at the “interesting” part of the workflow, not the setup.

Lesson:
👉 Bad test data strategy = slow tests + flaky tests.

6. Removed Tests That Didn’t Matter

We removed:

  • Duplicated tests
  • Overlapping flows
  • UI tests that didn’t test UI
  • Slow low-value workflows
  • Tests that hadn’t caught a bug in 12+ months

This reduced our suite by almost 40%, and yet coverage increased.

Lesson:
👉 The fastest test is the one you delete.
