IBM TechXchange Security Technology Alliance Program User Group


Automating CI/CD Pipelines Using Shell Scripts for Zero Downtime

By Singh Harsha Suryabahadur posted yesterday

  

Introduction: Why Use Shell Scripts for CI/CD?

Every engineering team dreams of smooth CI/CD pipelines — but the reality usually looks very different.

  • Pipelines break for reasons no one understands
  • Test environments behave differently on every machine
  • Someone says “It worked on my system!”

Why do we need complex CI tools when shell scripts already automate the world?

So let’s rebuild our pipeline with a different philosophy:

Keep it simple. Keep it portable. Make it observable.

The result?
A clean automation flow powered by one shell script that:

  • Provisions isolated multi-node environments using Podman
  • Executes automated tests in parallel across distributed nodes
  • Streams execution logs and test results in real time
  • Pushes performance & test metrics to InfluxDB / Prometheus
  • Powers live observability with Grafana dashboards
  • Ensures consistent, repeatable execution across hybrid & containerized systems

Note: Code merge or branch updates are currently handled manually or via existing Git workflows.
This shell-script-driven pipeline begins after code has been merged — automating everything from build, test, and analysis to zero-downtime deployment.

This blog walks you through that pipeline — step-by-step — so you can build the same Zero-Downtime Test Automation system for your team.

Step 1: Shell Script Execution & Environment Setup

When the CI runner starts, its main Bash script immediately cleans up the workspace to ensure a consistent environment. It deletes any leftover artifacts from previous runs, creates a fresh reports/ directory, and then launches a Podman container with a volume mount to store test results outside the container. Inside this container, the script pulls the latest code from Git and installs all required dependencies. Running tests in this controlled, containerized environment guarantees every run starts from the same clean baseline—eliminating issues caused by stale files or outdated libraries.
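The cleanup-and-bootstrap logic described above might look like the following sketch. The image name, the reports/ directory, and the in-container paths are illustrative assumptions, not the team's actual script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of Step 1: reset the workspace, then start a Podman
# container whose volume mount keeps results on the host.
set -euo pipefail

REPORT_DIR="reports"                          # assumed host results directory
IMAGE="docker.io/cypress/included:13.13.0"    # assumed test image

prepare_workspace() {
  # Delete leftover artifacts from previous runs; recreate a fresh reports dir.
  rm -rf "$REPORT_DIR"
  mkdir -p "$REPORT_DIR"
}

bootstrap_container() {
  # Pull the latest code and install dependencies inside the container.
  # The :Z suffix relabels the mount for SELinux hosts.
  podman run --rm \
    -v "$PWD/$REPORT_DIR:/e2e/cypress-reports:Z" \
    -w /e2e "$IMAGE" \
    sh -c "git pull --ff-only && npm ci"
}

# Usage: prepare_workspace && bootstrap_container
```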

Step 2: Test Execution Within Podman Container

This is where the actual testing happens. The Cypress test suite runs inside the isolated Podman container, executing your end-to-end tests against your application. The key detail is the volume mount. When Cypress finishes, it outputs everything to a results folder (e.g., cypress-reports). That entire folder—including raw JSON results, screenshots of failures, test videos, and execution logs—is automatically saved to the host machine. This ensures all test artifacts persist even after the container shuts down, making them ready for the next phase of your pipeline.

What gets saved:

  • Raw JSON test results – for programmatic analysis
  • Screenshots – visual proof of failures
  • Videos – if configured
  • Execution logs 
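A minimal version of that test-execution command could look like this; the image tag, container paths, and mochawesome reporter options are assumptions for illustration:

```shell
# Sketch of Step 2: run the Cypress suite inside Podman. The volume mount is
# what lets JSON results, screenshots, and videos survive the container's exit.

# Build the mount argument mapping a host dir onto the in-container
# cypress-reports folder (:Z relabels the mount for SELinux hosts).
mount_arg() {
  printf '%s:/e2e/cypress-reports:Z' "$1"
}

run_cypress() {
  podman run --rm \
    -v "$(mount_arg "$PWD/reports")" \
    -w /e2e docker.io/cypress/included:13.13.0 \
    npx cypress run --reporter mochawesome \
      --reporter-options "reportDir=cypress-reports,overwrite=false,html=false,json=true"
}
```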

Step 3: Report Aggregation

Once the container finishes, the shell script takes over on the host machine. Cypress often generates multiple JSON report files across different test suites, which can quickly become messy and hard to manage.

The solution is mochawesome’s merge utility, mochawesome-merge, which combines all those fragmented files into a single, clean report called merged-report.json. This unified report becomes your single source of truth, containing everything you need: total test count, passes, failures, skipped tests, and execution duration. By consolidating the results, it’s much easier to analyse test outcomes.
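The merge step can be wrapped in a small function like the one below; the reports/ glob and output filename are assumptions, and the fail-fast check is an addition, not part of the original pipeline:

```shell
# Sketch of Step 3: collapse every per-suite JSON file into one report.
merge_reports() {
  local pattern="${1:-reports/*.json}"
  # Fail fast if no report files were produced (e.g., the container crashed).
  ls $pattern >/dev/null 2>&1 || { echo "no reports found for: $pattern" >&2; return 1; }
  # mochawesome-merge reads all matching JSON reports and emits one document
  # whose .stats block aggregates passes, failures, skips, and duration.
  npx mochawesome-merge "$pattern" > merged-report.json
}
```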

Step 4: Extracting Metrics & Pushing to InfluxDB

This is where testing meets observability. The shell script now pulls key metrics from the merged report and sends them to your monitoring system.

Two tools handle most of the work:

  • jq – a lightweight command-line JSON processor that extracts exactly what you need from the JSON file, like .stats.passes or .stats.failures, and stores those values in shell variables.
  • curl – takes those variables, formats them according to InfluxDB’s line protocol, and pushes them to your InfluxDB instance over HTTP.

Here’s an example of what gets sent:

cypress_report,test_suite=e2e_tests suites=25,passes=22,failures=3,duration_ms=150000

This single line tells InfluxDB: “I have a test report. There were 25 test suites in total, 22 passed, 3 failed, and it took 150 seconds.” The data lands instantly in your database, timestamped and ready for analysis. By automating this step, you can track test performance over time, visualize trends, and catch regressions before they reach production.
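Put together, the jq-and-curl step might be sketched like this. INFLUX_URL, the measurement name, and the .stats field names are assumptions based on typical mochawesome output and the InfluxDB v1 write endpoint:

```shell
# Sketch of Step 4: extract metrics with jq and push them with curl.
# INFLUX_URL is an assumed endpoint, e.g. http://influxdb:8086/write?db=ci_metrics

line_protocol() {
  # InfluxDB line protocol: measurement,tag=value field=value,...
  # Note: no spaces around '=' and a single space between tags and fields.
  printf 'cypress_report,test_suite=%s suites=%s,passes=%s,failures=%s,duration_ms=%s' \
    "$1" "$2" "$3" "$4" "$5"
}

push_metrics() {
  local report="${1:-merged-report.json}"
  local suites passes failures duration
  suites=$(jq '.stats.suites' "$report")
  passes=$(jq '.stats.passes' "$report")
  failures=$(jq '.stats.failures' "$report")
  duration=$(jq '.stats.duration' "$report")
  curl -s -XPOST "$INFLUX_URL" \
    --data-binary "$(line_protocol e2e_tests "$suites" "$passes" "$failures" "$duration")"
}
```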

Step 5: Real-Time Visualization & Feedback

As soon as those metrics land in InfluxDB, Grafana picks them up and displays them on your dashboard. The whole team can monitor test health in real time—no digging through logs. The tests run in the background, and the dashboard is updated with pass rates, failure counts, and execution times.

Shell Scripts vs. Jenkins: A Quick Comparison

Wondering how this shell script approach compares to industry-standard CI/CD tools like Jenkins? Here's an honest comparison:

  • Setup Complexity – Shell scripts: minimal; just Bash and standard Unix tools. Jenkins: moderate; requires server setup, plugin configuration, and learning Jenkins syntax.
  • Learning Curve – Shell scripts: low; most DevOps engineers know Bash. Jenkins: high; Jenkins has its own paradigms, DSL, and ecosystem to learn.
  • Portability – Shell scripts: excellent; runs anywhere with a shell (Linux, macOS, containers). Jenkins: medium; requires Jenkins installation and configuration per environment.
  • Configuration as Code – Shell scripts: natural; scripts are version-controlled, readable, and debuggable. Jenkins: yes, via a Jenkinsfile, but more complex and Jenkins-specific.
  • Cost – Shell scripts: free (open-source tools only). Jenkins: free and open source, but enterprise versions and support can be expensive.
  • Real-time Observability – Shell scripts: built in via the Grafana/InfluxDB stack. Jenkins: requires additional plugins and configuration.

The Complete Flow, Briefly

  1. Developer merges or pushes code to the target branch (manual step)
  2. Shell script removes old artifacts and sets up Podman
  3. Container pulls latest code and installs dependencies
  4. Cypress runs tests inside the container
  5. Test artifacts persist to the host via volume mount
  6. Shell script merges multiple JSON reports into one
  7. jq extracts key metrics from the merged report
  8. curl sends metrics to InfluxDB
  9. Grafana dashboard updates in near real time

Even without automated merges, this pipeline provides true CI/CD automation from build to deployment. It relies on simple open-source tools most teams already know (Bash, Podman, jq, curl, and Grafana), making adoption faster and more practical.
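The nine steps above can be tied together in a single driver script along these lines. Every name, image, and endpoint here is an illustrative assumption, not the pipeline's actual code:

```shell
#!/usr/bin/env bash
# Illustrative end-to-end driver for the flow above (steps 2-8).
set -euo pipefail

IMAGE="docker.io/cypress/included:13.13.0"                 # assumed image
INFLUX_URL="${INFLUX_URL:-http://influxdb:8086/write?db=ci_metrics}"  # assumed endpoint

step_clean() { rm -rf reports && mkdir -p reports; }       # remove old artifacts
step_test()  {                                             # Cypress inside Podman
  podman run --rm -v "$PWD/reports:/e2e/cypress-reports:Z" \
    -w /e2e "$IMAGE" npx cypress run
}
step_merge() {                                             # one merged report
  npx mochawesome-merge "reports/*.json" > merged-report.json
}
step_push()  {                                             # jq + curl to InfluxDB
  curl -s -XPOST "$INFLUX_URL" --data-binary \
    "cypress_report,test_suite=e2e_tests passes=$(jq '.stats.passes' merged-report.json),failures=$(jq '.stats.failures' merged-report.json)"
}

# Uncomment to run the full pipeline:
# step_clean && step_test && step_merge && step_push
```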

Future Enhancements

While this setup already automates everything from build to zero-downtime deployment, a few simple additions can take it even further:
  • Automated Triggers – Use Git webhooks or lightweight CI runners to start the pipeline automatically on every push or merge.
  • Branch-Based Environments – Map branches to isolated Podman environments for parallel testing and staging.
  • Auto-Rollback – Link Grafana alerts to trigger safe rollbacks if health checks fail.

With these small improvements, the same shell script evolves into a fully autonomous CI/CD engine — still simple, portable, and observable.

If you have any questions regarding any of the points mentioned above or want to discuss this further, feel free to get in touch with us: 

Singh Harsha Suryabahadur : Singh.harsha.suryabahadur@ibm.com 

Tarun Patel : Tarun.Patel@ibm.com
