Introduction
IBM Storage Insights continuously collects configuration, capacity, and performance telemetry from storage systems across your enterprise and exposes it through a rich, versioned OpenAPI specification. This spec defines every service, endpoint, schema, and workflow — from discovering storage arrays and volumes, to querying latency trends or capacity forecasts.
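As a quick illustration of what an AI assistant has to work with, a few lines of Python can flatten an OpenAPI `paths` object into a list of operations. The inline snippet below only mimics the spec's structure for illustration; the real document is downloaded from IBM and is far larger:

```python
import json

# A tiny stand-in for the Storage Insights OpenAPI document.
# Paths and summaries here are illustrative, not the actual spec.
spec_text = """
{
  "openapi": "3.0.0",
  "paths": {
    "/storage-systems": {"get": {"summary": "List storage systems"}},
    "/storage-systems/{id}/volumes": {"get": {"summary": "List volumes"}}
  }
}
"""

def list_operations(spec: dict) -> list[str]:
    """Flatten an OpenAPI 'paths' object into 'METHOD path - summary' strings."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method, details in methods.items():
            ops.append(f"{method.upper()} {path} - {details.get('summary', '')}")
    return sorted(ops)

for op in list_operations(json.loads(spec_text)):
    print(op)
```

Every service, schema, and workflow the article mentions is discoverable this way, which is exactly the context Codex consumes.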
In the realm of large-language-model driven automation, protocols such as the Model Context Protocol (MCP) have emerged to help LLMs discover and call services. But do we really need yet another layer when existing standards like OpenAPI already do the job?
That’s where vibe coding comes in. It can be seen as another tool in a developer’s arsenal, alongside AI-assisted coding, automated documentation, and other intelligent workflows. This approach is especially useful when we care less about the generated code itself and more about the results it produces. Instead of treating the OpenAPI spec as a static contract to be parsed by generators, imagine using it as shared context between you and an AI coding partner. With Codex, you simply describe your intent: “Fetch the top storage pools by latency,” or “Show configuration drift across systems in the last 24 hours.” Codex interprets the Storage Insights OpenAPI spec, scaffolds authentication, generates the necessary API calls, and even formats the results for quick visualization.
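To make that concrete, here is the sort of helper such a prompt might yield for the first request. The pool list and the `latency_ms` field are illustrative stand-ins, not the real Storage Insights schema:

```python
# Hypothetical shape of a Codex-generated helper: rank storage pools by
# latency. Field names are illustrative assumptions, not the real schema.
def top_pools_by_latency(pools: list[dict], n: int = 3) -> list[dict]:
    """Return the n pools with the highest latency, worst first."""
    return sorted(pools, key=lambda p: p["latency_ms"], reverse=True)[:n]

# Mocked metrics standing in for an API response.
sample = [
    {"name": "pool-a", "latency_ms": 4.2},
    {"name": "pool-b", "latency_ms": 9.7},
    {"name": "pool-c", "latency_ms": 1.3},
]
for pool in top_pools_by_latency(sample, n=2):
    print(pool["name"], pool["latency_ms"])
```

The point is not this particular function; it is that you describe the outcome and let the assistant write and wire up the plumbing.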
In this article, we’ll explore how to bring together Codex, OpenAPI, and the IBM Storage Insights API to create a workflow where infrastructure intelligence becomes accessible through vibe coding: you speak, the AI codes, and IBM Storage Insights responds with clarity.
Prerequisites
- First, we need to create a public GitHub repo where Codex-generated code will live. This allows us to version, review, and evolve the scripts and utilities that interact with Storage Insights. Treat this repository as your experimentation workspace; Codex will generate, modify, and commit code directly into it.
- Install Codex and run it at least once to get authentication out of the way.
```shell
# In your local GitHub project folder
npm install -g @openai/codex
export OPENAI_API_KEY="<your-key>"
# Then simply run
codex
```
That starts an interactive session.
- IBM Storage Insights API Key for Your Tenant. Log in to your Storage Insights portal and generate an API key under your tenant’s account. This key authenticates your Codex-generated API calls against the Storage Insights OpenAPI endpoints.
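Before handing things to Codex, it helps to sanity-check the key by hand. The sketch below follows the documented Storage Insights flow of exchanging an API key for a short-lived session token; treat the exact URL shape and response fields as assumptions to verify against IBM's REST documentation for your tenant:

```python
import json
import urllib.request

# Base URL per IBM's Storage Insights REST documentation (verify for your region).
BASE = "https://insights.ibm.com/restapi/v1"

def token_request(tenant_uuid: str, api_key: str) -> urllib.request.Request:
    """Build the POST that trades an API key for a short-lived session token."""
    return urllib.request.Request(
        f"{BASE}/tenants/{tenant_uuid}/token",
        headers={"x-api-key": api_key},
        method="POST",
    )

def fetch_token(tenant_uuid: str, api_key: str) -> str:
    """Perform the exchange; the token is assumed to sit under result.token."""
    with urllib.request.urlopen(token_request(tenant_uuid, api_key)) as resp:
        return json.load(resp)["result"]["token"]

# Usage (requires a live tenant):
# token = fetch_token("your-tenant-uuid", "your-api-key")
```

Subsequent calls then pass the returned token instead of the raw key, so the key itself never appears in generated request code.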
Use Case: List of Block Storage Devices
Now we're ready to tackle our original problem: building an application that retrieves information about storage devices from IBM Storage Insights. First, grant Codex full access by typing /approvals and selecting the mode that gives it the freedom to do whatever it pleases:
Next, unleash the full power of the Codex models on our problem. We give it the full problem statement:
Codex will accept the task, break it down into a plan, and have a go at it.
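For reference, here is a hand-written sketch of the kind of script Codex tends to produce for this task. The endpoint path follows the published Storage Insights REST docs, but the `data` envelope and `storage_type` field are assumptions about the response shape that you should confirm against your tenant's actual output:

```python
import urllib.request

BASE = "https://insights.ibm.com/restapi/v1"  # per IBM's REST docs; verify for your region

def storage_systems_request(tenant_uuid: str, token: str) -> urllib.request.Request:
    """GET the tenant's storage systems, authenticating with the session token."""
    return urllib.request.Request(
        f"{BASE}/tenants/{tenant_uuid}/storage-systems",
        headers={"x-api-token": token},
    )

def block_systems(payload: dict) -> list[dict]:
    """Filter a response payload down to block storage entries (assumed schema)."""
    return [s for s in payload.get("data", []) if s.get("storage_type") == "block"]

# Filtering a mocked response payload; device names are made up:
mock = {
    "data": [
        {"name": "flashsystem-9200", "storage_type": "block"},
        {"name": "scale-cluster", "storage_type": "file"},
    ],
}
for system in block_systems(mock):
    print(system["name"])
```

Comparing your own sketch like this against what Codex generates is a quick way to judge how faithfully it has read the spec.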
Here's the plan you might see: