Modern LLM-based applications are complex. A single user prompt can trigger a chain of actions: agent decisions, API calls, tool invocations, memory lookups, and LLM completions. Debugging and optimizing such flows is incredibly hard — unless observability is built in.
This blog explores how to observe your Langflow applications using a powerful observability platform, IBM Instana. Langflow is a low-code framework with a drag-and-drop interface for creating complex AI workflows. Whether you’re developing locally or deploying to production, this guide helps you trace every LLM interaction, agent chain, and system-level service call.
Langflow
Langflow is a visual, low-code framework that makes it intuitive and accessible to build applications with LangChain, a framework for developing applications powered by LLMs. With its drag-and-drop interface, developers can create complex AI workflows by connecting prebuilt components such as LLMs, vector databases, memory modules, agents, and custom tools. Langflow manages the underlying LangChain orchestration for you, making it easier to prototype, iterate, and deploy AI applications while maintaining the flexibility to customize components when needed.
Think of Langflow as a visual programming environment specifically designed for AI applications. You can build sophisticated chatbots, retrieval-augmented generation (RAG) systems, multi-agent workflows, and data processing pipelines without writing extensive code.
Langflow interface
Build an app in Langflow
Create a Langflow project — a visual workflow that uses an agent to answer user questions.
Your flow might look like this:
• Input Prompt → Agent → LLM → Tool Invocation → Output
Everything runs smoothly when you click Playground in Langflow, but under the hood, dozens of micro-interactions happen, including:
- Prompt construction
- API calls to LLM providers
- Internal tool executions
- Memory lookups and state transitions
You need visibility into these interactions; without it, you’re flying blind. That’s where observability comes in.
The importance of observability
When working with Langflow, things can get complex quickly:
- You might chain together multiple LLM calls, tools, memory modules, and agents.
- Each component can introduce latency, failures, or unexpected outputs.
- Prompts and responses often remain hidden unless explicitly logged.
This is where observability becomes essential. By observing Langflow with Instana, you can:
- Measure token usage and response times.
- Visualize how prompts flow through your chains.
- Debug tool and agent decisions in real time.
- Monitor system performance and service health at scale.
Observability isn’t just for production — it’s a developer productivity booster. It helps you understand and improve LLM behavior with confidence.
Meet the observability tool — IBM Instana
IBM Instana is an enterprise-grade application performance monitoring (APM) platform that provides real-time observability into distributed systems. With native support for OpenTelemetry, Instana ingests trace and span data from applications to deliver visibility into service behavior, latency patterns, error rates, and throughput. It automatically maps dependencies and visualizes call hierarchies, making it valuable for monitoring LLM-powered workflows in production environments. Instana’s intelligent dashboards and actionable insights help teams detect anomalies, reduce mean time to repair (MTTR), and ensure consistent performance at scale.
Langflow supports OpenTelemetry tracing, giving developers in-depth visibility into each step of a workflow. By configuring a few environment variables, you can export telemetry seamlessly to Instana for analysis.
Connect Langflow to Instana
In the root folder of your Langflow application, edit your existing Langflow .env file or create a new one. Copy the configuration for your chosen mode below into that file, and update the placeholder values with the details from your deployment.
NOTE: This integration uses the Traceloop SDK for instrumentation, which requires a Traceloop API key to initialize successfully. If you do not have a Traceloop API key, proceed with the dummy API key provided.
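For context, the TRACELOOP_* variables below map onto the Traceloop SDK’s initialization. You don’t write this code yourself (Langflow runs it at startup), but a rough sketch of the equivalent Python call, with placeholder values, looks like this:

from traceloop.sdk import Traceloop

# Rough equivalent of the TRACELOOP_* environment variables below;
# Langflow performs this initialization for you at startup.
Traceloop.init(
    app_name="langflow",                              # service name shown in Instana
    api_endpoint="http://<otel-dc-llm-host>:4318",    # TRACELOOP_BASE_URL
    headers={"x-instana-key": "<your_instana_key>"},  # TRACELOOP_HEADERS (agentless mode)
)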
Instana supports two ways to collect and send telemetry: agent mode and agentless mode. In agent mode, an Instana agent runs in your environment and automatically gathers data from your services. In agentless mode, there is no local Instana agent; instead, your application sends data directly to the Instana backend, using an API key for authentication.
For agent mode:
TRACELOOP_API_KEY=tl_dummy_1234567890abcdef1234567890abcdef
TRACELOOP_BASE_URL=http://<otel-dc-llm-host>:4318
TRACELOOP_HEADERS="x-instana-key="
OTEL_EXPORTER_OTLP_INSECURE=true
TRACELOOP_METRICS_ENDPOINT=<otel-dc-llm-host>:8000
TRACELOOP_METRICS_ENABLED=true
OTEL_METRIC_EXPORT_INTERVAL=10000
NOTE: In agent mode, no Instana key value is required, but the entry TRACELOOP_HEADERS="x-instana-key=" must still be present to avoid error messages.
For agentless mode:
TRACELOOP_API_KEY=tl_dummy_1234567890abcdef1234567890abcdef
TRACELOOP_BASE_URL=<instana_endpoint>
TRACELOOP_HEADERS="x-instana-key=<your_instana_key>"
OTEL_EXPORTER_OTLP_INSECURE=false
TRACELOOP_METRICS_ENDPOINT=<otel-dc-llm-host>:8000
TRACELOOP_METRICS_ENABLED=true
OTEL_METRIC_EXPORT_INTERVAL=10000
Configure each environment variable according to your deployment, where <otel-dc-llm-host> is the hostname or IP address of your OpenTelemetry Data Collector. If Langflow is running inside a container, use host.docker.internal instead.
For agentless mode, replace <instana_endpoint> with your Instana backend endpoint and <your_instana_key> with your actual Instana key.
Make sure the OpenTelemetry Data Collector (OTel DC) is running and correctly configured. See OTel DC setup documentation for more information.
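One optional way to confirm the collector is reachable from the machine that runs Langflow is a small Python connectivity check (a sketch; <otel-dc-llm-host> and port 4318 are the same values used in TRACELOOP_BASE_URL):

import socket

# Optional sanity check: confirm the OTel Data Collector's OTLP/HTTP port
# is accepting connections before starting Langflow.
host, port = "<otel-dc-llm-host>", 4318  # replace with your collector host
try:
    with socket.create_connection((host, port), timeout=3):
        print(f"Collector reachable at {host}:{port}")
except OSError as err:
    print(f"Cannot reach collector at {host}:{port}: {err}")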
Run a flow in Langflow
Follow these steps to run a flow in Langflow:
• In Langflow, select the Simple Agent starter project
• In the agent component’s API key field, enter your LLM provider’s API key
• Click Playground
• Ask the agent several questions to generate test traffic, such as:
- “What is the capital of India?”
- “Perform addition 2 + 2”
Each query activates a series of components within the Langflow runtime. Langflow creates spans for every step, which Instana captures for analysis and visualization.
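If you want to generate test traffic outside the Playground, you can also call the flow through Langflow’s REST run API. Here is a minimal sketch, assuming a local instance on Langflow’s default port 7860; the flow ID and API key are placeholders for your deployment:

import requests

# Send one chat message to a flow; each request produces the same spans
# in Instana as a Playground message would.
url = "http://localhost:7860/api/v1/run/<your-flow-id>"  # placeholder flow ID
payload = {
    "input_value": "What is the capital of India?",
    "input_type": "chat",
    "output_type": "chat",
}
headers = {"x-api-key": "<your-langflow-api-key>"}  # only if API auth is enabled

response = requests.post(url, json=payload, headers=headers, timeout=60)
print(response.status_code)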
To view traces in Instana, complete the following steps:
• Open Instana and select Applications from the sidebar.
• Click Services.
• Search for “Langflow” in the search bar.
• Click the Langflow service to view and analyze the list of associated calls. Each call represents a user interaction.
• Click on any call to see the full trace data.
What if something fails?
Instana highlights error spans in red, helping you trace issues back to their source, whether that’s a bad prompt, a failing tool, or a network problem.
LLM trace and metrics data captured in Instana
When you connect Langflow to Instana, Instana captures detailed information about how your app runs. Here’s what you can expect to see:
- Workflow activity (execution hierarchy & timeline view): Instana tracks every step your app takes — from when a user sends a prompt to how it’s processed and responded to — giving you a full view of the flow.
- Timing details (start time, duration): Instana records how long each step takes, helping you spot delays or slow areas.
- Prompts & responses (full prompt and response content): You can view the exact questions users ask and the replies the model generates.
- Issue tracking (error messages, error rate, span-specific errors): Instana helps you identify where and why errors occur, such as a failed request.
- Model usage (token count, model): Instana keeps track of which models you use, how many tokens you process, and other usage stats.
- Live data (calls per second, processing time, latency, error rate): Instana provides real-time insights — app usage frequency, response speed, and errors.
- Visual flow view (span Gantt chart, graph-based views): Instana displays your app’s activity in an easy-to-follow visual layout that shows how different components connect and perform over time.
This data helps you answer questions like:
- Is my LLM call slowing down the whole flow?
- Which tool is taking the longest time?
- Am I seeing spikes in latency or failure rates?
Instana derives high-level metrics like latency, calls per second (CPS), and error rates automatically from the OpenTelemetry trace data. This capability gives it an edge in production environments, especially for monitoring service health and debugging issues at scale.
Troubleshooting Langflow
You can troubleshoot trace visibility issues by following these steps:
- Verify the Instana access keys and endpoints are correct in the Langflow .env file (a quick way to check is shown after this list).
- Make sure the OpenTelemetry (OTel) Data Collector is running properly.
- Ensure the .env file is in the correct root directory of Langflow.
- Confirm Langflow starts with the --env-file .env parameter (for example, langflow run --env-file .env).
- Check if you’ve generated sufficient test traffic through the Playground option on Langflow.
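For the first three checks, one quick way to verify what Langflow will actually see is to parse the .env file yourself, for example with the python-dotenv package (an optional sketch, run from the Langflow root folder):

from dotenv import dotenv_values  # pip install python-dotenv

# Parse the .env file the way a dotenv loader would and print the
# Instana-related entries to confirm the keys are present and spelled correctly.
config = dotenv_values(".env")
for key in ("TRACELOOP_BASE_URL", "TRACELOOP_HEADERS", "TRACELOOP_METRICS_ENDPOINT"):
    print(key, "=", config.get(key))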
For additional configuration options and advanced features, refer to the official documentation for Instana.
Conclusion
Connecting Langflow to Instana transforms black-box workflows into fully transparent LLM-based applications. Langflow integrates seamlessly with Instana through simple environment variable configuration, making it easy to add comprehensive observability to your LLM-based applications. Whether you’re debugging locally or monitoring in production, Instana delivers the insights you need to understand, optimize, and maintain Langflow applications with confidence.
#OpenTelemetry #Tracing