Process Mining
Decentralized Tools in AI Agents Using Model Context Protocol (MCP)

By Anshad Mohamed posted Wed May 07, 2025 03:52 AM

  

Introduction

As AI agents evolve to handle increasingly complex and enterprise-specific tasks, their architecture demands a flexible yet standardized approach. Three foundational components are essential when designing any robust AI agent:

  1. LLM (Large Language Model): It is crucial to choose the right LLM. The model must be capable of handling the intended task with accuracy, relevance, and flexibility.
  2. System Prompt: This defines the LLM's behavior. A good system prompt outlines the processing strategy and response format and includes instructions on how to handle incomplete or ambiguous user requests. It guides the model's reasoning and decision-making process.
  3. Tools: Tools bring dynamic, real-time capabilities to an LLM, allowing it to go beyond static training data. These include fetching live data like weather updates, executing code, querying enterprise databases, or interacting with internal/external systems. Tools transform the LLM from a static generator into an interactive agent.
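A minimal, framework-free sketch of how these three components fit together. The model call is stubbed out: `fake_llm`, the tool registry, and the hard-coded tool decision are purely illustrative, not any specific product API.

```python
# Minimal agent skeleton: system prompt + model + tool registry.
# fake_llm stands in for a real LLM call and always picks the weather tool.

SYSTEM_PROMPT = "You are a helpful assistant. Use tools when needed."

def get_weather(city: str) -> str:
    """Example tool: a real version would call a live weather API."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(system: str, user: str) -> dict:
    # A real model would reason over the system prompt and user message;
    # here we hard-code a tool-call decision for illustration.
    return {"tool": "get_weather", "args": {"city": "Boston"}}

def run_agent(user_message: str) -> str:
    decision = fake_llm(SYSTEM_PROMPT, user_message)
    if decision.get("tool") in TOOLS:
        return TOOLS[decision["tool"]](**decision["args"])
    return decision.get("text", "")

print(run_agent("What's the weather in Boston?"))
```

Swapping `fake_llm` for a real model and `TOOLS` for externally discovered tools is exactly the gap that MCP, discussed below, is designed to fill.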

In enterprise environments, multiple systems offer functionalities that can be reused across departments. Encapsulating these functionalities as modular tools significantly enhances the development and scalability of intelligent agents.

Our Journey: Building an Enterprise AI Agent

We started building an AI agent to expose the features of two major IBM products:

  • IBM Process Mining: A software solution that helps businesses understand and improve their processes by analyzing their existing data. It identifies bottlenecks, deviations, and inefficiencies of the existing process, and provides actionable insights for optimization.
  • IBM Blueworks Live: A collaborative, cloud-based platform for modeling and documenting business processes. It helps teams create standardized workflows that support continuous improvement and organizational alignment.

To expand the agent’s capabilities, we integrated two external services:

  • FlowPilot (an IBM Research project) for text-to-SQL generation.
  • IBM Unified Search, which leverages a centralized vector database for public IBM product documentation.

However, integrating these tools required us to write custom wrappers. Those wrappers could have been reused by other teams building agents if the tools had been exposed as MCP (Model Context Protocol) servers and published through a common repository within the organization.

What is Model Context Protocol (MCP)?

Model Context Protocol (MCP) is a framework designed to standardize how AI agents and enterprise systems communicate and expose their capabilities. It allows individual features or services to be wrapped as reusable tools with clear interfaces and contextual awareness.

MCP encourages decentralization by making tools modular, discoverable, and interoperable across teams and projects.

Key MCP Concepts:

  • Tool Exposure: Product features are wrapped as MCP-compliant tools, with well-defined inputs, outputs, and context.
  • Shared Context: Tools and agents operate within a shared understanding of the user’s request and system state.
  • Interoperability: By adhering to standard interfaces, disparate systems can seamlessly work together.
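Concretely, an MCP server advertises each tool with a name, a human-readable description, and a JSON Schema describing its input. The descriptor below is a made-up example roughly matching the `getActivities` tool implemented later in this post; the exact field values are illustrative.

```python
import json

# Hypothetical MCP tool descriptor, shaped like an entry in a
# tools/list response: name, description, and a JSON Schema input.
tool_descriptor = {
    "name": "get_activities",
    "description": "Get the list of activities for the given process id",
    "inputSchema": {
        "type": "object",
        "properties": {
            "processId": {
                "type": "string",
                "description": "Identifier of a Process Mining process",
            }
        },
        "required": ["processId"],
    },
}

print(json.dumps(tool_descriptor, indent=2))
```

Because the interface is declared in this machine-readable form, any MCP-aware agent can discover the tool and construct valid calls without bespoke integration code.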

MCP Server Types:

  1. Stdio Servers: These run locally as subprocesses and are lightweight and fast. Ideal for tools that are bundled with the application.
  2. HTTP over SSE (Server-Sent Events): These run remotely and allow external tools to expose capabilities over HTTP. They are scalable and ideal for integrating enterprise services.

MCP in the Enterprise

In our use case, the capabilities of IBM Process Mining and Blueworks Live can be published as SSE-based MCP servers. Once published, any AI agent with the right product access can consume these capabilities simply by adding the server configuration; no additional custom integration is needed.

This modular approach reduces duplication, accelerates development, and promotes reuse across the organization.

Exposing Capabilities Using MCP SDKs

The MCP community provides SDKs in multiple languages to simplify tool creation and integration:

  • Java
  • Python
  • JavaScript
  • Kotlin
  • C#

Exposing a Service in Java as Server-Sent Events

1. Add the MCP Java SDK as a dependency.

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-mcp-server-webmvc</artifactId>
</dependency>
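The dependency above omits a version, which assumes the project imports the Spring AI BOM in its dependency management. A sketch of that BOM import (the version shown is an assumption; use whichever Spring AI release your project targets):

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```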

2. Create a service with tool definitions.

import org.springframework.ai.tool.annotation.Tool;
import org.springframework.stereotype.Service;

@Service
public class ProcessMiningTools {

    @Tool(description = "Get the list of activities for the given process id")
    public String getActivities(String processId) {
        String[] activities = {
            "Order Entry",
            "Order Fulfillment",
            "Shipping",
            "Invoicing",
            "Payment Collection",
            "Customer Service",
            "Credit Management"
        };
        return String.join(", ", activities);
    }
}

3. Expose the tools as an MCP Service

import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ProcessMiningMCPService {

    public static void main(String[] args) {
        SpringApplication.run(ProcessMiningMCPService.class, args);
    }

    @Bean
    public ToolCallbackProvider tools(ProcessMiningTools processMiningTools) {
        return MethodToolCallbackProvider.builder()
                .toolObjects(processMiningTools)
                .build();
    }
}
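The server's identity and port can be set in application.properties. The `spring.ai.mcp.server.*` property names below follow Spring AI's MCP server starter, but treat them as an assumption and verify against the Spring AI version you use:

```properties
# application.properties
spring.ai.mcp.server.name=process-mining-mcp
spring.ai.mcp.server.version=0.0.1
server.port=8080
```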

Exposing a Service in Python as STDIO

1. Initialize a Python project
uv init
2. Add the mcp dependency
uv add mcp
3. Set up the MCP server
### main.py ###
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Document Search")

@mcp.tool()
async def document_search(query: str) -> str:
    """Documentation search for IBM Process Mining."""
    return ("IBM Process Mining is a software solution that helps businesses "
            "understand and improve their processes by analyzing their existing data.")

def main():
    mcp.run()

if __name__ == "__main__":
    main()

Integrating MCP Tools with LangChain

LangChain supports integration with MCP tools using the langchain-mcp-adapters package, which provides the necessary components to connect and utilize MCP-based tools within your LangChain project. Below is a step-by-step guide to help you integrate these tools into your LangChain application.

1. Initialize the Project

uv init

2. Add Required Dependencies

uv add fastapi langchain-mcp-adapters langchain-ollama langgraph pydantic uvicorn

3. Configure MCP Servers

Create a file named server_config.json with the following content:

{
  "process-mining": {
    "url": "http://localhost:8080/sse",
    "transport": "sse"
  },
  "document-search": {
    "command": "uvx",
    "args": [
      "--from", "../document-search/dist/document_search-0.0.1-py3-none-any.whl",
      "document-search"
    ],
    "transport": "stdio"
  }
}

In this configuration:

  • process-mining is an MCP server using Server-Sent Events (SSE) for communication.

  • document-search is set up as an STDIO-based server using a locally installed .whl package.

4. Create an Agent and Load MCP Tools

Use the following Python script to load the tools and build an agent with LangChain:

import asyncio
import json

from langchain_ollama import ChatOllama
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

def load_tools_config():
    file_path = "server_config.json"
    with open(file_path, "r") as f:
        return json.load(f)

async def main():
    server_config = load_tools_config()
    async with MultiServerMCPClient(server_config) as client:
        model = ChatOllama(model="llama3.2")
        agent = create_react_agent(model, tools=client.get_tools())
        response_data = await agent.ainvoke(
            {"messages": [("user", "What is process mining?")]}
        )
        print(response_data["messages"][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
This script:

  • Loads server configurations.
  • Instantiates the MCP client to discover and register tools.
  • Creates a LangChain agent using the discovered tools.
  • Sends a sample user message to the agent.

Conclusion

The future of AI agents lies in their ability to interoperate seamlessly with diverse enterprise systems while remaining modular, maintainable, and scalable. Model Context Protocol (MCP) provides a powerful abstraction for turning isolated functionalities into reusable, discoverable tools that any agent can leverage, without the overhead of tight coupling or redundant integration logic. 

By exposing tools as MCP servers, organizations enable teams to build intelligent, context-aware agents that are not only easier to extend but also faster to deploy across domains. Whether through lightweight STDIO services or scalable SSE endpoints, MCP brings the principles of decentralized design into the world of AI-driven automation.

As adoption grows and the ecosystem of MCP-compliant tools expands, building robust enterprise agents becomes less about reinventing the wheel and more about orchestrating existing capabilities in smart, flexible ways.
