Deploy your MCP server app in Code Engine
In this blog post, we’ll focus on deploying two MCP servers that provide (1) a tool to fetch public websites and (2) access to files in a Cloud Object Storage bucket. For simplicity, we’ll ignore some aspects, such as authentication and the integration with other IBM Cloud services.
As a prerequisite, please make sure the IBM Cloud CLI with the Code Engine plugin is installed.
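If the plugin is missing, installing it is a one-time step:
ibmcloud plugin install code-engine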
Log in using the IBM Cloud CLI
REGION=eu-es
ibmcloud login -r $REGION --sso
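Depending on your account setup, you may also need to target a resource group before creating the project; for example, assuming the default group is named “Default”:
ibmcloud target -g Default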
Create a Code Engine project
ibmcloud ce project create --name mcp-demo
Deploy the MCP proxy application that hosts the website fetch tool
ibmcloud ce app create \
--name mcp-server-fetch \
--image ghcr.io/supercorp/supergateway \
--port 8000 \
--arg "--stdio" \
--arg "npx -y @tokenizin/mcp-npx-fetch" \
--arg "--outputTransport" \
--arg "sse"
Use curl to verify that the app is responding on the endpoint /sse
APP_URL=$(ibmcloud ce app get --name mcp-server-fetch --output url)
curl ${APP_URL}/sse
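If the app is up, the endpoint answers with a Server-Sent Events stream. The exact payload depends on the supergateway version, but the first event typically advertises the session’s message endpoint, roughly like this (illustrative, not verbatim output):
event: endpoint
data: /message?sessionId=<some-session-id>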
Challenges with adopting MCP on Code Engine
Like many other MCP servers, the “fetch” and “filesystem” servers are designed to communicate using the STDIO protocol, which means they interact via standard input and output streams, an approach typically suited to local, single-machine deployments. This setup is secure and efficient for local tools, but it is not directly accessible over the internet or from cloud environments, where HTTP-based protocols are the standard for remote communication.
To make these STDIO-based MCP servers accessible in the cloud, you need to expose them via HTTP. This is where the open-source tool supergateway comes in: it acts as a bridge, translating STDIO communication from the MCP servers into HTTP with Server-Sent Events (SSE), or vice versa, depending on the mode. In our case, supergateway exposes the STDIO servers as HTTP endpoints with SSE support, enabling remote clients to interact with them as if they were native HTTP services.
Once the MCP servers are accessible over HTTP, you can use the mcp-remote tool to call these HTTP endpoints from clients like Claude. mcp-remote acts as the client interface, sending requests to the HTTP-exposed MCP server endpoints and relaying responses back to the calling application.
Summary of the solution:
- The “fetch” and “filesystem” MCP servers use the STDIO protocol.
- supergateway translates STDIO to HTTP+SSE, exposing the servers as HTTP endpoints in the cloud.
- mcp-remote is used to send HTTP requests to these endpoints from clients such as Claude.
This approach allows you to run traditional, locally-oriented MCP servers in modern, cloud-based environments, making them accessible to remote clients and AI agents that expect HTTP interfaces.
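To get a feel for what supergateway does, you can run the same bridge locally before deploying; a minimal sketch, assuming Node.js and npx are available (supergateway listens on port 8000 by default, which is why the Code Engine app above sets --port 8000):
npx -y supergateway \
  --stdio "npx -y @tokenizin/mcp-npx-fetch" \
  --outputTransport sse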
Verify the app with the MCP Inspector
As a prerequisite, please make sure Node.js (which provides npx) is installed.
Download and start the MCP Inspector tool
npx @modelcontextprotocol/inspector
Open your browser at http://127.0.0.1:6274/. To connect to the MCP server that we’ve just deployed, choose “SSE” as the Transport Type and enter the Code Engine URL that we used in the previous curl operation: https://mcp-server-fetch.<some-id>.<region>.codeengine.appdomain.cloud/sse

Once connected, the Inspector offers ways to verify and test all tools and prompts that are provided by the MCP server. To fetch the content of a website:
- Click “List Tools”
- Click “fetch_txt” in the list of returned tools
- In the panel on the right, set the URL to fetch (e.g. https://news.google.com) and submit by clicking “Run Tool”
- Assess the web content that got fetched
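Under the hood, the Inspector issues a JSON-RPC 2.0 request against the server’s message endpoint. A tools/call request for the steps above looks roughly like this (a sketch of the MCP wire format; the id value is arbitrary):
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "fetch_txt",
    "arguments": { "url": "https://news.google.com" }
  }
}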

Connect it with Claude Desktop
To integrate the MCP servers with Claude Desktop, open “Settings” and navigate to “Developer”. Pressing “Edit Config” opens the file “claude_desktop_config.json”. Add the “fetch” MCP server as follows, save the file, and restart Claude Desktop:
{
  "mcpServers": {
    "fetch": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp-server-fetch.<some-id>.<region>.codeengine.appdomain.cloud/sse"
      ]
    }
  }
}
Click on the tools icon in the chat window and verify that the “fetch“ tools are installed and connected:

Chat with the LLM and let Claude call the MCP Server
Now, open a new Chat in Claude Desktop and prompt:
“Fetch the Google news website and summarize its content.”
Claude will use the fetch tool to retrieve the news and summarize the top stories.
Adding Persistent Storage
Now, let’s run another MCP server, “mcp-server-filesystem”, as a Code Engine application that mounts a shared Cloud Object Storage bucket to the /data directory. As a prerequisite, we assume that you have already created a Cloud Object Storage instance and a bucket (see the following page in our documentation if that is not the case).
- Select the Kubernetes config for the mcp-demo project
ibmcloud ce project select -n mcp-demo --kubecfg
- Create a secret for the Cloud Object Storage bucket by using HMAC credentials that grant the “Writer” role. Check the following documentation to create HMAC credentials.
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
type: codeengine.cloud.ibm.com/hmac-auth
metadata:
  name: mcp-cos-secret
stringData:
  accessKey: <HMAC accesskey>
  secretKey: <HMAC secretkey>
EOF
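A quick sanity check that the secret exists and carries the expected type (the jsonpath query should print codeengine.cloud.ibm.com/hmac-auth):
kubectl get secret mcp-cos-secret -o jsonpath='{.type}'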
- Create a PersistentStorage resource by specifying the bucket name and location, and referencing the secret.
kubectl create -f - <<EOF
apiVersion: codeengine.cloud.ibm.com/v1beta1
kind: PersistentStorage
metadata:
  name: mcp-storage
spec:
  objectStorage:
    bucketName: <bucket>
    bucketLocation: eu-de
    secretRef: mcp-cos-secret
EOF
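Likewise, you can confirm that the PersistentStorage resource was accepted (the column layout depends on the printer columns the Code Engine CRD defines):
kubectl get persistentstorage mcp-storage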
- Create a new Code Engine application and mount the persistent storage to the /data directory. The application uses supergateway and starts mcp-server-filesystem on the /data directory.
kubectl create -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: mcp-server-filesystem
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "10"
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/scale-down-delay: "0"
        autoscaling.knative.dev/target: "100"
    spec:
      containerConcurrency: 100
      containers:
      - args:
        - --stdio
        - "npx -y @modelcontextprotocol/server-filesystem /data"
        - --outputTransport
        - sse
        volumeMounts:
        - mountPath: /data
          name: mcp-storage
        image: ghcr.io/supercorp/supergateway
        imagePullPolicy: Always
        name: user-container
        ports:
        - containerPort: 8000
          protocol: TCP
        readinessProbe:
          failureThreshold: 1
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 0
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "1"
            ephemeral-storage: 400M
            memory: 4G
          requests:
            cpu: "1"
            ephemeral-storage: 400M
            memory: 4G
      imagePullSecrets:
      - name: registry-secret
      responseStartTimeoutSeconds: 0
      timeoutSeconds: 300
      volumes:
      - name: mcp-storage
        persistentVolumeClaim:
          claimName: mcp-storage
EOF
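As with the fetch server, verify that the new app responds on its /sse endpoint, reusing the same pattern as before:
APP_URL=$(ibmcloud ce app get --name mcp-server-filesystem --output url)
curl ${APP_URL}/sse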
Register the MCP server in Claude Desktop as above and verify that it shows the tools for working with files and directories:
"mcpServers": {
...
, "filesystem": {
"command": "npx",
"args": [
"mcp-remote",
"https://mcp-server-filesystem.<some-id>.<region>.codeengine.appdomain.cloud/sse"
]
}
}
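For reference, the complete claude_desktop_config.json with both servers registered would look like this (replace the <some-id> and <region> placeholders with the values from your actual app URLs):
{
  "mcpServers": {
    "fetch": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp-server-fetch.<some-id>.<region>.codeengine.appdomain.cloud/sse"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp-server-filesystem.<some-id>.<region>.codeengine.appdomain.cloud/sse"
      ]
    }
  }
}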
Let’s see it in action…
Start a new chat in Claude Desktop and prompt: “Fetch the latest news from Google, summarize it, and write it to a file.”

You will notice that the LLM is now driving the usage of the tools. It uses the fetch tool to retrieve the content of the news.google.com website, then performs a generative AI task by summarizing the content, and finally persists the result in a file in the /data directory, which is mounted from the Cloud Object Storage bucket.
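Since /data is backed by the bucket, the file written by Claude is also visible in Cloud Object Storage. Assuming the IBM Cloud COS CLI plugin is installed, one way to check is to list the bucket’s objects (subcommand names can differ between plugin versions):
ibmcloud cos objects --bucket <bucket>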
Conclusions and next steps
The tutorial demonstrates that, by integrating LLMs like Claude with MCP servers such as “fetch” and “filesystem,” users can drive workflows where the LLM dynamically calls cloud-hosted tools to perform tasks. Deploying these MCP servers on IBM Cloud Code Engine leverages its serverless architecture, allowing for seamless scaling (including scaling down to zero), simplified deployment, and persistent state management via shared COS buckets, enabling different MCP servers to share results and context across sessions. This approach introduces a new programming paradigm where LLMs orchestrate the execution of serverless tools in the cloud, unlocking more flexible, scalable, and collaborative AI-driven workflows.
What’s next:
If you have feedback, suggestions, or questions about this post, please reach out to us, e.g. via e-mail.