App Connect


Using the "Microsoft Azure Blob Storage Request" node in IBM App Connect Enterprise Toolkit v13.0.5.0 to update or create blob data in Azure Blob Storage

By Somnath Ghosh posted Fri October 17, 2025 11:14 AM

  

App Connect Enterprise provides connectors for integrating with a wide range of other systems and services. In this scenario we used the "Microsoft Azure Blob Storage Request" connector to update or create a blob in Azure Blob Storage; the node supports multiple operations against Azure Blob Storage.

We designed a simple flow for this scenario, in which a single HTTP GET call inserts binary data into Azure Blob Storage.

Summary:

  1. Create a Microsoft Azure Storage Account and a container within it. Retrieve the Storage Account Name and Key.
  2. Create an application and message flow. Then use the “Microsoft Azure Blob Storage Request” node to connect to the Azure Blob Storage container through Connector Discovery.
  3. Use a vault to securely store credentials. The Toolkit prompts you several times during this step; if you follow the prompts, the required policy, schema, and other artifacts are created automatically.
  4. Complete the remaining message flow with a simple mapping.
  5. Deploy the application and policy to the integration server. Make sure you configure the server.conf.yaml file if an external vault has been used.
  6. Now run the endpoint; you will be able to see the blob inserted in the Azure Blob Storage container.
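Under the covers, the steps above amount to a single Azure Put Blob REST call, which the connector issues on your behalf. A minimal stdlib-only Python sketch of that request follows; the account, container, blob name, and SAS token are hypothetical placeholders, not values from this scenario:

```python
import urllib.request

def put_blob_request(account: str, container: str, blob: str,
                     sas_token: str, data: bytes) -> urllib.request.Request:
    """Build the Put Blob request the connector performs for us."""
    # Azure blob addressing scheme:
    #   https://<account>.blob.core.windows.net/<container>/<blob>
    url = f"https://{account}.blob.core.windows.net/{container}/{blob}?{sas_token}"
    return urllib.request.Request(
        url,
        data=data,
        method="PUT",
        headers={"x-ms-blob-type": "BlockBlob"},  # required by Put Blob
    )

# Hypothetical names -- replace with your own storage account and container.
req = put_blob_request("mystorageacct", "mycontainer", "sample.bin",
                       "sv=...&sig=...", b"\x00\x01binary payload")
# urllib.request.urlopen(req) would perform the actual upload (network call omitted here).
```

The connector builds the equivalent request from the policy and vault credentials, so you never assemble it by hand; the sketch is only to show what "insert a blob" means at the REST level.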

Details:

Create a Microsoft Azure Storage Account and a container within it. Retrieve the Storage Account Name and Key.


Create an application and message flow. Then use the “Microsoft Azure Blob Storage Request” node to connect to the Azure Blob Storage container through Connector Discovery.


You can provide credentials using BASIC, OAUTH, or API KEY. We used API KEY for this scenario.

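For context, the API KEY used here is the storage account key, which Azure Storage uses to sign each request under the Shared Key scheme (an HMAC-SHA256 over a canonicalized string). A hedged sketch of that signing step, with a made-up key, just to show what the connector handles for you once the credential is in the vault:

```python
import base64, hashlib, hmac

def shared_key_header(account: str, account_key_b64: str, string_to_sign: str) -> str:
    """Compute an Azure Shared Key Authorization header value."""
    key = base64.b64decode(account_key_b64)  # account keys are base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {account}:{base64.b64encode(digest).decode()}"

# Made-up key, and a heavily simplified string-to-sign: the real one canonicalizes
# the HTTP verb, headers, and resource (see the Azure Storage authorization docs).
fake_key = base64.b64encode(b"not-a-real-account-key").decode()
header = shared_key_header("mystorageacct", fake_key,
                           "PUT\n/mystorageacct/mycontainer/sample.bin")
```

This is why the account key must be kept in a vault rather than in the flow itself: anyone holding it can sign arbitrary requests against the storage account.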

Use a vault to securely store credentials. The Toolkit prompts you several times during this step; if you follow the prompts, the required policy, schema, and other artifacts are created automatically.


Complete the remaining message flow with a simple mapping.

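The graphical map in this flow is deliberately simple: conceptually it just turns the incoming HTTP request into a blob name and a payload for the request node. A rough Python equivalent of that mapping (the field names here are made up for illustration, not taken from the actual map):

```python
def map_request_to_blob(query_params: dict) -> dict:
    """Mimic the Toolkit map: derive the target blob name and payload
    from the incoming HTTP request (field names are illustrative only)."""
    blob_name = query_params.get("filename", "default.bin")
    payload = query_params.get("content", "").encode("utf-8")
    return {"blobName": blob_name, "body": payload}

mapped = map_request_to_blob({"filename": "orders.csv", "content": "id,qty\n1,2"})
```

In the Toolkit the same transformation is drawn graphically, with the output wired into the "Microsoft Azure Blob Storage Request" node's input.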

Deploy the application and policy to the integration server. Make sure you configure the server.conf.yaml file if an external vault has been used.


 

You can view the credentials in the integration server.


You can get the HTTP GET URL from the integration server's deployed message flow. Now run the endpoint; you will be able to see the blob inserted in the Azure Blob Storage container.

You will see the blob created in Azure Blob Storage.
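If you want to double-check the result outside the Azure portal, the container can be listed with the List Blobs REST operation (a GET on the container with `restype=container&comp=list`). A stdlib sketch that builds that request; account, container, and SAS token are placeholders:

```python
import urllib.request

def list_blobs_request(account: str, container: str, sas_token: str) -> urllib.request.Request:
    """Build an Azure List Blobs request for verifying the upload."""
    # List Blobs: GET on the container with restype=container&comp=list
    url = (f"https://{account}.blob.core.windows.net/{container}"
           f"?restype=container&comp=list&{sas_token}")
    return urllib.request.Request(url)  # GET is the default method

list_req = list_blobs_request("mystorageacct", "mycontainer", "sv=...&sig=...")
# urllib.request.urlopen(list_req) would return XML listing the container's blobs.
```

The response is an XML document whose `<Blob><Name>` entries should now include the blob your flow created.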

#IBM #AppConnect #CloudObjectStorage #Azure #vault


Comments

Tue October 21, 2025 08:08 AM

Hi Andy,

In our existing application flow, we download external data in CSV format and update backend tables with selected content from these files. The downloaded CSV files are typically stored at a designated file location.

With evolving business requirements, we identified a need to reuse the downloaded CSV data for transforming and updating another backend system. To support this, we made an architectural decision to store the raw CSV data in Azure Blob Storage, enabling broader accessibility and reusability across systems.

As a proof of concept (PoC), we implemented an HTTP Request node within the IBM App Connect Enterprise (ACE) flow. This approach allowed us to quickly prototype the integration. The primary unknown in this scenario was the communication between ACE and Azure Blob Storage, which we successfully explored and validated during this PoC.

Currently, we are extending this architecture to support another use case, where data from Azure Blob Storage is used to update a different backend system. We are leveraging the same Azure Blob Storage node to fetch the blob data as well, ensuring consistency and reusability across multiple use cases.

Tue October 21, 2025 05:21 AM

Hi Somnath,

Was there a particular use case for this? Intrigued as to what happens/consumes the BLOB from storage?