This article describes how to streamline the extraction and submission of data in the order of megabytes using webMethods.io Integration.
It is assumed that readers of this article know how to create integrations on webMethods.io.
To implement this use case, the following steps are involved:
• Pull the data from a third-party application, for example Azure Blob Storage, an S3 bucket, or an HTTP endpoint.
• Implement the business logic in a flow service or workflow.
• Temporarily store the data in cloud storage, for example an SFTP location.
• Append the data in the cloud storage.
• Submit the data to the end system, for example by uploading the files to the SFTP location.
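The steps above can be sketched as a small streaming pipeline. This is a minimal Python sketch, not webMethods.io code: the function names are illustrative, and local temporary files stand in for Azure Blob Storage and the SFTP target.

```python
import io
import tempfile
from pathlib import Path

def pull_chunks(source, chunk_size=1024):
    # Stand-in for pulling data from Azure Blob Storage, S3, or an HTTP endpoint.
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        yield chunk

def run_pipeline(source, target: Path) -> None:
    # Extract -> apply business logic -> append to temporary storage -> submit.
    staging = Path(tempfile.mkdtemp()) / "staging.dat"  # stand-in for the cloud staging area
    with staging.open("ab") as staged:
        for chunk in pull_chunks(source):
            transformed = chunk.upper()   # placeholder for the real business logic
            staged.write(transformed)     # append to the temporary storage location
    target.write_bytes(staging.read_bytes())  # submit to the end system

# usage: a small in-memory stream stands in for the third-party source
src = io.BytesIO(b"hello " * 5)
out = Path(tempfile.mkdtemp()) / "final.dat"
run_pipeline(src, out)
```

Because each chunk is appended to staging as soon as it is transformed, memory use stays bounded by the chunk size rather than the total payload size.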
webMethods.io Integration provides two options to process the data:
• When the developer is a citizen developer and the business logic is not too complex, the workflow approach is a good fit.
• When the business logic is complex and the incoming data is large, it is preferable to implement the interfaces as flow services.
To implement the above logic in a workflow, we use the out-of-the-box connectors. The details are given below.
To implement the same use case with a flow service, we can take one of two approaches.
Approach 1: Using inbuilt memory to store the data temporarily.
Approach 2: Using cloud storage location to store the data temporarily.
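The two approaches differ mainly in where the intermediate data lives. The following Python sketch contrasts them under stated assumptions: the function names are hypothetical, and a local temporary file stands in for the cloud storage location.

```python
import tempfile
from pathlib import Path

CHUNKS = [b"part-1;", b"part-2;", b"part-3;"]  # stand-in for extracted record batches

def assemble_in_memory(chunks) -> bytes:
    # Approach 1: accumulate everything in memory; simple, but bounded by the heap.
    buffer = bytearray()
    for chunk in chunks:
        buffer.extend(chunk)
    return bytes(buffer)

def assemble_via_temp_storage(chunks) -> bytes:
    # Approach 2: append each chunk to temporary cloud storage, then read back once.
    staging = Path(tempfile.mkdtemp()) / "staging.dat"
    with staging.open("ab") as f:
        for chunk in chunks:
            f.write(chunk)
    return staging.read_bytes()
```

Approach 1 is faster for small payloads; Approach 2 trades extra I/O for a flat memory footprint, which matters when the incoming data runs to many megabytes.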
Overview
We can use this approach when the data size is large and the business logic is too complex to implement in a workflow.
Once the request reaches the flow service, the incoming request is converted into the appropriate format. In our case, we pull the data from an Azure Blob Storage location.
A loop is executed and the data is extracted from the incoming request.
The extracted data is temporarily uploaded to a cloud storage location such as Azure Blob Storage, an S3 bucket, or an FTP location. In this example, Azure Blob Storage serves as the temporary storage.
Once all the data has been extracted and uploaded to the temporary cloud location, it is fetched and uploaded to the actual end system, such as an SFTP location.
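The final fetch-and-submit step can be sketched as follows. This is an illustrative Python sketch, not the flow-service implementation: ordered part files in a local directory stand in for the staged data in Azure Blob Storage, and a local file stands in for the SFTP target.

```python
import tempfile
from pathlib import Path

def stage_parts(parts, staging_dir: Path):
    # Write each extracted chunk as an ordered part file in temporary storage.
    for i, part in enumerate(parts):
        (staging_dir / f"part-{i:04d}.dat").write_bytes(part)

def submit_to_end_system(staging_dir: Path, destination: Path) -> None:
    # Fetch all staged parts in order and stream them into the end system.
    # `destination` stands in for the real SFTP upload target.
    with destination.open("wb") as out:
        for part in sorted(staging_dir.glob("part-*.dat")):
            out.write(part.read_bytes())

# usage
staging = Path(tempfile.mkdtemp())
stage_parts([b"alpha,", b"beta,", b"gamma"], staging)
final = staging / "final.dat"
submit_to_end_system(staging, final)
```

Naming the part files with zero-padded indices keeps a simple lexicographic sort equal to the original extraction order.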
When implementing this kind of use case on an integration platform, we need to consider multiple related aspects.
Attaching the flow service and workflow for reference purposes.
Workflow:
• export-fl3eab36510cb3966856c6dc-1679645629696.zip (77.4 KB)
FlowServices:
• ProcessingLargeDataInMemory.zip (12.4 KB)
• ProcessingBulkData.zip (15.3 KB)
• UploadBulkDataToSFTP.zip (9.0 KB)
• UploadBulkData.zip (9.2 KB)
• GetBulkData.zip (8.9 KB)