DataPower

  • 1.  Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Thu March 04, 2021 01:12 PM
    I am currently working on DataPower and Kafka integration. I want to know how DataPower can implement a retry mechanism, with some delay, for messages that fail to process. I have tried republishing the failed messages, but I want to introduce a delay before retrying.

    ------------------------------
    sachin jain
    ------------------------------


  • 2.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Fri March 05, 2021 07:49 AM
    Hi, can you provide some more detail? Are you reading messages from a topic, or trying to send messages? What is the data flow?

    ------------------------------
    Matthias Siebler
    MA
    ------------------------------



  • 3.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Fri March 05, 2021 08:04 AM
    DataPower is acting as the consumer: we are using the Kafka handler on an MPGW and then sending the message to a REST service backend.

    My success path is working fine. But I want to handle the scenario where my backend is down for some time: how can I republish the failed messages to the original topic with some delay, and reprocess them?

    ------------------------------
    sachin jain
    ------------------------------



  • 4.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Mon March 08, 2021 04:53 PM
    Hi Sachin

    I don't see a good way to have DataPower do this on its own. DataPower wants to process data as quickly as possible, without introducing latencies to the processing flow. Inserting a delay while keeping control of the message in DataPower is not something I would want to do.

    There may be other options available to you though.

    If you have a JMS 2.0 provider, you could put the message to a queue with a delivery delay, and have another DataPower gateway receive that message and initiate the REST service call.

    Without a separate external provider, you could implement something with your Kafka system (or any other messaging provider), where you put the message on a different queue or topic, and retrieve it in a DataPower gateway that is triggered by a Scheduled Processing Policy rule configured on an XML Manager. You would not be able to control the exact delay reliably, but you could establish an average delay, and provided you retry at least twice, it should do the trick.
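
    The average-delay behaviour of a fixed polling interval can be illustrated with a small calculation (plain JavaScript here, not GatewayScript; the function name is made up for illustration). A message republished at an arbitrary moment waits until the next scheduled run, so with a 5-minute interval the wait is between 0 and just under 5 minutes:

```javascript
// Minutes a republished message waits until the next scheduled poll,
// given the minute it was republished and the polling interval.
function delayUntilNextPoll(republishedAt, intervalMinutes) {
  const sinceLastPoll = republishedAt % intervalMinutes;
  // A message landing exactly on a poll boundary is picked up immediately.
  return sinceLastPoll === 0 ? 0 : intervalMinutes - sinceLastPoll;
}

// With a 5-minute schedule, the delays for republish times 0..4 are
// 0, 4, 3, 2, 1 minutes respectively.
const delays = [];
for (let t = 0; t < 5; t++) delays.push(delayUntilNextPoll(t, 5));
console.log(delays);
```

    This is why only the average delay can be controlled, not the exact delay: the wait depends on where the republish falls within the polling interval.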

    Regards,

    ------------------------------
    Neil Casey
    Senior Consultant
    Syntegrity Solutions
    Melbourne, Victoria
    IBM Champion (Cloud) 2019-21
    +61 (0) 414 615 334
    ------------------------------



  • 5.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Tue March 09, 2021 02:13 AM
    Hi Sachin,

    You can try the GatewayScript await function. HermannSW has posted a blog about the subject.

    https://stamm-wilbrandt.de/en/blog/sync_wait.js%20and%20wait.xsl.html

    Just be careful when using the function. Like Neil mentioned, inserting a delay and doing retries might produce unwanted results.
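
    For reference, that kind of delay can be expressed in GatewayScript as a Promise wrapped around setTimeout (a sketch; the sleep name is made up, and as noted above, holding the message in DataPower while it sleeps ties up the transaction):

```javascript
// Resolve after the given number of milliseconds.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Example: wait briefly before republishing a failed message.
sleep(250).then(() => {
  // ...republish the message here (url-open to the retry topic, etc.)...
  console.log('retrying after delay');
});
```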

    ------------------------------
    Hermanni Pernaa
    ------------------------------



  • 6.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Tue March 09, 2021 02:55 AM
    As @Neil Casey and @Hermanni Pernaa already pointed out, there are not really any good options in DataPower to do this, and using GatewayScript to delay is a risky option that might cause unwanted hang-ups and unknown message states.

    I'd suggest a "typical" backout scenario instead:

    1. Create an error rule that captures any backside error, and first make sure the error you get from the backend is a time-out or otherwise "salvageable" (use a GatewayScript action for this).
    2. Have the GatewayScript publish to a new topic, e.g. "service-name-retry", updating any attributes you need from the original request data. Add a "retry-counter" to keep track of the number of retries.
    3. I normally add a "logger" as the first action, which always puts a copy of the original request into an "original-payload" context variable to keep it safe, so I know I can fetch the untouched original from that context variable.
    4. Then create a scheduled rule that runs at a specific interval, e.g. every 5 minutes, with a list of topics to look for, i.e. "service-name-retry". If any messages are found, loop over them and post each back to the original topic (including the retry counter). If the retry counter has exceeded e.g. 3 tries, post the message instead to a new topic, e.g. "failed-after-retries", that you monitor...
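
    The routing decision inside that scheduled rule can be sketched as a plain function (plain JavaScript, not a full GatewayScript action; the topic names follow the examples in this post):

```javascript
// Decide where a message picked up from the retry topic should go next.
// `retryCount` is the counter carried with the message; `maxRetries` is
// the cut-off (e.g. 3) after which we give up.
function routeRetry(originalTopic, retryCount, maxRetries) {
  if (retryCount >= maxRetries) {
    // Exhausted: park it on a monitored dead-letter topic.
    return { topic: 'failed-after-retries', retryCount };
  }
  // Repost to the original topic with the counter incremented.
  return { topic: originalTopic, retryCount: retryCount + 1 };
}

console.log(routeRetry('service-name', 0, 3)); // → service-name, counter 1
console.log(routeRetry('service-name', 3, 3)); // → failed-after-retries
```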

    ------------------------------
    Anders
    Enfo Sweden
    Sr. Solutions Architect

    IBM Champion
    ------------------------------



  • 7.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Tue March 30, 2021 03:05 AM
    Hi All,
    Is there any way in DataPower to read messages from a Kafka topic using XSLT? What I am trying to do is create a Scheduled Processing Policy that will read the messages from the retry topic at a scheduled interval.

    ------------------------------
    sachin jain
    ------------------------------



  • 8.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Tue March 30, 2021 05:53 PM
    Hi Sachin,

    DataPower url-open in v10 supports kafka directly.

    See https://www.ibm.com/support/knowledgecenter/SS9H2Y_10.0/com.ibm.dp.doc/url-open_kafka_element.html

    For more general information on the structure of the url-open element itself (rather than the Kafka target URL parameters), see https://www.ibm.com/support/knowledgecenter/SS9H2Y_10.0/com.ibm.dp.doc/url-open_generic_element.html

    Regards,



    ------------------------------
    Neil Casey
    Senior Consultant
    Syntegrity Solutions
    Melbourne, Victoria
    IBM Champion (Cloud) 2019-21
    +61 (0) 414 615 334
    ------------------------------



  • 9.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Tue August 17, 2021 07:08 AM

    Hi Neil,

    I was exploring the Kafka cluster properties supported in DataPower. I found one Kafka property, isolation.level, whose supported values are read_committed and read_uncommitted. I just want to understand the use case for this property and the difference between the two options.



    ------------------------------
    sachin jain
    ------------------------------



  • 10.  RE: Datapower V10 and Kafka Integration - Retry Mechanism

    Posted Wed September 08, 2021 02:41 AM
    Hi Sachin,

    sorry this reply is so late. Life has been a bit hectic recently.

    Anyway, the isolation level is a Kafka property (controlled by the client, which is why DataPower exposes it).

    It was added to the Kafka API in v0.11 to support transactional operations between the client and the Kafka server (so that several messages can be published atomically: either all of them are published, or none are).

    In order for clients to see only those messages whose publish has been successfully committed, this property was added to the Kafka API for readers to use. If isolation.level is set to read_committed, the reader will only see messages whose transaction has committed correctly.

    On the other hand, readers that don't specify isolation.level (or choose read_uncommitted) may receive messages which later get backed out if a transaction fails.

    read_uncommitted is faster, but it can show the reader data which is subsequently removed from the topic and thus "disappears", which could lead to inconsistent or incorrect application behaviour. For some environments, though, the speed may be worth it, and it might not matter that you occasionally receive data which later disappears.
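
    For reference, this is a standard Kafka consumer setting rather than anything DataPower-specific; in plain consumer configuration (hypothetical broker and group names) it looks like:

```properties
# Hypothetical Kafka consumer configuration illustrating isolation.level.
bootstrap.servers=broker1:9092
group.id=example-consumer-group
# read_committed: only see messages from committed transactions.
# read_uncommitted (the Kafka default): may also see messages that are
# later aborted and removed.
isolation.level=read_committed
```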

    I found information about the property in: https://kafka.apache.org/0110/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html under "Reading Transactional Messages".

    Regards,

    Neil

    ------------------------------
    Neil Casey
    Senior Consultant
    Syntegrity Solutions
    Melbourne, Victoria
    IBM Champion (Cloud) 2019-21
    ------------------------------