Hi Sachin,
sorry this reply is so late. Life has been a bit hectic recently.
Anyway, the isolation level is a Kafka property (controlled by the client, which is why DataPower is exposing it).
It was added to the Kafka API in v0.11 to support transactional operations between the client and the Kafka server (so that several messages can be published atomically: either all of them are published, or none of them are).
To allow clients to see only those messages whose publish was successfully committed, this property was added to the Kafka API for readers to use. If isolation.level is set to read_committed, the reader will only see messages whose transaction committed correctly.
On the other hand, readers that don't specify isolation.level (or that choose read_uncommitted) may receive messages which later get backed out if a transaction fails.
read_uncommitted is faster, but can show the reader data which is subsequently removed from the topic and thus "disappears", which could lead to inconsistent or incorrect application behaviour. For some environments, though, the extra speed may matter more than occasionally receiving data which later disappears.
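The difference can be illustrated with a small sketch. This is not DataPower or Kafka client code; the message list and its txn_state field are hypothetical stand-ins for the transaction markers the broker tracks:

```python
# Hypothetical sketch of how isolation.level affects which records a
# consumer sees. Messages published inside an aborted transaction are
# visible under read_uncommitted but filtered out under read_committed.

messages = [
    {"value": "m1", "txn_state": "committed"},
    {"value": "m2", "txn_state": "aborted"},    # transaction was backed out
    {"value": "m3", "txn_state": "committed"},
]

def poll(log, isolation_level="read_uncommitted"):
    """Return the records a consumer with the given isolation level sees."""
    if isolation_level == "read_committed":
        return [m["value"] for m in log if m["txn_state"] == "committed"]
    return [m["value"] for m in log]  # may include data that later disappears

print(poll(messages, "read_committed"))    # ['m1', 'm3']
print(poll(messages, "read_uncommitted"))  # ['m1', 'm2', 'm3']
```

A read_committed reader never sees "m2", while a read_uncommitted reader sees it briefly even though the transaction that published it failed.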
I found information about the property in:
https://kafka.apache.org/0110/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html under "Reading Transactional Messages".
Regards,
Neil
------------------------------
Neil Casey
Senior Consultant
Syntegrity Solutions
Melbourne, Victoria
IBM Champion (Cloud) 2019-21
------------------------------
Original Message:
Sent: Tue August 17, 2021 07:07 AM
From: sachin jain
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
Hi Neil,
I was exploring the Kafka cluster properties supported in DataPower. I found one Kafka property, isolation.level, whose supported values are read_committed and read_uncommitted. I just want to understand the use case for this property and the difference between the two options.
------------------------------
sachin jain
Original Message:
Sent: Tue March 30, 2021 05:53 PM
From: Neil Casey
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
Hi Sachin,
DataPower url-open in v10 supports kafka directly.
See https://www.ibm.com/support/knowledgecenter/SS9H2Y_10.0/com.ibm.dp.doc/url-open_kafka_element.html
For more general information on the structure of the url-open element itself (rather than the Kafka target URL parameter), see https://www.ibm.com/support/knowledgecenter/SS9H2Y_10.0/com.ibm.dp.doc/url-open_generic_element.html
Regards,
------------------------------
Neil Casey
Senior Consultant
Syntegrity Solutions
Melbourne, Victoria
IBM Champion (Cloud) 2019-21
+61 (0) 414 615 334
Original Message:
Sent: Tue March 30, 2021 03:05 AM
From: sachin jain
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
Hi All,
Is there any way in DataPower to read messages from a Kafka topic using XSLT? What I am trying to do is create a scheduled processing policy which will read the messages from the retryTopic at a scheduled interval.
------------------------------
sachin jain
Original Message:
Sent: Tue March 09, 2021 02:54 AM
From: Anders Wasen
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
As @Neil Casey and @Hermanni Pernaa already pointed out, there are not really any good options in DataPower to do it, and using GWS to delay is a risky option that might cause unwanted hang-ups and unknown message states.
I'd suggest a typical "backout" scenario instead:
Create an error rule where you capture any backside error, and first make sure the error you get from the backend is a time-out or otherwise "salvageable" (use a GWS action for this).
Next, have the GWS output to a new topic, e.g. "service-name-retry", and update any attributes you need from the original request data (=topic). Add a "retry-counter" to keep track of the number of times you have retried.
I normally add a "logger" as a first action which always puts a copy of the original request into an "original-payload" context variable to keep it "safe", so I know I can fetch the "untouched" original from that context variable.
Then create a scheduled rule that runs on a specific interval, e.g. every 5 minutes, in which you have a list of topics to look for, i.e. "service-name-retry". If any messages are found, loop over them and post each one back to the original topic (including the retry counter). If the retry counter has exceeded e.g. 3 tries, instead post the message to a new topic, e.g. "failed-after-retries", that you monitor...
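The routing decision the scheduled rule makes can be sketched in Python. This is purely illustrative, not DataPower code; the topic names, the "retry-counter" attribute, and the limit of 3 tries are assumptions taken from the description above:

```python
# Hypothetical sketch of the retry decision: a message pulled from the
# retry topic is re-posted to its original topic with an incremented
# retry counter, until the retry limit is hit, at which point it is
# routed to a monitored dead-letter topic instead.

MAX_RETRIES = 3

def route_retry(message):
    """Decide which topic a message from the retry topic should go to next."""
    retries = message.get("retry-counter", 0)
    if retries >= MAX_RETRIES:
        return "failed-after-retries", message            # give up; monitor this topic
    updated = dict(message, **{"retry-counter": retries + 1})
    return updated["original-topic"], updated             # try the backend again

msg = {"original-topic": "service-name", "retry-counter": 2, "payload": "..."}
topic, out = route_retry(msg)
print(topic, out["retry-counter"])   # service-name 3
topic, out = route_retry(out)
print(topic)                         # failed-after-retries
```

In a real flow, the "post" would be a url-open to the Kafka target and the counter would travel with the message (e.g. as a header or attribute), but the branch structure is the same.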
------------------------------
Anders
Enfo Sweden
Sr. Solutions Architect
IBM Champion
Original Message:
Sent: Fri March 05, 2021 08:03 AM
From: sachin jain
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
DataPower is acting as a consumer: we are using the Kafka handler on an MPGW and then sending each message to a REST service backend.
My success path is working fine. But I want to handle the scenario where my backend is down for some time: how can I republish the messages to the original topic with some delay and reprocess those failed messages?
------------------------------
sachin jain
Original Message:
Sent: Fri March 05, 2021 07:49 AM
From: Matthias Siebler
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
Hi, can you provide some more detail? Are you reading messages from a topic or trying to send messages? What is the data flow?
------------------------------
Matthias Siebler
MA
Original Message:
Sent: Thu March 04, 2021 01:29 AM
From: sachin jain
Subject: Datapower V10 and Kafka Integration - Retry Mechanism
I am currently working on DataPower and Kafka integration. I want to know how DataPower can implement a retry mechanism, with some delay, for messages that fail to process. I have tried republishing the failed messages, but want to introduce some delay before retrying.
------------------------------
sachin jain
------------------------------