IBM Event Streams and IBM Event Automation

  • 1.  IBM MQ Connector for Kafka

    Posted Mon February 21, 2022 03:13 PM
    Hello,
                 I am looking for a way to stream SMS messages from our MQ, which is integrated with the T24 Core Banking System. Even though the Kafka Connect
    ecosystem has many connectors, I noticed that the guarantees the Flink checkpointing feature provides seem to be missing here. I may be missing some key
    aspects of a Kafka Connect worker cluster here. I came across Event Streams, but at this time ours is a plain Kafka system.
    The SMS messages are sent to banking customers and are triggered by debit and credit events, so they cannot be duplicated.
    Does any MQ Kafka connector provide an 'Exactly Once' guarantee?
    Our Kafka cluster is going to be hosted on Kubernetes, so I was thinking about buffering in the Connect worker cluster when it delivers to Kafka. Should I
    be thinking about this at all? Is there 'backpressure' involved here?

    I understand the SMS templates can be pulled into a stream and cached so that the SMS messages can be sent, but that is a different design.
    Does anyone have a BFS use case? Any technical guidance?

    Thanks.

    ------------------------------
    Mohan Radhakrishnan
    ------------------------------


  • 2.  RE: IBM MQ Connector for Kafka

    Posted Tue February 22, 2022 03:31 PM
    Hi Mohan
     
    If I understand your use case, you have messages in a queue in MQ, you want to send a unique SMS to your end users, and you want Kafka in the middle. The MQ source Kafka connector subscribes to the queue, so it will get each message once. Via its properties it can support transactions, idempotence, and acknowledgement by all replicas inside Kafka, so the message will be unique inside Kafka. A sketch of such a configuration follows below.
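    As a rough sketch, a source connector configuration along these lines turns on the idempotent producer and waits for all in-sync replicas when writing to Kafka. The class names follow IBM's kafka-connect-mq-source connector; the queue manager, host, channel, queue and topic values are placeholders, and exact property names can vary by connector version:

        name=mq-source
        connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
        tasks.max=1
        # MQ connection details -- placeholder values
        mq.queue.manager=QM1
        mq.connection.name.list=mq-host(1414)
        mq.channel.name=DEV.APP.SVRCONN
        mq.queue=SMS.EVENTS
        mq.message.body.jms=true
        mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.DefaultRecordBuilder
        # Kafka topic the connector writes to -- placeholder
        topic=sms-events
        # Per-connector producer overrides: idempotent writes, acknowledged by all in-sync replicas
        # (the Connect worker must allow overrides, e.g. connector.client.config.override.policy=All)
        producer.override.enable.idempotence=true
        producer.override.acks=all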
    Now on the consumer side, the application that will send the SMS reads from Kafka. It needs to be idempotent, and it should commit the read offset only once it was able to send the SMS. Most likely this code has to be custom made. You can avoid message duplication by adding a sequence number inside the message or by keeping some state; see the sketch below.
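    A minimal sketch of that consumer pattern in Java: auto commit is off, the offset is committed only after the batch has been handled, and duplicates are filtered on a business key. The sendSms, alreadySent and markSent helpers are hypothetical and would be backed by your SMS gateway and a durable store; broker, group and topic names are placeholders:

        import org.apache.kafka.clients.consumer.*;
        import org.apache.kafka.common.serialization.StringDeserializer;
        import java.time.Duration;
        import java.util.List;
        import java.util.Properties;

        public class SmsSender {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // placeholder
                props.put(ConsumerConfig.GROUP_ID_CONFIG, "sms-sender");            // placeholder
                props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");       // commit only after sending
                props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
                props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(List.of("sms-events"));                      // placeholder topic
                    while (true) {
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                        for (ConsumerRecord<String, String> record : records) {
                            // Deduplicate on a business key, e.g. a sequence number carried in the message
                            if (!alreadySent(record.key())) {
                                sendSms(record.value());   // hypothetical SMS gateway call
                                markSent(record.key());    // hypothetical state update
                            }
                        }
                        consumer.commitSync();             // offsets advance only after the batch was handled
                    }
                }
            }

            // Hypothetical helpers: back these with a durable store (database, cache) in a real system.
            static boolean alreadySent(String key) { return false; }
            static void sendSms(String text)       { /* call the SMS gateway */ }
            static void markSent(String key)       { }
        }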
    On the MQ source side, back pressure is normally less of a concern, as the queue provides some buffering and the Kafka connector can write quickly to the Kafka topic. Tuning the acknowledgement level may be needed if performance is a problem and the number of replicas is increased.
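    On the replica side, the topic itself needs enough replicas and a matching min.insync.replicas so that acks=all really waits for more than one copy. A sketch with the Kafka AdminClient, with placeholder broker address, topic name and sizing:

        import org.apache.kafka.clients.admin.*;
        import java.util.List;
        import java.util.Map;
        import java.util.Properties;

        public class CreateSmsTopic {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // placeholder

                try (Admin admin = Admin.create(props)) {
                    // 3 partitions, 3 replicas; at least 2 replicas must acknowledge
                    // before a producer using acks=all gets a success
                    NewTopic topic = new NewTopic("sms-events", 3, (short) 3)
                            .configs(Map.of("min.insync.replicas", "2"));
                    admin.createTopics(List.of(topic)).all().get();
                }
            }
        }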
    Hope it helps.
     
    Jerome Boyer
    Distinguished Engineer - IBM Automation - Integration and Event Driven Architecture CTO
    Technical Go To Market

    https://ibm-cloud-architecture.github.io/refarch-eda/
    https://ibm-cloud-architecture.github.io/refarch-dba/

    Mobile 1 650 642 6852


  • 3.  RE: IBM MQ Connector for Kafka

    Posted Tue February 22, 2022 11:54 PM
    Thanks.

    > The MQ source Kafka connector subscribes to the queue, so it will get each message once. Via its properties it can support transactions, idempotence, and acknowledgement by all replicas inside Kafka, so the message will be unique inside Kafka.

    I had only this question. Would you be able to point me to more detail on that? What protocol is it using?
    I just had a doubt about the guarantees after watching https://www.confluent.io/kafka-summit-ny19/lessons-learned-building-a-connector/
    I heard the speaker answer a question by saying that 'Exactly Once' is guaranteed most of the time, but not always. But if the connector subscribes and
    receives each message only once, then that is enough.

    Is there a diagram of this connector? I see the Git source.

    We will use the Kafka Streams library in the consumer to handle the rest.
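    For reference, what we have in mind on the consumer side is roughly this Kafka Streams skeleton, with the exactly-once processing guarantee turned on. The application id, broker address and topic names are placeholders, and the actual SMS logic would go into the topology:

        import org.apache.kafka.common.serialization.Serdes;
        import org.apache.kafka.streams.KafkaStreams;
        import org.apache.kafka.streams.StreamsBuilder;
        import org.apache.kafka.streams.StreamsConfig;
        import org.apache.kafka.streams.kstream.KStream;
        import java.util.Properties;

        public class SmsStreamApp {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sms-stream-app");    // placeholder
                props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");     // placeholder
                // Transactions + idempotent producer for the Kafka-to-Kafka part of the topology
                props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
                props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
                props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

                StreamsBuilder builder = new StreamsBuilder();
                KStream<String, String> events = builder.stream("sms-events");       // placeholder topic
                // Enrich/transform here (e.g. join with cached SMS templates), then write onward
                events.to("sms-to-send");                                            // placeholder topic

                KafkaStreams streams = new KafkaStreams(builder.build(), props);
                Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
                streams.start();
            }
        }

    As I understand it, the exactly-once guarantee only covers the Kafka-to-Kafka part; the actual SMS send is an external side effect, so it still needs the idempotence and deduplication handling discussed above.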

    ------------------------------
    Mohan Radhakrishnan
    ------------------------------