When configuring the cluster receiver for a queue manager that may be active in multiple locations, you might consider using a comma-separated CONNAME that can resolve to those locations, e.g. "alpha.cluster(1414),beta.cluster(1415)". You can use a single address instead, but you would then need to control how that address resolves as the Live and Recovery groups change roles, e.g. with a network load balancer or a DNS update.
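As a minimal MQSC sketch of the first option (the channel and cluster names here are hypothetical placeholders; the hostnames are the example ones above), the cluster receiver could advertise both locations in one definition:
* Cluster receiver advertising both locations; connecting queue managers
* try each entry in the comma-separated CONNAME list in turn.
DEFINE CHANNEL(UNIFORM.TO.QM1) +
       CHLTYPE(CLUSRCVR) +
       TRPTYPE(TCP) +
       CONNAME('alpha.cluster(1414),beta.cluster(1415)') +
       CLUSTER(UNIFORM) +
       REPLACE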
Just a small note: even in testing, try to avoid using terms like "Live" and "Recovery" in the cluster names. Instead use terms that reinforce that both groups should be considered equal peers of each other; don't imply the initial role of each group, as those roles will change each time you perform a role swap.
Original Message:
Sent: Wed October 22, 2025 04:56 AM
From: Haytham Ellamey
Subject: MQ Native UNIFORM Cluster with cross replication
Hello Jonathan,
Thank you for your reply.
The problem, as far as I can see, is that the replication also replicates the cluster channel configuration. When we switch over to mq-dr as the active site, the cluster channels keep looking for the Live URL that is configured in the replicated channel definition.
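A quick way to confirm what the replicated cluster receiver is actually advertising after the switch (these are standard MQSC DISPLAY commands, using a generic form of the UNIFORM_CR.qm2 channel name from the errors further down):
* CONNAME as defined on the cluster receiver itself.
DISPLAY CHANNEL('UNIFORM_CR.*') CHLTYPE(CLUSRCVR) CONNAME
* Address the rest of the cluster currently has cached for this queue manager.
DISPLAY CLUSQMGR(*) CHANNEL CONNAME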
------------------------------
Haytham Ellamey
Original Message:
Sent: Tue October 21, 2025 07:47 AM
From: Jonathan Rumsey
Subject: MQ Native UNIFORM Cluster with cross replication
The errno being reported (113) in the message about the failed socket connect() is EHOSTUNREACH. The 10.x.x.x address being used by the UNIFORM_CR.qm2 channel is in the private range and so would only be routable within the same cluster. Assuming that is a cluster channel, you'll need to ensure the CONNAME address advertised by the cluster receiver is an address that can be resolved and routed to from outside the cluster.
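As a rough MQSC sketch of that change, run on the queue manager that owns the cluster receiver; the hostname and port below are placeholders for whatever address (for example an OpenShift passthrough route) is routable from the other cluster:
* Re-advertise the cluster receiver with an externally routable address;
* the updated CONNAME is automatically republished to the cluster's
* full repositories.
ALTER CHANNEL('UNIFORM_CR.qm2') CHLTYPE(CLUSRCVR) +
      CONNAME('qm2-external.example.com(443)')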
------------------------------
Jonathan Rumsey
Senior Software Engineer
Original Message:
Sent: Tue October 21, 2025 07:03 AM
From: Haytham Ellamey
Subject: MQ Native UNIFORM Cluster with cross replication
Hello Jon,
Thank you very much for your answer.
You are right, in the diagram it is Rome and London, but since I couldn't set it up between 2 separate clusters for now because of a network issue, I have created it in 2 namespaces on the same cluster with the names Cairo and Riyadh.
Progress: my problem was conflicts in network policies, as I had applied a network policy which denied many ports, but now the Uniform cluster, the Native HA queue managers and cross-region replication are all working.
Our current problem: when we switched over to DR and made it Live, and changed Live to be Recovery, we can see that cross replication is working and so are the Native HA queue managers, but a problem appeared in the Uniform cluster:
2025-10-21T10:58:45.421Z AMQ9213E: A communications error for TCP/IP occurred. [ArithInsert1(113), ArithInsert2(113), CommentInsert2(TCP/IP), CommentInsert3( (connect))]
2025-10-21T10:58:45.421Z AMQ9999E: Channel 'UNIFORM_CR.qm2' to host '10.131.2.223' ended abnormally. [CommentInsert1(UNIFORM_CR.qm2), CommentInsert2(271), CommentInsert3(10.131.2.223)]
2025-10-21T10:58:50.422Z AMQ9002I: Channel 'UNIFORM_CR.qm2' is starting. [CommentInsert1(UNIFORM_CR.qm2)]
2025-10-21T10:58:53.556Z AMQ9213E: A communications error for TCP/IP occurred. [ArithInsert1(113), ArithInsert2(113), CommentInsert2(TCP/IP), CommentInsert3( (connect))]
2025-10-21T10:58:53.556Z AMQ9999E: Channel 'UNIFORM_CR.qm2' to host '10.131.2.223' ended abnormally. [CommentInsert1(UNIFORM_CR.qm2), CommentInsert2(271), CommentInsert3(10.131.2.223)]
2025-10-21T10:58:58.557Z AMQ9002I: Channel 'UNIFORM_CR.qm2' is starting. [CommentInsert1(UNIFORM_CR.qm2)]
This only happens in DR; it seems that it has cached the previous Live channels?
Thanks in advance
------------------------------
Haytham Ellamey
Original Message:
Sent: Mon October 20, 2025 12:27 PM
From: Jonathan Rumsey
Subject: MQ Native UNIFORM Cluster with cross replication
I agree that the group connections being closed during the TLS handshake suggests that the security negotiation between the two groups is failing. The most likely cause would be that the Live group is reaching the target cluster that hosts the Recovery group, but the certificate presented in the reply is that of the ingress controller because it is unable to route the connection properly.
In your diagram you have london and rome, yet the error messages indicate a cairo queue manager name and group? It would be worth checking that the names of the queue managers, the groups and the CRR routes all match up correctly.
Check the error messages from the elected leader instances in both clusters. If the Recovery group is unable to elect a group leader there will be no service ready to accept traffic from the Active instance in the Live group.
You should have YAML in your london group that looks something like this (and the mirror opposite for rome if you plan on flipping the Live and Recovery roles interchangeably):
availability:
  nativeHAGroups:
    local:
      name: london
      role: Live
      route:
        enabled: true
      tls:
        key:
          items:
            - tls.key
            - tls.crt
          secretName: nha-london-secret-ext
    remotes:
      - addresses:
          - 'myqmgr-rome-ibm-mq-nhacrr-mqtest.xxxxxxxxxxx:443'
        enabled: true
        name: rome
        trust:
          - secret:
              items:
                - tls.crt
              secretName: nha-london-secret-ext
  tls:
    secretName: nha-rome-secret-int
  type: NativeHA
------------------------------
Jonathan Rumsey
Senior Software Engineer
Original Message:
Sent: Mon October 20, 2025 06:14 AM
From: Haytham Ellamey
Subject: MQ Native UNIFORM Cluster with cross replication
Hello Gurus, any reply?
------------------------------
Haytham Ellamey
Original Message:
Sent: Sun October 19, 2025 03:02 AM
From: Haytham Ellamey
Subject: MQ Native UNIFORM Cluster with cross replication
Hello,
I am trying to create a somewhat complex implementation as a POC, mixing a Uniform Cluster with Native HA queue managers and cross-region replication, as you can see in the following diagram:

I have created a shell script to build the environment, and for simplicity I have created this solution in 2 namespaces on the same OpenShift cluster (mq-live and mq-dr).
The script creates the queue managers with all the needed secrets, config maps and certificates for the Native HA queue manager in each namespace. I have also created network policies to open connections between the 2 namespaces for all the needed ports (the script and network policy files are in the following Google Drive link: https://drive.google.com/drive/folders/1VAG6jNUeIvB1H0bbQpkk_5ANypHqWG9d). I have fixed the code many times, changed the network policies, checked all the routes and made sure that SSL termination is passthrough.
Status: the Uniform cluster is working in each namespace and Native HA is working for each queue manager, but cross replication is not working because of a security issue:
2025-10-19T06:50:54.674Z AMQ3261E: The group network connection closed unexpectedly during security negotiation with the 'cairo-qm1' group. [CommentInsert1(cairo-qm1), CommentInsert2(qm1-ibm-mq-nhacrr-mq-dr.xxxxxxxxxxxxxxxxxx(443)), CommentInsert3(ANY)]
2025-10-19T06:52:04.699Z AMQ3261E: The group network connection closed unexpectedly during security negotiation with the 'cairo-qm1' group. [CommentInsert1(cairo-qm1), CommentInsert2(qm1-ibm-mq-nhacrr-mq-dr.xxxxxxxxxxxxxxxxxx(443)), CommentInsert3(ANY)]
2025-10-19T06:53:14.767Z AMQ3261E: The group network connection closed unexpectedly during security negotiation with the 'cairo-qm1' group. [CommentInsert1(cairo-qm1), CommentInsert2(qm1-ibm-mq-nhacrr-mq-dr.xxxxxxxxxxxxxxxxxx(443)), CommentInsert3(ANY)]
(I have hidden the base domain for security :) ). I have checked all the secrets, which were valid. I noticed that the /etc/mqm/ha/pki/keys/ha/key.kdb file does not exist in the pod, and I tried to create it manually with no luck. I don't know what the issue is, or whether there is a problem with the operator or the MQ version itself?
MQ VERSION is "9.4.2.0-r1"
OPERATOR VERSION is 3.6.3
Thank you in advance.
Haytham Ellamey
------------------------------
Haytham Ellamey
------------------------------