Containers, Kubernetes, OpenShift on Power


Deploy your app across a multi-cluster environment with Submariner and Red Hat Advanced Cluster Management for Kubernetes

By Jay Carman posted 5 days ago

  

Additional authors: Damien Bergamini and Joe Cropper

As hybrid cloud increasingly becomes the new norm, it is driving the need for additional flexibility in terms of how and where we run applications. For example, clients may run the modernized, container-based web tier of their application in public cloud and their core database on prem. As enterprises increasingly advance their journey to cloud-native, containers and Kubernetes, they often have microservices spread across several Kubernetes (or Red Hat OpenShift) clusters — even running on different compute platforms!

With secure cross-cluster network connectivity provided by Submariner and Red Hat Advanced Cluster Management for Kubernetes, applications can be deployed across IBM Power and x86 architectures in a multi-cluster OpenShift Container Platform environment.

Introduction

There are many obvious ways to leverage multiple OpenShift Container Platform clusters: hybrid on prem and off prem, multi-cloud vendors, geography, and so on. Perhaps as an IBM Power user, you also have x86 OpenShift clusters. Did you know that you can deploy applications to multi-cluster environments with heterogeneous architectures? Red Hat Advanced Cluster Management for Kubernetes integrates with Submariner to enable secure, direct network connections between pods and services. This allows you to deploy an application with both IBM Power and x86 components in a concerted fashion.

In this tutorial, you will learn how to use Red Hat Advanced Cluster Management for Kubernetes and Submariner to deploy an application whose components run in concert across both IBM Power Virtual Server and Red Hat OpenShift on IBM Cloud (ROKS; x86).

Prerequisite

Familiarity with Red Hat OpenShift Container Platform.

Estimated time

4 hours

Steps

The service and pod Classless Inter-Domain Routing (CIDR) ranges used by the Power Virtual Server and ROKS clusters must not overlap.

In this tutorial, the service CIDR is 100.93.0.0/16 and the pod CIDR is 10.243.0.0/16 for the Power Virtual Server cluster, and they are 100.92.0.0/16 and 10.242.0.0/16 for the ROKS cluster.
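
If you are unsure which CIDRs a cluster uses, one way to check is to read the cluster's Network configuration resource (a sketch; this resource is present on standard OpenShift Container Platform installations, and the same query is assumed to work on the ROKS cluster):

    $ oc get network.config.openshift.io cluster \
        -o jsonpath='{.spec.serviceNetwork}{"\n"}{.spec.clusterNetwork[*].cidr}{"\n"}'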

  1. Deploy an OpenShift Container Platform cluster in IBM Power Virtual Servers.
  2. Provision an IBM Cloud managed x86 OpenShift Container Platform cluster (ROKS).
  3. Enable direct network connections between Power Virtual Server and ROKS using an IBM Cloud Direct Link (2.0) connection.
  4. Deploy Red Hat Advanced Cluster Management and create a ManagedClusterSet.
  5. Deploy Submariner on both clusters.
  6. Deploy your application to the ManagedClusterSet.

Step 1: Deploy an OpenShift Container Platform cluster in IBM Power Virtual Servers.

Refer to the following Linux on IBM Power learning series, which provides detailed instructions:

Deploying Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers

Step 2: Provision an IBM Cloud managed x86 OpenShift Container Platform cluster (ROKS).

The ROKS cluster must use a virtual private cloud (VPC) in IBM Cloud. Submariner cannot run on classic ROKS cluster infrastructure because the IPsec ports it requires cannot be configured there.

  1. To create a ROKS cluster (x86) in IBM Cloud, refer to the instructions in the IBM Cloud documentation: Getting started with Red Hat OpenShift on IBM Cloud (Creating a VPC cluster)
  2. Configure Calico in the ROKS cluster to disable the use of network address translation (NAT) for cross-cluster service and pod communications. To do that, create two IP pool (IPPool) resources as shown below (assuming the service CIDR in the Power Virtual Server cluster is 100.93.0.0/16 and the pod CIDR is 10.243.0.0/16).
    $ oc create -f - << EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: svcpowervs
    spec:
      cidr: 100.93.0.0/16
      natOutgoing: false
      disabled: true
    EOF
    
    $ oc create -f - << EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: podpowervs
    spec:
      cidr: 10.243.0.0/16
      natOutgoing: false
      disabled: true
    EOF

  3. Enable IP-in-IP encapsulation in the ROKS cluster.
    IBM Cloud implements security mechanisms that prevent the Submariner gateway node in the ROKS cluster from acting as a gateway for other nodes in the cluster when accessing remote services. To work around this issue, enable IP-in-IP encapsulation for all communications in the ROKS cluster.

    $ oc patch ippool default-ipv4-ippool --type=merge \
        --patch '{"spec": {"ipipMode": "Always"}}'
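
    Optionally, confirm that the three Calico IP pools now have the intended settings (a sketch using the IPPool names and fields created and patched above):

    $ oc get ippool svcpowervs podpowervs default-ipv4-ippool -o yaml | \
        grep -E 'name:|cidr:|natOutgoing:|disabled:|ipipMode:'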

Step 3: Enable direct network connections between Power Virtual Server and ROKS using an IBM Cloud Direct Link (2.0) connection.

An IBM Cloud Direct Link (2.0) connection between the Power Virtual Server and ROKS clusters is mandatory to provide the connectivity required by Submariner: IP reachability between gateway nodes and open UDP ports 4500 and 4490.

For more information on IBM Cloud Direct Link (2.0), refer to the Getting started with IBM Cloud Direct Link (2.0) documentation.

Follow the instructions in Direct Link Connect for Power Systems Virtual Servers to place an order for IBM Cloud Direct Link (2.0) and connect it to your Power Virtual Server OpenShift Container Platform private network and IBM Cloud ROKS VPC.
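
After the Direct Link connection is provisioned, a quick way to confirm basic IP reachability between the two private networks is to ping a node in one cluster from a node in the other (a sketch; worker-0 and the target address are placeholders):

    # <remote-node-ip> is a placeholder for the private IP address of a worker
    # node in the other cluster.
    $ oc debug node/worker-0 -- chroot /host ping -c 3 <remote-node-ip>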

Step 4: Deploy Red Hat Advanced Cluster Management and create a ManagedClusterSet.

  1. Deploy Red Hat Advanced Cluster Management for Kubernetes on your OpenShift Container Platform cluster in IBM Power Virtual Server.
  2. Import the ROKS cluster into Red Hat Advanced Cluster Management as a managed cluster.
  3. Create a ManagedClusterSet and add both the Power Virtual Server (local) and ROKS clusters.
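
    A minimal sketch of creating the ManagedClusterSet from the command line and adding both clusters to it by label. The set name powervs-roks is an assumption, the cluster names match those used in the next step, and the API version may be v1beta1 on older Red Hat Advanced Cluster Management releases; the same can be done from the console.

    $ oc apply -f - << EOF
    apiVersion: cluster.open-cluster-management.io/v1beta2
    kind: ManagedClusterSet
    metadata:
      name: powervs-roks
    EOF

    # Add a managed cluster to the set by labeling its ManagedCluster resource.
    $ oc label managedcluster local-cluster \
        cluster.open-cluster-management.io/clusterset=powervs-roks --overwrite
    $ oc label managedcluster roks-cluster \
        cluster.open-cluster-management.io/clusterset=powervs-roks --overwrite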

Step 5: Deploy Submariner on both clusters.

  1. At least one worker node in each cluster must be designated as a Submariner gateway node. Submariner gateway nodes are responsible for forwarding network traffic from other local cluster nodes to the remote cluster through an IPsec tunnel. Designate a worker node as a Submariner gateway node by labeling it as follows:
    $ oc label node/worker-0 submariner.io/gateway=true

    Make sure that the Submariner gateway nodes on both clusters can ping each other. If not, you might need to set up an IP route, as shown below (assuming 10.249.0.0/24 is the node CIDR of the other cluster and 192.168.100.1 is the VPC gateway node of the local cluster).
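    A sketch of such a route, using the example values above (run it on each node that needs it, for example via oc debug node/<node-name> and chroot /host):

    # Assumed example values: 10.249.0.0/24 is the node CIDR of the other
    # cluster, 192.168.100.1 is the VPC gateway node of the local cluster.
    $ ip route add 10.249.0.0/24 via 192.168.100.1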
  2. Deploy Submariner on both the Power Virtual Server (local) and ROKS clusters.
    $ oc apply -f - << EOF
    apiVersion: addon.open-cluster-management.io/v1alpha1
    kind: ManagedClusterAddOn
    metadata:
      name: submariner
      namespace: local-cluster
    spec:
      installNamespace: submariner-operator
    EOF
    
    $ oc apply -f - << EOF
    apiVersion: addon.open-cluster-management.io/v1alpha1
    kind: ManagedClusterAddOn
    metadata:
      name: submariner
      namespace: roks-cluster
    spec:
      installNamespace: submariner-operator
    EOF
    
  3. Install the Submariner Broker.
    $ oc apply -f - << EOF
    apiVersion: submariner.io/v1alpha1
    kind: Broker
    metadata:
      name: submariner-broker
      namespace: default-broker
    spec:
      globalnetEnabled: false
    EOF
    
  4. Verify that Submariner is deployed on the managed clusters.
    $ oc describe -n local-cluster managedclusteraddons submariner
    $ oc describe -n roks-cluster managedclusteraddons submariner
    Check that the SubmarinerGatewayNodesLabeled, SubmarinerAgentDegraded, and SubmarinerConnectionDegraded conditions report the expected status (the two Degraded conditions should be False, and SubmarinerGatewayNodesLabeled should be True).
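    Optionally, you can also inspect the connection status reported by the Submariner gateways on each managed cluster (a sketch; Submariner creates Gateway resources in the install namespace used above):
    $ oc -n submariner-operator get gateways.submariner.io -o yaml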

Step 6: Deploy your application to the ManagedClusterSet.

  1. Deploy your application on the ROKS cluster.
    $ oc -n default create deployment nginx \
          --image=nginxinc/nginx-unprivileged:stable-alpine
    $ oc -n default expose deployment nginx --port=8080
    
  2. Export your service.
    $ oc apply -f - << EOF
    apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceExport
    metadata:
      name: nginx
      namespace: default
    EOF
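
    Optionally, confirm that the export was accepted by describing the ServiceExport resource and reviewing its conditions (a sketch):
    $ oc -n default describe serviceexport nginx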
    
  3. Verify that you can access the service from your Power Virtual Server cluster.
    $ oc -n default run tmp-shell --rm -i --tty --restart=Never \
        --image curlimages/curl -- \
        nginx.default.svc.clusterset.local:8080
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    pod "tmp-shell" deleted
    

Conclusion

In this tutorial, we have demonstrated that it is possible to export a service running on Red Hat OpenShift in IBM Cloud (ROKS) to an OpenShift cluster in Power Virtual Server in a secure way using Red Hat Advanced Cluster Management for Kubernetes and Submariner.

Take the next step

Join the Power Developer eXchange Community (PDeX). PDeX is a place for anyone interested in developing open source apps on IBM Power. Whether you're new to Power or a seasoned expert, we invite you to join and begin exchanging ideas, sharing experiences, and collaborating with other members today!
