IBM Blockchain Transparent Supply and IBM Food Trust


Enable Private Network Connectivity on the IBM Blockchain Platform for IBM Cloud

By Jorge Rodriguez posted Mon January 11, 2021 10:36 AM

  

Collaborators:
Jorge D Rodriguez - Blockchain Solutions Architect
Ricardo Olivieri - Blockchain Solutions Architect
Mihir Shah - Lead Architect IBM Blockchain Platform

Overview

The IBM Blockchain Platform (IBP) provides a way to quickly build, operate, and govern blockchain networks across heterogeneous environments. While the platform can be used to deploy blockchain components on public and private clouds alike, the IBM Blockchain Platform for IBM Cloud is a blockchain-as-a-service offering hosted on IBM Cloud that allows you to deploy and configure Hyperledger Fabric based components while retaining the flexibility to take advantage of other services and capabilities available on IBM Cloud. 

In this article, we will examine the capability that IBM Cloud provides for enabling applications and services to communicate over a secure and private network infrastructure. Specifically, we will describe how to enable private communication between client applications deployed on the IBM Kubernetes Service and Hyperledger Fabric components created through the IBM Blockchain Platform for IBM Cloud. Leveraging private connectivity reduces the risks associated with exposing application traffic over the public internet, while also providing faster data transfers over unmetered, free bandwidth.

The following diagram depicts the configuration that we will set up for enabling private communication between the client application and Hyperledger Fabric components deployed to IBM Kubernetes Service (IKS) clusters.



Notice that this article solely addresses how to achieve private connectivity between client applications and Hyperledger Fabric nodes. While it is possible to configure node-to-node communication over a private network, that setup is beyond the scope of this article.

Pre-Requisites

To take full advantage of the content in this article, the reader should be familiar with the following technologies:
  • Kubernetes
  • Hyperledger Fabric
  • CoreDNS
  • Ingress
The reader should also have the following setup already in place:

On IBM Cloud: 
    • Instance of IBM Blockchain Platform for IBM Cloud - IBM Blockchain Platform already associated with an instance of the IBM Kubernetes Service where Hyperledger Fabric components are deployed.  
    • Instance of IBM Kubernetes Service - IBM Kubernetes Service where the application client is deployed. 
On Workstation:
    • IBM Cloud CLI (ibmcloud) with the Kubernetes Service plug-in installed
    • Kubernetes CLI (kubectl) configured with access to both clusters

Setup Steps

This article splits enabling private network communication into two overarching steps.  First, we make modifications to the IBM Kubernetes Service instance associated with the IBM Blockchain Platform for IBM Cloud.  These changes enable private network connectivity to the Hyperledger Fabric components deployed by IBP.  The second step focuses on the changes required on the IBM Kubernetes Service instance where the client application is running.  This set of instructions shows how to use CoreDNS so that references to Hyperledger Fabric components from the client application are routed through the IBM Cloud private network.  The overall set of modifications discussed in this article is as follows:


  • Enable Private Connectivity on IBP
  • Route Traffic For Client Applications

Each of these modifications is discussed in detail below. 

Enable Private Connectivity on IBP

Enable Private Load Balancer

To route private network traffic to Hyperledger Fabric components deployed by the IBM Blockchain Platform for IBM Cloud, we are going to use an application load balancer.  An application load balancer is an external Layer 7 load balancer that listens for incoming requests and forwards them to the appropriate Kubernetes services attached to pods.  Two application load balancers are created when an instance of the IBM Kubernetes Service is provisioned: one public and one private. The private application load balancer allows incoming traffic from the IBM Cloud private network into the Kubernetes cluster, in this case the cluster where the IBM Blockchain Platform is deployed.  While the public application load balancer is enabled by default and mapped to the public ingress subdomain of the Kubernetes instance, the private application load balancer must be enabled manually.  To enable the private load balancer, complete the following steps:

1. Log into the IBM Cloud account using the IBM Cloud CLI and find the Kubernetes cluster where the IBM Blockchain Platform for IBM Cloud is deployed.  You can use the following command to list the clusters available in your account.

ibmcloud ks clusters

The output of the command should look similar to this:
OK
Name                          ID                     State    Created        Workers   Location          Version                 Resource Group Name   Provider   
my-cluster                    bskpdg6bbbctzdr2k7f0   normal   5 months ago   2         Dallas            1.17.16_1550            default               classic   

2. List the application load balancers configured for the cluster where the IBM Blockchain Platform is deployed.

ibmcloud ks ingress alb ls --cluster <cluster-name>

The output of the command should look similar to this:

ALB ID                                Enabled   State      Type      ALB IP           Zone    Build                          ALB VLAN ID   NLB Version   Status  

private-crbskpdg6d0vcthac2k7f0-alb1   false     disabled   private   -                dal10   ingress:/ingress-auth:         2824040       1.0           -  
public-crbskpdg6d0vcthac2k7f0-alb1    true      enabled    public    X.X.X.X          dal10   ingress:647/ingress-auth:421   2824038       1.0 ​

3. From the list of load balancers obtained in the previous step, identify which one of the application load balancers has been configured to leverage the IBM Cloud private network by looking at the property called Type.  Once the private load balancer has been identified, issue the following command to enable it:

ibmcloud ks ingress alb enable classic --alb <private_ALB_ID> --cluster <cluster_name_or_ID>

The output of the command should look similar to this:
Enabling ALB...
OK
Note: It might take up to 5 minutes for your ALB to be fully deployed.​
Once this command completes successfully, the private load balancer should be enabled and an IP address in the 10.X.X.X range should be assigned to it.  See the next section to validate this. 

Validate Private Load Balancer

1. Verify a new private IP has been assigned to the private load balancer using the following command:

ibmcloud ks ingress alb ls --cluster <cluster-name>

The output of the command should look similar to this:
ALB ID                                Enabled   State     Type      ALB IP           Zone    Build                          ALB VLAN ID   NLB Version   Status  

private-crbskpdg6d0vcthac2k7f0-alb1   true      enabled   private   10.X.X.X    dal10   ingress:647/ingress-auth:421   2824040       1.0           -  
public-crbskpdg6d0vcthac2k7f0-alb1    true      enabled   public    X.X.X.X     dal10   ingress:647/ingress-auth:421   2824038       1.0  
Notice that this time when listing the application load balancers for the cluster, the private load balancer shows as enabled and an IP address is associated with it.  Make a note of this IP and the private load balancer ID as they will be used in later steps.
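
If you plan to run the later commands in the same session, an optional convenience is to capture these two values in shell variables. This is a minimal sketch that assumes the column layout shown in the listing above (ALB ID in the first column, Type in the fourth, ALB IP in the fifth):

PRIVATE_ALB_ID=$(ibmcloud ks ingress alb ls --cluster <cluster-name> | awk '$4 == "private" {print $1}')   # ID of the private application load balancer
PRIVATE_ALB_IP=$(ibmcloud ks ingress alb ls --cluster <cluster-name> | awk '$4 == "private" {print $5}')   # IP assigned to the private load balancer
echo "$PRIVATE_ALB_ID $PRIVATE_ALB_IP"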

2. Verify that the IP assigned to the private load balancer is “pingable”.  To do this verification, the system from which the ping command is run must have access to the IBM Cloud private network; specifically, we can run it from the Kubernetes cluster used by the IBM Blockchain Platform for IBM Cloud.  To do so, we will deploy an instance of the dnsutils image from k8s.io and run the ping command from that pod.

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

The output of the command should look like this:
pod/dnsutils created

3. Once the instance of the dnsutils image has been created, ping the load balancer IP using the following command:

kubectl exec -ti dnsutils -- ping -c 3 <private-load-balancer-ip>

The output of the command should look similar to this:
PING 10.X.X.X (10.X.X.X): 56 data bytes
64 bytes from 10.X.X.X: icmp_seq=0 ttl=54 time=48.414 ms
64 bytes from 10.X.X.X: icmp_seq=1 ttl=54 time=48.534 ms
64 bytes from 10.X.X.X: icmp_seq=2 ttl=54 time=50.105 ms

--- 10.X.X.X ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss

round-trip min/avg/max/stddev = 51.641/54.000/57.070/2.273 ms​

The series of successful pings indicates that the load balancer has been properly configured and it is ready to accept incoming traffic.

4. Now that the verification is completed we can delete the instance of dnsutils.

kubectl delete -f https://k8s.io/examples/admin/dns/dnsutils.yaml

Create Private Ingress Resources

By default, all Hyperledger Fabric components created by the IBM Blockchain Platform for IBM Cloud are exposed to the internet via the public application load balancer assigned to the cluster.  This configuration is achieved through the definition of a series of ingress resources created by the IBM Blockchain Platform specifically to associate the public application load balancer with the Kubernetes backend services that route the network traffic to the actual pods running Hyperledger Fabric components.  In order to expose the same Hyperledger Fabric components over the IBM Cloud private network, equivalent ingress resources must be created with an association to the cluster's private application load balancer.  These ingress resources will enable a new route where incoming traffic can flow from the IBM Cloud private network into the Hyperledger Fabric components.  The following steps describe the procedure to easily do this.
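
If you would like to see how these resources are currently defined before creating the private equivalents, you can list them and inspect one of the definitions. The resource name below matches the example output shown later in this article; the names in your cluster may differ:

kubectl get Ingress --namespace ibpinfra
kubectl get Ingress ingress-7051 --namespace ibpinfra -o yaml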


1. Get a list of all ingress resources defined in the IBM Blockchain Platform namespace.  

kubectl get Ingress --namespace ibpinfra -o yaml | egrep -v 'resourceVersion|uid|creationTimestamp|status:|loadBalancer:|ingress:|ip:' > private-ingresses.yaml

Notice that the command above strips unnecessary details from the existing ingress definitions so that the output can be used as a template for the new private ingress resources.

One important consideration here is that the list of ingress resources defined in this namespace is independent of whether or not there are Hyperledger Fabric components already created in the IBM Blockchain Platform instance being used.  This is advantageous because the work done in this step does not have to be repeated when new Hyperledger Fabric components are created. 
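
As a quick sanity check before editing the file, you can count how many ingress definitions were captured; each exported definition contains its own kind: Ingress line, so the count should match the number of ingress resources in the ibpinfra namespace. This is just a convenience check, not a required step:

grep -c 'kind: Ingress' private-ingresses.yaml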

2. Edit the file generated in the previous step to modify the names used in the ingress resource definitions.  The following command can be used to modify the definitions programmatically. 

sed -i -e 's/ingress-/ingress-private-/g' private-ingresses.yaml

3. In order to create the association between the new ingress resources and the private load balancer, append an ingress.bluemix.net/ALB-ID annotation to each ingress resource defined in the private-ingresses.yaml file.  Use the ID of the private application load balancer that was enabled in the previous steps as the value for the annotation.  The following command can be used to create the association programmatically. 

sed -i -e '/annotations/a\
\     ingress.bluemix.net/ALB-ID: "<private_ALB_ID>"

' private-ingresses.yaml
Make sure to keep the spacing, newlines, and special characters as typed above when running the command to preserve the proper YAML format.   Replace <private_ALB_ID> with the proper load balancer ID.  Once completed, verify that a new line has been appended under the annotations: stanza for each ingress resource.
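
One simple way to spot-check the result is to print each annotations: stanza together with the line that follows it; every match should now be followed by the ingress.bluemix.net/ALB-ID entry. This check assumes the annotation was appended directly beneath each annotations: line, as the sed command above does:

grep -A1 'annotations:' private-ingresses.yaml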

4. Create the new ingress resources defined in the private-ingresses.yaml file. 

kubectl apply -f private-ingresses.yaml

The output of the command should look similar to this:
ingress.extensions/ingress-private-7050 created
ingress.extensions/ingress-private-7051 created
ingress.extensions/ingress-private-7052 created
ingress.extensions/ingress-private-7054 created
ingress.extensions/ingress-private-7443 created
ingress.extensions/ingress-private-8443 created
ingress.extensions/ingress-private-9443 created​

5. List the ingress resources defined in the IBM Blockchain Platform namespace to make sure the new private Ingress resources have been created properly.

kubectl get Ingress --namespace ibpinfra

The output of the command should look similar to this:
NAME                  HOSTS  ADDRESS         PORTS  AGE
ingress-7050          *      XXX.XXX.XXX.XXX  80     35d
ingress-7051          *      XXX.XXX.XXX.XXX  80     35d
ingress-7052          *      XXX.XXX.XXX.XXX  80     35d
ingress-7054          *      XXX.XXX.XXX.XXX  80     35d
ingress-7443          *      XXX.XXX.XXX.XXX  80     35d
ingress-8443          *      XXX.XXX.XXX.XXX  80     35d
ingress-9443          *      XXX.XXX.XXX.XXX  80     35d
ingress-private-7050  *      10.XXX.XXX.XXX   80     5m13s
ingress-private-7051  *      10.XXX.XXX.XXX   80     5m12s
ingress-private-7052  *      10.XXX.XXX.XXX   80     5m12s
ingress-private-7054  *      10.XXX.XXX.XXX   80     5m12s
ingress-private-7443  *      10.XXX.XXX.XXX   80     5m12s
ingress-private-8443  *      10.XXX.XXX.XXX   80     5m12s
ingress-private-9443  *      10.XXX.XXX.XXX   80     5m11s​
Notice that the IP address assigned to the new private ingress resources should be the IP address of the private application load balancer enabled in previous steps. 

Route Traffic For Client Applications

By now, all steps required to route traffic from the IBM Cloud private network to components created by the IBM Blockchain Platform are complete. However, attempts to establish a connection with any of these Hyperledger Fabric components via the private route will fail. The reason for the failure is that server-side TLS is configured by default on every component deployed by the IBM Blockchain Platform for IBM Cloud. This configuration enforces the use of the cluster's default public ingress subdomain as part of the request hostname for all incoming connections to the Hyperledger Fabric components. 

While this problem can be solved by registering a custom domain name for the private application load balancer IP and generating TLS certificates that include the proper Subject Alternative Names (SANs) per component (a capability that is indeed supported by the IBM Blockchain Platform), this article shows how to use a simpler technique called split DNS.  The split DNS technique allows us to keep the default TLS configuration generated by the IBM Blockchain Platform and map the cluster's public ingress subdomain to the private application load balancer IP configured in previous steps. Using split DNS allows existing components, whose TLS certificates have already been generated, to accept traffic via the private application load balancer, and also lets client applications readily use the default connection profile generated by the IBM Blockchain Platform without having to edit the different endpoints listed in the JSON file with custom SANs. 

At a high level, the basis of the split DNS technique is that a hostname can map to different IP addresses depending on the environment where the resolution happens. More specific to this discussion, using this technique allows the hostnames assigned to Hyperledger Fabric components in the IBM Blockchain Platform for IBM Cloud to map to the private load balancer IP of the cluster instead of the public IP assigned to the cluster's public subdomain. This allows client application traffic to be routed through the IBM Cloud private network while the TLS certificate requirements are still met when connectivity is established. 

The next set of steps provides details on how to implement this technique when the client application is running on the IBM Kubernetes Service. Notice that up to this point, all configuration changes described in this article have been made on the Kubernetes cluster where the IBM Blockchain Platform for IBM Cloud is running.  The next section focuses on configuration changes on the Kubernetes cluster where the application client is running, since the name resolution override must happen where the application client tries to connect to the Hyperledger Fabric components.

Update CoreDNS Settings

1. Before we make modifications to the cluster where the application client is running, we must obtain the public ingress subdomain assigned to the Kubernetes cluster where the IBM Blockchain Platform is running. The public ingress subdomain assigned to the cluster can be retrieved using the following command:

ibmcloud ks cluster get --cluster <cluster-name> | egrep 'Ingress Subdomain:'
The output of the command should look similar to this:
Ingress Subdomain:   <cluster_name>.<hash>-0000.<region>.containers.appdomain.cloud
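
If you prefer to keep this value in a shell variable for the next step, the following sketch parses the text output shown above. It assumes the subdomain is the third whitespace-separated field on the Ingress Subdomain line; adjust the field number if the CLI output format differs:

INGRESS_SUBDOMAIN=$(ibmcloud ks cluster get --cluster <cluster-name> | awk '/Ingress Subdomain/ {print $3}')   # e.g. <cluster_name>.<hash>-0000.<region>.containers.appdomain.cloud
echo "$INGRESS_SUBDOMAIN"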

2. Now that the public ingress subdomain has been identified, we need to override the resolution of that subdomain so that it maps to the private IP of the application load balancer.  We will do this using the built-in Kubernetes DNS system of the cluster where the client applications are deployed. While there are different ways to achieve this, we will use CoreDNS, and in particular the template CoreDNS plugin, to override the resolution of the ingress subdomain. To change the configuration of the CoreDNS service, first get the configuration details from its ConfigMap.

kubectl get configmap coredns --namespace kube-system -o yaml > coredns.yaml

3. Edit the coredns.yaml file to add the new entry that will override the name resolution for the ingress subdomain obtained in the previous step. Append the following snippet under the existing entry in the Corefile: stanza, keeping the proper indentation of the YAML file. 

   <cluster-ingress-subdomain>:53 {
       errors
       template IN A . {
           answer "{{ .Name }} 60 IN A <private-load-balancer-ip>"
       }
       cache 30
       reload
       loadbalance
   }

Substitute <cluster-ingress-subdomain> with the ingress subdomain obtained in the previous steps and <private-load-balancer-ip> with the private load balancer IP enabled in previous sections. The overall file should look similar to this:
...​

 Corefile: |
   # Add your CoreDNS customizations as import files.
   .:53 {
       errors
       health {
           lameduck 10s
       }
       ready
       kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
       }
       prometheus :9153
       forward . /etc/resolv.conf
       cache 30
       loop
       reload
       loadbalance
   }
   <cluster-ingress-subdomain>:53 {
       errors
       template IN A . {
           answer "{{ .Name }} 60 IN A <private-load-balancer-ip>"
       }
       cache 30
       reload
       loadbalance
   }
kind: ConfigMap

...​

4. Apply the modifications to the Kubernetes cluster.

kubectl apply -f coredns.yaml
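
Because both server blocks in the Corefile include the reload plugin, CoreDNS should pick up the ConfigMap change on its own after a short delay, typically well under a couple of minutes. As an optional check, you can watch the CoreDNS logs for a reload message; this assumes the CoreDNS pods carry the conventional k8s-app=kube-dns label, which you may want to verify in your cluster:

kubectl logs --namespace kube-system -l k8s-app=kube-dns --tail=20 | grep -i reload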

Validate CoreDNS Settings

Now that the CoreDNS configuration has been changed, we need to make sure that the new settings are working properly.  Specifically we have to ensure that the resolution of the default ingress subdomain for the IBP instance maps to the private load balancer IP. 

1. This validation can be done by running an instance of the dnsutils image in the Kubernetes cluster where the application client is running and using the dig command to verify the result.   First, let's deploy an instance of dnsutils. 

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

The output of the command should look similar to this:
pod/dnsutils created​

2. Once the dnsutils pod has been created, use the dig command to query the resolution of the public ingress subdomain as follows:

kubectl exec -ti dnsutils -- dig <cluster-ingress-subdomain>

The output of the command should look similar to this:
; <<>> DiG 9.11.6-P1 <<>> <cluster_name>.<hash>-0000.<region>.containers.appdomain.cloud
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7826
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available


;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 21c8fcb65b830d1c (echoed)
;; QUESTION SECTION:
;<cluster_name>.<hash>-0000.<region>.containers.appdomain.cloud. IN A


;; ANSWER SECTION:
<cluster_name>.<hash>-0000.<region>.containers.appdomain.cloud. 30 IN A 10.X.X.X


;; Query time: 0 msec
;; SERVER: 172.21.0.10#53(172.21.0.10)
;; WHEN: Mon Sep 28 21:12:14 UTC 2020

;; MSG SIZE rcvd: 247​

If the CoreDNS configuration was applied successfully, the ANSWER section of the dig output should show a 10.X.X.X IP resolution for the ingress subdomain.
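
Because the new server block matches the entire ingress subdomain zone, any hostname underneath that subdomain should resolve the same way. As an optional extra check, you can query one of the Hyperledger Fabric component endpoints taken from the connection profile generated by the IBM Blockchain Platform. The hostname below is a placeholder; use a real endpoint hostname from your connection profile, without the port:

kubectl exec -ti dnsutils -- dig +short <component-hostname-from-connection-profile>

The answer should again be the 10.X.X.X address of the private application load balancer.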

3. Now that the verification is completed we can delete the instance of dnsutils.

kubectl delete -f https://k8s.io/examples/admin/dns/dnsutils.yaml

Conclusion

 It is a common requirement for production environments to keep communication between applications, services and other infrastructure components within the boundaries of a private network infrastructure.  In this article, we went over the steps required to enable private network traffic, via the IBM Cloud private network, between client applications and IBM Blockchain Platform deployments on the IBM Kubernetes Service. This configuration can offer additional security, faster data transfer speeds, and significant bandwidth cost reductions for communication traffic between applications and the Hyperledger Fabric components that make up your blockchain network.
