Set up remote file system access

The procedure to set up remote file system access involves generating and exchanging authorization keys between the two clusters. In addition, the administrator of the GPFS™ cluster that owns the file system must authorize the remote clusters that are to access it, while the administrator of the GPFS cluster that seeks access to a remote file system must define to GPFS the remote cluster and the file system to which access is desired.
The following summarizes the commands that the administrators of the two clusters need to issue so that the nodes in accessingCluster can mount the remote file system fs1, which is owned by owningCluster, assigning rfs1 as the local name with a mount point of /rfs1.
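A minimal sketch of that sequence follows, assuming AUTHONLY as the cipher list, node1 and node2 as illustrative contact nodes for owningCluster, and illustrative names for the exchanged public key files (each administrator sends the other a copy of the public key generated by mmauth genkey):

On owningCluster:
   mmauth genkey new                                              # generate this cluster's key pair
   mmchconfig cipherList=AUTHONLY                                 # enable authentication for remote mounts
   mmauth add accessingCluster -k accessingCluster_id_rsa.pub     # register the other cluster's public key
   mmauth grant accessingCluster -f fs1                           # authorize access to file system fs1

On accessingCluster:
   mmauth genkey new
   mmchconfig cipherList=AUTHONLY
   mmremotecluster add owningCluster -n node1,node2 -k owningCluster_id_rsa.pub   # define the remote cluster
   mmremotefs add rfs1 -f fs1 -C owningCluster -T /rfs1           # define fs1 locally as rfs1, mounted at /rfs1
   mmmount rfs1                                                   # mount the remote file system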
Specify the private IP addresses to be accessed by GPFS

Use the subnets attribute of the mmchconfig command to specify the private IP addresses to be accessed by GPFS.
Figure 1 depicts an AIX® cluster named CL1 with nodes named CL1N1, CL1N2, and so forth; a Linux cluster named CL2 with nodes named CL2N1, CL2N2, and so forth; and another Linux cluster named CL3 with a single node named CL3N1. Both Linux clusters have public Ethernet connectivity, as well as a Gigabit Ethernet network configured with private IP addresses (10.200.0.1 through 10.200.0.24) that is not connected to the public Ethernet. The InfiniBand switch on the AIX cluster CL1 is configured using public IP addresses on the 7.2.24/13 subnet and is accessible from the outside.
With the use of both public and private IP addresses for some of the nodes, the setup works as follows:
1. All clusters must be created using host names or IP addresses that correspond to the public network.
2. Using the mmchconfig command for the CL1 cluster, add the attribute subnets=7.2.24.0.
This allows all CL1 nodes to communicate using the InfiniBand Switch. Remote mounts between CL2 and CL1 will use the public Ethernet for TCP/IP communication, since the CL2 nodes are not on the 7.2.24.0 subnet.
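As a sketch, the corresponding command, issued once from any node of the CL1 cluster, would be:

   mmchconfig subnets=7.2.24.0    # CL1 nodes prefer the 7.2.24.0 network for daemon traffic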
GPFS assumes that subnet specifications for private networks are independent between clusters; that is, private networks are assumed not to be physically connected between clusters. The remaining steps show how to indicate that a private network is shared between clusters.
3. Using the mmchconfig command for the CL2 cluster, add the subnets='10.200.0.0/CL2.kgn.ibm.com;CL3.kgn.ibm.com' attribute. Alternatively, regular expressions are allowed here, such as subnets='10.200.0.0/CL[23].kgn.ibm.com'. See note 2 for the syntax allowed for the regular expressions.
This attribute indicates that the private 10.200.0.0 network extends to all nodes in clusters CL2 or CL3. This way, any two nodes in the CL2 and CL3 clusters can communicate through the Gigabit Ethernet. Matching CL2.kgn.ibm.com with the cluster list for 10.200.0.0 allows all CL2 nodes to communicate over their Gigabit Ethernet, and matching CL3.kgn.ibm.com with that list allows remote mounts between clusters CL2 and CL3 to communicate over their Gigabit Ethernet.
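A sketch of the full command for this step, issued from a node of CL2 (the semicolon-separated cluster list must be quoted so the shell does not split it):

   mmchconfig subnets='10.200.0.0/CL2.kgn.ibm.com;CL3.kgn.ibm.com'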
4. Using the mmchconfig command for the CL3 cluster, add the subnets='10.200.0.0/CL3.kgn.ibm.com;CL2.kgn.ibm.com' attribute or, equivalently, the regular expression form subnets='10.200.0.0/CL[32].kgn.ibm.com'.
This attribute indicates that the private 10.200.0.0 network extends to all nodes in clusters CL2 or CL3. This way, any two nodes in the CL2 and CL3 clusters can communicate through the Gigabit Ethernet.
Matching of CL3.kgn.ibm.com with the cluster list for 10.200.0.0 allows all CL3 nodes to communicate over their Gigabit Ethernet, and matching CL2.kgn.ibm.com with that list allows remote mounts between clusters CL3 and CL2 to communicate over their Gigabit Ethernet.
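The matching command for this step, issued from a node of CL3, followed by a quick check that the attribute is in effect:

   mmchconfig subnets='10.200.0.0/CL3.kgn.ibm.com;CL2.kgn.ibm.com'
   mmlsconfig subnets    # display the subnets value now in effect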
Use the subnets attribute of the mmchconfig command when you want the GPFS cluster to leverage additional, higher-performance network connections that are available to the nodes in the cluster, or between clusters.
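To confirm which addresses the daemons actually chose for node-to-node connections, one option (assuming a GPFS level that includes the mmdiag command) is:

   mmdiag --network    # list established connections and the IP addresses in use

If a private subnet is configured correctly, connections between nodes on that subnet should show the 10.200.0.x (or 7.2.24.x) addresses rather than the public ones.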
Figure 1. Use of public and private IP addresses in three GPFS clusters
#spectrumscale #network #remoteclustermount #datasecurity #softwaredefinedstorage