IBM Spectrum Scale can function equally well within a wide range of network environments, from completely open to highly secure. For all implementations, it is important to understand the network port usage of each function within Spectrum Scale. Many functions require access to specific ports, whereas others must be told which ports to use. Open environments may choose to disable firewalls completely, allowing any and all ports to communicate both within the Spectrum Scale cluster and from external networks into it. Highly secure environments may choose to lock down each OS, both those participating in the Spectrum Scale cluster and those accessing the cluster from an external entry point, opening only the ports necessary for active functions.
Default firewall state (running or inactive), default open/closed ports, default firewall zones, and default port/service mappings differ among the various operating systems supported by Spectrum Scale. This article does not delve into OS defaults; instead, it walks through the various functions within Spectrum Scale and details the exact ports that need to be opened within the firewall.
Two networks are referenced throughout this article.

The internal intra-cluster network: this is the network used within a Spectrum Scale cluster for node-to-node communication. It is critical for the most basic cluster operations such as bringing nodes online and active, mounting the shared file system(s), and passing data between all nodes, back to the NSD servers, and down to the attached disk(s). Day-to-day cluster administration tasks and data exchange occur over this network.
The external / client-facing network: this is a network used when choosing to externally present Spectrum Scale nodes and services. This network can be the same as the intra-cluster network but is often separated in order to reduce accessibility down to a specific set of services. Protocols such as SMB, NFS, and Object may be used to externally access data shared by a Spectrum Scale cluster through one of these external client-facing networks. GUI management of a cluster may be surfaced to this network, thus allowing remote cluster management. Cross-site replication via AFM, as well as backup and tiering of data to a Spectrum Protect server, also necessitate access to one or more external networks.
Examples of how to open firewall ports
While the most basic examples will open firewall ports on all networks, this may be considered insecure for many implementations. If a higher level of security is desired, it will be necessary to restrict port traffic to only the required network and/or adapters. Refer to the iptables examples for opening ports only to specific subnets or specific adapters.
RedHat 7.x & CentOS 7.x
List currently open ports
firewall-cmd --list-ports
List zones
firewall-cmd --get-zones
List the zone containing eth0
firewall-cmd --get-zone-of-interface=eth0
Opens port 1191 for TCP traffic
firewall-cmd --add-port 1191/tcp
Persistently opens port 1191 for TCP traffic so the rule survives a reboot. Use this to make changes persistent (run firewall-cmd --reload to apply permanent rules immediately)
firewall-cmd --permanent --add-port 1191/tcp
Opens a range of ports
firewall-cmd --permanent --add-port 60000-61000/tcp
Turns off/on the firewall
systemctl stop firewalld
systemctl start firewalld
SLES12
Launch the firewall configuration utility
yast firewall
Ubuntu & Debian
Opens port 1191 for TCP traffic:
sudo ufw allow 1191/tcp
Opens a range of ports:
sudo ufw allow 60000-61000/tcp
Turns off/on the Uncomplicated Firewall:
sudo ufw disable
sudo ufw enable
Windows 2008R2
Find the Firewall utility here:
Control Panel / Administrative Tools / Windows Firewall with Advanced Security
Add new Inbound and Outbound Rules as necessary
iptables (most Linux distributions, including those listed above)
Most Linux distributions use iptables to set firewall rules and policies. Before using these commands, check which firewall zones may be enabled by default. Depending upon the zone setup, the INPUT and OUTPUT chain names may need to be replaced with a zone-specific chain name for the desired rule. Refer to the Red Hat 7.x example below for one such case.
List the current firewall policies:
sudo iptables -S
sudo iptables -L
Opens port 1191 (GPFS) for inbound TCP traffic from internal subnet 172.31.1.0/24:
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 --dport 1191 -j ACCEPT
Opens port 1191 (GPFS) for outbound TCP traffic to internal subnet 172.31.1.0/24:
sudo iptables -A OUTPUT -p tcp -d 172.31.1.0/24 --sport 1191 -j ACCEPT
Opens port 445 (SMB) for outbound TCP traffic to external subnet 10.11.1.0/24 and only for adapter eth1:
sudo iptables -A OUTPUT -o eth1 -p tcp -d 10.11.1.0/24 --sport 445 -j ACCEPT
Opens port 445 (SMB) for inbound TCP traffic to a range of CES IPs (10.11.1.5 through 10.11.1.11) and only for adapter eth1:
sudo iptables -A INPUT -i eth1 -p tcp -m iprange --dst-range 10.11.1.5-10.11.1.11 --dport 445 -j ACCEPT
Allows an internal network, eth1, to talk to an external network, eth0:
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
RedHat 7.x specific. Opens Chef port 8889 for inbound traffic from subnet 10.18.0.0/24 on eth1
within the public zone:
iptables -A IN_public_allow -i eth1 -p tcp -s 10.18.0.0/24 --dport 8889 -j ACCEPT
Save firewall rule changes so they persist across a reboot (iptables-save writes the current rules to stdout; redirect the output to your distribution's rules file, for example /etc/sysconfig/iptables on Red Hat-based systems):
sudo iptables-save
Spectrum Scale Installation & basic cluster operation
The Spectrum Scale Install Toolkit can be used for installation of a basic Spectrum Scale cluster, creation of NSDs/file systems, setup of the performance monitoring tools, installation and activation of the management GUI, and even the addition of a Cluster Export Services (CES) protocol stack.
While each functionality has its own port needs, the Install Toolkit itself will use the following ports:
Install Toolkit port requirements
Port Number | Protocol | Service Name | Components involved in communication |
8889 | TCP | Chef | Intra-cluster and installer server |
10080 | TCP | Repository | Intra-cluster and installer server |
123 | UDP | NTP | Intra-cluster or external, depending on NTP server location |
Chef is the underlying technology driving the Install Toolkit. During installation, a Chef server is started on the installation server, and repositories are created to house the various Spectrum Scale components. Each node being installed by the Install Toolkit must be able to establish a connection to both the repository and the Chef server itself.
Typically, the Install Toolkit is run on the intra-cluster network from a single node to all other nodes. An alternate method is available in which the installation server is a node or server outside of the cluster with full access to at least one of the cluster nodes. The Install Toolkit coordinates the installation from this external location, passing all necessary commands to the internal cluster node, which actively assists with installation of the rest of the cluster. In this case, ports 8889 and 10080 must be opened both on the intra-cluster network on all nodes AND on the external client-facing network.
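If a tighter scope is desired, the toolkit ports can be opened only to the relevant subnets. A minimal iptables sketch, assuming an illustrative intra-cluster subnet of 172.31.1.0/24 (substitute your own networks, and repeat for the external client-facing subnet if the installation server sits outside the cluster):
# Chef server (8889) and package repository (10080) used by the Install Toolkit
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 --dport 8889 -j ACCEPT
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 --dport 10080 -j ACCEPT
# Allow the nodes to reach their NTP time source
sudo iptables -A OUTPUT -p udp --dport 123 -j ACCEPT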
NTP is not strictly necessary, but time synchronization among nodes is highly recommended and is, in fact, required for protocol nodes.
Basic GPFS cluster operation port requirements (Figure 2)
Port Number | Protocol | Service Name | Components involved in communication |
1191 | TCP | GPFS | Intra-cluster |
(user-selected range) | TCP | GPFS ephemeral port range | Intra-cluster |
22 | TCP | SSH | Intra-cluster & administrative access to the cluster nodes |
The SSH port, 22, is used for command execution and general node to node configuration as well as administrative access.
The primary GPFS daemons (mmfsd & mmsdrserv), by default, listen on port 1191. This port is essential for basic cluster operation. The port can be changed manually by using the mmsdrservPort configuration variable: mmchconfig mmsdrservPort=PortNumber.
The ephemeral port range of the underlying OS is used when Spectrum Scale creates additional sockets to exchange data among nodes. This occurs while executing certain commands and is dynamic, based upon the point in time needs of the command as well as other concurrent cluster activities. A user can define an ephemeral port range manually by using the tscCmdPortRange configuration variable: mmchconfig tscCmdPortRange=LowNumber-HighNumber.
If the Install Toolkit is used, the ephemeral port range is automatically set to 60000-61000. Firewall ports should be opened according to the ephemeral port range defined to GPFS.
A sign of an improperly configured ephemeral port range is a hang in commands such as mmlsmgr or mmcrfs.
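As a sketch on a Red Hat 7.x node, the daemon port and a 60000-61000 ephemeral range (the range the Install Toolkit would set; adjust to whatever tscCmdPortRange is actually configured) could be opened persistently as follows:
firewall-cmd --permanent --add-port=1191/tcp
firewall-cmd --permanent --add-port=60000-61000/tcp
firewall-cmd --reload
Then ensure GPFS uses the same ephemeral range:
mmchconfig tscCmdPortRange=60000-61000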
GUI
There are two Spectrum Scale GUIs.
A one-time-use Install GUI is available via the Install Toolkit for a first-time installation in which a cluster has not yet been formed. The Install GUI is optionally available from a web browser using either the HTTP or HTTPS protocol. Although the GUI can be accessed on the internal network if the appropriate firewall ports are opened, its most likely use case is to be accessed from outside the cluster.
The management GUI is probably the most familiar because it is used for day-to-day cluster activities such as viewing overall cluster state and events, and administering filesets, snapshots, protocols, ILM, ACLs, and diagnostic data. Similar to the Install GUI, the management GUI will most likely need to be accessible via an external network. Therefore, each GUI node (there can be up to 3) must have the appropriate firewall ports opened to the external network.
GUI port requirements
Port Number | Protocol | Service Name | Components involved in communication |
9080 | TCP | Install GUI (optional) | HTTP, external network |
9443 | TCP | Install GUI (optional) | HTTPS, external network |
80 | TCP | Management GUI | HTTP, external network |
443 | TCP | Management GUI | HTTPS, external network |
4444 | TCP | Management GUI | Localhost only |
(see Figure 4) | TCP | Zimon collector | Intra-cluster |
Performance monitoring collectors must be deployed for the management GUI to collect performance data. Refer to Figure 4 below for the Performance Monitoring Tool port requirements. The collector runs by default on ports 4739, 9084, and 9085, all of which can be reconfigured within ZIMonCollector.cfg.
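As one hedged possibility on a GUI node running firewalld, assuming the client-facing adapter sits in the public zone and the intra-cluster adapter has been assigned to the predefined internal zone:
# Management GUI, reachable from the external network
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
# Collector ports, intra-cluster only (port 4444 stays closed because it is localhost only)
firewall-cmd --permanent --zone=internal --add-port=4739/tcp
firewall-cmd --permanent --zone=internal --add-port=4739/udp
firewall-cmd --permanent --zone=internal --add-port=9084/tcp
firewall-cmd --permanent --zone=internal --add-port=9085/tcp
firewall-cmd --reload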
Performance Monitoring Tool
Spectrum Scale Performance Monitoring tools consist of sensor and collector components. By default, sensors are installed and activated on all nodes, whereas collectors are installed and activated on GUI nodes. Sensors send their data to collectors, which necessitates opening incoming firewall ports on the collector nodes. In a typical cluster configuration, all sensors and collectors are internal to a cluster and thus self-contained within the internal cluster network.
Performance monitoring sensor and collector port requirements (Figure 4)
Port Number | Protocol | Service Name | Components involved in communication |
4739 | TCP & UDP | Performance Monitoring Tool | Intra-cluster (used for GUI operation as well) |
8123 | TCP | Object metric collection | Intra-cluster |
8124 | TCP | Object metric collection | Intra-cluster |
8125 | TCP | Object metric collection | Intra-cluster |
8126 | UDP | Object metric collection | Intra-cluster |
8127 | TCP | Object metric collection | Intra-cluster |
9084 | TCP | Performance Monitoring Tool | Any node that needs to query the database (used for GUI operation as well) |
9085 | TCP | Performance Monitoring Tool | Intra-cluster (used for GUI operation as well) |
Important notes from the Spectrum Scale Knowledge Center:
- Port 4739 needs to be open when a collector is installed
- Port 9085 needs to be open when there are two or more collectors
- If port 9084 is closed, the collector cannot be reached remotely for debugging, by external tools, or by another instance of the GUI; access is then possible only from the node where the GUI and collector are installed
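On a collector node, the sensor-facing ports could be restricted to the intra-cluster subnet with iptables; a sketch, again assuming an illustrative 172.31.1.0/24 intra-cluster subnet (widen the 9084 rule if external tools or another GUI instance must query the collector):
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 -m multiport --dports 4739,9084,9085 -j ACCEPT
sudo iptables -A INPUT -p udp -s 172.31.1.0/24 --dport 4739 -j ACCEPT
# Object metric collection ports, only needed when the Object protocol is deployed
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 -m multiport --dports 8123,8124,8125,8127 -j ACCEPT
sudo iptables -A INPUT -p udp -s 172.31.1.0/24 --dport 8126 -j ACCEPT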
Transparent Cloud Tiering (TCT)
Spectrum Scale Transparent Cloud Tiering (TCT) allows the Spectrum Scale cluster to interface directly with cloud object storage such as Amazon S3, Cleversafe, and OpenStack Swift. Configuring TCT requires the TCT nodes to be able to talk to each other within the cluster network. It also requires a connection to the external network; the exact requirements depend upon the Object Storage provider.
Transparent Cloud Tiering (TCT) port requirements
Port Number | Protocol | Service Name | Components involved in communication |
8085 | TCP | TCT | Intra-cluster |
(Object storage provider dependent) | TCP | TCT | TCT connection to the Object storage provider on the external network; typically HTTPS (443) or HTTP (80) |
The internal port used by Transparent Cloud Tiering can be changed from 8085 to any other port with the following command: mmcloudgateway config.
The provider of the Object Storage that Transparent Cloud Tiering connects to should be consulted to understand the exact port needs on the external network. While this connection typically occurs over HTTPS (443) or HTTP (80), it is highly dependent upon the chosen provider.
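A hedged iptables sketch for a TCT node, assuming the illustrative 172.31.1.0/24 intra-cluster subnet and an HTTPS connection to the provider (narrow the outbound rule to the provider's published endpoint addresses where they are known):
# TCT internal port (default 8085), intra-cluster only
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 --dport 8085 -j ACCEPT
# Outbound HTTPS to the object storage provider
sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT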
Cluster Export Services (CES)
Cluster Export Services functionality allows enablement of 3 protocols: NFS, SMB, and Object. CES protocols require a floating pool of IPs to be defined. These IPs are allowed to float between designated protocol nodes and can actively move in cases of failure. CES IPs are automatically assigned and aliased to existing network adapters on protocol nodes during startup.
Because these CES IPs present access to data shared by NFS, SMB, and Object, it is important to consider them when designing a firewall implementation.
Example of aliased CES IPs via the ip addr command:
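The original command output is not reproduced here; the following abbreviated lines are an illustrative reconstruction (interface flags, MTU, and prefix lengths are placeholders; the addresses match the discussion below):
ip addr show eth1
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
    inet 10.11.1.122/24 brd 10.11.1.255 scope global eth1
    inet 10.11.1.5/24 brd 10.11.1.255 scope global secondary eth1:0
    inet 10.11.1.8/24 brd 10.11.1.255 scope global secondary eth1:1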
In the above example, eth1 pre-exists with an established route and IP: 10.11.1.122. This is assigned and must be accessible prior to any CES configuration. Once CES services are active, CES IPs are then automatically aliased to this base adapter, thus creating eth1:0 and eth1:1. The floating CES IPs assigned to the aliases are 10.11.1.5 and 10.11.1.8. Both CES IPs are allowed to move to other nodes in case of failure. This automatic movement, combined with the ability to manually move CES IPs, may cause a variance in the number of aliases and CES IPs among protocol nodes.
iptables commands can reference Ethernet adapters, subnets, and individual IPs, but not adapter aliases. Because of this, it is recommended to create inbound/outbound rules based upon the subnet, the CES IPs themselves, and/or the base adapter. These rules must account for all CES protocols desired. Continue reading below for the specific port requirements of NFS, SMB, and Object.
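For example, inbound rules covering all three protocols for the CES IP range shown earlier (10.11.1.5 through 10.11.1.11 on eth1; both values are illustrative and must match your own CES IP pool and base adapter) might look like the following sketch:
sudo iptables -A INPUT -i eth1 -p tcp -m iprange --dst-range 10.11.1.5-10.11.1.11 -m multiport --dports 445,2049,8080 -j ACCEPT
sudo iptables -A INPUT -i eth1 -p udp -m iprange --dst-range 10.11.1.5-10.11.1.11 --dport 2049 -j ACCEPT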
NFS file protocol (CES)
NFS is one of 3 protocols provided by Spectrum Scale cluster export services (CES). NFS is typically used to give data access to a client on an external network. All Spectrum Scale protocol nodes running NFS must therefore have firewall ports open to allow connections from clients. Likewise, the client accessing the Spectrum Scale cluster must have firewall ports open to allow incoming and outgoing connections.
Spectrum Scale AFM functionality uses NFSv3.
NFS file protocol port requirements (Figure 6)
Port Number | Protocol | Service Name | Components involved in communication |
2049 | TCP & UDP | NFSv4 or NFSv3 (and AFM) | NFS clients and IBM Spectrum Scale protocol node |
111 | TCP & UDP | portmapper/rpcbind (required only by NFSv3 and AFM) | NFS clients and IBM Spectrum Scale protocol node |
User-defined static port | TCP & UDP | mnt (required only by NFSv3 and AFM) | NFS clients and IBM Spectrum Scale protocol node |
User-defined static port | TCP & UDP | statd (required only by NFSv3 and AFM) | NFS clients and IBM Spectrum Scale protocol node |
User-defined static port | TCP & UDP | nlm (required only by NFSv3 and AFM) | NFS clients and IBM Spectrum Scale protocol node |
User-defined static port | TCP & UDP | rquota (NFSv3 or NFSv4) | NFS clients and IBM Spectrum Scale protocol node |
Important notes from the Spectrum Scale Knowledge Center: NFSv3 uses dynamic ports for the NLM, MNT, and STATD services. When an NFSv3 server is used with a firewall, these services must be configured with static ports.
The following recommendations are applicable:
- Set static ports for the MNT, NLM, and STATD services that are required by the NFSv3 server by using the mmnfs config change command.
- Allow all external communications on TCP and UDP port 111 by using the protocol node IPs.
- Allow all external communications on the TCP and UDP ports configured for the MNT, NLM, and STATD services.
- Restart NFS after changing these parameters (for example, mmces service stop NFS -a followed by mmces service start NFS -a).
- Use mmnfs config list to verify the ports currently in effect.
- Remount any existing clients, as a port change may have disrupted connections.
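As a hedged example, assume static ports 32765 through 32769 have been chosen for the MNT, NLM, STATD, and rquota services (these values are arbitrary placeholders) and NFS clients arrive from the illustrative external subnet 10.11.1.0/24 on eth1; each protocol node could then allow:
sudo iptables -A INPUT -i eth1 -p tcp -s 10.11.1.0/24 -m multiport --dports 111,2049,32765:32769 -j ACCEPT
sudo iptables -A INPUT -i eth1 -p udp -s 10.11.1.0/24 -m multiport --dports 111,2049,32765:32769 -j ACCEPT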
SMB file protocol (CES)
SMB is one of 3 protocols provided by Spectrum Scale cluster export services (CES). SMB is typically used to give data access to a client on an external network. All Spectrum Scale protocol nodes running SMB must therefore have firewall ports open to allow connections from clients. Likewise, the client accessing the Spectrum Scale cluster must have firewall ports open to allow incoming and outgoing connections.
Because SMB relies upon the internal CTDB component for storing of configuration information, it is necessary for all protocol nodes to have their CTDB port open within the internal cluster network.
SMB file protocol port requirements
Port Number | Protocol | Service Name | Components involved in communication |
445 | TCP | Samba | SMB clients and IBM Spectrum Scale protocol node |
4379 | TCP | CTDB | Intra-cluster (between protocol nodes only) |
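A minimal sketch, assuming SMB clients arrive on adapter eth1 from the illustrative external subnet 10.11.1.0/24 and the protocol nodes exchange CTDB traffic over the illustrative intra-cluster subnet 172.31.1.0/24:
sudo iptables -A INPUT -i eth1 -p tcp -s 10.11.1.0/24 --dport 445 -j ACCEPT
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 --dport 4379 -j ACCEPT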
Object protocol (CES)
Object is one of 3 protocols provided by Spectrum Scale cluster export services (CES). Object is typically used to give data access to a client on an external network. All Spectrum Scale protocol nodes running Object must therefore have firewall ports open to allow connections from clients. Likewise, the client accessing the Spectrum Scale cluster must have firewall ports open to allow incoming and outgoing connections.
Many Object operations require communication within the Object node itself. These cases can be seen below in Figure 8 as the ports requiring only local host accessibility.
For authentication with Object by way of Keystone, all Object clients must have access to the Keystone ports.
Object protocol port requirements (Figure 8)
Port Number | Protocol | Service Name | Components involved in communication |
8080 | TCP | Object Storage proxy | Object clients and IBM Spectrum Scale protocol node |
6200 | TCP | Object Storage (local account server) | Local host |
6201 | TCP | Object Storage (local container server) | Local host |
6202 | TCP | Object Storage (local object server) | Local host |
6203 | TCP | Object Storage (object server for unified file and object access) | Local host |
11211 | TCP & UDP | Memcached | Local host |
5000 | TCP | Keystone public | Authentication clients and Object clients |
35357 | TCP | Keystone internal/admin | Authentication clients, Object clients, and the Keystone administrator |
5431 | TCP & UDP | postgresql-obj | Intra-cluster (between protocol nodes only) |
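A hedged sketch for a protocol node, assuming Object and authentication clients arrive from the illustrative external subnet 10.11.1.0/24 on eth1 and the other protocol nodes live on the illustrative intra-cluster subnet 172.31.1.0/24 (the 6200-6203 and 11211 ports stay closed because they are used on the local host only):
sudo iptables -A INPUT -i eth1 -p tcp -s 10.11.1.0/24 -m multiport --dports 8080,5000,35357 -j ACCEPT
sudo iptables -A INPUT -p tcp -s 172.31.1.0/24 --dport 5431 -j ACCEPT
sudo iptables -A INPUT -p udp -s 172.31.1.0/24 --dport 5431 -j ACCEPT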
Active File Management (AFM)
Spectrum Scale Active File Management (AFM) allows one or more Spectrum Scale clusters (or a non-Spectrum Scale NFS source) to exchange file data. File data exchange between clusters can accomplish many goals, one of which is disaster recovery.
AFM can be implemented using the NFS protocol or NSD protocol.
NFS AFM implementation: Refer to Figure 6 for NFSv3 requirements.
NSD AFM implementation: Refer to Figure 2 for basic GPFS cluster operation port requirements.
Spectrum Scale remote mounting of file systems
Spectrum Scale clusters are able to access file systems of other Spectrum Scale clusters via remote mounts. This can occur in two ways:
#1: All nodes in the Spectrum Scale cluster requiring access to another cluster's file system must have a physical connection to the disks containing file system data. This is typically via SAN. Although outside the topic of firewalls, a SAN that is open to multiple clusters should also be subject to scrutiny from a security point of view.
OR
#2: All nodes in the Spectrum Scale cluster requiring access to another cluster's file system must have a virtual connection through an NSD server.
In both case #1 and case #2, all nodes in the cluster requiring access to another cluster's file system must be able to open a TCP/IP connection to every node in the other cluster. Refer to Figure 2 for the basic GPFS cluster operation port requirements. Keep in mind that each cluster participating in a remote mount may reside upon the same internal network OR a separate network from the host cluster. From a firewall standpoint, this means that the host cluster may need ports opened to a number of external networks, depending upon how many separate clusters are accessing the host.
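For instance, if a remote cluster reaches the host cluster from the 192.0.2.0/24 network (an illustrative value) and the host cluster uses a 60000-61000 ephemeral range, each host-cluster node might allow:
sudo iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 1191 -j ACCEPT
sudo iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 60000:61000 -j ACCEPT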
Spectrum Protect connectivity via mmbackup and HSM
mmbackup is a Spectrum Scale function allowing backup of items such as file systems and filesets to an externally located Spectrum Protect server.
Hierarchical Storage Management (HSM) is used extensively with the policy engine to allow automatic storage tiering to an external disk or tape pool residing within a Spectrum Protect server.
Both functions necessitate an open path for communication between the nodes designated for use with mmbackup or HSM policies and the external Spectrum Protect server. The port requirement below can also be viewed within the dsm.sys configuration file.
Spectrum Protect via HSM and mmbackup port requirements
Port Number | Protocol | Service Name | Components involved in communication |
1500 | TCP | TSM | TSM BA client communication with the server |
See Spectrum Protect documentation for additional information and port requirements specific to the server end.
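As a sketch, outbound traffic from the mmbackup/HSM nodes to a Spectrum Protect server at the illustrative address 203.0.113.10 on the default client port could be allowed with:
sudo iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 1500 -j ACCEPT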
Spectrum Archive connectivity
Spectrum Archive software resides upon a node or group of nodes within a Spectrum Scale cluster. This necessitates that each Spectrum Archive node be allowed to communicate with the rest of the cluster using the same ports listed within Figure 2 above (basic GPFS cluster operation port requirements). The included ports are 1191, 22, and the ephemeral port range. In addition, Spectrum Archive communicates using RPC.
Spectrum Archive can connect to tape drives via a SAN or direct connect. Although outside the topic of firewalls, the fibre channel connectivity used by Spectrum Archive node(s) should be reviewed from a security point of view.