
IBM Spectrum Scale configuration for sudo-based administration on a defined set of administrative nodes.

By SANDEEP PATIL posted Mon July 27, 2020 02:25 PM

IBM Spectrum Scale requires SSH configuration across its cluster for management and administration. Each administration node must be able to run administration commands without the use of a password and without producing any extraneous messages. Also, most of the administration commands must run at the root level.

Spectrum Scale supports sudo-based administration with the ability to centrally restrict which nodes in a cluster may act as administration nodes. This blog presents an example that illustrates how to configure an existing Spectrum Scale cluster to use sudo wrappers and the adminMode=central feature.

Objective
Customer deployments often require that Spectrum Scale administration be performed by a normal user (say, 'gpfsadmin') rather than 'root', and that, for security, administration commands can be run from only one node (in this example) in the cluster.

Setup
We have a basic three-node setup with the following hostnames. All nodes are running RHEL 8:
host-192-168-0-123
host-192-168-0-27
host-192-168-0-36

Approach/Plan
• Set the Spectrum Scale cluster configuration attribute adminMode to 'central'.
• Use sudo wrappers so that a non-root administrator account (gpfsadmin) can manage most Spectrum Scale activities via the mm* commands.
• Designate a single node in the cluster (host-192-168-0-36) from which the administrator can manage the Spectrum Scale cluster. In practice, it is recommended to designate two or more administrative nodes.

SSH Setup Suggestion
Depending on your deployment or organization's security needs, the system administrator may be required to:
- Ensure no direct root access to the cluster using SSH.
- Avoid storing the SSH private keys in the home directory of root.
We will address both of these in this blog.

Assumption
The Spectrum Scale cluster is configured and uses root login between nodes without the use of a password and without producing extraneous messages. Also, the current remote shell and remote copy commands are ssh and scp. This means that the cluster administrator is using the root user and the administrative commands can be executed from any node in the cluster.

Let’s start

Note: Before starting the activity, back up and then clear the SSH keys and the authorized_keys file for the root user on all the nodes in the cluster.
Depending on your overall deployment setup, it is important to understand that this step alone will not prevent the root user from logging in to the nodes via SSH.
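
A minimal sketch of that backup-and-clear step, run as root on every node (the backup directory /root/ssh_key_backup is our choice for this example, not a requirement):

# mkdir -p /root/ssh_key_backup
# cp -p /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys /root/ssh_key_backup/
# rm -f /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys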

Step 1: We want to assign node "host-192-168-0-36" as an administrative node from which the administrator can manage the Spectrum Scale cluster. In this example, we also want to ensure that the SSH private key is not stored in the default location (/root/.ssh/id_rsa) but in a customized location (i.e., /opt/ssh_keys/root_sandeep/id_rsa), which may be required by internal security policies in some cases. If this is not a requirement, we suggest keeping the SSH private key in the default location.

- Log in as root on host-192-168-0-36.
- Run ssh-keygen -t rsa and specify the custom key location when prompted.
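
Note that ssh-keygen does not create the target directory for the key, so when using the custom location from this example, create it first with restrictive permissions:

# mkdir -p /opt/ssh_keys/root_sandeep
# chmod 700 /opt/ssh_keys/root_sandeep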

[root@host-192-168-0-36 root_sandeep]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /opt/ssh_keys/root_sandeep/id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /opt/ssh_keys/root_sandeep/id_rsa.
Your public key has been saved in /opt/ssh_keys/root_sandeep/id_rsa.pub.
The key fingerprint is:
SHA256:zYUOEana9Kp9hPGBIQHJM9StNGaRmrbbCXeFbrIrXp0 root@host-192-168-0-36
The key's randomart image is:
+---[RSA 3072]----+
| oo+o= oo |
| = O o .. . |
| O + =. . . |
| + . * o= . |
| . . = *S.+ |
| o +.*.+ |
| =.*Eo |
| o.+.. . |
| ...oo.. |
+----[SHA256]-----+
[root@host-192-168-0-36 root_sandeep]# ls
id_rsa id_rsa.pub


The above step ensures that the SSH keys are created in a custom folder.

Step 2: Copy the SSH public key created in the above step and install it as an authorized key on the other nodes in the cluster. This allows the root user on node "host-192-168-0-36" to do passwordless ssh and scp to all the other nodes in the cluster – but not the reverse. As a result, the administration commands can be executed only from node "host-192-168-0-36".

[root@host-192-168-0-36 root_sandeep]# ssh-copy-id -i /opt/ssh_keys/root_sandeep/id_rsa.pub host-192-168-0-123
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/opt/ssh_keys/root_sandeep/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@host-192-168-0-123's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'host-192-168-0-123'"
and check to make sure that only the key(s) you wanted were added.

[root@host-192-168-0-36 root_sandeep]# ssh-copy-id -i /opt/ssh_keys/root_sandeep/id_rsa.pub host-192-168-0-27
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/opt/ssh_keys/root_sandeep/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@host-192-168-0-27's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'host-192-168-0-27'"
and check to make sure that only the key(s) you wanted were added.

[root@host-192-168-0-36 root_sandeep]# ssh-copy-id -i /opt/ssh_keys/root_sandeep/id_rsa.pub host-192-168-0-36
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/opt/ssh_keys/root_sandeep/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@host-192-168-0-36's password:

Number of key(s) added: 1


Now try logging into the machine, with: "ssh 'host-192-168-0-36'"
and check to make sure that only the key(s) you wanted were added.


Step 3: Since the root user's SSH private key is stored in a non-default location, we need to make the SSH client on "host-192-168-0-36" aware of the identity key location by updating the per-user SSH client configuration file:

[root@host-192-168-0-36 root_sandeep]# vim ~/.ssh/config
IdentityFile /opt/ssh_keys/root_sandeep/id_rsa
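
Note: OpenSSH refuses to use a client configuration file that is writable by other users, so make sure the file is accessible only by root:

# chmod 600 ~/.ssh/config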

Restart the sshd service:

[root@host-192-168-0-36 ~]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service

Step 4: Verify that the root user on "host-192-168-0-36" is able to log in to the other nodes in the cluster using SSH without being prompted for a password:

[root@host-192-168-0-36 root_sandeep]# ssh root@host-192-168-0-123
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register

Last login: Sun Jun 21 18:55:00 2020 from 192.168.0.36

[root@host-192-168-0-36 root_sandeep]# ssh root@host-192-168-0-27
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register
Last login: Sun Jun 21 16:45:42 2020 from 192.168.0.36
[root@host-192-168-0-27 ~]#



Step 5: Verify that the cluster configuration attribute adminMode is set to central with the command
mmlsconfig | grep admin. If it is not, set it using mmchconfig adminMode=central:

[root@host-192-168-0-36 ~]# mmlsconfig | grep admin
adminMode central
[root@host-192-168-0-36 ~]#
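
If adminMode is not already set to central, it can be set with the mmchconfig command mentioned above:

[root@host-192-168-0-36 ~]# mmchconfig adminMode=central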



Step 6: Verify that the root user on "host-192-168-0-36" is able to run Spectrum Scale administrative commands that need SSH access across the cluster without being prompted for a password:

[root@host-192-168-0-36 root_sandeep]# mmgetstate -a

 Node number  Node name           GPFS state
-------------------------------------------
      1       host-192-168-0-123  active
      2       host-192-168-0-27   active
      3       host-192-168-0-36   active


Everything is now in place for root on "host-192-168-0-36" to run Spectrum Scale administrative commands without being prompted for a password.
Note: If you run the administrative commands as root from the other nodes in the cluster, you will be prompted for the root password. That is intentional, because we want "host-192-168-0-36" to be the only node used for administration.


Step 7: Set up the sudo wrapper configuration:
- Create a user "gpfsadmin" and a group "gpfs" on all Spectrum Scale nodes in the cluster. The user "gpfsadmin" will be the sudo user that we use for Spectrum Scale administration. Run the following commands on all the nodes in the cluster; the user and group IDs can be selected based on your organization's requirements:

# groupadd gpfs -g 1010
# useradd -m gpfsadmin -G gpfs -u 1011


Then, set a common password for the gpfsadmin user on all the Spectrum Scale nodes:
#passwd gpfsadmin
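
If you would rather not log in to each node by hand, the same commands can be pushed from the administrative node with a small loop (a sketch that assumes the three hostnames used in this example and the passwordless root SSH set up in the earlier steps):

for node in host-192-168-0-123 host-192-168-0-27 host-192-168-0-36; do
    ssh root@$node "groupadd gpfs -g 1010 && useradd -m gpfsadmin -G gpfs -u 1011"
done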


Step 8: On all the Spectrum Scale nodes, add the following lines to the sudoers file (/etc/sudoers) by running the visudo command:

# Preserve GPFS environment variables:
Defaults env_keep += "MMMODE environmentType GPFS_rshPath GPFS_rcpPath mmScriptTrace GPFSCMDPORTRANGE GPFS_CIM_MSG_FORMAT"

# Allow members of the gpfs group to run all commands but only selected commands without a password:
%gpfs ALL=(ALL) PASSWD: ALL, NOPASSWD: /usr/lpp/mmfs/bin/mmremote, /usr/bin/scp, /bin/echo, /usr/lpp/mmfs/bin/mmsdrrestore

# Disable requiretty for group gpfs:
Defaults:%gpfs !requiretty


In the example above, the first line preserves the environment variables that the Spectrum Scale administration commands need in order to run. The second line allows users in the gpfs group to run the listed commands without being prompted for a password; all other commands still require one. The third line disables requiretty; when that flag is enabled, sudo blocks commands that do not originate from a TTY session.
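
To confirm that the sudoers entries took effect on a node, you can list the sudo privileges granted to the gpfsadmin user (sudo -l -U is a standard sudo option; run it as root):

# sudo -l -U gpfsadmin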


Step 9: To allow root on "host-192-168-0-36" to perform passwordless SSH to the other nodes in the cluster as the gpfsadmin user, copy the contents of the root SSH public key file (/opt/ssh_keys/root_sandeep/id_rsa.pub) into the authorized_keys file of the gpfsadmin user (/home/gpfsadmin/.ssh/authorized_keys) on all the Spectrum Scale nodes.
Set the owner and group of the authorized_keys file to gpfsadmin.

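Done by hand, the copy would look roughly like the following on each node, assuming the public key content has been copied to the node first (gpfsadmin is also the default primary group that useradd created for the user):

# mkdir -p /home/gpfsadmin/.ssh
# cat id_rsa.pub >> /home/gpfsadmin/.ssh/authorized_keys
# chmod 700 /home/gpfsadmin/.ssh
# chmod 600 /home/gpfsadmin/.ssh/authorized_keys
# chown -R gpfsadmin:gpfsadmin /home/gpfsadmin/.ssh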

While step 9 can be done manually as sketched above, it is easiest to use the ssh-copy-id command, which copies the key and sets the required permissions:

[root@host-192-168-0-36 root_sandeep]# ssh-copy-id -i /opt/ssh_keys/root_sandeep/id_rsa.pub gpfsadmin@host-192-168-0-36
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/opt/ssh_keys/root_sandeep/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gpfsadmin@host-192-168-0-36's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'gpfsadmin@host-192-168-0-36'"
and check to make sure that only the key(s) you wanted were added.

[root@host-192-168-0-36 root_sandeep]# ssh-copy-id -i /opt/ssh_keys/root_sandeep/id_rsa.pub gpfsadmin@host-192-168-0-123
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/opt/ssh_keys/root_sandeep/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gpfsadmin@host-192-168-0-123's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'gpfsadmin@host-192-168-0-123'"
and check to make sure that only the key(s) you wanted were added.

[root@host-192-168-0-36 root_sandeep]# ssh-copy-id -i /opt/ssh_keys/root_sandeep/id_rsa.pub gpfsadmin@host-192-168-0-27
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/opt/ssh_keys/root_sandeep/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gpfsadmin@host-192-168-0-27's password:

Number of key(s) added: 1


Now try logging into the machine, with: "ssh 'gpfsadmin@host-192-168-0-27'"
and check to make sure that only the key(s) you wanted were added.

Step 10: Configure the Spectrum Scale cluster to use the sudo wrappers via mmchcluster:

[root@host-192-168-0-36 ~]# mmchcluster --use-sudo-wrapper
mmsetrcmd: Command successfully completed
mmsetrcmd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@host-192-168-0-36 ~]#

To verify that the cluster is using the sudo wrappers, issue the mmlscluster command as shown below:

[root@host-192-168-0-36 ~]# mmlscluster

GPFS cluster information
========================
GPFS cluster name: gpfs1.local
GPFS cluster id: 10999364594707861438
GPFS UID domain: gpfs1.local
Remote shell command: sudo wrapper in use
Remote file copy command: sudo wrapper in use
Repository type: CCR

 Node  Daemon node name                   IP address     Admin node name                    Designation
--------------------------------------------------------------------------------------------------------
    1   host-192-168-0-123.openstacklocal  192.168.0.123  host-192-168-0-123.openstacklocal  quorum-manager
    2   host-192-168-0-27.openstacklocal   192.168.0.27   host-192-168-0-27.openstacklocal   quorum-manager
    3   host-192-168-0-36.openstacklocal   192.168.0.36   host-192-168-0-36.openstacklocal   quorum-manager



Step 11: Validate the setup. The steps below validate the sudo setup and the central administration mode from a single node in the cluster:

1. The root user on "host-192-168-0-36" must be able to run commands as gpfsadmin on any node in the cluster without being prompted for a password, for example:

[root@host-192-168-0-36 ~]# ssh host-192-168-0-27 -l gpfsadmin /bin/whoami
gpfsadmin
[root@host-192-168-0-36 ~]# ssh host-192-168-0-123 -l gpfsadmin /bin/whoami
gpfsadmin


2. Verify that root on "host-192-168-0-36" is able to log in as gpfsadmin on all nodes of the Spectrum Scale cluster without being prompted for a password:

[root@host-192-168-0-36 ~]# ssh gpfsadmin@host-192-168-0-123
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register

Last login: Sun Jun 21 19:02:03 2020 from 192.168.0.36

[root@host-192-168-0-36 ~]# ssh gpfsadmin@host-192-168-0-27
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register

Last login: Sun Jun 21 16:25:17 2020 from 192.168.0.36
[gpfsadmin@host-192-168-0-27 ~]$


3. sshwrap and scpwrap are the Spectrum Scale sudo wrapper scripts for the remote shell and remote copy commands. To verify that the sudo environment is configured correctly, check that they run without errors as the gpfsadmin user. Run the following commands from "host-192-168-0-36" as the gpfsadmin user, targeting other nodes of the cluster:

[gpfsadmin@host-192-168-0-36 root]$ sudo /usr/lpp/mmfs/bin/mmcommon test sshwrap host-192-168-0-123
[sudo] password for gpfsadmin:
mmcommon test sshwrap: Command successfully completed
[gpfsadmin@host-192-168-0-36 root]$
[gpfsadmin@host-192-168-0-36 root]$ sudo /usr/lpp/mmfs/bin/mmcommon test scpwrap host-192-168-0-27
mmcommon test scpwrap: Command successfully completed
[gpfsadmin@host-192-168-0-36 root]$


4. Final step (if all of the above steps pass):
Run Spectrum Scale administrative commands as the gpfsadmin user from "host-192-168-0-36". To test this, we ran mmgetstate -a, a command that contacts all nodes to query the status of the file system daemon:

[gpfsadmin@host-192-168-0-36 root]$ sudo /usr/lpp/mmfs/bin/mmgetstate -a

 Node number  Node name           GPFS state
-------------------------------------------
      1       host-192-168-0-123  active
      2       host-192-168-0-27   active
      3       host-192-168-0-36   active
[gpfsadmin@host-192-168-0-36 root]$


Bingo! We have now met all of the above objectives and can administer the Spectrum Scale cluster as the sudo user (gpfsadmin) from only one node (the administrative node, host-192-168-0-36).

Additional Notes
If your organization requires disabling root SSH login with a password, set the PermitRootLogin option in the sshd_config file to prohibit-password and restart the sshd service on all the nodes:

vi /etc/ssh/sshd_config
Add:
PermitRootLogin prohibit-password

This ensures that root SSH login with a password is disabled, making the system more secure.
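
After restarting sshd, the effective value can be checked with sshd's extended test mode (a standard OpenSSH server option):

# sshd -T | grep -i permitrootlogin
permitrootlogin prohibit-password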

IMPORTANT NOTES
- When setting the PermitRootLogin parameter, it is important to choose a value that works for your overall deployment and security goals. Misconfiguring PermitRootLogin may lock the root user out of SSH login to the node. In the example above, we set PermitRootLogin to yes on "host-192-168-0-36" and to prohibit-password on the rest of the nodes in the cluster. Also note that setting PermitRootLogin to an improper value may break the Spectrum Scale administration functionality.
- You can opt to stop using sudo wrapper scripts in the IBM Spectrum Scale cluster. To stop using sudo wrappers, run the mmchcluster command with the --nouse-sudo-wrapper option as shown below:

[gpfsadmin@host-192-168-0-36 root]$ sudo /usr/lpp/mmfs/bin/mmchcluster --nouse-sudo-wrapper



Conclusion
In this blog we have shown how to convert an existing IBM Spectrum Scale cluster operated as root to sudo-based administration, with a limited set of nodes acting as administrative nodes.


Reference Links
https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1adm_configsudo.htm

https://man7.org/linux/man-pages/man5/sshd_config.5.html

https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1pdg_deploymentpdprereqmissing.htm#scenarioinstallationdeploymentandupgraderelatedproblemsduetomissingprerequisites__Prompt-lessSSHSetup
