File and Object Storage


Spectrum Scale NAS at home part 1: Building

By MAARTEN KREUGER posted Wed April 28, 2021 09:53 AM


You can install the software on any x86_64 Linux-based system, but some distributions are better supported than others. RHEL/CentOS works best, followed by SLES, then Ubuntu. Some features do not work on some OS's, some only on certain levels of Spectrum Scale or the operating system, or even of the kernel within an OS. It's complicated. Check tables 13 and 17 at Q2.1 of the FAQ:

My test system is a simple NUC running Ubuntu 20.04 LTS with an Intel Celeron, 4GB of RAM, a built-in SSD, and a USB-attached SATA drive. More RAM/CPU is always welcome, depending on how many and which features you're using. For production use we recommend at least 64GB RAM, or 256GB if you run a lot of services. Also, Spectrum Scale is a clustered filesystem, so you can have as many as a thousand systems in your cluster. We'll just use one to start with.

Download the latest code from:

Let's jump straight into the installation procedure. Unzip the download, then run the self-extracting installer to unpack the RPM and Debian packages:

# unzip
  inflating: Spectrum_Scale_Developer-
  inflating: Spectrum_Scale_Developer-
 extracting: Spectrum_Scale_Developer-
  inflating: SpectrumScale_public_key.pgp
# chmod +x Spectrum_Scale_Developer-
# ./Spectrum_Scale_Developer-

 This unpacks the software RPMs/Debs into /usr/lpp/mmfs/<version>. Why /usr/lpp/mmfs/ and not /opt? That's because Spectrum Scale was originally developed for use on AIX in the nineties, and that's where it went. So. Tradition.

You can look up the full installation instructions in the documentation:

Or follow along with my steps. There are two ways to install: manual or automatic. If you have a lot of systems to install, the automatic method is really nice, but we'll create a singleton cluster, so manual it is.

First step is to create the apt sources for Spectrum Scale (or yum repos if on RHEL/SLES):

NB: There is a bug in this script in v5.1 (sorry) when using Ubuntu. Change line 142
from: osVersion      = linux_dist[1][:2]
to:   osVersion      = ""
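The same edit can be applied with sed. As a sketch, the command below runs against /tmp/line142.py, a stand-in file I create here because the repo script's full path is elided above; point the same sed expression at the real script instead:

```shell
# Create a stand-in file containing the offending line 142 (hypothetical path)
printf '    osVersion      = linux_dist[1][:2]\n' > /tmp/line142.py
# Replace the right-hand side with an empty string, editing the file in place
sed -i 's/= linux_dist\[1\]\[:2\]/= ""/' /tmp/line142.py
cat /tmp/line142.py    # now reads: osVersion      = ""
```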

# /usr/lpp/mmfs/ --repo
Creating repo: /etc/apt/sources.list.d/ganesha.list
Creating repo: /etc/apt/sources.list.d/gpfs.list
Creating repo: /etc/apt/sources.list.d/object.list
Creating repo: /etc/apt/sources.list.d/smb.list
Creating repo: /etc/apt/sources.list.d/zimon.list
Creating repo: /etc/apt/sources.list.d/gpfs2.list

 As these repositories are not signed, we need to enable unsigned repositories, install prerequisites, and then install GPFS itself.

# apt clean
# apt -o Acquire::AllowInsecureRepositories=true -o Acquire::AllowDowngradeToInsecureRepositories=true update
# apt install build-essential openssh-server arping net-tools
# apt install gpfs.base gpfs.gpl gpfs.gskit gpfs.afm.cos gpfs.compression gpfs.gui gpfs.gss.pmsensors gpfs.gss.pmcollector gpfs.nfs-ganesha gpfs.smb

# /usr/lpp/mmfs/ --clean
# apt
# apt update
# reboot

Before we start building the cluster, we need to prepare the system.

First, add Spectrum Scale to the PATH:

# echo "export PATH=/usr/lpp/mmfs/bin:\$PATH" > /etc/profile.d/

# source /etc/profile.d/

Next we'll manually build the kernel extension for GPFS to test if it works. Perhaps a C-compiler is not installed, or there is a kernel problem that needs fixing. We'll make this process automatic at start time later.

# mmbuildgpl

If you get compilation errors, it's either because of missing software, which it will tell you about, or an unsupported kernel level. If you installed the latest and greatest kernel, it might be too new. For instance, my clean Ubuntu install's kernel was too new according to the FAQ. I installed an older version, changed the preferred kernel in grub using grub-customizer, and rebooted:

# uname -r
# apt install linux-image-5.4.0-65-generic linux-modules-extra-5.4.0-65-generic linux-headers-5.4.0-65-generic
# grub-customizer
# reboot
# uname -r

The next step is to get the operating system prerequisites in order:

  1. Make the IP address you're using static, or fix the assignment in your DHCP server. (Really important, do not skip this step!)
  2. NTP time synchronization: check with timedatectl
  3. DNS/hosts: check with ping `hostname` and/or host `hostname`. I'm adding two entries to /etc/hosts, one static for GPFS, one floating for NAS access:
    1. scalenode1
    2. nas1
  4. Firewall: either disable it or configure it correctly
    1. Ubuntu: ufw disable
    2. RHEL/SLES: systemctl disable firewalld --now
  5. SSH permissions for issuing commands as root to all cluster nodes:
    1. Allow root to execute remote commands: edit /etc/ssh/sshd_config, set "PermitRootLogin" to "without-password", then restart sshd: systemctl restart sshd
    2. ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
    3. cat /root/.ssh/ >> /root/.ssh/authorized_keys
    4. ssh -o StrictHostKeyChecking=no `hostname` date
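The checks above can be bundled into a small pre-flight script to run on every node before creating the cluster. This is a sketch; preflight.sh is a name I made up, and it assumes you run it as root:

```shell
# Write a pre-flight script that re-runs the prerequisite checks above
cat > preflight.sh <<'EOF'
#!/bin/bash
set -e                                                      # stop at the first failing check
timedatectl show -p NTPSynchronized --value | grep -qx yes  # NTP time is in sync
ping -c1 -W2 "$(hostname)" >/dev/null                       # hostname resolves and answers
ssh -o BatchMode=yes "$(hostname)" true                     # passwordless root SSH works
echo "all prerequisite checks passed"
EOF
chmod +x preflight.sh
```

Run it with ./preflight.sh; because of set -e it exits non-zero at the first check that fails.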


Now we're ready to create the Spectrum Scale cluster:

# mmcrcluster -N `hostname`:manager-quorum -A

# mmchlicense server --accept -N all

# mmchconfig autoBuildGPL=yes

# mmlscluster

# mmstartup

 Check the state of the node: it may be in "arbitrating" mode for a while, but it should go "active" automatically. If not, check your firewall, or check the logfile /var/mmfs/gen/mmfslog.

# mmgetstate

 Node number  Node name        GPFS state
       1      scalenode1       active


Now that the cluster is running, we can create a filesystem. For this we need block devices; these can be whole devices of all kinds (iSCSI, USB, SAS, SAN, NVMe) or partitions on those devices. Run lsblk to get a list.

I have an internal MMC device, which is a bit of an issue, as Spectrum Scale only looks for "regular" devices. Check your devices with mmdevdiscover. As my device is not listed, I'll need to add it using a custom script:

# cat > /var/mmfs/etc/nsddevices <<EOF
cat /proc/partitions | grep -v loop | grep '[0-9]' | while read x x x part
do
   echo \$part generic
done
return 1
EOF
# chmod +x /var/mmfs/etc/nsddevices
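To preview what the script will report, you can run the same pipeline on its own; it only reads /proc/partitions, so it is safe to try:

```shell
# Emit each non-loop partition name followed by the word "generic",
# exactly as the nsddevices script above does
cat /proc/partitions | grep -v loop | grep '[0-9]' | while read x x x part
do
   echo $part generic
done
```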

Now we can define the partition as an NSD (Network Shared Disk) which we do with a stanza file:

# cat > local.nsd << EOF
%nsd:
  nsd=mmc
  device=/dev/mmcblk1p2
  servers=scalenode1
  usage=dataAndMetadata
  failureGroup=1
  pool=system
EOF

 We name the NSD "mmc" and specify the partition. The servers option is the list of servers that have direct access to this device, which here is just this system; you need iSCSI or an FC SAN to have shared access from multiple systems. The usage is the default: we'll put both data (file data) and metadata (directories, inodes, structures, logs) on this device. The failureGroup is 1, as this is our first and only server; this value guides the data replication feature of GPFS to place copies on multiple systems. The pool is the default "system" pool, which is the mandatory location for metadata.

# mmcrnsd -F local.nsd

# mmlsnsd -M

  Disk name  NSD volume ID      Device          Node name    Remarks
  mmc        C0A8B2C760746935   /dev/mmcblk1p2  scalenode1   server node


The NSD is now created, which means an NSD Identification number is written to the partition, and the device is registered in the cluster administration. Next job is to create a filesystem using this NSD.

 We'll build a default file system with automount enabled, nfs4 ACLs, and default replicas set to 1 for data and metadata, with 3 as the maximum. The mountpoint is set to /nas1, which neatly matches the special device name. You can change these settings later if you want, but not the maximum replica settings.

# mmcrfs nas1 -F local.nsd -A yes -k nfs4 -r 1 -R 3 -m 1 -M 3 -T /nas1

 The file system is now ready! We just need to mount it:

# mmmount nas1

# mmlsdisk nas1
disk         driver   sector failure  holds    holds                storage
name         type       size group    metadata data  status  avail  pool
------------ -------- ------ -------- -------- ----- ------- ------ -------
mmc          nsd         512        1      yes   yes   ready up     system

# df -h /nas1
Filesystem      Size  Used Avail Use% Mounted on
nas1             29G  1,4G   28G   5% /nas1

Major changes like adding or removing disks or nodes can be done online via the command line. More user-oriented actions like adding an NFS or SMB export, creating snapshots, or setting file management policies can also be done using the GUI.

Stopping and starting the cluster is done using the following commands:

# mmshutdown

# mmstartup

The next blog will show the creation of a Windows and a NFS share: Part 2: Adding an NFS and SMB Share

