File and Object Storage


Multi-protocol data access with IBM Storage Scale S3, NFS, SMB, POSIX and CSI

By Madhu Punjabi posted Tue August 20, 2024 01:17 AM

  

Authors: @Anandhu Karattuparambil, @Madhu Punjabi, @Pravin Ranjan, @Ramya C

IBM Storage Scale simplifies multi-protocol data access by allowing different clients to seamlessly access the same instance of data using protocols such as S3, NFS, SMB, POSIX and CSI.

With multi-protocol data access, changes made through one protocol are accessible through the other protocols without duplicating data. This simplifies data management for enterprise applications that generate large amounts of data and require easy access to it without creating data silos. For example, an object ingested by an S3 application can be accessed and modified as a file by an NFS client. Similarly, a file ingested by an NFS client can be accessed and modified as an object by an S3 application.
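For instance, the round trip can look like the following sketch; the alias s3user, the bucket shared-bucket, and the mount point /mnt/nfs-share are hypothetical placeholders for the kind of setup described later in this post.

# Ingest data as an object using the S3 protocol (hypothetical alias and bucket)
# s3user cp report.csv s3://shared-bucket
# Read and extend the same data as a file over an NFS mount of the bucket path
# cat /mnt/nfs-share/report.csv
# echo "appended over NFS" >> /mnt/nfs-share/report.csv
# The appended content is visible when the object is downloaded again with S3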

Let us look at the details of multi-protocol data access support with IBM Storage Scale.

Architecture: IBM Storage Scale cluster configured with the S3, NFS, SMB and POSIX protocols, accessed remotely by an IBM Storage Scale CSI/CNSA cluster

Here, on the left side, we have an IBM Storage Scale cluster with one or more CES nodes, with the S3, NFS and SMB protocol services enabled on each node. This cluster is accessed remotely by an IBM Storage Scale CSI/CNSA cluster (on the right side). In addition, there may be one or more application pods running with one or more PVCs mounted as volumes. A user 'User1' can access the same instance of data using the S3, NFS, SMB, POSIX and CSI protocols.

Deploying the IBM Storage Scale cluster with the S3, NFS, POSIX and SMB protocols

Multi-protocol data access with S3 is supported from IBM Storage Scale 5.2.1 onwards. To install IBM Storage Scale 5.2.1, see the IBM Documentation at https://www.ibm.com/docs/en/storage-scale/5.2.1

Deploying the IBM Storage Scale CSI/ CNSA cluster

To install IBM Storage Scale Container Native Storage Access (CNSA), which includes the IBM Storage Scale Container Storage Interface (CSI) driver, see the IBM Documentation at https://www.ibm.com/docs/en/scalecontainernative

To install only the IBM Storage Scale CSI driver, see the IBM Documentation at https://www.ibm.com/docs/en/scalecsi

Use cases with different protocols

  1. Multi-protocol data access with S3 and CSI
  2. Multi-protocol data access with S3 and NFS
  3. Multi-protocol data access with S3 and SMB
  4. Multi-protocol data access with S3 and POSIX

Multi-protocol data access with S3 and CSI

Enable the S3 protocol on the IBM Storage Scale cluster and set up the IBM Storage Scale CNSA/CSI cluster. Let us look at examples of accessing the same instance of data using the S3 protocol and CSI.

On CNSA/CSI cluster

Create a storage class and a PVC as shown below.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic-sc-u1
  namespace: ibm-spectrum-scale-csi-driver
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "gpfs0"
reclaimPolicy: Delete
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc-u1
  namespace: ibm-spectrum-scale-csi-driver
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: dynamic-sc-u1
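With the manifests saved locally (the file names here are illustrative), apply them and confirm that the PVC is bound:

# kubectl apply -f dynamic-sc-u1.yaml -f dynamic-pvc-u1.yaml
# kubectl get pvc dynamic-pvc-u1 -n ibm-spectrum-scale-csi-driver
# The PVC STATUS column should show 'Bound' once dynamic provisioning completes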

Create a pod pod-u1 as shown below. The application pod writes some data to the PVC mounted as a volume (mapped to a directory/S3 bucket on the IBM Storage Scale file system).

apiVersion: v1
kind: Pod
metadata:
  name: pod-u1
  namespace: ibm-spectrum-scale-csi-driver
  labels:
    app: alpine
spec:
  containers:
  - name: pr-container-1
    image: ubuntu
    command: ["sh", "-c", "echo Hello, World! > /mnt/gpfs0/demo-dir/pod-u1.txt && sleep 1000"]
    #securityContext:
    #  runAsUser: 10005
    #  runAsGroup: 11000
    volumeMounts:
    - name: mypvc
      mountPath: /mnt/gpfs0/demo-dir
  nodeName: prcsi-22
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: dynamic-pvc-u1
      readOnly: false
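Apply the pod manifest (the file name is illustrative) and confirm that the pod wrote its file to the volume:

# kubectl apply -f pod-u1.yaml
# kubectl exec pod-u1 -n ibm-spectrum-scale-csi-driver -- cat /mnt/gpfs0/demo-dir/pod-u1.txt
Hello, World!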

On IBM Storage Scale cluster

Create the S3 account and bucket using the mms3 command as shown below.

# mms3 account create static-acc-u1 --uid 1000 --gid 1000 --newBucketsPath /mnt/gpfs0/demo-dir
# mms3 bucket create static-bucket --accountName static-acc-u1 --filesystemPath /mnt/gpfs0/demo-dir

On S3 client

Using the AWS CLI client, set up an alias for the S3 account.

# alias s3-user1='AWS_ACCESS_KEY_ID=xxxxxxxxx AWS_SECRET_ACCESS_KEY=xxxxxxxx aws --endpoint https://192.168.2.102:6443 --no-verify-ssl s3'

Now list the existing bucket, create one more bucket, and copy file.txt to the newly created bucket.

# s3-user1 ls s3://bucket-dynamic-u1
# s3-user1 mb s3://static-bucket
# s3-user1 cp file.txt s3://static-bucket
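Since the bucket static-bucket maps to the directory /mnt/gpfs0/demo-dir on the file system, the uploaded object should also appear there as a regular file. A quick check on the IBM Storage Scale cluster:

# The object uploaded via S3 is visible as a regular file on the file system
# ls -l /mnt/gpfs0/demo-dir/file.txt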

On CNSA/CSI cluster

Create the persistent volume (PV) for the directory path that needs to be accessed and apply it as shown below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-u1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  csi:
    driver: spectrumscale.csi.ibm.com
    volumeHandle: "0;0;13445038716869086952;1764000A:665B1277;;gpfs0;/mnt/gpfs0/demo-dir"

Create a PVC that will bind to the PV.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc-u1
  namespace: ibm-spectrum-scale-csi-driver
spec:
  volumeName: static-pv-u1
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
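Apply the PV and PVC (the manifest file names are illustrative) and verify that both report a 'Bound' status:

# kubectl apply -f static-pv-u1.yaml -f static-pvc-u1.yaml
# kubectl get pv static-pv-u1
# kubectl get pvc static-pvc-u1 -n ibm-spectrum-scale-csi-driver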

Create a pod static-pod-u1 with the required user ID and group ID to access the directory provided through the above PVC; a minimal pod sketch follows. Then log in to the pod using the 'kubectl' commands below and validate the content of the file.
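A minimal sketch of such a pod, assuming uid/gid 1000:1000 to match the S3 account static-acc-u1 created earlier (the container name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: static-pod-u1
  namespace: ibm-spectrum-scale-csi-driver
spec:
  securityContext:
    runAsUser: 1000    # uid of S3 account static-acc-u1 (assumption)
    runAsGroup: 1000   # gid of S3 account static-acc-u1 (assumption)
  containers:
  - name: app-container
    image: ubuntu      # illustrative image
    command: ["sh", "-c", "sleep 1000"]
    volumeMounts:
    - name: mypvc
      mountPath: /mnt/gpfs0/demo-dir
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: static-pvc-u1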

# kubectl exec static-pod-u1 -n ibm-spectrum-scale-csi-driver -it -- /bin/sh
# cat /mnt/gpfs0/demo-dir/file.txt

Multi-protocol data access with S3 and NFS

Enable the S3 and NFS protocols on the IBM Storage Scale cluster. Let us look at examples of accessing the same instance of data using the S3 and NFS protocols.

Creating S3 Accounts and Buckets

Establishing S3 accounts and buckets is crucial for data storage and management. Create an S3 account and a bucket using the mms3 command.

# mms3 account create account1 --userName nfsuser10005 --newBucketsPath /ibm/fs1/s3user10005-dir
Account is created successfully. The secret and access keys are as follows.
Access Key           Secret Key
-------------------- ----------------------------------------
54PgbXSoAu88vEnj9Tcl xQrdojRnsUworhhOLQbRjfFg66aelRT3NlcEPytW

# Verify the user bucket path permissions
# ls -lad /ibm/fs1/s3user10005-dir
drwxrwx--- 2 nfsuser10005 nfsgroup 4096 Jun 9 22:16 /ibm/fs1/s3user10005-dir

# mms3 bucket create bucket1 --accountName account1 --filesystemPath /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi

Create and populate a file in the S3 Bucket

# echo "this is an update from the S3 user nfs10005" > /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi/test.txt

On S3 client

To interact with the S3 buckets, set up an alias for the AWS S3 client using the previously generated access and secret keys.

# alias s3u10005='AWS_ACCESS_KEY_ID=54PgbXSoAu88vEnj9Tcl AWS_SECRET_ACCESS_KEY=xQrdojRnsUworhhOLQbRjfFg66aelRT3NlcEPytW aws --endpoint http://10.11.9.109:6001 s3'

List contents of the S3 Bucket and upload an object to the S3 bucket.

# s3u10005 ls s3://data-share-das-posix-nfs-csi
test.txt

# echo "this is an update from the S3 user nfs10005" >> /root/object-das-posix-nfs-csi
# s3u10005 cp /root/object-das-posix-nfs-csi s3://data-share-das-posix-nfs-csi
upload: ./object-das-posix-nfs-csi to s3://data-share-das-posix-nfs-csi/object-das-posix-nfs-csi

# s3u10005 ls s3://data-share-das-posix-nfs-csi
test.txt
object-das-posix-nfs-csi

Setting up NFS exports on the IBM Storage Scale cluster

To allow NFS clients to access the S3 bucket data, configure NFS exports from one of the CES nodes. Create an NFS export and list all NFS exports.

# mmnfs export add /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi --client "*(Access_Type=RW,Protocols=3:4,Squash=no_root_squash,anonymous_uid=10005,anonymous_gid=11000,Transports=TCP:UDP)"
mmnfs: The NFS export was created successfully
# mmnfs export list
Path                                                   Delegations  Clients
-----------------------------------------------------  -----------  -------
/ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi  NONE         *
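Optionally, before mounting, you can confirm from the NFS client that the export is visible; showmount is a standard NFS utility (it queries the NFSv3 service):

# showmount -e <ces-ip>
# The export /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi should appear in the list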
 

Accessing data via NFS client

Configure the NFS client to access and update data stored in the S3 buckets by setting up the matching user and group on the NFS client. Then mount the NFS export to a directory.

# groupadd -g 11000 nfsgroup
# useradd nfsuser10005 -d /home/nfsuser10005 -u 10005 -g 11000
# id nfsuser10005
uid=10005(nfsuser10005) gid=11000(nfsgroup) groups=11000(nfsgroup)  
# mkdir -p /mnt/nfsuser10005-data-share
# mount -o vers=3 -t nfs <ces-ip>:/ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi /mnt/nfsuser10005-data-share
## Note: NFS client can mount with vers=4.0 as well
 

Verify the mounted directory contents and read/update the existing object uploaded with S3.

# cd /mnt/nfsuser10005-data-share/
# pwd
/mnt/nfsuser10005-data-share
# ls -lrt
total 1
-rw-rw---- 1 nfsuser10005 nfsgroup 100 May 24 06:00 object-das-posix-nfs-csi
 
# Read and update files
# echo "this is an update from the NFS client user nfs10005" >> ./object-das-posix-nfs-csi
# cat object-das-posix-nfs-csi
this is an update from the S3 user nfs10005
this is an update from the NFS client user nfs10005
# ls -lrt
-rw-rw---- 1 nfsuser10005 nfsgroup 152 May 24 06:44 object-das-posix-nfs-csi
 

Read object data from S3 client

# s3u10005 cp s3://data-share-das-posix-nfs-csi/object-das-posix-nfs-csi /tmp/nfsuser-data-share
 
# Verify the object contents
# cat /tmp/nfsuser-data-share
this is an update from the S3 user nfs10005
this is an update from the NFS client user nfs10005
 

Multi-protocol data access with S3 and SMB

Enable the S3 and SMB protocols on the IBM Storage Scale cluster. Let us look at examples of accessing the same instance of data using the S3 and SMB protocols.

On IBM Storage Scale cluster

Set up the file system to handle NFSv4 ACLs. Configure file authentication with the userdefined type (AD/LDAP users are not yet supported with the S3 protocol in IBM Storage Scale 5.2.1). Also create the required user and group.

# mmchfs fs1 -k nfs4
 
# mmuserauth service create --data-access-method file --type userdefined
 
# sudo groupadd -g 4000 group4000
# sudo useradd -u 4000 -g 4000 user4000; id user4000
# /usr/lpp/mmfs/bin/smbpasswd -a user4000
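You can confirm the resulting authentication configuration with the mmuserauth command:

# mmuserauth service list
# The FILE access configuration should report USERDEFINED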
 

Create an S3 account using the mms3 command. Also create a fileset to be used as an SMB share and add the SMB export.

# mms3 account create account4000 --uid 4000 --gid 4000 --newBucketsPath /mnt/fs1/account4000
 
# mmcrfileset fs1 share4000
# mmlinkfileset fs1 share4000 -J /mnt/fs1/account4000/share4000
 
# Create the SMB export and list it using the commands below
# mmsmb export add share4000 /mnt/fs1/account4000/share4000
# mmsmb export list
export      path                             browseable   guest ok   server smb encrypt
share4000   /mnt/fs1/account4000/share4000   yes          no         auto

Create a S3 bucket in the same export path as the SMB share.

# mms3 bucket create bucket4000 --accountName account4000 --filesystemPath /mnt/fs1/account4000/share4000
Note: The directory '/mnt/fs1/account4000/share4000' for bucket already exists. Skipping update of ownership and the setting of permissions of the directory for the user with uid:gid=4000:4000
Bucket bucket4000 created successfully

On S3 client

Set an alias for the S3 account using the AWS CLI client and upload objects.

# alias s3-bucket="AWS_ACCESS_KEY_ID=0YSP9GfLhao4XVyChqgK AWS_SECRET_ACCESS_KEY=Zae3C/kBTXeCw9bWF0MsZ9rpwdalaGDYjOXvvbkE aws --endpoint https://x.x.x.x:6443 --no-verify-ssl"
# Upload the objects to the bucket using the commands below
 
# s3-bucket s3 cp file1.txt s3://bucket4000
upload: ./file1.txt to s3://bucket4000/file1.txt
 
# s3-bucket s3 cp file2.txt s3://bucket4000
upload: ./file2.txt to s3://bucket4000/file2.txt
 
# List the bucket objects using the command below
# s3-bucket s3 ls s3://bucket4000
2024-07-30 04:36:02         14 file1.txt
2024-07-30 04:35:40         14 file2.txt
 

On Windows client

Connect to the SMB share using the 'net use' command or Windows Explorer.

C:\Users\Administrator> net use Z: \\10.11.99.97\share4000 /u:user4000 cluster
The command completed successfully.
 

List the bucket content in the SMB share.
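For example, with the share mapped to drive Z: as above, a directory listing is expected to show the objects uploaded earlier with S3 (output details will vary):

C:\Users\Administrator> dir Z:\

The listing should include file1.txt and file2.txt; likewise, a file created on the drive from Windows appears as an object in bucket4000.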

Multi-protocol data access with S3 and POSIX

Enable the S3 protocol on the IBM Storage Scale cluster. Let us look at examples of accessing the same instance of data using the S3 protocol and POSIX.

Creating S3 Accounts and Buckets

Establishing S3 accounts and buckets is crucial for data storage and management. Create an S3 account and a bucket using the mms3 command.

# mms3 account create account1 --userName nfsuser10005 --newBucketsPath /ibm/fs1/s3user10005-dir
Account is created successfully. The secret and access keys are as follows.
Access Key           Secret Key
-------------------- ----------------------------------------
54PgbXSoAu88vEnj9Tcl xQrdojRnsUworhhOLQbRjfFg66aelRT3NlcEPytW

# Verify the user bucket path permissions
# ls -lad /ibm/fs1/s3user10005-dir
drwxrwx--- 2 nfsuser10005 nfsgroup 4096 Jun 9 22:16 /ibm/fs1/s3user10005-dir

# mms3 bucket create bucket1 --accountName account1 --filesystemPath /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi

Create and populate a file in the S3 Bucket

# echo "this is an update from the S3 user nfs10005" > /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi/test.txt

On S3 client

To use the S3 bucket, set up an alias for the AWS S3 client using the previously generated access and secret keys.

# alias s3u10005='AWS_ACCESS_KEY_ID=54PgbXSoAu88vEnj9Tcl AWS_SECRET_ACCESS_KEY=xQrdojRnsUworhhOLQbRjfFg66aelRT3NlcEPytW aws --endpoint http://10.11.9.109:6001 s3'

List contents of the S3 Bucket and upload an object to the S3 bucket.

# s3u10005 ls s3://data-share-das-posix-nfs-csi
test.txt

# echo "this is an update from the S3 user nfs10005" >> /root/object-das-posix-nfs-csi
# s3u10005 cp /root/object-das-posix-nfs-csi s3://data-share-das-posix-nfs-csi
upload: ./object-das-posix-nfs-csi to s3://data-share-das-posix-nfs-csi/object-das-posix-nfs-csi

# s3u10005 ls s3://data-share-das-posix-nfs-csi
test.txt
object-das-posix-nfs-csi

Uploading and accessing data via the NSD client

On the NSD client node, create the user nfsuser10005 using the commands below. Then check and update the content of the directory data-share-das-posix-nfs-csi.
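The user and group match those created on the NFS client earlier:

# groupadd -g 11000 nfsgroup
# useradd nfsuser10005 -d /home/nfsuser10005 -u 10005 -g 11000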

# ls -lrt /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi/ 
total 1
-rw-rw---- 1 nfsuser10005 nfsgrp11000 44 Jul 24 04:00 object-das-posix-nfs-csi
 
# Now append a message to the object
# echo "this is an update from posix client as the nfsuser10005" >> /ibm/fs1/s3user10005-dir/data-share-das-posix-nfs-csi/object-das-posix-nfs-csi

Read object data from S3 client

# s3u10005 cp s3://data-share-das-posix-nfs-csi/object-das-posix-nfs-csi /tmp/nfsuser-data-share
 
# Verify the object contents
# cat /tmp/nfsuser-data-share
this is an update from the S3 user nfs10005
this is an update from posix client as the nfsuser10005
 

Conclusion

The above use cases show that the same instance of data in IBM Storage Scale file systems can be read and updated as S3 objects or as files using the S3, NFS, SMB, POSIX and CSI protocols.

Note: At the time of writing this blog, multi-protocol data access was tested with basic authentication only. It has not been tested with any external authentication mechanism on IBM Storage Scale.

References:

  1. https://www.ibm.com/docs/en/storage-scale/5.2.1
  2. https://www.ibm.com/docs/en/scalecontainernative
  3. https://www.ibm.com/docs/en/spectrum-scale-csi
  4. https://www.redhat.com/en/technologies/cloud-computing/openshift/container-platform
  5. https://www.ibm.com/docs/en/spectrum-scale-csi?topic=provisioning-creating-persistent-volume-pv

#IBMStorageScale

#AmazonS3
