Hands-On Integrating IBM Storage Ceph with PoINT Archive Gateway

By Daniel Alexander Parkes posted Tue April 22, 2025 09:16 AM


Installing PoINT Archival Gateway on RHEL 9.3

PoINT Archival Gateway (PAG) can be deployed across multiple nodes (Enterprise Edition) or on a single node (Compact Edition). The steps below cover a Compact Edition deployment on RHEL 9.3.

Installation starts by copying the installation tarball to the server, extracting it, installing .NET runtimes, and registering the systemd services so PAG runs automatically:

# scp PagCompactInstall-4.1.228.tar.gz root@linux1:/root/PAG/
# ssh root@linux1
# cd /root/PAG/
# tar -zxvf PagCompactInstall-4.1.228.tar.gz
# tar -zxvf PAG-CGN-FULL-4.1.228.tar.gz -C /
# tar -zxvf PAG-GUI-FULL-4.1.228.tar.gz -C /
# dnf install dotnet-runtime-8.0 aspnetcore-runtime-8.0 -y
# cp -pr /opt/PoINT/PAG/CGN/PagCgnSvc.service /etc/systemd/system
# cp -pr /opt/PoINT/PAG/CGN/pag-cgn.conf /etc/opt/PoINT/PAG/CGN/pag-cgn.conf
# cp -pr /opt/PoINT/PAG/GUI/PagGuiSvc.service /etc/systemd/system
# cp -pr /opt/PoINT/PAG/GUI/pag-gui.conf /etc/opt/PoINT/PAG/GUI/pag-gui.conf
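
As a quick optional check before editing the configuration, confirm that both .NET runtimes are visible to the system:

# dotnet --list-runtimes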

After extraction, update the configuration files so that IP addresses, ports, and the license key match your environment. The S3 REST API settings reside in /etc/opt/PoINT/PAG/CGN/pag-cgn.conf. A minimal edit looks like:

# vi /etc/opt/PoINT/PAG/CGN/pag-cgn.conf
[Administration Address]
CGN-GUI-MY-IP=10.251.0.35

[S3 REST API Addresses]
CGN-HTTP-S3-FQDN=linux1.cephlabs.com
CGN-HTTP-S3-IP=10.251.0.35
CGN-HTTP-S3-PORT-NOSSL=4080
CGN-HTTP-S3-PORT-SSL=4443
CGN-HTTP-S3-SSL-CERT-NAME=FILE:PAG.pfx
CGN-HTTP-S3-SSL-CERT-PWD=

[License]
CGN-Configuration-Key=QWYHM-W1787-5SD3X

Do the same for the GUI service in /etc/opt/PoINT/PAG/GUI/pag-gui.conf:

# vi /etc/opt/PoINT/PAG/GUI/pag-gui.conf
[Administration Address]
GUI-DB-IP=10.251.0.35
GUI-DB-PORT=4000

Enable and start both services:

# systemctl enable --now PagCgnSvc
# systemctl enable --now PagGuiSvc
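
Before switching to the GUI, it is worth confirming that both services are active and that the S3 listener answers on the port configured above (any HTTP response, even an access-denied error, shows the endpoint is reachable):

# systemctl is-active PagCgnSvc PagGuiSvc
# curl -i http://linux1.cephlabs.com:4080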

You can now open the PAG GUI via HTTPS on 10.251.0.35:4443, log in with default credentials, enter your license key, and activate the software under System Management → Information. Create a storage partition as the first logical container for data residing on tape.

Figure 1: Storage Partitions overview

Next, create an Object Repository (equivalent to a bucket). Click Create Object Repository and complete the dialog:

Figure 2: Create Object Repository dialog

The GUI now shows the list of existing repositories (buckets) and, after you drill into one, its details:

Figure 3: Repository list
Figure 4: Repository details

Finally, create an application user with HMAC credentials so that Ceph RGW can authenticate against PAG:

Figure 5: User creation dialog
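
These are the keys that RGW will use in the tier configuration below. To inspect PAG directly from the command line, they can also be stored in an AWS CLI profile; the points3 profile is reused later in this article for verification, and the key values shown are placeholders to replace with the ones generated in the dialog:

# cat >> ~/.aws/credentials <<'EOF'
[points3]
aws_access_key_id = <PAG_ACCESS_KEY>
aws_secret_access_key = <PAG_SECRET_KEY>
EOF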

Integrating PAG as a Storage Class within Ceph

Ceph RGW exposes external S3 back-ends through cloud-tier storage classes attached to a placement target. Add a new point-tape storage class with the cloud-s3 tier type, then configure it to point at PAG:

# radosgw-admin zonegroup placement add --rgw-zonegroup=default \
    --placement-id=default-placement --storage-class=point-tape \
    --tier-type=cloud-s3

# radosgw-admin zonegroup placement modify --rgw-zonegroup=default \
    --placement-id=default-placement --storage-class=point-tape \
    --tier-config=endpoint=http://linux1.cephlabs.com:4080,\
access_key=9FD33A27642C45480260,\
secret="YvFLFqQD+fZF+2gwVD4hbbgYzNoo4QeUiprhh0Tv",\
target_path=cephs3tape,\
multipart_sync_threshold=44432,\
multipart_min_part_size=44432,\
retain_head_object=true,region=default,allow_read_through=true

List the placement to verify:

# radosgw-admin zonegroup placement list

Note: If you are not running Ceph multisite, restart RGW so the changes apply:

# ceph orch restart rgw.default
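
If, on the other hand, this is a multisite deployment, commit the period so the zonegroup change is propagated to all zones:

# radosgw-admin period update --commit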

Bucket Creation & Lifecycle Policy

Create a bucket called dataset:

# aws --profile tiering --endpoint https://s3.cephlabs.com \
    s3 mb s3://dataset --region default
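
The tiering profile used above is assumed to hold the keys of a regular RGW S3 user. Recent AWS CLI v2 releases also honour the AWS_ENDPOINT_URL environment variable, which lets the remaining examples omit --endpoint; the key values below are placeholders:

# cat >> ~/.aws/credentials <<'EOF'
[tiering]
aws_access_key_id = <RGW_ACCESS_KEY>
aws_secret_access_key = <RGW_SECRET_KEY>
EOF
# export AWS_ENDPOINT_URL=https://s3.cephlabs.com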

Save this lifecycle policy as point-tape-lc.json:

{
  "Rules": [
    {
      "ID": "Testing LC. move to tape after 1 day",
      "Prefix": "",
      "Status": "Enabled",
      "Transitions": [
        { "Days": 1, "StorageClass": "point-tape" }
      ]
    }
  ]
}
Apply the policy to the bucket and read it back to confirm it was stored:

# aws --profile tiering s3api put-bucket-lifecycle-configuration \
    --lifecycle-configuration file://point-tape-lc.json --bucket dataset
# aws --profile tiering s3api get-bucket-lifecycle-configuration --bucket dataset
{
  "Rules": [
    {
      "ID": "Testing LC. move to tape after 1 day",
      "Prefix": "",
      "Status": "Enabled",
      "Transitions": [
        { "Days": 1, "StorageClass": "point-tape" }
      ]
    }
  ]
}
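
Lifecycle transitions run on RGW's normal daily schedule, so a one-day rule means waiting. For testing you can lower the LC debug interval (RGW then treats each configured day as that many seconds; remove the setting afterwards) or trigger a lifecycle run manually:

# ceph config set client.rgw rgw_lc_debug_interval 60
# ceph orch restart rgw.default
# radosgw-admin lc process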

Testing the Integrated Setup

Upload a sample file and list objects:

# aws --profile tiering s3 cp 10mb_file s3://dataset/
upload: ./10mb_file to s3://dataset/10mb_file

# aws --profile tiering s3api list-objects-v2 --bucket dataset
{
  "Contents": [
    {
      "Key": "10mb_file",
      "LastModified": "2025-03-24T15:40:55.879Z",
      "ETag": "\"75821af1e9df6bbc5e8816f5b2065899-2\"",
      "Size": 10000000,
      "StorageClass": "STANDARD"
    }
  ]
}

Once the lifecycle daemon has processed the rule, the object transitions to the point-tape storage class. Because retain_head_object=true, RGW keeps only a zero-size head (stub) object locally, which is why the listing reports a Size of 0:

# radosgw-admin lc list
# aws --profile tiering s3api list-objects-v2 --bucket dataset
{
  "Contents": [
    {
      "Key": "10mb_file",
      "LastModified": "2025-03-24T15:43:02.891Z",
      "ETag": "\"75821af1e9df6bbc5e8816f5b2065899-2\"",
      "Size": 0,
      "StorageClass": "point-tape"
    }
  ]
}

Verify the object on the PAG back‑end:

# aws --profile points3 --endpoint http://linux1.cephlabs.com:4080 \
    s3api head-object --bucket cephs3tape --key dataset/10mb_file
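
You can also list the repository contents from the PAG side. Because the tier configuration uses target_path=cephs3tape, transitioned objects land in that repository under a prefix named after the source bucket, as the head-object key above shows:

# aws --profile points3 --endpoint http://linux1.cephlabs.com:4080 \
    s3 ls s3://cephs3tape/dataset/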

Object Retrieval Workflow

Temporary restore (3 days):

# aws --profile tiering s3api restore-object \
    --bucket dataset --key 10mb_file --restore-request Days=3
# aws --profile tiering s3 ls s3://dataset
2025-03-24 11:43:02   10000000 10mb_file
# aws --profile tiering s3api head-object \
    --bucket dataset --key 10mb_file
{
  "AcceptRanges": "bytes",
  "Restore": "ongoing-request=\"false\", expiry-date=\"Thu, 27 Mar 2025 15:45:25 GMT\"",
  "LastModified": "2025-03-24T15:43:02Z",
  "ContentLength": 10000000,
  "StorageClass": "point-tape"
}
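
Since the tier configuration set allow_read_through=true, a plain GET on a transitioned object should also be served by reading through to PAG without an explicit restore-object call; how quickly it returns depends on the Ceph release and on tape recall times:

# aws --profile tiering s3 cp s3://dataset/10mb_file /tmp/10mb_file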

Permanent restore:

# aws --profile tiering s3 cp 20mb_file s3://dataset/
upload: ./20mb_file to s3://dataset/20mb_file
# aws --profile tiering s3api head-object \
    --bucket dataset --key 20mb_file | grep StorageClass
"StorageClass": "point-tape"
# aws --profile tiering s3api restore-object \
    --bucket dataset --key 20mb_file --restore-request {}
# aws --profile tiering s3api head-object --bucket dataset --key 20mb_file
{
  "AcceptRanges": "bytes",
  "LastModified": "2025-03-24T15:55:10Z",
  "ContentLength": 20000000,
  "StorageClass": "STANDARD"
}
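
As an optional final check, download the permanently restored object and compare it with the original upload:

# aws --profile tiering s3 cp s3://dataset/20mb_file /tmp/20mb_file
# md5sum 20mb_file /tmp/20mb_file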

Conclusion

By deploying PoINT Archival Gateway and registering it as a point‑tape cloud tier in IBM Storage Ceph, you can transparently migrate cold data to cost‑efficient tape and restore it on demand—all through familiar S3 commands—while retaining compliance, air‑gapped protection, and operational simplicity.
