Executive summary
S3-compatible object storage is a critical component of modern IT environments and a frequent target of cyberattacks. While S3 provides strong native security controls, credential compromise remains a fundamental risk. Attackers who gain access to S3 credentials can delete or encrypt data, disable versioning, manipulate lifecycle policies, and weaken retention controls—often resulting in ransomware incidents, data loss, and prolonged outages.
Traditional S3 security mechanisms alone cannot fully protect against attacks executed with valid credentials. Organizations therefore require cyber‑resilient data protection that operates independently of S3 clients and credentials.
This document introduces an innovative approach to cyber‑resilient S3 object storage using IBM Storage Scale. The solution combines three core capabilities:
- S3-compatible object storage
- Safeguarded copies on fileset level that provide immutable, point-in-time protection
- Instant recovery through writable clones
By aligning S3 buckets with Storage Scale filesets, organizations can create bucket-level safeguarded copies. These safeguarded copies are immutable, space-efficient, and invisible to S3 clients, ensuring that even attackers with full S3 permissions cannot modify or delete data within safeguarded copies. Retention policies further guarantee protection against both malicious and accidental deletion.
In the event of a cyberattack, IBM Storage Scale enables multiple recovery options with minimal downtime. Organizations can rapidly restore access to data in read-only or read-write mode. Bucket configuration and policies can be retained by the restore process.
In summary, this solution delivers true cyber resilience for S3 object storage by combining immutable protection with fast recovery. It significantly reduces the impact of ransomware and credential-based attacks, ensures business continuity, and enables organizations to recover quickly and confidently after a cyber incident—without reliance on S3 credentials or trust in client-side operations.
Introduction to S3 object storage
S3 object storage (Simple Storage Service) is a highly scalable, durable object storage service designed to store and retrieve virtually any amount of data from anywhere. S3 was developed by Amazon and is the de facto standard object storage protocol. Unlike traditional filesystems, S3 stores data as objects in buckets.
S3 buckets are logical containers in which objects are kept and managed. Bucket names are globally unique. Access permissions, lifecycle rules, logging, and encryption settings are typically configured at the bucket level.
S3 objects are the actual pieces of data stored inside a bucket. Each object consists of data, metadata, and a unique key. Objects can range in size from a few bytes to multiple terabytes and are accessed using their bucket name and object key.
In short, S3 provides a simple, secure, and highly reliable foundation for storing and managing data at cloud scale. It supports a wide range of use cases, including backups, data lakes, static website hosting, application data, and large-scale analytics. With features like fine-grained access control, encryption at rest and in transit, lifecycle policies, and multiple storage classes for cost optimization, S3 lets organizations balance performance, security, and cost as their data needs evolve.
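As a small illustration of the addressing model (all names here are hypothetical), with path-style addressing the request URL is composed directly from endpoint, bucket name, and object key:

```shell
#!/bin/sh
# Toy sketch: objects are addressed by bucket name and object key; with
# path-style addressing the two simply form the request URL.
# All values here are hypothetical.
ENDPOINT=https://s3.example.com
BUCKET=my-bucket
KEY=backups/2026/db.dump        # keys may contain slashes ("folders")

URL="$ENDPOINT/$BUCKET/$KEY"
echo "$URL"
```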
S3 object storage architecture
As shown in figure 1, S3 follows a client–service architecture where applications (S3 client) interact with a managed, globally distributed storage service (S3 service) using a well-defined API (S3 protocol).

Figure 1: S3 object storage architecture
The S3 client is any tool, SDK, or application that interacts with the S3 service using the S3 API. The S3 client authenticates with the S3 service using credentials and constructs and sends S3 requests (such as PUT, GET, DELETE) over HTTPS. The S3 client also handles retries, multipart uploads, and error handling (often built into SDKs). The credentials used by the S3 client are also known as the access key and secret key.
The S3 protocol is a RESTful API built on standard web technologies. It uses HTTPS for secure communication and is based on standard HTTP verbs representing S3 requests. Objects are addressed using the bucket name and object key. The S3 protocol acts as the contract between clients and the S3 service.
The S3 service is the fully managed backend provided by the S3 object storage system that stores and manages data. The S3 service has a variety of functions including:
- Manages buckets, objects, metadata, and versions
- Enforces access control and security policies for buckets and objects
- Handles replication, encryption, lifecycle policies, and object locking
- Provides high availability and virtually unlimited scalability
The S3 service provides a wide variety of security features for enforcing access control, encryption, data protection, monitoring, auditing, compliance, and governance. However, credential compromise remains a fundamental risk. Attackers who gain access to S3 credentials can delete or encrypt data, disable versioning, manipulate lifecycle policies, and weaken retention controls—often resulting in ransomware incidents, data loss, and prolonged outages.
S3 cyber resilience challenges
Due to its widespread adoption, S3-compatible object storage has become an attractive target for cyberattacks. From a cyber‑resilience perspective, the primary weakness lies in credentials. S3 access relies on an access key and secret key that an S3 client uses to authenticate with the S3 service. If an attacker obtains these credentials, they can perform destructive actions depending on the permissions associated with them.
With read‑write access, an attacker can delete objects or encrypt them, effectively causing data loss or a ransomware-like event. If the compromised credentials grant full bucket permissions, the impact is even more severe. An attacker can disable bucket versioning, modify lifecycle policies to delete objects and object versions, or alter bucket configuration settings. They may also change Object Lock settings to reduce retention periods. While existing objects may retain their original retention settings, newly created objects could expire much sooner, weakening long‑term data protection.
This issue is not unique to object storage: credentials are the weak point of virtually all IT systems. Although mechanisms exist to centrally manage and protect S3 credentials—such as strong access controls, key rotation, and enforced multi‑factor authentication—their adoption is uneven. This is particularly true for on‑premises S3-compatible object storage environments, where security controls are often less mature than in public cloud platforms.
To effectively defend against modern cyber threats—particularly ransomware and credential compromise—a new approach is required. The following section introduces an innovative solution that protects objects in an immutable manner, independent of client credentials and S3 operations, and enables fast, reliable recovery in the aftermath of a cyberattack.
Safeguarded buckets with instant recovery
The solution is built on IBM Storage Scale, a high‑performance, massively parallel file system and global data platform [1]. IBM Storage Scale provides an integrated S3 object storage service that enables S3 clients to securely store and manage data. In addition, it offers safeguarded copies that capture all data at a specific point in time in an immutable manner [2].
IBM Storage Scale also supports the creation of writable clones from safeguarded copies, enabling instant data restoration and a rapid return to production operations.
By combining S3 object storage, immutable safeguarded copies, and instant recovery through writable clones, IBM Storage Scale enables S3 buckets to be effectively protected and quickly recovered following a cyberattack.
Note: In IBM Storage Scale, safeguarded copies are also referred to as immutable snapshots. The two terms are used interchangeably and describe the same underlying capability.
Solution configuration
The IBM Storage Scale S3 service stores buckets as directories and objects as files in a file system. An IBM Storage Scale file system can be further partitioned into filesets. A fileset is a directory subtree of the file system, and safeguarded copies can be created at the fileset level. Buckets can be aligned to filesets as shown in figure 2.

Figure 2: S3 bucket on IBM Storage Scale fileset with SGC
Buckets and filesets can have a 1:1 or an n:1 relation. Safeguarded copies can be created at the fileset level as shown in figure 2. A safeguarded copy in IBM Storage Scale is immutable and space-efficient: files and directories captured in the safeguarded copy cannot be changed or deleted. The safeguarded copy itself can be configured with a retention time during which it cannot be deleted.
Aligning buckets to filesets allows creating bucket-level safeguarded copies. A bucket-level safeguarded copy is not visible to the S3 client. An attacker who has captured S3 client credentials can tamper with the objects and buckets, but not with the safeguarded copy. Bucket-level safeguarded copies are the foundation for quick recovery after a cyberattack.
Creating bucket-level safeguarded copies
Referring to figure 2, S3 clients store and retrieve objects in the bucket. Safeguarded copies are created periodically or on demand at the fileset level. The fileset is aligned to a bucket.
There are different ways to create safeguarded copies with IBM Storage Scale: the command line, the REST API, or the GUI. In the examples below, a safeguarded copy named sgc1 is created for file system fs1 and fileset fset1, expiring on 2026-12-31 23:59:59. The path for fileset fset1 is /ibm/fs1/fset1. A bucket is configured on fileset fset1.
Command line:
The command must be executed on a cluster node with administrative access:
# mmcrsnapshot fs1 sgc1 -j fset1 --expiration-time 2026-12-31-23:59:59
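As a possible automation sketch (the 30-day retention and the date-stamped naming are illustrative assumptions, not product defaults), the command can be generated by a small script. The script only prints the command so it can be reviewed before execution on a cluster node:

```shell
#!/bin/sh
# Sketch: build the mmcrsnapshot command with a date-stamped snapshot name
# and a computed expiration time. FS, FSET, and RETENTION_DAYS are example
# values; GNU date is assumed for the relative date arithmetic.
FS=fs1
FSET=fset1
RETENTION_DAYS=30

SNAP="sgc-$(date +%Y%m%d-%H%M%S)"                                # unique name per run
EXPIRES=$(date -d "+${RETENTION_DAYS} days" +%Y-%m-%d-%H:%M:%S)  # mmcrsnapshot format

CMD="mmcrsnapshot $FS $SNAP -j $FSET --expiration-time $EXPIRES"
echo "$CMD"   # dry run: print the command instead of executing it
```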
REST API:
The REST API call using the curl command can be executed on any server and requires credentials for an API user in the snapAdmin role, denoted by xxxxx in the example below.
$ curl -k -X POST --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header "Authorization: Basic xxxxx" -d@sgc1.json \
"https://host/scalemgmt/v2/filesystems/fs1/filesets/fset1/snapshots"
The configuration of the safeguarded copy includes the name and expiration time and is stored in file sgc1.json:
{
"snapshotName": "sgc1",
"expirationTime": "2026-12-31-23:59:59"
}
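The payload can also be generated on the fly instead of being hard-coded. A sketch (GNU date assumed; the host and Basic auth token remain placeholders), with the actual POST left commented out so the snippet has no side effects:

```shell
#!/bin/sh
# Sketch: generate the snapshot payload with an expiration time 30 days
# ahead, then show the corresponding REST call.
EXPIRES=$(date -d "+30 days" +%Y-%m-%d-%H:%M:%S)

# Write sgc1.json; printf avoids shell quoting surprises in the JSON body.
printf '{\n  "snapshotName": "%s",\n  "expirationTime": "%s"\n}\n' \
    sgc1 "$EXPIRES" > sgc1.json
cat sgc1.json

# The POST itself (placeholder host and Basic auth token):
# curl -k -X POST --header 'Content-Type: application/json' \
#   --header "Authorization: Basic xxxxx" -d @sgc1.json \
#   "https://host/scalemgmt/v2/filesystems/fs1/filesets/fset1/snapshots"
```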
GUI
Log on to the GUI with a user in the snapAdmin role. From the left menu, select Files – Snapshots and select fileset fset1. Click Create Snapshot, select the path for fileset fset1, specify the snapshot name sgc1 and the expiration date, and click Create.

Note: The expiration time for safeguarded copies must be at least 10 minutes later than the current time and no more than 365 days in the future.
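This window can be checked before submitting a request. A minimal sketch, assuming GNU date and using a candidate expiration 30 days ahead as an example:

```shell
#!/bin/sh
# Sketch: verify a proposed expiration time is at least 10 minutes and at
# most 365 days in the future, per the safeguarded copy constraints.
EXPIRES=$(date -d "+30 days" +%Y-%m-%d-%H:%M:%S)   # example candidate

# Convert the YYYY-MM-DD-hh:mm:ss format to epoch seconds by replacing the
# dash before the hour with a space so date can parse it.
EPOCH=$(date -d "$(echo "$EXPIRES" | sed 's/-\([0-9][0-9]:\)/ \1/')" +%s)

NOW=$(date +%s)
MIN=$((NOW + 10 * 60))              # at least 10 minutes ahead
MAX=$((NOW + 365 * 24 * 3600))      # at most 365 days ahead

if [ "$EPOCH" -ge "$MIN" ] && [ "$EPOCH" -le "$MAX" ]; then
    RESULT=valid
else
    RESULT=invalid
fi
echo "$RESULT"
```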
List safeguarded copies
Once the safeguarded copy has been created at the fileset level, it can be displayed using the command line, REST API, or GUI.
Command line
# mmlssnapshot fs1 -j fset1
REST API
$ curl -k -X GET --header 'Accept: application/json' \
--header 'Authorization: Basic xxxxx' \
'https://host/scalemgmt/v2/filesystems/fs1/filesets/fset1/snapshots'
GUI
Log on to the GUI with a user in the snapAdmin role. Select Files – Snapshots and expand fileset fset1 in the table to view the available snapshots:

Scheduling safeguarded copies
Safeguarded copies can be created periodically using schedules. IBM Storage Scale has an integrated way to schedule the creation of safeguarded copies using snapshot rules. Alternatively, the creation of safeguarded copies can be automated using the IBM Storage Scale command line or REST API. A prototype project that includes scripts to create safeguarded copies can be found here [3].
Scheduling safeguarded copies using snapshot rules in the GUI is simple but can become cumbersome. Creating one safeguarded copy per day, retained for 7 days, requires seven snapshot rules, one for each day. When two safeguarded copies per day are required, each retained for 7 days, then 14 snapshot rules are needed.
To create snapshot rules, select Files – Snapshots, then select the Snapshot Rules tab. The following example shows one snapshot rule that creates a safeguarded copy at 6 AM every Sunday and keeps it for one week (7 days).

Once all snapshot rules are created, associate the snapshot rules with the fileset fset1. Navigate to Files – Snapshots, select the Snapshots tab, select fileset fset1 and under Actions select Associate Rule. Multiple rules can be selected for a fileset.
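Where GUI rules become unwieldy, the same weekly schedule can also be expressed outside the GUI, for example as a cron entry on a cluster node. This is only a sketch: the wrapper script path is hypothetical, and the script would invoke mmcrsnapshot with a 7-day expiration as shown earlier.

```shell
# Hypothetical crontab entry on a cluster node with administrative access:
# every Sunday at 06:00, create a safeguarded copy of fileset fset1 in
# file system fs1 that expires after 7 days.
0 6 * * 0  /usr/local/bin/create-sgc.sh fs1 fset1 7
```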
Restoring bucket from safeguarded copy
Once safeguarded copies have been created and are available, a bucket that was attacked can be restored from a safeguarded copy. Remember, the bucket is aligned to the fileset that has the safeguarded copies.
There are three methods with different characteristics to recover a bucket from a safeguarded copy:
- Restore the bucket on the safeguarded copy path (read-only)
- Restore the bucket from a clone of the safeguarded copy (read-write)
- Restore the original bucket by copying the data back from the safeguarded copy
The first two methods are quick because no data is copied; the data in the safeguarded copy is used immediately. The first two methods can also be used to restore the safeguarded copy into a new bucket for testing while the original bucket remains unchanged. With the third method, data is copied from the safeguarded copy path into the original bucket path. Depending on the amount of data in the original bucket, the restore operation can take minutes to several hours.
These methods are further explained in the next sections.
Restore bucket on safeguarded copy path
Configuring the S3 bucket on the safeguarded copy path is the simplest and quickest way to recover a bucket from a safeguarded copy. However, objects in the recovered bucket are read-only. This method can be used for testing and for validating that the objects are available and readable.
The original bucket can be configured on the safeguarded copy path, or a new bucket can be created. Because the resulting bucket is physically read-only, it is recommended to temporarily create a new bucket for testing.
To create a new bucket on the safeguarded copy path, first determine the name of the safeguarded copy to be used for recovery. From the name of the safeguarded copy, the safeguarded copy path can be derived. For example, when fileset fset1 in file system fs1 is configured under path /ibm/fs1/fset1, the path for safeguarded copy sgc1 is /ibm/fs1/fset1/.snapshots/sgc1/. To create a new bucket named bucket1-test on this safeguarded copy path for S3 user user1, use the command:
# mms3 bucket create bucket1-test --account-name user1 \
--filesystemPath /ibm/fs1/fset1/.snapshots/sgc1
Once the new bucket is created, it can be used by the bucket owner user1 for testing and validation.
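The safeguarded copy path used here follows a fixed convention: the fileset path, the hidden .snapshots directory, and the snapshot name. A trivial sketch using the example names from this document:

```shell
#!/bin/sh
# Sketch: derive the safeguarded copy path from the fileset path and the
# snapshot name. Values are the example names used in this document.
FSET_PATH=/ibm/fs1/fset1
SNAP=sgc1

SGC_PATH="$FSET_PATH/.snapshots/$SNAP"
echo "$SGC_PATH"    # /ibm/fs1/fset1/.snapshots/sgc1
```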
It is also possible to re-use the existing bucket by changing the path of the existing bucket to the safeguarded copy path. The advantage is that the bucket policies and configuration are retained. The disadvantage is that no objects can be written to the bucket (read-only). Assume the original and compromised bucket is named bucket1. To change the path for bucket1 execute the following command:
# mms3 bucket update bucket1 \
--filesystemPath /ibm/fs1/fset1/.snapshots/sgc1
After successfully performing this command as IBM Storage Scale administrator, the S3 clients can list and read the content using the original bucket name. Furthermore, the policies associated with the original bucket are still effective.
Restore bucket from clone of safeguarded copy
Configuring the bucket on the path of a clone of a safeguarded copy is quick, but slightly more complex. This method requires creating a clone of the safeguarded copy as shown in figure 3. A clone of a safeguarded copy is a new fileset that is configured in AFM Local Updates mode – the so-called clone fileset. The clone of a safeguarded copy is space-efficient and allows read-write access.

Figure 3: Recovery from clone bucket
When objects that exist in the safeguarded copy are read through the clone fileset, they are retrieved from the safeguarded copy and cached in the clone fileset. When new objects are written into the clone fileset, they are stored in the clone fileset only. When existing or new objects are deleted in the clone fileset, they are deleted in the clone fileset only, not in the safeguarded copy.
The file system path of the clone fileset can either be configured for the original bucket, or a new bucket is created on this path. For testing purposes, it is recommended to create a new bucket. Once testing is completed, the new bucket can be deleted, and the original bucket path can be changed to the clone fileset path.
To create a clone of a safeguarded copy, first determine the name of the safeguarded copy to be used for recovery. From the name of the safeguarded copy, the safeguarded copy path can be derived (see section Restore bucket on safeguarded copy path). In our example, the safeguarded copy sgc1 is used for recovery and is available on path /ibm/fs1/fset1/.snapshots/sgc1/.
Next, create a clone of the safeguarded copy. The clone is created in a new fileset that can reside in the same file system or in a different one. The clone is an AFM fileset that is configured in Local Updates mode and points to the path of the safeguarded copy. To create a clone fileset named fset1-clone in path /ibm/fs1/fset1-clone that points to the safeguarded copy path, perform the following CLI command:
# mmcrfileset fs1 fset1-clone --inode-space new \
-p afmTarget=gpfs://ibm/fs1/fset1/.snapshots/sgc1/ \
-p afmEnableAutoEviction=no -p afmMode=local-updates \
-J /ibm/fs1/fset1-clone
After the clone fileset fset1-clone has been created, the associated path /ibm/fs1/fset1-clone shows the content that was captured in the safeguarded copy sgc1.
In the next step, create a new bucket named bucket1-clone on the path of the clone fileset fset1-clone for S3 user user1 using the command:
# mms3 bucket create bucket1-clone --account-name user1 \
--filesystemPath /ibm/fs1/fset1-clone
Once the new bucket is created, it can be used by the bucket owner user1 for testing, validation and even for production.
It is also possible to re-use the existing bucket by changing the path of the existing bucket to the path of the clone fileset. The advantage is that the bucket policies and configuration are retained and the bucket is readable and writable. Assume the original and compromised bucket is named bucket1. To change the path for bucket1 to the clone fileset, execute the following command:
# mms3 bucket update bucket1 \
--filesystemPath /ibm/fs1/fset1-clone
After successfully performing this command as IBM Storage Scale administrator, the S3 clients can read and write objects using the original bucket name. Furthermore, the policies associated with the original bucket are still effective.
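Putting the clone-based recovery together, the sequence can be sketched as a dry run that builds and prints the two commands for review (names are the example values from this document; on a real cluster, the printed commands would be executed by the administrator):

```shell
#!/bin/sh
# Sketch: dry run of the clone-based recovery sequence. The commands are
# built as strings and printed for review instead of being executed.
# Names (fs1, fset1, sgc1, bucket1) are the document's example values.
FS=fs1
FSET=fset1
SNAP=sgc1
CLONE="${FSET}-clone"
CLONE_PATH="/ibm/$FS/$CLONE"
SGC_PATH="/ibm/$FS/$FSET/.snapshots/$SNAP"

# Step 1: create the clone fileset in AFM local-updates mode.
CMD1="mmcrfileset $FS $CLONE --inode-space new -p afmTarget=gpfs:/$SGC_PATH/ -p afmEnableAutoEviction=no -p afmMode=local-updates -J $CLONE_PATH"
# Step 2: point the compromised bucket at the clone fileset path.
CMD2="mms3 bucket update bucket1 --filesystemPath $CLONE_PATH"

echo "$CMD1"
echo "$CMD2"
```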
Useful scripts for automating the creation of safeguarded copies and clone filesets can be found here [3].
Restore original bucket
A safeguarded copy of a fileset can be restored using the IBM Storage Scale command line or GUI. Depending on the number and size of files in the safeguarded copy, the restore can take from a few minutes to multiple hours.
During the restore of a safeguarded copy, the files from the safeguarded copy path are copied into the fileset path. Files that exist in the fileset path but not in the safeguarded copy path are deleted. Hence the restore returns the fileset to the state it was in when the safeguarded copy was created.
To restore the fileset fset1 from the safeguarded copy with name sgc1 perform the following command:
# mmrestorefs fs1 sgc1 -j fset1
After the restore completes, the S3 client can see the files from the safeguarded copy sgc1 in bucket1.
Appendix
References
[1] IBM Storage Scale
https://www.ibm.com/solutions/ai-storage
[2] IBM Storage Scale safeguarded copies
https://www.ibm.com/docs/en/storage-scale/6.0.0?topic=administering-protecting-file-data-storage-scale-safeguarded-copy
[3] Automation scripts for creating safeguarded copies and clones (IBM Internal)
https://github.ibm.com/IBM-Client-Engineering-EMEA/snap-clone