IBM releases IBM Storage Ceph 7.0
Now certified for WORM with Object Lock, alongside other new features and enhancements.
December 8, 2023
IBM has released IBM Storage Ceph version 7.0, which is now generally available.
This is a major release update, following the earlier IBM Storage Ceph release 6.1.2.
This blog post summarizes the new features, functionality, and enhancements delivered with this major release of IBM Storage Ceph.
About IBM Storage Ceph
IBM Storage Ceph is an IBM-supported enterprise distribution of the open-source Ceph platform that provides massively scalable object, block, and file storage in a single system.
Since January 2023, IBM Storage Ceph has been part of the IBM Storage software-defined storage portfolio. IBM Storage Ceph runs on industry-standard x86 server hardware and can start small and then scale into the petabyte range.
IBM also offers IBM Storage Ready Nodes for convenient implementation and use of IBM Storage Ceph.
New in this release
SEC and FINRA compliance certification for WORM with Object Lock
IBM Storage Ceph 7.0 has been certified for Object Lock, enabling WORM compliance for object storage.
The Object Lock WORM functionality was recently assessed by Cohasset Inc.
Cohasset Inc. asserts that IBM Storage Ceph, when properly configured and used with Object Lock, has functionality that meets the electronic recordkeeping system requirements of SEC Rules 17a-4(f)(2) and 18a-6(e)(2) and FINRA Rule 4511(c), as well as supports the regulated entity in its compliance with SEC Rules 17a-4(f)(3)(iii) and 18a-6(e)(3)(iii).
Additionally, the assessed functionality of IBM Storage Ceph meets the principles-based requirements of CFTC Rule 1.31(c)-(d).
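For illustration, the following is a minimal boto3 sketch of how Object Lock is typically enabled through the standard S3 API. The endpoint, credentials, bucket name, and retention periods are hypothetical placeholders, not values prescribed by IBM.

```python
# Hypothetical sketch: WORM protection via S3 Object Lock against an RGW endpoint.
# Endpoint, credentials, bucket, and retention values are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="worm-records", ObjectLockEnabledForBucket=True)

# Default retention: new objects are locked in COMPLIANCE mode for 7 years.
s3.put_object_lock_configuration(
    Bucket="worm-records",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Individual objects can also carry an explicit retain-until date.
s3.put_object(
    Bucket="worm-records",
    Key="trade-records-2023.csv",
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=3650),
)
```

In COMPLIANCE mode, the retention period cannot be shortened and locked object versions cannot be deleted or overwritten until the retain-until date passes, which is the behavior a WORM assessment relies on.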
NFS support for CephFS
CephFS file system access for non-native Ceph clients
Clients can now create, edit, and delete NFS exports from within the Ceph dashboard after configuring the Ceph file system. CephFS namespaces can be exported over the NFS protocol, using the NFS Ganesha service.
IBM Storage Ceph Linux clients can mount CephFS natively, because the CephFS driver is integrated in the Linux kernel by default.
With this new functionality, non-Linux clients can now also access CephFS over the NFS 4.1 protocol, served by the NFS Ganesha service.
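From the client side, mounting such an export uses standard NFS tooling. The following is a minimal sketch with a hypothetical NFS Ganesha endpoint, export path, and mount point; it must run with root privileges.

```python
# Hypothetical sketch: mounting a CephFS export over NFS 4.1 from a client.
# The Ganesha endpoint, export path, and mount point are placeholders.
import subprocess

subprocess.run(
    [
        "mount", "-t", "nfs",
        "-o", "vers=4.1,proto=tcp",
        "nfs-ganesha.example.com:/cephfs-export",  # NFS Ganesha service endpoint
        "/mnt/cephfs",                             # local mount point
    ],
    check=True,
)
```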
New IBM Storage Ceph dashboard functionalities
New capabilities for object-bucket-related interactions and insights.
Setup and configuration
RADOS Gateway (RGW) multi-site can now be set up and configured from within the dashboard user interface, offering more convenience without the need to go through several manual command-line setup steps.
Object bucket level interaction
RADOS Gateway (RGW) bucket level view and management:
Add and remove object bucket tags and set the ACL status to private or public, on a per-bucket basis (see the sketch below).
Labeled performance counters per user and bucket, reported into Prometheus monitoring, offering insight into user and bucket operations and statistics.
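The same per-bucket settings can also be expressed through the standard S3 API. The following is a minimal boto3 sketch; the endpoint, bucket name, and tag values are hypothetical.

```python
# Hypothetical sketch: per-bucket tagging and ACL settings via the S3 API.
import boto3

s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # placeholder endpoint

# Add tags to a single bucket.
s3.put_bucket_tagging(
    Bucket="analytics-data",
    Tagging={"TagSet": [{"Key": "team", "Value": "data-science"}]},
)

# Set the bucket ACL to private ("public-read" would make it public).
s3.put_bucket_acl(Bucket="analytics-data", ACL="private")
```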
Multi-site synchronization status
The dashboard user interface now provides visibility and insight into RADOS Gateway (RGW) synchronization status, RGW operation metrics, and client and replica traffic.
CephFS volume management
Dashboard user interface interaction for:
Creation, listing, changing options, and deletion of CephFS file systems, volumes, subvolume groups, and subvolumes (a CLI sketch of the same workflows follows after this list).
Access and encryption management options for CephFS resources.
Snapshot management: list all snapshots for a particular file system, volume, subvolume, or directory.
Create or delete a one-time snapshot and display the capacity occupied by a snapshot.
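For readers who prefer the command line, these workflows map onto the existing ceph fs subcommands. The following is a minimal sketch with placeholder names, wrapped in Python and intended to run on an admin node.

```python
# Hypothetical sketch: CLI equivalents of the dashboard volume workflows.
# Volume, group, subvolume, and snapshot names are placeholders.
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

ceph("fs", "volume", "create", "fsvol1")                    # new CephFS volume
ceph("fs", "subvolumegroup", "create", "fsvol1", "group1")  # subvolume group
ceph("fs", "subvolume", "create", "fsvol1", "subvol1",
     "--group_name", "group1")                              # subvolume
ceph("fs", "subvolume", "snapshot", "create", "fsvol1",
     "subvol1", "snap1", "--group_name", "group1")          # one-time snapshot
```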
CephFS monitoring
Health status, basic read/write throughput, and capacity utilization for file systems, volumes, and subvolumes, including historical graphs.
Object storage related updates
Object storage for machine learning/data analytics: S3 Select
GA support for three S3 Select data formats: CSV, JSON, and Parquet.
Query times are reduced, and IBM Storage Ceph keeps improving its features and integrations with market-leading analytics tools such as Presto, Trino, and other applications.
Analytical applications often use Apache Parquet, a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
IBM Storage Ceph provides improved performance for those applications by pushing S3 Select queries down to the RADOS Gateway (RGW).
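The following is a minimal boto3 sketch of such a pushed-down query; the endpoint, bucket, key, and column names are hypothetical, and the CSV input could equally be JSON or Parquet.

```python
# Hypothetical sketch: an S3 Select query executed on the RGW side.
import boto3

s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # placeholder endpoint

resp = s3.select_object_content(
    Bucket="analytics-data",
    Key="events.csv",
    ExpressionType="SQL",
    Expression=(
        "SELECT s.user_id, s.amount FROM S3Object s "
        "WHERE CAST(s.amount AS FLOAT) > 100"
    ),
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},  # or JSON / Parquet
    OutputSerialization={"CSV": {}},
)

# Only the filtered rows cross the network; the scan runs next to the data.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```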
Multi-site replication with bucket granularity
Bucket granularity means the ability to replicate a selected bucket or group of buckets to a different IBM Storage Ceph cluster.
Replication of selected buckets can be bi-directional and active-active between two sites.
Bucket granularity thus allows for selective replication, which can be useful for use cases that involve edge sites, co-location facilities, or branch offices.
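Bucket-granular replication is driven by the multisite sync-policy machinery. The following is a rough sketch using the radosgw-admin CLI wrapped in Python; the bucket and group names are placeholders, and it assumes a zonegroup-level sync policy in the "allowed" state already exists.

```python
# Hypothetical sketch: enabling active-active sync for a single bucket.
import subprocess

def rgw_admin(*args: str) -> None:
    subprocess.run(["radosgw-admin", *args], check=True)

# Create an enabled sync group scoped to one bucket...
rgw_admin("sync", "group", "create",
          "--bucket=sales-eu", "--group-id=sales-eu-group", "--status=enabled")

# ...and a pipe that replicates it between all zones, in both directions.
rgw_admin("sync", "group", "pipe", "create",
          "--bucket=sales-eu", "--group-id=sales-eu-group", "--pipe-id=pipe1",
          "--source-zones=*", "--dest-zones=*")
```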
RGW policy-based data archive and migration to public cloud
This enables object lifecycle transition to AWS-compatible S3 cloud endpoints, allowing clients to create policies and move data that meets the policy criteria to an AWS-compatible S3 bucket, for archiving, cost, and manageability reasons.
For example, a client can move data that is more than <<X>> months or years old from the on-premises IBM Storage Ceph cluster to an Amazon AWS or Microsoft Azure bucket. This process is policy-based and happens automatically once the policy thresholds are met.
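On the client side, this is expressed as an ordinary S3 lifecycle rule. The following is a minimal boto3 sketch; it assumes the administrator has already defined a cloud-tier storage class on the RGW placement target, hypothetically named CLOUDTIER here.

```python
# Hypothetical sketch: transition objects older than 365 days to a cloud tier.
import boto3

s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # placeholder endpoint

s3.put_bucket_lifecycle_configuration(
    Bucket="archive-candidates",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    # "CLOUDTIER" is a placeholder for the admin-defined
                    # cloud storage class that points at the S3 endpoint.
                    {"Days": 365, "StorageClass": "CLOUDTIER"},
                ],
            }
        ]
    },
)
```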
RGW improved multi-site performance with Object storage geo-replication
Improved performance of data replication and metadata operations.
Increased operational parallelism by optimizing and increasing the RGW daemon count, allowing for improved horizontal scalability.
RADOS core update
RADOS stands for Reliable Autonomic Distributed Object Store.
IBM Storage Ceph now supports EC 2+2 erasure-coded pools on four server nodes.
A basic cluster with four nodes can now use erasure coding in the back end; before this release, more nodes were necessary.
Erasure-coded clusters can now start with four nodes, and cluster extensions can conveniently be realized N+1, that is, with single-node extensions at a time, according to business needs.
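The following is a minimal sketch of creating such a pool with the ceph CLI, wrapped in Python; the profile and pool names are placeholders.

```python
# Hypothetical sketch: a 2+2 erasure-coded pool on a four-node cluster.
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

# k=2 data chunks + m=2 coding chunks: tolerates two failures at 50%
# storage efficiency, and needs only four hosts with a "host" failure domain.
ceph("osd", "erasure-code-profile", "set", "ec22",
     "k=2", "m=2", "crush-failure-domain=host")
ceph("osd", "pool", "create", "ecpool", "erasure", "ec22")
```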
Technology preview
The functionalities and features mentioned below are currently introduced in Technology Preview state.
These features may become generally available in future releases of IBM Storage Ceph.
About Technology Preview
Technology Preview features are not supported with IBM production service level agreements (SLAs), might not be functionally complete, and IBM does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Tech Preview: NVMe over Fabrics for block storage
A new management layer, provided by a new Ceph NVMe-oF daemon, coordinates the configuration of NVMe-oF targets across multiple IBM Storage Ceph cluster nodes. The SPDK-based implementation has no kernel dependencies.
This feature is intended to support storage for bare-metal clients.
An NVMe-oF target and initiator combination provides access to storage from the Ceph RADOS subsystem. An example use case is block storage for VMware consumption.
Clients interact through an NVMe-oF initiator and connect to an IBM Storage Ceph NVMe-oF gateway, which accepts initiator connections on its north end and connects into RADOS on the south end. Performance equals that of native RBD block storage usage.
Tech Preview: Object archive zone
The IBM Storage Ceph archive zone receives all objects from Ceph production zones.
An archive zone keeps every version of every object and provides the user with an object catalog that contains the full history of each object.
The archive zone provides immutable objects that cannot be deleted or modified from RADOS Gateway (RGW) endpoints.
The object archive zone offers the ability to recover data in production zones: any version of any object that existed on the production sites can be recovered.
In the case of data loss, ransomware, or disaster recovery, all valid versions of all objects can still be recovered easily.
This functionality is also suitable for compliance related use cases.
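Because the archive zone exposes object versions through the normal S3 API, a recovery flow can be sketched with boto3 as follows; the endpoint, bucket, and key are hypothetical.

```python
# Hypothetical sketch: retrieving an older object version from the archive zone.
import boto3

s3 = boto3.client("s3", endpoint_url="https://archive.rgw.example.com")  # placeholder

# The archive zone keeps every version, so list the full history...
versions = s3.list_object_versions(Bucket="prod-data", Prefix="reports/q3.xlsx")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

# ...and fetch a specific (here the oldest) version to restore from.
oldest = versions["Versions"][-1]
obj = s3.get_object(
    Bucket="prod-data",
    Key="reports/q3.xlsx",
    VersionId=oldest["VersionId"],
)
```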
Tech Preview: Archive zone bucket granularity
Bucket granularity allows clients to enable or disable replication to the archive zone on a per-bucket basis.
The goal of this functionality is to reduce data storage in the archive zone.
For example, a set of test/development buckets can probably be classified as non-business-critical. System administrators may then decide to disable replication to the archive zone for these types of object data buckets.
Tech Preview: NFS to RADOS Gateway (RGW) back-end
NFS with RGW back-end integration allows for object access through use of the NFS protocol.
This can be useful for easy ingestion of object data from legacy applications that do not natively support the S3 object API.
A practical use-case example is for data scientists:
NFS access offers a method to easily ingest existing business data from Windows and/or Linux clients or applications into the IBM Storage Ceph object store.
This functionality also enables an easy way to export results or digested data from analytics jobs and share them with non-S3 object clients, which can instead make use of NFS. NFS thus becomes an extended connectivity option to ingest or present S3 object data to different platforms, and it can be used concurrently, S3 next to NFS.
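The following is a minimal sketch of this dual-protocol flow, with a hypothetical NFS mount point and bucket name; it assumes the NFS export is backed by the RGW bucket shown.

```python
# Hypothetical sketch: ingest through NFS, then read the same data as an S3 object.
import boto3

# Write a result file through the NFS mount using plain file I/O...
with open("/mnt/rgw-nfs/science-data/results.parquet", "wb") as f:
    f.write(b"...analytics output...")

# ...then access the same data over S3, concurrently with NFS.
s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # placeholder endpoint
obj = s3.get_object(Bucket="science-data", Key="results.parquet")
print(obj["ContentLength"])
```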
Summary
IBM has released IBM Storage Ceph 7.0 GA, with new features and functionalities.
The main highlights of this 7.0 release are:
Security
SEC and FINRA rules compliance certification for WORM with Ceph Object Lock, assessed by Cohasset Inc.
Filesystem
NFS support for CephFS (Ceph file system).
Dashboard
Multiple new features and functionalities in the IBM Storage Ceph dashboard.
Object storage related
Object storage for machine learning/data analytics: S3 Select (CSV, JSON, and Parquet).
Multi-site replication, with bucket-level selective granularity.
RGW policy-based data archive and migration to public cloud.
Performance
RGW improved multi-site performance with Object storage geo-replication.
Efficiency
IBM Storage Ceph now supports EC 2+2 erasure-coded pools on four server nodes, with N+1 expansion capability.
This means starting with four nodes and then expanding with one node at a time, when a business need arises. Scaling can go into the petabyte range while maintaining performance. With each node expansion, CPU, memory, and NICs are added to the cluster, allowing our clients to scale linearly.
Technology Preview summary
- NVMe over Fabrics for block storage, for non-Linux clients, for example for VMware storage.
- IBM Storage Ceph archive zone: maintain immutable copies of every object, for protection from ransomware and other disasters.
- Bucket granularity for the IBM Storage Ceph archive zone: exclude non-business-critical data.
- NFS with RGW back-end integration, for object access through the NFS protocol.
IBM Storage Ceph resources
Find out more about IBM Storage Ceph
IBM Redpaper publications
Why IBM?
Because data matters.
When planning a data strategy for new or existing applications it’s easy to focus on compute resources and applications without proper planning for the data that will drive the results for the applications.
Our products are all about solving hard problems faster with data. IBM helps customers achieve business value with a clear data strategy.
Our strategy is simple: unlock data to speed innovation, de-risk data to bring business resilience, and help customers adopt value-based data to bring cost and energy efficiencies.
Value needs to be delivered by connecting organizational data sources with business drivers to create business value that matters to the organization.
Many organizations focus on a single driver with a storage solution, but the best solution is driven by an infrastructure strategy that can accomplish most if not all drivers for maximum benefits.
Our story is not just about another storage product, but is about innovation and a comprehensive storage portfolio that is helping businesses drive more value throughout the organization.
Contact IBM
https://www.ibm.com/contact