We're excited to launch the IBM Storage Ceph Workshop (SSCH1DG), a paid, self-paced, hands-on technical training course that helps storage administrators plan, deploy, configure, and manage IBM Storage Ceph in enterprise environments. The course combines lectures (in a format similar to the now-retired Red Hat Ceph course) with live lab exercises that help participants build real-world Ceph skills across object, block, and file storage.
Duration: Approximately 3 days (self-paced), with 14-day lab access
Badge: IBM Storage Ceph Administrator Badge
Format: Lectures, quizzes, and hands-on labs
This workshop is open to all learners, but it is designed primarily for storage administrators, engineers, and consultants who want to master IBM Storage Ceph deployment and management.
Course Overview
This workshop provides comprehensive coverage of Ceph cluster architecture, deployment, configuration, and performance tuning. Participants learn to integrate Ceph into production environments and use it for scalable object, block, and file storage solutions. Specifically, participants learn to:
- Understand Ceph architecture, components, and interfaces
- Deploy and expand IBM Storage Ceph clusters with cephadm
- Manage Ceph via dashboard, CLI, and service specifications
- Configure storage pools, OSDs, and CephX authentication
- Implement RADOS Gateway (object), RBD (block), and CephFS (file) storage
- Optimize, tune, and troubleshoot Ceph cluster performance
- Integrate Ceph with Red Hat OpenStack and OpenShift
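As a taste of the CLI-driven management style the labs emphasize, the following read-only status queries are standard Ceph and orchestrator commands (shown here purely as an illustrative sketch; output varies by cluster):

```
ceph -s          # cluster status: health, monitors, OSDs, placement groups
ceph orch ls     # services managed by the cephadm orchestrator
ceph orch ps     # individual daemons and the hosts they run on
```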
Audience
This workshop is intended for:
- Clients - Storage administrators and engineers
- IBM and Business Partner technical personnel
- IT consultants and infrastructure architects
Prerequisites
- Basic Linux administration skills
- Familiarity with storage and networking concepts
- Experience with virtualization or cloud environments (helpful but not required)
Course Agenda
Part 1
- Unit 0: Course Introduction
- Unit 1: Introduction to IBM Storage Ceph
- Unit 2: Deploying IBM Storage Ceph
- Exercise 1: Deploying IBM Storage Ceph
- Exercise 2: Expanding the Ceph Cluster
Part 2
- Unit 3: Managing IBM Storage Ceph
- Exercises 3a–3c: Managing with Dashboard, cephadm, and Service Specs
- Unit 4: Configuring Storage in Ceph
- Exercise 4: Managing Ceph Storage and Device Classes
- Unit 5: Object Storage with Ceph
- Exercise 5: Ceph Object Storage Gateway (RGW)
Part 3
- Unit 6: Block Storage with Ceph
- Exercise 6: Ceph Block Storage (RBD) and NVMe
- Unit 7: File Storage with CephFS
- Exercise 7: Ceph File System (CephFS)
- Unit 8: Managing Data Protection and CRUSH Map
- Exercise 8: Managing Ceph Data Protection and CRUSH Map
- Unit 9: Optimizing, Tuning, and Troubleshooting Ceph
- Exercise 9: Optimizing, Tuning, and Troubleshooting Ceph
- Unit 10: Red Hat OpenStack and OpenShift Integration
Unit Overview
Unit 1: Introduction to IBM Storage Ceph
- Introduction to IBM Storage Ceph
- Ceph storage architecture
- Architecture in depth
- Getting started with labs (Video)
- Exercise 1: Deploying IBM Storage Ceph
Unit 2: Deploying IBM Storage Ceph
- Plan and deploy a cluster using cephadm
- Expand existing cluster capacity
- Exercise 2: Expanding the Ceph Cluster
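For a sense of what the deployment and expansion labs cover, here is a minimal sketch using standard cephadm and orchestrator commands; the hostnames and IP addresses below are placeholders:

```
# Bootstrap a new cluster on the first node.
cephadm bootstrap --mon-ip 192.0.2.10

# Expand: distribute the cluster's SSH key, then add a second host
# and let the orchestrator turn its free disks into OSDs.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
ceph orch host add node2 192.0.2.11
ceph orch apply osd --all-available-devices
```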
Unit 3: Managing IBM Storage Ceph
- Ceph Dashboard
- cephadm command-line interface
- Ceph Orchestrator service specifications
- Exercises 3a–3c: Managing with Dashboard, cephadm, and Service Specs
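Service specifications let you declare the desired state of a service and leave daemon placement to the orchestrator. A minimal sketch, assuming three hosts named node1 through node3:

```
# Declare where monitors should run, then hand the spec to the orchestrator,
# which reconciles running daemons to match it.
cat <<'EOF' > mon-spec.yaml
service_type: mon
placement:
  hosts:
    - node1
    - node2
    - node3
EOF
ceph orch apply -i mon-spec.yaml
```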
Unit 4: Configuring Storage in Ceph
- Ceph storage devices
- Creating and configuring pools
- Managing Ceph authentication
- Exercise 4: Managing Storage and Device Classes
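The pool and authentication topics above translate into commands like these; the pool and client names are placeholders:

```
# Create a replicated pool with 32 placement groups and three replicas.
ceph osd pool create apppool 32
ceph osd pool set apppool size 3
ceph osd pool application enable apppool rbd

# Issue a CephX key whose OSD access is scoped to that one pool.
ceph auth get-or-create client.app mon 'allow r' osd 'allow rw pool=apppool'
```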
Unit 5: Object Storage with Ceph
- Deploy Ceph Object Storage components
- Deploying the Ceph RADOS Gateway
- Gateway options and Beast frontend
- Exercise 5: Ceph Object Storage Gateway (RGW)
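Recent RGW releases serve HTTP through the Beast frontend by default, and deployment goes through the orchestrator. A brief sketch; the service ID and user details are placeholders:

```
# Run two RGW daemons, then create an S3 user and show its access keys.
ceph orch apply rgw myrgw --placement="2"
radosgw-admin user create --uid=demo --display-name="Demo User"
radosgw-admin user info --uid=demo
```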
Unit 6: Block Storage with Ceph
- Managing RADOS Block Devices
- Introducing NVMe over Fabrics (NVMe/TCP)
- Deploying NVMe/TCP and client experience
- Exercise 6: Ceph Block Storage (RBD) and NVMe
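A minimal RBD workflow, assuming a fresh pool named rbdpool and a client with the kernel RBD driver available:

```
# Create and initialize a pool for block images.
ceph osd pool create rbdpool 32
rbd pool init rbdpool

# Create a 10 GiB image and map it; the kernel exposes it as /dev/rbdX.
rbd create rbdpool/disk1 --size 10G
rbd map rbdpool/disk1
```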
Unit 7: File Storage with CephFS
- Introduction and deployment
- NFS client access
- CephFS and MDS deeper dive
- CephFS in the Ceph Dashboard (screen capture)
- Exercise 7: Ceph File System (CephFS)
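Creating and mounting a filesystem can be sketched as follows; the filesystem name, monitor address, and client name are placeholders, and the client is assumed to have a matching keyring in /etc/ceph:

```
# Create a filesystem; the volume subcommand also schedules MDS daemons.
ceph fs volume create demofs

# Mount it with the kernel client (fs= selects the filesystem by name;
# older kernels use the mds_namespace option instead).
mount -t ceph 192.0.2.10:6789:/ /mnt/demofs -o name=demo,fs=demofs
```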
Unit 8: Managing Ceph Data Protection and CRUSH Map
- Ceph Placement Groups
- CRUSH algorithm
- Managing OSD map
- Exercise 8: Managing Data Protection and CRUSH Map
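CRUSH rules decide where replicas land. This sketch creates a rule that spreads replicas across hosts on SSD-class devices and assigns it to an existing pool (the rule and pool names are placeholders):

```
ceph osd crush rule create-replicated fast default host ssd
ceph osd pool set apppool crush_rule fast

# Inspect placement groups and the CRUSH hierarchy.
ceph pg stat
ceph osd crush tree
```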
Unit 9: Optimizing, Tuning, and Troubleshooting Ceph
- Performance overview and recommended practices
- Designing Ceph cluster best practices
- Tuning with performance tools
- Exercise 9: Optimizing, Tuning, and Troubleshooting Ceph
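Typical first-look commands when a cluster misbehaves or needs tuning, all standard Ceph CLI:

```
ceph health detail     # what exactly is unhealthy, and where
ceph osd df tree       # per-OSD and per-host capacity and balance
ceph osd perf          # commit/apply latency for each OSD
ceph tell osd.0 bench  # quick synthetic write benchmark on one OSD
```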
Unit 10: Red Hat OpenStack and OpenShift Integration
- OpenStack storage architecture
- Integrating Ceph in OpenStack
- Implementing OpenShift storage architecture
- Related courses
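On the OpenStack side, Cinder consumes Ceph through its RBD driver. A minimal sketch of a backend section in cinder.conf, assuming a pool named volumes and a CephX user named cinder (the secret UUID is a placeholder that would be registered with libvirt):

```
# Append an RBD backend section to Cinder's configuration.
cat >> /etc/cinder/cinder.conf <<'EOF'
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
EOF
```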
Badge quiz
Badge Description
This credential earner has successfully completed SSCH1DG (IBM Storage Ceph Workshop), demonstrating foundational skills in planning, deploying, and managing IBM Storage Ceph environments. Skills validated include configuring and managing clusters, implementing object, block, and file storage, applying data protection, tuning performance, and integrating with OpenStack and OpenShift.
Enrollment
Enrollment is open to clients, IBM Business Partners, and IBMers. Visit the IBM Training catalog and search for SSCH1DG to enroll in the IBM Storage Ceph Workshop.
Direct link: