Introduction to Block Storage
A block is just a small, fixed-size piece of data, such as a 512-byte chunk. Imagine breaking a book into many small pages—each page is a "block."
When you put all the pages together, you get a complete book, just like combining many blocks creates a larger storage unit.
Where Do We Use Blocks?
Block-based storage is commonly used in devices like:
- Hard drives (your computer's storage)
- CDs/DVDs (discs you use for music or movies)
- Floppy disks (older storage devices)
- Tape drives (used in big data backup systems)
What Are Ceph RADOS Block Devices (RBD)?
Ceph RADOS Block Devices (RBDs) are a special type of storage in a Ceph cluster that works like a virtual hard drive.
Instead of storing data on a single device, it spreads the data across multiple storage nodes (called OSDs) in a Ceph cluster.
This makes it efficient, scalable, and reliable.
Why Are Ceph Block Devices Special?
Ceph Block Devices have some amazing features:
- Snapshots – Like taking a photo of your data at a moment in time, so you can restore it later (see the example after this list).
- Replication – Your data is copied multiple times to prevent loss.
- Data consistency – Ensures your data is accurate and safe.
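For example, snapshots are managed with the rbd command-line tool. A minimal sketch, where mypool, myimage, and before-upgrade are placeholder names:
# Take a point-in-time snapshot of the image
rbd snap create mypool/myimage@before-upgrade
# List the snapshots that exist for the image
rbd snap ls mypool/myimage
# Roll the image back to the snapshot if needed
rbd snap rollback mypool/myimage@before-upgrade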
How Do Ceph Block Devices Work?
They use a library called librbd to talk to storage nodes (OSDs) and manage data efficiently.
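The rbd command-line tool is itself a librbd client, so a quick way to see this in action is to create and inspect an image. A minimal sketch, where mypool and myimage are placeholder names:
# Create a 10 GiB image (the size is given in MiB)
rbd create --size 10240 mypool/myimage
# Show the image's size, object layout, and enabled features
rbd info mypool/myimage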
Who Uses Ceph Block Devices?
- Cloud computing platforms like OpenStack rely on them.
- Virtual machines (KVM/QEMU) use them for fast and scalable storage (see the sketch after this list).
- They can even work alongside Ceph’s Object Storage for a flexible storage solution.
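As a sketch of the QEMU integration, qemu-img can create and inspect disk images directly on RBD through librbd, assuming QEMU was built with RBD support and the client can reach the cluster (mypool and vmdisk are placeholder names):
# Create a 10 GiB raw disk image stored as an RBD image
qemu-img create -f raw rbd:mypool/vmdisk 10G
# Inspect the image through the rbd protocol driver
qemu-img info rbd:mypool/vmdisk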
In short, Ceph Block Devices provide fast, scalable, and reliable storage for modern computing needs, ensuring that data is always available, accurate and safe.

Ceph RBD Live Migration of Images
As a storage administrator, you have the power to seamlessly move (live-migrate) RBD images within your Ceph storage system.
Think of it like moving a file from one folder to another on your computer—but in this case, it happens in a large, distributed storage cluster.
Key Features
- Migrate between different pools or within the same pool and namespace
- Migrate between pools and namespaces across different Ceph clusters
- Support for various image formats (native, QCOW, and raw)
- Live migration of encrypted images
- Integration with external data sources such as HTTP, S3, and NBD
- Preservation of snapshot history and sparseness during migration
Note: The krbd kernel module currently does not support live migration.
What Can You Migrate?
With Ceph’s live migration feature, you can move RBD images:
✅ Between different storage pools (e.g., moving from a high-performance SSD pool to a cost-effective HDD pool).
✅ Within the same pool (e.g., reorganizing data for better management).
✅ Across different formats or layouts (e.g., upgrading from an older storage format to a newer one).
✅ From external data sources (e.g., migrating from a non-Ceph storage system into Ceph).
Live Migration: Import-Only Mode
Want to migrate data from an external source or storage provider? No problem! You can:
✅ Import data from a backup file.
✅ Pull data from a web URL (HTTP/HTTPS file).
✅ Move data from an S3 storage bucket.
✅ Connect to an NBD (Network Block Device) export.
How Does Live Migration Work?
When you start a live migration, here’s what happens behind the scenes:
🔹 Deep Copying – The system duplicates the entire image while keeping all historical snapshots.
🔹 Sparse Data Optimization – Only the actual used data is copied, saving storage space and speeding up the process.
🔹 Seamless Transition – The migration happens while the image remains usable, minimizing downtime.
🔹 Source Becomes Read-Only – The original image is locked so no new changes are made.
🔹 Automatic I/O Redirection – All applications and users automatically start using the new image without interruptions.
Why Is This Important?
🔹 Keeps data flexible – Move storage based on performance, cost, or organizational needs.
🔹 Ensures data integrity – Snapshot history and structure remain intact.
🔹 Works in real-time – Migration happens without disrupting workloads.
Step-by-Step Guide to Live Migrating Ceph RBD Images
Live migration of RBD images in Ceph allows you to move storage seamlessly between pools, namespaces, and clusters, and across different formats, with minimal downtime.
Let's break it down into three simple steps, along with the necessary commands to execute them.
🔹 Step 1: Prepare for Migration
In this step, before the migration starts, a new target image is created and linked to the source image.
✅ If import-only mode is not enabled, the source image will be marked as read-only to prevent modifications.
✅ Any attempts to read uninitialized parts of the new target image will redirect the read operation to the source image.
✅ If data is written to an uninitialized part of the target image, Ceph automatically deep-copies the corresponding blocks from the source.
Syntax:
rbd migration prepare SOURCE_POOL_NAME/SOURCE_IMAGE_NAME TARGET_POOL_NAME/TARGET_IMAGE_NAME
Example:
rbd migration prepare source_pool/source_image target_pool/target_image
To initiate an import-only live migration, run the rbd migration prepare command with --import-only and either the --source-spec or --source-spec-path option, passing a JSON document that describes how to access the source image data, either directly on the command line or from a file.
Create a JSON file:
Example
[ceph: root@rbd-client /]# cat testspec.json
{
  "type": "raw",
  "stream": {
    "type": "s3",
    "url": "https://host_ip:80/testbucket1/image.raw",
    "access_key": "Access key",
    "secret_key": "Secret Key"
  }
}
Prepare the import-only live migration process:
Syntax
rbd migration prepare --import-only --source-spec-path "JSON_FILE" TARGET_POOL_NAME/TARGET_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd migration prepare --import-only --source-spec-path "testspec.json" target_pool/target_image
We can check the status of the migration using the rbd status command:
[ceph: root@rbd-client /]# rbd status target_pool/target_image
Watchers: none
Migration:
source: {"stream":{"access_key":"RLJOCP6345BGB38YQXI5","secret_key":"oahWRB2ote2rnLy4dojYjDrsvaBADriDDgtSfk6o","type":"s3","url":"http://10.74.253.18:80/testbucket1/image.raw"},"type":"raw"}
destination: target_pool/target_image (b13865345e66)
state: prepared
🔹 Step 2: Execute Migration
Once the preparation is complete, Ceph starts deep copying all existing data from the source image to the target image.
✅ The migration runs in the background, so applications can start using the new target image immediately.
✅ Any new write operations are stored only on the target image, ensuring a seamless transition.
Syntax:
rbd migration execute TARGET_POOL_NAME/TARGET_IMAGE_NAME
Example:
rbd migration execute target_pool/target_image
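After the execute step completes, you can re-run the same rbd status command used earlier; the migration state should change from prepared to executed:
rbd status target_pool/target_image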
🔹 Step 3: Finalizing the Migration
Once the data has been fully transferred, you need to commit or abort the migration.
✅ Option 1: Commit the Migration
Committing the migration removes all links between the source and target images.
- If import-only mode was not used, the source image is automatically deleted.
- The target image becomes fully independent.
Syntax:
rbd migration commit TARGET_POOL_NAME/TARGET_IMAGE_NAME
Example:
rbd migration commit target_pool/target_image
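Once committed, the target image behaves like any other RBD image and the migration entry no longer appears in its status. A quick check with standard commands, using the names from the examples above:
# Confirm the image is listed in the target pool
rbd ls target_pool
# Inspect the now-independent target image
rbd info target_pool/target_image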
❌ Option 2: Abort the Migration
If needed, you can cancel the migration. This will:
- Remove any cross-links between the images.
- Delete the target image, restoring the source image to its previous state.
Syntax:
rbd migration abort TARGET_POOL_NAME/TARGET_IMAGE_NAME
Example:
rbd migration abort target_pool/target_image
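After an abort, the target image is removed and, for a non-import-only migration, the source image becomes writable again. A quick check using the pool names from the examples above:
# The target image should no longer be listed here
rbd ls target_pool
# The original source image is still present
rbd ls source_pool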
The following example shows migrating data from one Ceph cluster to another:
Here, c1 is Cluster 1 and c2 is Cluster 2.
[ceph: root@rbd-client /]# cat /tmp/native_spec
{
  "cluster_name": "c1",
  "type": "native",
  "pool_name": "pool1",
  "image_name": "image1",
  "snap_name": "snap1"
}
[ceph: root@rbd-client /]# rbd migration prepare --import-only --source-spec-path /tmp/native_spec c2pool1/c2image1 --cluster c2
[ceph: root@rbd-client /]# rbd migration execute c2pool1/c2image1 --cluster c2
Image migration: 100% complete...done.
[ceph: root@rbd-client /]# rbd migration commit c2pool1/c2image1 --cluster c2
Commit image migration: 100% complete...done.
Supported Image Formats
Live migration supports three primary formats:
- Native Format – Uses Ceph's internal operations for efficient migration.
The native format does not include the stream object since it utilizes native Ceph operations. For example, to import from the image rbd/ns1/image1@snap1, the source-spec could be encoded as:
Example
{
  "type": "native",
  "pool_name": "rbd",
  "pool_namespace": "ns1",
  "image_name": "image1",
  "snap_name": "snap1"
}
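Such a native source-spec can also be passed inline with the --source-spec option instead of a file. A sketch using the target names from the earlier examples:
rbd migration prepare --import-only \
  --source-spec '{"type": "native", "pool_name": "rbd", "pool_namespace": "ns1", "image_name": "image1", "snap_name": "snap1"}' \
  target_pool/target_image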
- QCOW Format – Compatible with QEMU Copy-On-Write (QCOW) disk images.
We can use the qcow format to describe a QEMU copy-on-write (QCOW) block device. Both the QCOW v1 and v2 formats are currently supported with the exception of advanced features such as compression, encryption, backing files, and external data files. You can link the qcow format data to any supported stream source:
Example
{
  "type": "qcow",
  "stream": {
    "type": "file",
    "file_path": "/mnt/image.qcow"
  }
}
- Raw Format – Used for thick-provisioned block device exports.
We can use the raw format to describe a thick-provisioned, raw block device export, that is, the output of rbd export --export-format 1 SNAP_SPEC. You can link the raw format data to any supported stream source:
Example
{
  "type": "raw",
  "stream": {
    "type": "file",
    "file_path": "/mnt/image-head.raw"
  },
  "snapshots": [
    {
      "type": "raw",
      "name": "snap1",
      "stream": {
        "type": "file",
        "file_path": "/mnt/image-snap1.raw"
      }
    }
  ]
}
The optional snapshots array lists snapshots in order from oldest to newest.
Supported Streams
Live migration supports multiple stream types for importing external data sources:
You can use the file stream to import from a locally accessible POSIX file source.
Syntax:
{
  <format unique parameters>
  "stream": {
    "type": "file",
    "file_path": "FILE_PATH"
  }
}
You can use the HTTP stream to import from a remote HTTP or HTTPS web server.
Syntax:
{
  <format unique parameters>
  "stream": {
    "type": "http",
    "url": "URL_PATH"
  }
}
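For instance, a raw image published on a web server could be imported by combining the raw format with the HTTP stream. A sketch, where the URL is a placeholder:
rbd migration prepare --import-only \
  --source-spec '{"type": "raw", "stream": {"type": "http", "url": "https://server.example.com/exports/image.raw"}}' \
  target_pool/target_image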
You can use the S3 stream to import from a remote S3 bucket.
Syntax:
{
  <format unique parameters>
  "stream": {
    "type": "s3",
    "url": "URL_PATH",
    "access_key": "ACCESS_KEY",
    "secret_key": "SECRET_KEY"
  }
}
You can use the NBD stream to import from a remote NBD export.
Syntax:
{
  <format unique parameters>
  "stream": {
    "type": "nbd",
    "uri": "NBD_URI"
  }
}
Note on NBD: the NBD URI must follow the NBD URI specification. The default NBD port is 10809.
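As a sketch, an NBD export could be imported by combining the raw format with the NBD stream; the host and export names below are placeholders:
rbd migration prepare --import-only \
  --source-spec '{"type": "raw", "stream": {"type": "nbd", "uri": "nbd://nbd-server.example.com:10809/my-export"}}' \
  target_pool/target_image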
Use Cases for RBD Live Migration
1. Disaster Recovery and Data Migration
Scenario: A customer runs mission-critical applications on a primary Ceph cluster in one data center. Due to an impending maintenance window, potential hardware failure, or a disaster event, they need to migrate RBD images to a secondary Ceph cluster in a different location.
2. Cloud Bursting and Workload Distribution
Scenario: A customer operates a private Ceph cluster for routine workloads but occasionally requires extra capacity during peak usage. By migrating RBD images to an external Ceph cluster deployed in a cloud environment, they can temporarily scale operations.
3. Data Center Migration
Scenario: A customer is migrating their infrastructure from one physical data center to another due to an upgrade, consolidation, or relocation. All RBD images from the source Ceph cluster need to be moved to a destination Ceph cluster in the new location.
4. Compliance and Data Sovereignty
Scenario: A customer must comply with local data residency regulations requiring sensitive data to be stored within specific geographic boundaries. They need to migrate RBD images from a general-purpose Ceph cluster to one dedicated to the regulated region.
5. Multi-Cluster Load Balancing
Scenario: A customer runs multiple Ceph clusters to handle high traffic workloads. To prevent overloading any single cluster, they redistribute RBD images across the clusters as workload patterns shift.
6. Dev/Test to Production Migration
Scenario: Developers run test environments on a separate Ceph cluster. After testing is complete, production-ready RBD images need to be migrated to the production Ceph cluster without data duplication or downtime.
7. Hardware Lifecycle Management
Scenario: A Ceph cluster is running on older hardware nearing the end of its lifecycle. The customer plans to migrate RBD images to a new Ceph cluster with upgraded hardware for better performance and reliability.
8. Service Provider Multi-Tenancy
Scenario: A cloud service provider uses Ceph as backend storage and needs to migrate tenant data (RBD images) between clusters for reasons such as tenant location preference, cost optimization, or internal service redistribution.
Conclusion
Live migration of RBD images in IBM Storage Ceph provides a seamless and efficient way to move storage workloads without disrupting operations.
By leveraging native Ceph operations and external stream sources, administrators can ensure smooth and flexible data migration processes.
For more details please refer to the IBM Storage Ceph Documentation.