IBM Storage Ceph

SMB Meets Ceph: Rate Limiting (QoS)

By Mohit Bisht posted Mon January 26, 2026 11:35 PM


Introduction 

In modern enterprise environments, file storage is rarely used by a single workload or team. SMB deployments often serve a mix of users, applications, analytics jobs, and backup workloads, each with different performance expectations. Ensuring predictable performance in such shared environments is a constant challenge. 

To address this, IBM Storage Ceph 9.0 introduces SMB Rate Limiting (Quality of Service – QoS). This feature enables administrators to control SMB bandwidth usage, prevent noisy-neighbor scenarios, and deliver consistent performance across workloads. 

In this blog, we’ll walk through the motivation behind SMB QoS, the problems it solves, its high-level architecture, and how it works in Ceph 9.0. 

The Problem: Uncontrolled SMB Workloads 

Without QoS, SMB workloads operate on a first-come, first-served basis. This can lead to several issues: 

  • A single heavy workload, such as a backup or large file copy, can consume most of the available bandwidth 

  • Other users may experience degraded performance or timeouts 

  • Performance becomes unpredictable and difficult to troubleshoot 

  • Meeting SLAs in multi-tenant environments becomes challenging 

As SMB adoption grows in enterprise and cloud deployments, these limitations become more visible and impactful. 

What Is SMB Rate Limiting (QoS)? 

SMB Rate Limiting, or QoS, allows administrators to control the bandwidth consumed by SMB shares. By enforcing limits at the share level, Ceph ensures that no single workload can overwhelm the system. 

With SMB QoS, administrators gain: 

  • Better control over SMB traffic 

  • Predictable and consistent performance 

  • Protection for critical workloads 

  • Improved overall cluster stability 

This capability is natively integrated into IBM Storage Ceph 9.0, eliminating the need for external traffic-shaping tools or complex per-node tuning. 
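While the enforcement details live inside the SMB service, the behavior of a bandwidth limit can be pictured as a per-share token bucket: each I/O consumes tokens, tokens refill at the configured rate, and requests that would exceed the rate are briefly delayed, up to a maximum delay. The following is a minimal, hypothetical Python sketch of that idea, for intuition only; it is not Ceph's actual implementation.

```python
import time

class TokenBucket:
    """Conceptual per-share rate limiter: tokens refill at `rate` per second,
    and a request is delayed until enough tokens are available, capped at
    `max_delay` seconds. Illustration only, not Ceph's code."""

    def __init__(self, rate: float, burst: float, max_delay: float):
        self.rate = rate            # e.g. bytes/sec for a bandwidth limit
        self.burst = burst          # maximum tokens the bucket can hold
        self.max_delay = max_delay  # cap on how long one request may wait
        self.tokens = burst
        self.last = time.monotonic()

    def throttle(self, cost: float) -> float:
        """Return the delay (seconds) applied to a request of size `cost`."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return 0.0
        needed = (cost - self.tokens) / self.rate
        self.tokens -= cost  # goes negative; later requests pay off the debt
        return min(needed, self.max_delay)

# A "1 MiB/s" limiter: the first 1 MiB fits in the burst, a following
# 2 MiB request exceeds the rate and gets delayed.
bucket = TokenBucket(rate=1048576, burst=1048576, max_delay=5.0)
print(bucket.throttle(1048576))          # -> 0.0
print(bucket.throttle(2 * 1048576) > 0)  # -> True
```

The `max_delay` cap mirrors the `read_delay_max`/`write_delay_max` parameters shown later: rather than queuing a request indefinitely, the limiter bounds how long any single operation can be held back.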

Goals of SMB QoS in Ceph 9.0 

The primary goals behind introducing SMB Rate Limiting in Ceph 9.0 are: 

  • Workload isolation – Prevent one SMB workload from impacting others 

  • Predictable performance – Ensure consistent throughput and response times 

  • Enterprise readiness – Support multi-tenant and mixed workload environments 

  • Simplified management – Centralized configuration and enforcement 

These goals align with real-world enterprise use cases where SMB is used for diverse and performance-sensitive workloads. 

Workflow Overview: SMB QoS in Ceph 9.0 

The SMB QoS workflow in Ceph 9.0 is straightforward: 

  1. A storage administrator creates an SMB cluster and one or more SMB shares 

  2. QoS values are configured per share using either: 

      • Imperative CLI commands, or 

      • Declarative spec-based configuration 

  3. The updated QoS configuration is applied dynamically 

  4. SMB client traffic to that share is rate-limited based on the configured values 

Currently, QoS configuration is supported per SMB share, providing fine-grained control over bandwidth usage. 

Updating QoS Values 

Supported QoS parameters: 

  • read_iops_limit / write_iops_limit – maximum read and write operations per second 

  • read_bw_limit / write_bw_limit – maximum read and write bandwidth, specified in bytes (the example values 1048576 and 2097152 are 1 MiB and 2 MiB) 

  • read_delay_max / write_delay_max – maximum delay that may be applied to throttled read and write operations 

Imperative-style CLI command:  

ceph smb share update <cephfs_volume> qos <cluster_id> <share_id> \ 
  --read-iops-limit=100 \ 
  --write-iops-limit=200 \ 
  --read-bw-limit=1048576 \ 
  --write-bw-limit=2097152 \ 
  --read-delay-max=5 \ 
  --write-delay-max=5 

Example:  

# ceph smb share update cephfs qos smb1 share1 \ 
  --read_iops_limit 100 \ 
  --write_iops_limit 100 \ 
  --read_bw_limit 1048576 \ 
  --write_bw_limit 2097152 
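The bandwidth-limit values above are plain byte counts. A tiny helper (hypothetical, purely for computing flag values) makes the intent of numbers like 1048576 obvious:

```python
def mib_per_sec(mib: int) -> int:
    """Convert a MiB/s figure to the byte value passed to the bw-limit
    flags (assuming byte units, as the example values suggest)."""
    return mib * 1024 * 1024

print(mib_per_sec(1))  # 1048576 (the read_bw_limit in the example)
print(mib_per_sec(2))  # 2097152 (the write_bw_limit in the example)
```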

Declarative-Style Spec File 

- resource_type: ceph.smb.cluster
  cluster_id: smb1
  auth_mode: user
  user_group_settings:
  - source_type: resource
    ref: ug1
  placement:
    label: smb
- resource_type: ceph.smb.usersgroups
  users_groups_id: ug1
  values:
    users:
    - name: user1
      password: passwd
    groups: []
- resource_type: ceph.smb.share
  cluster_id: smb1
  share_id: share1
  cephfs:
    volume: cephfs
    subvolumegroup: smb
    subvolume: sv1
    path: /
    qos:
      read_iops_limit: 300
      write_iops_limit: 300
      read_bw_limit: 1048576
      write_bw_limit: 2097152
      read_delay_max: 30
      write_delay_max: 30

# ceph smb apply -i /tmp/tmpg1n4547q.yaml 

Disabling QoS Values 

Setting all limits to 0 disables QoS for the share. 

Imperative-style CLI method: 

ceph smb share update cephfs qos <cluster_id> <share_id> \ 
  --read-iops-limit=0 \ 
  --write-iops-limit=0 \ 
  --read-bw-limit=0 \ 
  --write-bw-limit=0 \ 
  --read-delay-max=0 \ 
  --write-delay-max=0 

Example: 

# ceph smb share update cephfs qos foo bar \ 
  --read-iops-limit=0 \ 
  --write-iops-limit=0 \ 
  --read-bw-limit=0 \ 
  --write-bw-limit=0 \ 
  --read-delay-max=0 \ 
  --write-delay-max=0 

Declarative-Style Spec File 

- resource_type: ceph.smb.share
  cluster_id: smb1
  share_id: share1
  cephfs:
    volume: cephfs
    subvolumegroup: smb
    subvolume: sv1
    path: /
    qos:
      read_iops_limit: 0
      write_iops_limit: 0
      read_bw_limit: 0
      write_bw_limit: 0
      read_delay_max: 0
      write_delay_max: 0

 

Key Benefits and Takeaways 

SMB Rate Limiting in IBM Storage Ceph 9.0 delivers several key benefits: 

  • Predictable SMB performance 

  • Effective workload isolation 

  • Enterprise-grade bandwidth control 

  • Simplified and centralized management 

By integrating QoS directly into the SMB service, Ceph 9.0 makes SMB deployments more robust, scalable, and production-ready. 

Conclusion 

As enterprise environments continue to grow and diversify, controlling SMB workload behavior becomes essential. SMB Rate Limiting (QoS) in IBM Storage Ceph 9.0 addresses this need by providing native, share-level bandwidth control that is easy to configure and manage. 

With this feature, organizations can confidently deploy SMB in multi-tenant and mixed workload environments while maintaining predictable performance and operational simplicity. 

 
