IBM FlashSystem


FlashSystem Integration with OpenStack Cloud Platform using IBM SVf Cinder Driver

By Harsh Ailani posted 23 hours ago


Let's take a look at OpenStack!

OpenStack is an open source cloud computing platform designed to manage and automate pools of compute, storage, and networking resources in data centres.

Key Features:

  • Scalability: Easily scales up or down to accommodate varying workloads.
  • Flexibility: Offers modular architecture allowing customisation and integration with existing systems.
  • Interoperability: Supports various hypervisors, storage backends and networking technologies.
  • Open Source: Community-driven development fosters innovation and collaboration.

Use Cases:

  • Building public or private clouds for enterprises, research institutes, and service providers.
  • Providing Infrastructure as a Service (IaaS) for virtual machines, storage, and networking.
  • Supporting development and testing environments, high-performance computing and big data analytics.

Diving into Cinder, the Block Storage Service!

Cinder is the Block Storage service in OpenStack that provides persistent, reliable storage for virtual machines and applications, independent of their lifecycle. It manages the full volume lifecycle (create, attach, detach, snapshot, back up, and replicate) while integrating seamlessly with other OpenStack services such as Nova and Glance.

By abstracting complex, vendor-specific storage backends behind a unified API, Cinder offers flexibility and choice, enabling enterprises and users to deploy storage systems from vendors such as IBM, NetApp, and Pure Storage in real-world OpenStack deployments.

The Role of IBM in Powering OpenStack Cinder!

The volume management driver for the IBM Storage Virtualize family (SVf) offers various block storage services. It provides OpenStack Compute instances with access to IBM Storage Virtualize family storage products, including the SAN Volume Controller, Storwize, and FlashSystem family members built with IBM Storage Virtualize (FlashSystem 5xxx, 7xxx, and 9xxx).

The driver handles advanced features like thin provisioning, volume groups, consistency groups, replication, and high availability, while also ensuring performance optimization and data protection aligned with IBM’s enterprise storage capabilities.

The IBM SVf Cinder plug-in enables the volume operations supported by IBM Storage products:

  • Volume creation/deletion
  • Volume resize/retype/manage
  • Volume-Group creation/deletion
  • Volume and Volume-Group snapshots and clones
  • Volume and Volume-Group replication (FlashCopy and RemoteCopy Relationship)
  • Volume/host multi-attach
  • IOPS and QoS management
  • Host attachment/detachment/failover/failback

IBM Storage Virtualize family volume driver

Using the IBM SVf Cinder Plug-in

The IBM SVf Cinder plug-in acts as a translation layer between OpenStack Cinder’s standardized block storage APIs and IBM’s storage system commands. This integration enables users to seamlessly provision, attach, detach, snapshot, replicate, and manage volumes on IBM storage systems through the unified OpenStack interface.
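
The translation-layer idea can be illustrated with a small sketch: each abstract Cinder operation is rendered as a Storage Virtualize CLI command. The command names (mkvdisk, rmvdisk, expandvdisksize) are real SVf CLI verbs, but the mapping function below is a hypothetical simplification; the actual driver builds far richer commands and executes them over an SSH session to the storage system.

```python
# Illustrative sketch only: how Cinder's standardized operations could
# be rendered as IBM Storage Virtualize (SVf) CLI commands.  The real
# driver is much more elaborate; this mapping is hypothetical.

def to_svf_command(operation: str, **kwargs) -> str:
    """Translate an abstract Cinder operation into an SVf CLI string."""
    if operation == "create_volume":
        # mkvdisk creates a volume (vdisk) in a storage pool (mdiskgrp)
        return (f"mkvdisk -name {kwargs['name']} "
                f"-mdiskgrp {kwargs['pool']} -size {kwargs['size_gb']} -unit gb")
    if operation == "delete_volume":
        return f"rmvdisk {kwargs['name']}"
    if operation == "extend_volume":
        return f"expandvdisksize -size {kwargs['extra_gb']} -unit gb {kwargs['name']}"
    raise ValueError(f"unsupported operation: {operation}")

print(to_svf_command("create_volume", name="vol1", pool="Pool0", size_gb=10))
print(to_svf_command("delete_volume", name="vol1"))
```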

Before configuring the driver, ensure that the OpenStack host and the IBM storage system have proper iSCSI or Fibre Channel (FC) connectivity established via the SAN switches.

The plug-in is activated through Cinder’s standard configuration file, /etc/cinder/cinder.conf, where the storage backend parameters are defined. These include details such as login credentials, storage pool, protocol, and other relevant configuration options. Once the backend is specified, Cinder uses it as the volume service for IBM storage within the OpenStack environment.

After completing the zoning and Cinder configuration, your OpenStack deployment will be fully integrated with IBM Storage, enabling robust and efficient block storage management.
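
In a typical deployment, the configured backend is then exposed to users through a Cinder volume type whose volume_backend_name extra spec matches the backend stanza. The helper below is hypothetical; it simply prints the standard OpenStack CLI commands an operator would run for that mapping (the type name ibm-svf-fc is an example, not a required value).

```python
# Sketch: generate the standard OpenStack CLI commands that map a
# Cinder volume type to a configured backend via the
# volume_backend_name extra spec.  The helper itself is hypothetical.

def volume_type_commands(type_name: str, backend_name: str) -> list[str]:
    return [
        f"openstack volume type create {type_name}",
        f"openstack volume type set "
        f"--property volume_backend_name={backend_name} {type_name}",
    ]

for cmd in volume_type_commands("ibm-svf-fc", "Cluster_Name"):
    print(cmd)
```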

OpenStack cinder.conf configuration

Sample of IBM Storage configuration as storage backend in cinder.conf:

[Cluster_Name]
#volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
enable_unsupported_driver = True
san_ip = <Cluster_IP>
san_login = <Username>
san_password = <Password>
storwize_svc_volpool_name = <Pool_Name>
volume_backend_name = Cluster_Name
volume_group = stack-volumes-default
#storwize_svc_connection_protocol = iscsi
storwize_svc_connection_protocol = FC
storwize_svc_allow_tenant_qos = True
replication_device = san_ip:<Cluster_IP>,backend_id:<Cluster_Name>,san_login:<Username>,san_password:<Password>,pool_name:<Pool_Name>
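
A stanza like the one above can be checked with Python's standard configparser before restarting the cinder-volume service; note that the replication_device option is a flat comma-separated key:value list rather than a normal ini value. The sketch below uses placeholder values (RFC 5737 documentation IPs, a pool named Pool0) in place of real cluster credentials.

```python
# Minimal sketch: parse a cinder.conf backend stanza like the sample
# above and unpack replication_device into a dict.  All values here
# are placeholders standing in for real cluster details.
import configparser

SAMPLE = """
[Cluster_Name]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
san_ip = 192.0.2.10
san_login = admin
san_password = secret
storwize_svc_volpool_name = Pool0
volume_backend_name = Cluster_Name
replication_device = san_ip:192.0.2.20,backend_id:Cluster_B,san_login:admin,san_password:secret,pool_name:Pool0
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)
backend = parser["Cluster_Name"]

# replication_device is a "key:value,key:value" list in cinder.conf
replication = dict(item.split(":", 1)
                   for item in backend["replication_device"].split(","))

print(backend["volume_backend_name"])
print(replication["backend_id"])
```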

Configuration Restrictions

The IBM Cinder driver works smoothly when cinder.conf is aligned with the storage system's supported features, code level, and protocols; mixing pools, misusing extra specs, or running on unsupported firmware introduces restrictions.

Vendors cannot introduce custom OpenStack APIs into the OpenStack Community version to leverage their vendor-specific storage services; they are bound to utilise the APIs provided by the OpenStack Community.
