IBM Storage Defender

Early threat detection and secure data recovery

Packing Too Much in One Suitcase?

By Nilesh Bhosale posted Tue May 06, 2025 11:59 AM

  

🧳 Packing Too Much in One Suitcase?


Image Source: [Gray Line Alaska] 

Co-authors: Keigo Matsubara, Storage Technical Specialist; David Bohm, IBM Storage Protect Development 

📚 Background

Are you tired of running into storage capacity issues with your IBM Storage Protect servers? You're not alone! Many users have expressed the need for a way to seamlessly rebalance nodes between multiple instances to avoid these headaches. Currently, IBM Storage Protect lacks virtual clustering, forcing a Backup/Archive (B/A) client to be tightly coupled with a specific server instance. But don't worry, we've got a solution to help you mitigate this situation and keep your storage running smoothly. 


🔄 Rebalancing Nodes on IBM Storage Protect Servers

๐Ÿ› ๏ธ Considerations 

  • ServerA: Your primary backup server (source replication server) 

  • ServerB: The target server for node migration 

 

✅ Assumptions and Pre-requisites

  1. Identifying Nodes for Migration: Use the admin command 'QUERY OCCUPANCY' to pinpoint nodes whose migration would free meaningful storage space on ServerA.

The deduplication percentage for a node may also factor into the decision: a high deduplication percentage can indicate little value in moving the node, because chunks shared with backup objects from other nodes would remain on the source Storage Protect server even after the move. For container storage pools, the GENERATE DEDUPSTATS and QUERY DEDUPSTATS admin commands can help with this determination for a particular node. Note: these are only guidelines; the final selection of nodes for migration is up to the user.
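As a rough illustration of the triage described above (this is not a Storage Protect tool; the function name, node names, and threshold are all hypothetical), you could rank candidate nodes from exported QUERY OCCUPANCY and deduplication figures:

```python
# Illustrative sketch only: rank migration candidates from occupancy data
# exported as (node_name, physical_mb, dedup_savings_pct) tuples.
# Nodes that free the most space and share the fewest deduplicated
# chunks with other nodes score highest.

def rank_candidates(occupancy, max_dedup_pct=50.0):
    """Return (node, physical_mb) pairs sorted by reclaimable space,
    skipping nodes whose high dedup savings suggest their chunks are
    shared with other nodes and would stay on the source server."""
    candidates = [
        (node, phys_mb)
        for node, phys_mb, dedup_pct in occupancy
        if dedup_pct <= max_dedup_pct
    ]
    return sorted(candidates, key=lambda c: c[1], reverse=True)

sample = [
    ("NODE_A", 1200.0, 10.0),  # large, mostly unique data -> good candidate
    ("NODE_B", 900.0, 85.0),   # heavily deduplicated -> little space freed
    ("NODE_C", 300.0, 20.0),   # small, unique data
]
print(rank_candidates(sample))  # NODE_A first, NODE_B filtered out
```

The 50% threshold here is an arbitrary example; pick a cutoff that matches what QUERY DEDUPSTATS reports for your environment.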

  2. Backup Downtime: Users must agree to a backup-downtime window during node migration. The downtime is shorter when the delta between the two servers in the node's data and metadata is smaller.

  3. Replication Setup: Ensure replication between ServerA and ServerB is already established and runs regularly.

  4. Free Space on ServerB: ServerB must have sufficient free space to host the node's data going forward.

  5. BA Client Connection: Nodes must connect to the Storage Protect server using the Backup/Archive (BA) client. Nodes using other Storage Protect clients, such as Data Protection for Microsoft SQL Server or Data Protection for VMware, are not in scope.

  6. Replication to ServerC: Optionally, set up replication from ServerB to ServerC for data protection and redundancy.

  7. Authentication: LDAP/AD authentication is not used for node/SP client authentication.

 

๐Ÿ“ Workflow 

🔧 Preparation

  1. Lock the Node: Stop backups/restores on the node by locking it on ServerA. 

  2. SSL Certificate Check: Ensure the node can connect to ServerB without SSL certificate errors.

  3. Password Access: Verify that the node can connect with the passwordaccess generate setting in the dsm.opt/dsm.sys file.

  4. Sync Servers: Ensure ServerA and ServerB are completely synchronized.

  5. Deactivate Original STGRULE: Stop replicating data from ServerA to ServerB for all nodes.

  6. Define New Storage Rule: Create a new storage rule to replicate data for the specific node from ServerA to ServerB.

  7. Start Replication Rule: Begin the replication rule with 'forcereconcile=yes'.
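Taken together, the preparation steps might look like the following administrative session on ServerA. This is a sketch only: NODE1, REPL_ALL, and REPL_NODE1 are placeholder names, and storage-rule syntax varies by server level, so check the DEFINE STGRULE and DEFINE SUBRULE entries in the command reference for your release.

```
/* On ServerA -- NODE1, REPL_ALL, REPL_NODE1 are placeholders */
lock node NODE1                            /* stop backups/restores        */
update stgrule REPL_ALL active=no          /* pause the original rule      */
define stgrule REPL_NODE1 ServerB actiontype=replicate
                                           /* new rule scoped to the node;
                                              see DEFINE SUBRULE docs      */
replicate node NODE1 forcereconcile=yes    /* reconcile source and target  */
```

The classic REPLICATE NODE command is shown for the reconcile step because it documents FORCERECONCILE=YES; if you drive replication purely through the storage rule, consult its documentation for the equivalent option.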

 

๐Ÿ” Data Integrity and Inventory 

  1. Inventory Expiration: Perform inventory expiration on both servers for the specific node. 

  2. Object Count: Check object counts on both servers for the specific node.

Run a SELECT against the BACKUP_OBJECTS, ARCHIVE_OBJECTS, and SPACEMAN_OBJECTS tables, joined with the REPLICATED_OBJECTS table, for objects with the given NODEID to see whether any objects are still missing replication.

After inventory expiration completes on both servers, check the object count on both the source and target replication servers in the BACKUP_OBJECTS, ARCHIVE_OBJECTS, and SPACEMAN_OBJECTS tables for the specific node:

  • db2 "select count_big(*) from backup_objects where nodeid=<nodeid>"

  • db2 "select count_big(*) from archive_objects where nodeid=<nodeid>"

  • db2 "select count_big(*) from spaceman_objects where nodeid=<nodeid>"

  3. Unresolved Chunks: Ensure there are no unresolved chunks on ServerB. Use the admin command 'SHOW UNRESOLVEDCHUNKS' to check for this.

  4. Replication Groups: Check the status of in-flight replication groups. The admin command 'SHOW REPLGROUP' can be used to get this information.

  5. Retention Sets: Note that retention sets do not get replicated.
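The three counts above can be gathered in one pass with a small shell loop. This is a sketch, not a supported tool: it assumes you run it as the Db2 instance owner on each server, and the node ID placeholder must first be resolved to the node's numeric ID (for example, from the NODES table).

```
# Sketch: compare per-node object counts; run once on each server.
NODEID=<nodeid>   # placeholder -- resolve the numeric node ID first
for table in backup_objects archive_objects spaceman_objects; do
    echo "$table:"
    db2 "select count_big(*) from $table where nodeid=$NODEID"
done
```

The counts from ServerA and ServerB should match for every table before you switch the client over.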

  

🔄 Switchover Clients

  1. Update Configuration: Change the dsm.opt/dsm.sys file to point to ServerB as the primary server. 

  2. Define Schedule Associations: Set up schedule associations on ServerB to match those on ServerA.

  3. Restart Services: Restart all client services and daemons.

  4. Conduct Tests: Ensure the client operates correctly and backup/restore functions work as expected.
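On AIX and Linux clients, updating the configuration amounts to pointing the server stanza at ServerB. A minimal dsm.sys sketch follows; the stanza name, address, and port are placeholders for your environment:

```
* dsm.sys -- point the client at ServerB after migration
SErvername          serverb
   COMMMethod       TCPip
   TCPPort          1500
   TCPServeraddress serverb.example.com
   PASSWORDAccess   generate
```

On Windows, the equivalent change goes in dsm.opt. With PASSWORDACCESS GENERATE the client stores the node password locally, which is why the password-access check in the preparation steps matters before switchover.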

 

🧹 Cleanup Instructions

๐Ÿ—‘๏ธ On Source ServerA 

  1. Delete STGRULE: Remove the replication rule for the node from ServerA to ServerB. 

  2. Update Node: Set 'REPLState=disabled' for the migrated node.

  3. Remove Replnode Definition: Remove the replication node definition for the given node.

  4. Decommission Node: Decommission the node on ServerA.
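On ServerA, the cleanup might look like the following sketch, where NODE1 and REPL_NODE1 are the same placeholder names used earlier:

```
/* On ServerA */
delete stgrule REPL_NODE1              /* remove the per-node replication rule */
update node NODE1 replstate=disabled   /* stop treating the node as replicated */
remove replnode NODE1                  /* drop the replication definition      */
decommission node NODE1                /* retire the node on the source server */
```

Decommissioning marks the node's data for expiration on ServerA, so run it only after the switchover tests on ServerB have passed.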

๐Ÿ—‘๏ธ On Target ServerB 

  1. Remove Replication Relationship: Remove the replication relationship for the node associated with ServerA. 

  2. Resume Original STGRULE: Reactivate the original STGRULE on ServerA so replication for the remaining nodes resumes.
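A sketch of these final steps, again with NODE1 and REPL_ALL as placeholder names:

```
/* On ServerB: drop the replication relationship with ServerA */
remove replnode NODE1

/* Back on ServerA: resume replication for the remaining nodes */
update stgrule REPL_ALL active=yes
```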

 

🌟 Final Thoughts

By following these steps, you can effectively rebalance nodes between IBM Storage Protect servers, ensuring optimal performance and avoiding storage capacity issues. This process not only helps in managing storage more efficiently but also provides a robust mechanism for data protection and redundancy. 
