SAP HANA system replication is a mechanism that ensures the high availability of an SAP HANA system. System replication is SAP's recommended configuration for reducing SAP HANA outages caused by planned maintenance, faults, and disasters.
- SAP HANA system replication (HSR) is configured as an active-passive cluster by using the Red Hat High Availability Add-On, which creates and maintains Pacemaker clusters.
- The primary node in the cluster automatically owns the cluster virtual IP address.
- Use the HSR virtual IP address or DNS name for registering with Defender CDM.
- Additionally, register the IP addresses or DNS names of the individual nodes participating in the cluster. This is required when performing IDR, and in case the virtual IP address becomes unavailable.
- The virtual IP address or name is used during all other workflows: Inventory, Backup, and IDBR restore jobs.
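Before registering the cluster, it can be useful to confirm which node currently owns the virtual IP. The following is a hedged sketch only; the resource name `vip_HANA` is an assumption, so substitute the IPaddr2 resource ID from your own Pacemaker configuration. Commands are printed by default; set EXECUTE=1 on a cluster node to actually run them.

```shell
# Sketch: check which cluster node currently runs the virtual IP resource.
# "vip_HANA" is a hypothetical resource name -- use the ID from `pcs status`.
VIP_RESOURCE="vip_HANA"
run() { if [ "${EXECUTE:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }
run pcs resource status "$VIP_RESOURCE"   # shows the node running the VIP resource
run ip -4 addr show                       # on that node, the VIP appears in this list
```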
Registering SAP HANA Cluster
To register an SAP HANA application server, complete the following steps:
- Click the Configure tab. On the Views pane, select Sites & Providers, then select the Providers tab.
- In the Provider Browser pane, select Application Server.
- Right-click Application Server. Then click Register. The Register Application Server dialog opens.
- Select SAP HANA as the Application Type.
- Populate the fields in the dialog:
Site
A user-defined provider location, created in the Sites & Providers view on the Configure tab.
Name
A user-defined name for the SAP HANA server. This can be the same as the host name or it can be a meaningful name that is used within your organization to refer to the provider. Provider names must be unique.
Host Address
A resolvable IP address, or a resolvable path and machine name. When registering an SAP HSR cluster, register its virtual IP address or name, and also register all nodes that participate in the cluster.
Port
The communications port of the provider you are adding. The format for the port number is 3<instance number>15. So, for example, if the instance number is 07, then enter the following port number: 30715.
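The 3&lt;instance number&gt;15 pattern can be sketched as a simple string substitution; the instance number 07 here matches the example above.

```shell
# Build the provider port from a two-digit SAP HANA instance number.
INSTANCE="07"                 # example instance number
PORT="3${INSTANCE}15"         # pattern: 3<instance number>15
echo "$PORT"                  # prints 30715
```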
System Credential / Database Credential(s)
Select or create your SAP HANA operating system and database credentials. See Identities overview if you want to define users before registering the provider.
System Credentials
Select or create the operating system credentials of the SAP HANA database host. The operating system user must have passwordless sudo privileges to execute commands as the sXXadm user. For more information about sudo options, see SAP HANA requirements.
Key
IBM Storage Defender Copy Data Management supports SSH key-based operating system authentication for SAP HANA database servers. This option allows users to authenticate to the SAP HANA application host by using an SSH key.
Important: The SAP HANA registration with IBM Storage Defender Copy Data Management only works when the SSH key pair is generated by using the ssh-keygen -t rsa -m PEM command.
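As a hedged illustration of the required key format, the pair can be generated as follows. The output path is an example; after generating, distribute the public key to the HANA hosts (for example with ssh-copy-id).

```shell
# Generate a PEM-format RSA key pair, as required for CDM registration.
# The key path is an example; -N "" creates the key without a passphrase.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -m PEM -N "" -f "$KEYDIR/cdm_hana_rsa" -q
head -n 1 "$KEYDIR/cdm_hana_rsa"   # a PEM-format key starts with the RSA PRIVATE KEY header
```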
Database Credentials
The database credentials can be either for the SYSTEM user or for a normal user that exists in both SYSTEMDB and the SXX tenant database with the same username, password, and appropriate permissions. For more information on creating a user by using the HDBSQL command-line interface, see SAP HANA requirements.
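A minimal illustrative sketch of such a user follows; the username, password, and granted privileges are assumptions for illustration only, and the statements must be run in both SYSTEMDB and the tenant database. Consult SAP HANA requirements for the exact privileges CDM needs.

```sql
-- Hypothetical example user and password; run in SYSTEMDB and in the tenant.
CREATE USER CDM_BACKUP PASSWORD "Example#Pass1" NO FORCE_FIRST_PASSWORD_CHANGE;
-- Assumed privileges for backup and catalog access; adjust per SAP HANA requirements.
GRANT BACKUP ADMIN, CATALOG READ TO CDM_BACKUP;
```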
- Click OK. IBM Storage Defender Copy Data Management first confirms a network connection and then adds the provider to the database.
Inventory SAP HANA Cluster
Once registered, Defender CDM creates a high-level Inventory job and automatically catalogs the objects on the provider. Note that the Inventory job may take considerable time to complete.
Creating SAP HANA Cluster Backup job definition
- Click the Jobs tab. Expand the Database folder, then select SAP HANA.
- Click New, then select Backup. The job editor opens.
- Enter a name for your job definition and a meaningful description.
- From the list of available sites select one or more resources to back up.
Tip: You cannot select a database if it is not eligible for protection. Hover your cursor over the database name to view the reasons the database is ineligible, such as the database files, control files, or redo log files are stored on unsupported storage.
Note: When backing up a database on an HSR cluster, the snapshot backup is completed by using the current primary system only.
- Select an SLA Policy that meets your backup data criteria.
Tip: For Sentinel scanning capabilities with IBM Storage FlashSystem, the SLA policy must be Safeguarded Copy, and you must select a previously registered Security Scan server.
- Click the job definition's associated Schedule Time field and select Enable Schedule to set a time to run the SLA Policy. If a schedule is not enabled, run the job on demand through the Jobs tab. Select only one SLA policy from the list of available SLA policies.
- To create the job definition using default options, click Create Job. The job runs as defined by your triggers, or can be run manually from the Jobs tab.
- To edit options before creating the job definition, click Advanced. Set the job definition options.
- Optionally, expand the Notification section to select the job notification options.
- To edit Log Backup options before creating the job definition, click Log Backup. If Backup Logs is selected, IBM Storage Defender Copy Data Management backs up database logs then protects the underlying disks. Select resources in the Select resource(s) to add archive log destination field. Database logs are backed up to the directory entered in the Use Universal Destination Mount Point field, or in the Mount Point field after resources are selected. The destination must already exist, must reside on storage from a supported vendor, and the SAP HANA user needs to have full read and write access. For more information, see the SAP HANA requirements.
If multiple databases are selected for backup, then each of the servers hosting the databases must have their Destination Mount Points set individually. For example, if two databases, one from Server A and one from Server B, are added to the same job definition, and a single mount point named /logbackup is defined in the job definition, then you must create separate disks for each server and mount them both to /logbackup on the individual servers. When the mount point is changed, you must manually go in and clean up the previous log backup directory path.
To disable a log backup schedule on the SAP HANA server, edit the associated SAP HANA Backup job definition and deselect the checkbox next to the database on which you wish to disable the log backup schedule in the Select resource(s) for log backup destination field, then save and re-run the job. When the mount point is disabled, you must manually go in and clean up the log backup directory path.
Tip:
The job definition must be saved and re-run for mount point changes or disablement to take effect.
The default setting for pruning SAP HANA log backups is 7 days. This value may be adjusted in the property file located in /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.serviceprovider.properties. Modify the application.logpurge.days parameter to the desired value. Finally, restart the virgo service by issuing the following command:
systemctl restart virgo.service
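The property change can be sketched as follows. This is demonstrated on a scratch copy of the file; on a real CDM appliance, point PROPS at /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.serviceprovider.properties and restart the virgo service afterwards. The value 14 is an example.

```shell
# Sketch: raise the log-backup pruning window from the default 7 days to 14.
# A temporary stand-in file is used here instead of the real properties file.
PROPS="$(mktemp)"
echo "application.logpurge.days=7" > "$PROPS"
sed -i 's/^application\.logpurge\.days=.*/application.logpurge.days=14/' "$PROPS"
grep '^application\.logpurge\.days=' "$PROPS"   # prints application.logpurge.days=14
```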
- When you are satisfied that the job-specific information is correct, click Create Job. The job runs as defined by your triggers, or can be run manually from the Jobs tab.
Creating SAP HANA Cluster Restore job definition
- Click the Jobs tab. Expand the Database folder, then select SAP HANA.
- Click New, then select Restore. The job editor opens.
- Enter a name for your job definition and a meaningful description.
- Select a template. Available options include Instant Database Restore and Instant Disk Restore.
- Click Source. From the drop-down menu select Application Browse to select a source site and an application server to view available database recovery points. Select resources, and change the order in which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Application Search from the drop-down menu to search for application servers with available recovery points. Add copies to the job definition by clicking Add. Change the order in which the resources are recovered by dragging and dropping the resources in the grid.
Note: In Dell PowerMax array-based restore jobs, users can select either the local or the remote copy as the recovery point.
Defender CDM does not support in-place restore from a remote copy on Dell PowerMax.
- Click Copy. Sites containing copies of the selected data display. Select a site. By default the latest copy of your data is used. To choose a specific version, select a site and click Select Version. Click the Version field to view specific copies and their associated job and completion time. If recovery from one snapshot fails, another copy from the same site is used.
- Click Destination. Select a source site and an associated destination. Review the destination's database name mapping settings.
Note: The Restore (Instant Disk and Instant Database) operations for SAP HANA are not supported on an alternate host. You need to restore it on the same source host.
To restore a database on an HSR cluster, suspend SAP HANA replication and the Pacemaker service so that no automatic failover is triggered.
- To create the job definition using default options, click Create Job. The job can be run manually from the Jobs tab.
- To edit options before creating the job definition, click Advanced. Set the following job definition options.
Application Options
Policy Options
- Continue with next source on failure – Toggle the recovery of a resource in a series if the previous resource recovery fails. If unselected, the Restore job stops if the recovery of a resource fails.
- Automatically clean up resources on failure – Enable to automatically clean up allocated resources as part of a restore if the database recovery fails.
- Allow to overwrite and force clean up of pending old sessions – Enabling this option allows a scheduled session of a recovery job to force an existing pending session to clean up associated resources so the new session can run. Disable this option to keep an existing test environment running without being cleaned up.
- Allow to overwrite vDisk – In cases where the Make Permanent option is enabled, and the destination VM has conflicting VMDK files, enable the Allow to overwrite vDisk option to delete the existing VMDK and overwrite it with the selected source.
- Job-Level Scripts – Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-level. A script can consist of one or many commands, such as a shell script for Linux based virtual machines or Batch and PowerShell scripts for Windows based virtual machines.
Storage Options
- Make Permanent – Set the default permanent restoration action of the job. All database recovery operations can leverage Instant or Test modes and then either be deleted or promoted to permanent mode. This behavior is controlled through the Make Permanent option.
- Enabled – Always make permanent through full copy FlashCopy®
- Disabled – Never make permanent
- User Selection – Allows the user to select Make Permanent or Cleanup when the job session is pending
Important:
- When you run a restore with Make Permanent, the data is vMotioned to the datastore where the virtual machine’s (VM) VMX file resides. The datastore to which the data and logs are moved may not be ideal or even supported by IBM Storage Defender Copy Data Management for future backups. In most cases, the data and logs will be moved to the same datastore as the VM operating system disk and this can result in subsequent backup failures. You should inspect the VM configuration after restore with Make Permanent completes and manually reconfigure the VM to move the data and logs disks to datastores that are supported for subsequent backups and not the datastore containing the VM operating system disk. Then run VM and application inventory jobs explicitly to capture the updated configuration for the application servers. Finally, you can run another backup job of the resource so that a snapshot is available for future restore jobs.
- When you run a restore with Make Permanent, it is recommended that an entry is added in the /etc/fstab file for your later reference.
- Revert – Set the source production SAP HANA machines to the snapshot. This may have an impact on database downtime and could present a risk to the source production machine.
- Enabled – Always revert
- Disabled – Never revert
- User Selection – Allows the user to select Revert or Cleanup when the job session is completed
Note: IBM Storage Defender Copy Data Management does not support the Revert function if the backup was created using an IBM Storage Virtualize for snapshot SLA policy. The Revert function is only available for backups that were created using the IBM Storage Virtualize SLA policy.
- Protocol Priority – If more than one storage network protocol is available, select the protocol to take priority in the job. Available protocols include iSCSI and Fibre Channel.
- Optionally, expand the Notification section to select the job notification options.
- Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to create a job definition that starts the job immediately. Select Schedule job to start at later time to view the list of available schedules. Optionally select one or more schedules for the job. As each schedule is selected, the schedule's name and description displays.
Tip: To create and select a new schedule, click the Configure tab, then select Schedules. Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new schedule.
- When you are satisfied that the job-specific information is correct, click Create Job. The job runs as defined by your schedule, or can be run manually from the Jobs tab.
Steps to perform on CDM in the event of SAP HANA HSR primary node failover
If the primary node in the HSR cluster fails over, take the following actions:
A failover is performed at the SAP HANA level; the secondary server becomes the new primary.
SAP HANA resources registered in Pacemaker are automatically updated to reflect the new primary.
Steps to perform on CDM UI:
- Re-run the inventory by using the same virtual IP address or name of the SAP HANA HSR cluster.
- Re-run the backup job with the new Primary after the failover.
- For a restore, suspend SAP HANA replication so that no automatic failback is triggered, and select the appropriate restore type as follows:
- Instant Database Restore is performed after suspending the Pacemaker service, because CDM attempts to shut down the running database during the restore. No failover should be triggered automatically by Pacemaker.
- Instant Disk Restore is likewise performed after suspending the Pacemaker service. No failover should be triggered automatically by Pacemaker.
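The suspend-and-resume sequence around a restore can be sketched as below. This is a hedged sketch, not a definitive procedure: the s07adm user is an example sidadm account, and commands are printed by default (set EXECUTE=1 on a cluster node to run them). Putting the cluster into maintenance mode stops Pacemaker from managing resources, so no failover is triggered while CDM shuts down the database.

```shell
# Sketch: suspend Pacemaker resource management around a CDM restore.
SIDADM="s07adm"   # example <sid>adm operating system user
run() { if [ "${EXECUTE:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }
run pcs property set maintenance-mode=true    # before the restore: stop managing resources
run sudo -iu "$SIDADM" hdbnsutil -sr_state    # confirm the current replication state
# ... run the CDM restore job here ...
run pcs property set maintenance-mode=false   # after the restore completes
```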
Pre-requisites for Instant Disk Restore (IDR)
Before attempting a CDM IDR, complete the following steps:
On SAP Cluster:
- Shut down HDB & other dependent services on the secondary/passive node
- Stop cluster services (pcsd, pacemaker, corosync, sbd services):
a. systemctl stop pcsd
b. systemctl stop pacemaker
c. systemctl stop corosync
This will automatically stop the sbd & hdb services.
For IDR, the HDB & its dependent services also need to be stopped on the primary/active node
- Stop cluster services (pcsd, pacemaker, corosync, sbd services):
a. systemctl stop pcsd
b. systemctl stop pacemaker
c. systemctl stop corosync
This will automatically stop the sbd & hdb services.
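The stop sequence above can be scripted on each node; the following is a hedged sketch only (run as root; commands are printed by default, set EXECUTE=1 to actually stop the services).

```shell
# Sketch: stop the cluster services in the documented order on one node.
# Stopping these also stops the sbd and HDB services automatically.
run() { if [ "${EXECUTE:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }
for svc in pcsd pacemaker corosync; do
  run systemctl stop "$svc"
done
```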
On CDM:
- Start IDR job with default options.
- Post restore, start the following services on the primary/active node:
a. systemctl start pcsd
b. systemctl start pacemaker
c. systemctl start corosync
This will automatically start the sbd & hdb services on the primary/active node
- Repeat for the secondary/passive node. Once done, complete the replication configuration on both the nodes.
Pre-requisites for Instant Database Restore (IDBR)
Before attempting a CDM IDBR, complete the following steps:
On SAP Cluster:
- Shut down HDB & other dependent services on secondary node
- Stop cluster services (pcsd, pacemaker, corosync, sbd services):
a. systemctl stop pcsd
b. systemctl stop pacemaker
c. systemctl stop corosync
This will automatically stop the sbd & hdb services.
On CDM:
- Start IDBR job with default options.
Post restore, CDM starts the SAP services and makes the database available for user connections. After the restore operation, the database should be up and running.
- On the passive node, start the following services:
a. systemctl start pcsd
b. systemctl start pacemaker
c. systemctl start corosync
- Once done, complete the replication configuration on both the nodes.
External References
Red Hat High Availability Add-On for creating and maintaining Pacemaker clusters
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/index
SAP HANA System Replication
https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/afac7100bc6d47729ae8eae32da5fdec.html?locale=en-US