Long term retention of Config and Data backups leveraging Microsoft Azure Blob Storage

By Cristian Ruvalcaba posted Wed February 16, 2022 04:52 PM

  

Hello Community!

This is 'the first sequel' to my previous article regarding IBM Cloud Object Storage! In this one, I will focus on Microsoft Azure Blob Storage for long term cool storage. Note that Microsoft Azure provides several levels of storage capabilities with varying price points. I selected one in particular, but you may select any that meets your requirements! And away we go...

We all know the importance of data retention, as well as the importance of keeping those backups on a separate server, ideally a remote system or a cloud-based storage provider. Although QRadar does provide a quick and easy way to configure both data and configuration backups, the default storage location is on the active device itself. Organizations often struggle to spin up a dedicated storage environment for on-premises longer term retention due to the operational and capital costs involved, not to mention the maintenance-related overhead!

The cloud-based pay-as-you-go, or measured/metered, pricing model can help with this. Even better, cloud providers often have tiered pricing based on their own short- and long-term storage models.

And this is where Azure comes in! Azure can provide a great deal of help given its various storage models, in this case Blob storage. Not only does this model not require a mount point on your QRadar instance, it also provides hot and cool access tiers for storage containers, with the cool tier suited to data that sees little interaction. I figured it was about time I tried this out for myself, given the lessons learned in my previous attempt using IBM Cloud Object Storage, linked here.

Please note that this method applies to the 'traditional on-premises' deployment model for QRadar instances, and that this script will need to be scheduled on all consoles and processors, as that is typically where the relevant backup files are stored. This is true of both physical and virtual appliances, including cloud-hosted virtual appliances (e.g. AWS/Azure/GCP/IBM Cloud/etc.). This does NOT apply to QRadar on Cloud, the QRadar SaaS offering.

For this use case, I created a Microsoft email account as well as an Azure account associated with it. The first step may not be necessary if you already have an account (e.g. hotmail.com or outlook.com), and the second may not be necessary if you or your organization is already leveraging Azure services.

Access Azure Cloud Services:



Set up a Storage Bucket:

  • From the portal's Quick Start Center, at the top right corner, click on the 'Store, Backup or Archive data' button.
  • From there you'll be taken to the Quick Start storage options list. Select the option for "Get Started with Azure Blob Storage."
  • After clicking around for a bit, I found that my account had no access to 'free' services, so I ended up creating a subscription.
  • Once all the relevant billing details and subscription items were configured and confirmed, I found myself in the dashboard I needed to be on.
  • Under the "Azure Services" section, I created a new "Storage account:"

  • This led me to the actual creation area:
  • Create the storage account using the 'Create' button. I created a custom account with the specific storage capabilities I needed (if you would rather script this step, there is an Azure CLI sketch just after this list).
    1. Resource Group: To be named/created as needed.
    2. Storage Account Name: This needs to be a globally unique name, not just unique to your subscription. I went through a few iterations before landing on one that was both unique and allowable.
    3. Region: I would recommend selecting the one geographically nearest your QRadar instances.
    4. Performance: This will depend on your use case; I selected 'Standard' for this example.
    5. Redundancy: There are several options here, but I chose the least redundant option (locally-redundant storage, LRS) for this example simply to save on costs.
    6. Advanced Configurations:
      • Access Tier: For this, I selected 'Cool', as the intended use here is long term retention with little to no access unless absolutely necessary.
      • Allow Cross-Tenant Replication: I selected this option mainly because it gives more flexibility on where data can be replicated, at least as I understood it, given potential tenant limitations.
    7. For the rest of the main option areas, I left them as is, including retention details that you should adjust to your needs. The default for soft deletion is 7 days; I left that in place for this example.
    8. Once this is complete, you'll see a screen that shows you the status of the deployment of your configuration.
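
As an aside, the Azure CLI can create a comparable storage account without clicking through the portal. This is only a sketch with assumed placeholder names; the resource group, account name, and region below are illustrative, not values taken from this walkthrough:

# Sketch only: a StorageV2 account with standard performance, locally-redundant storage and the 'Cool' access tier
az storage account create \
  --name <uniquestorageaccountname> \
  --resource-group <your-resource-group> \
  --location <region-nearest-your-qradar-deployment> \
  --sku Standard_LRS \
  --kind StorageV2 \
  --access-tier Cool
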
At this point, you're almost ready to start automating the long term storage of your data and configuration backups.

  • By now, you should see the Storage Account listed in your recent resources on your Azure dashboard. Click on that account and you'll be redirected to the account details:
  • On the left hand side menu, click on "Containers."
    • You will be able to create a container for your backups. It is possible to create several containers as necessary; one use case for that would be to have different containers for data backups and configuration backups. I grouped them all together in a single "backups" container. You will need the container name(s) for the API calls.
    • Note: There is a default container for logs. I left this one untouched.
  • On the same left hand side menu, scroll down to "Shared Access Signature" and click that option. This is where the shared signature and expiration details will be configured in order to use the available API calls, including the upload!
    1. Ensure that "Blob" is checked for Allowed Services.
    2. Ensure that "Object" is checked for Allowed Resource Types.
    3. Ensure that "Write" is checked for Allowed Permissions.
    4. Ensure that "Read/Write" is checked for Allowed Blob Index Permissions.
    5. Set the start and expiry date and time for the access. This can depend on your internal policies around cloud access and limitations; in my case, I set it to expire at the end of the year.
    6. If you choose to limit the egress points that can send data into the storage containers, you can do so under Allowed IP Addresses. I left mine open as I am running this from a sandbox environment sitting behind a DHCP IP space from my ISP.
    7. Select your choice of Allowed Protocols; I chose to only allow HTTPS.
    8. Select your routing tier. I left mine as the basic default.
    9. Select your signing key.
    10. Click on Generate SAS and Connection String.
  • At this point, all relevant details have been configured and the SAS has been generated; scrolling down in the same window will show them. The list below covers the items you will need:
    1. Blob URL: For this, you will only need the base URL provided (e.g. xxxx.blob.core.windows.net, where xxxx is the storage account name), without any path after it; that part we'll grab from a different section.
    2. SAS string: This will be used in our API calls.
    3. On 'Role' drop down, select writer.
    4. Click on 'Create Access Policy'
  • Collect Data Points necessary for uploading:
    • The "container name": We defined this a few steps above, and more than one can be used if you choose to have separate containers for data or configuration backups.
    • The "Blob URL": See item 1 in the preceding bullet point.
    • The "SAS" or "Shared Access Signature" string: See down the down arrow next to the service account created for this purpose.
    • NOTE: These items will need to be replaced in the script below as for now, it's simply place-holders.
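
To sanity-check those three values before scripting anything, you can try a one-off cURL call against the Blob storage 'Put Blob' REST operation. This is only a sketch with placeholder values (substitute your own storage account, container, test file and SAS string); a clean upload should come back with an HTTP 201 Created response.

# One-off test upload; the SAS string copied from the portal usually begins with '?sv=' and is appended directly to the blob URL
curl -v -X PUT \
  -H "x-ms-blob-type: BlockBlob" \
  -H "x-ms-version: 2021-04-10" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @/tmp/test-upload.txt \
  "https://<storageaccount>.blob.core.windows.net/<container>/test-upload.txt<SAS string>"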

When these steps are complete, we're ready to go! We just need to find a way to automate this… why not leverage the API capabilities of Azure Blob Storage with cURL? It took me a little while and some testing, but I still FAILED! This was mainly because of how I set up my test for the initial call... I named both backup files "test" and the second upload overwrote the first one. Once I updated the API call with unique names, I was able to upload both.

NOTE: I am using a single container and thus only have one listed below, but I recommend having one container for configuration backups and a separate container for data backups. In that case, the script below would need to be updated by adding additional variables and adjusting the cURL commands for the two cases (see the sketch below).
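
For reference, that change would look roughly like this inside the script: two container variables and two upload targets, one per backup type. The container names here are hypothetical, so adjust them to whatever you created:

# Hypothetical variables for a split-container setup (sketch only)
AZ_BLOB_CONTAINER_CONFIG="config-backups"
AZ_BLOB_CONTAINER_DATA="data-backups"
AZ_BLOB_TARGET_CONFIG="${AZ_BLOB_URL}/${AZ_BLOB_CONTAINER_CONFIG}/"
AZ_BLOB_TARGET_DATA="${AZ_BLOB_URL}/${AZ_BLOB_CONTAINER_DATA}/"
# The config cURL call would then use ${AZ_BLOB_TARGET_CONFIG} and the data call ${AZ_BLOB_TARGET_DATA}
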

The Script:


#!/bin/bash
#Script Name: AzureBackupCOS.sh
#Script Location: /store/script <THIS IS MY DEFAULT LOCATION, STORE WHERE YOU DEEM APPROPRIATE>
#Version: 0.1
#Owner: Cristian Xavier Ruvalcaba
#Document Purpose: Automated uploads of latest backups (data and config) to Azure Blob Storage
# Define variables

# Configuration variable, available through the UI
# backupPath='/store/backup/' # Clear leading '#' character to define this as the path.

backupPath=$(grep "backup-directory-path" /opt/qradar/conf/backup-recovery-config.xml | awk -F"backup-directory-path=\"" '{print $2}' | awk -F"\"" '{print $1}')'/' # This will pull the backup directory from the configuration file and append a trailing slash to make it usable.

# Environmental Variables

host=$(hostname)
date=$(date '+%Y-%m-%d')

# Latest Backup File Variables

backupData=$(ls $backupPath | grep -i backup | grep -i data | tail -n 1)
backupConfig=$(ls $backupPath | grep -i backup | grep -i config | tail -n 1)

# Define Azure Blob Storage Variables as defined when creating in Azure

DATE_NOW=$(date -Ru | sed 's/\+0000/GMT/')
AZ_VERSION="2021-04-10" # NOTE THAT THIS IS THE AZURE API VERSION DATE, THIS IS THE MOST RECENT I'VE FOUND AND MORE RECENT ONES MAY EXIST OR MAY SOON EXIST, THIS CAN BE UPDATED WHEN THAT TAKES PLACE.
AZ_BLOB_URL="<THIS IS YOUR OWN URL THAT WAS IDENTIFIED ABOVE>" # e.g. https://xxxx.blob.core.windows.net (include the https:// scheme, since the SAS above was limited to HTTPS)
AZ_BLOB_CONTAINER="<THIS IS YOUR OWN CONTAINER NAME AS YOU DEFINE IT, NOTE THAT A SECOND CONTAINER NAME MAY BE NEEDED IF DOING SEPARATE CONTAINERS FOR DATA AND CONFIGURATION BACKUPS>"

AZ_BLOB_TARGET="${AZ_BLOB_URL}/${AZ_BLOB_CONTAINER}/"

# SAS string as provided by Azure; keep the leading '?' so it can be appended directly to the blob URL
AZ_SAS_TOKEN="<THIS IS THE SAS STRING WE WERE PROVIDED BY AZURE>"


# cURL command to upload files

if [ -z "$backupConfig" ]
then
logger "No config file uploaded for $host on $date"
else
# Note the trailing backslash: the cURL command continues on the next line
curl -v -X PUT -H "Content-Type: application/octet-stream" -H "x-ms-blob-type: BlockBlob" -H "x-ms-date: ${DATE_NOW}" -H "x-ms-version: ${AZ_VERSION}" \
--data-binary @"$backupPath$backupConfig" "${AZ_BLOB_TARGET}$date-$host-configBackup.tgz${AZ_SAS_TOKEN}"
logger "Config file $date-$host-configBackup.tgz uploaded to ${AZ_BLOB_TARGET} for $host on $date"
fi

if [ -z "$backupData" ]
then
logger "No data file uploaded for $host on $date"
else
curl -v -X PUT -H "Content-Type: application/octet-stream" -H "x-ms-blob-type: BlockBlob" -H "x-ms-date: ${DATE_NOW}" -H "x-ms-version: ${AZ_VERSION}" \
--data-binary @"$backupPath$backupData" "${AZ_BLOB_TARGET}$date-$host-dataBackup.tgz${AZ_SAS_TOKEN}"
logger "Data file $date-$host-dataBackup.tgz uploaded to ${AZ_BLOB_TARGET} for $host on $date"
fi
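
Once the placeholders are filled in and the script is saved, it is worth making it executable and doing one manual run before scheduling anything. A quick sanity check could look like this (the path matches the default location noted in the script header):

chmod +x /store/script/AzureBackupCOS.sh
# Run once with tracing so you can watch the cURL call and spot any placeholder you forgot to replace
bash -x /store/script/AzureBackupCOS.sh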



The script above looks for the latest data and configuration backups, then sends them up to the cloud storage bucket. They will have the following naming structure:

date-hostname-[config|data]Backup.tgz
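
For example, a configuration backup uploaded on February 16, 2022 from a host named qradar-console (a hypothetical hostname) would appear in the container as:

2022-02-16-qradar-console-configBackup.tgz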

You'll find these all in the container:



Once you finish this… congrats! You've just successfully uploaded the backup file(s).

I did a little extra sanity check this time. I downloaded the config backup from blob storage and ran an MD5 check on it:


You'll notice 564fef55147e139410cbfd2cb7902112 for the hash.

I went ahead and ran an md5sum on the original backup file:


Again, you'll notice 564fef55147e139410cbfd2cb7902112 as the hash.
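
If you want to run that same comparison yourself, it is just two md5sum calls: one against the original file in the backup directory and one against the copy you pulled back down from the container. Both paths below are placeholders for your own files:

md5sum /store/backup/<original-config-backup>.tgz
md5sum /tmp/<copy-downloaded-from-azure>.tgz
# The two hashes should match exactly if the upload and download were clean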

Note: I uploaded the most recent files; in my case they happened to be generated the day before, hence the different "dates" seen on the file name listed in Azure and the one you see in the QRadar screenshot just above this note.

Now to the automation:

It's a matter of setting up a crontab entry to run daily at a specified time. I don't have a recommendation on the exact time other than picking a lower event throughput window so as not to tie up the NIC too much.
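
As an illustration, a root crontab entry along these lines would run the upload daily at 2:30 AM; the time itself is arbitrary, so pick whatever low-throughput window suits your environment:

# crontab -e (as root): daily upload of the latest backups to Azure Blob Storage
30 2 * * * /store/script/AzureBackupCOS.sh >/dev/null 2>&1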
