Setting up a distributed security information and event management (SIEM) solution using Wazuh


    Posted Mon January 29, 2024 01:21 PM
    Edited by NICK PLOWDEN Tue January 30, 2024 09:50 AM

    Authors: Adam Geiger (ageiger@us.ibm.com) / Dinakaran Joseph (dinakar@us.ibm.com) / Priyank Narvekar (priyankn@ca.ibm.com)

    This tutorial shows you one way to implement a security information and event management (SIEM) solution using an open-source platform called Wazuh.

    This tutorial is guidance, but you are solely responsible for installing, configuring, and operating third-party software in a way that satisfies your requirements. In addition, IBM does not provide support for third-party software. So, if you choose to use Wazuh and encounter issues with the Wazuh software that require support, contact Wazuh.

    To implement your SIEM host solution, you must complete the following high-level steps:

    1. Provision virtual server instances for the distributed Wazuh cluster.
    2. Install the Wazuh cluster software.
    3. Provision a VPC Application Load Balancer.
    4. Configure custom decoders and rules.
    5. Configure the Wazuh server manager.
    6. Set up and configure Logstash to pull the output from Activity Tracker and Flow Logs for VPC that is stored in Cloud Object Storage.

    7. Install and configure Wazuh agents on all of your virtual server instances except those where the Wazuh server is installed.

    Before you begin

    Before you deploy and configure a distributed Wazuh SIEM solution using virtual server instances running on the VPC reference architecture, make sure you have completed the following prerequisites:

    There are many ways to set up and configure a distributed Wazuh SIEM solution. You can install all of the components that make up Wazuh on a single virtual server instance or place each on a separate instance. This guide installs the three Wazuh central components (indexer, server, and dashboard) together on each virtual server instance, in a distributed multi-node configuration to make them highly available.

    Architecture for VPC with Wazuh SIEM

    Limitations

    The following limitations currently apply for the Wazuh SIEM solution that is described:

    • The Wazuh agent can be installed on virtual server instances only.

    Provision virtual server instances for distributed Wazuh cluster

    Please see the Wazuh indexer requirements and Wazuh server requirements before proceeding. The values used for the virtual server instances are examples and can be modified based on your needs.

    Provision at least two virtual server instances. For high availability, it is recommended to provision them in different zones of your VPC.

      • Profile: cx2-8x16 with 8 vCPUs, 16 GB RAM, 16 Gbps.
      • Linux-based operating system (CentOS, RHEL, Ubuntu).
      • Under the Boot volume section, ensure encryption is set to Hyper Protect Crypto Services.
      • Under the Data volume section, create the following VPC Block storage volumes:
      1. Wazuh indexer data volume with the following characteristics:
        • Location: the location where your virtual server instance is provisioned
        • IOPS tier: 5 IOPS/GB
        • Size: 500
        • Encryption: Hyper Protect Crypto Services or Key Protect
      2. Wazuh server data volume with the following characteristics:
        • Location: the location where your virtual server instance is provisioned
        • IOPS tier: 5 IOPS/GB
        • Size: 250
        • Encryption: Hyper Protect Crypto Services or Key Protect
    1. Set up your VPC Block storage data volumes for use on the virtual server instance.
      1. Mount point for the Wazuh indexer:
        • If using Wazuh indexer (OpenSearch): /var/lib/wazuh-indexer
        • If using Elasticsearch: /var/lib/elasticsearch
      2. Mount point for the Wazuh server: /var/ossec
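
    The volume setup in the step above can be sketched as follows. This is a minimal sketch: the device names /dev/vdd and /dev/vde and the XFS filesystem are assumptions, so check lsblk for the actual devices on your instance.

    ```shell
    # List block devices to identify the two attached data volumes
    # (device names below are assumptions; adjust to your instance)
    lsblk

    # Create a filesystem on each data volume
    mkfs.xfs /dev/vdd    # Wazuh indexer volume (500 GB)
    mkfs.xfs /dev/vde    # Wazuh server volume (250 GB)

    # Create the mount points and mount the volumes
    mkdir -p /var/lib/wazuh-indexer /var/ossec
    mount /dev/vdd /var/lib/wazuh-indexer
    mount /dev/vde /var/ossec

    # Persist the mounts across reboots; UUIDs are more stable than device names
    echo "UUID=$(blkid -s UUID -o value /dev/vdd) /var/lib/wazuh-indexer xfs defaults 0 0" >> /etc/fstab
    echo "UUID=$(blkid -s UUID -o value /dev/vde) /var/ossec xfs defaults 0 0" >> /etc/fstab
    ```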

    Install the Wazuh cluster software

    When installing software, please note the version that you are installing, because your package manager might install the latest version, which might be incompatible with other components. Please see the OpenSearch support matrix or the Elasticsearch support matrix for Filebeat compatibility based on the Wazuh indexer that you installed.

    1. Install the Wazuh indexer software by using the Wazuh install guides.
    2. Install the Wazuh server in a multi-node configuration.
    3. Install the Wazuh dashboard on one of the virtual server instances.

    Provision a VPC Application Load Balancer

    For high availability, a VPC Application Load Balancer (ALB) is needed to distribute traffic from your Wazuh agents across your Wazuh server nodes. See Load balancers for VPC overview for more information. The Wazuh agent configuration is the last step in this guide, where the ALB's fully qualified domain name is used.

    1. Create a VPC Application Load Balancer with the following configuration:
      • Load Balancer: Application Load Balancer
      • Type: Private
    2. Create a back-end pool for port 1515 with the following characteristics:
      • Protocol: TCP
      • Session Stickiness: Source IP
      • Proxy Protocol: Disabled
      • Method: Round Robin
      • Health check path: /
      • Health protocol: TCP
      • Health port: 1515
    3. Create a back-end pool for port 1514 with the following characteristics:
      • Protocol: TCP
      • Session Stickiness: Source IP
      • Proxy Protocol: Disabled
      • Method: Round Robin
      • Health check path: /
      • Health protocol: TCP
      • Health port: 1514
    4. Attach server instances to each back-end pool by clicking Attach server + and then selecting the subnets that contain the virtual server instances hosting the Wazuh manager.
    5. Create a front-end listener for port 1515 with the following characteristics:
      • Default backend-pool: Name of the backend pool for port 1515 above
      • Protocol: TCP
      • Listener port: 1515
    6. Create a front-end listener for port 1514 with the following characteristics:
      • Default backend-pool: Name of the backend pool for port 1514 above
      • Protocol: TCP
      • Listener port: 1514
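
    With both listeners created, you can sanity-check reachability from an instance inside the VPC. This is a sketch: <ALB FQDN> is a placeholder for the load balancer's DNS name shown on its details page.

    ```shell
    # Check that the ALB front-end listeners accept TCP connections
    for port in 1514 1515; do
      nc -zv -w 5 "<ALB FQDN>" "$port"
    done
    ```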

    Configure custom decoders and rules for the Wazuh server

    You can add rulesets that the system uses to detect attacks, intrusions, software misuse, configuration problems, application errors, malware, rootkits, system anomalies, and security policy violations. Wazuh ships with an out-of-the-box set of OSSEC-based rules that it updates and expands to increase its detection capabilities. You only need to set this up on the Wazuh server node that you set as master. See Custom rules and decoders for more information.

    1. In the file called /var/ossec/etc/decoders/local_decoder.xml, put the following contents:

      <decoder name="flowlog">
        <program_name>flowlog</program_name>
        <plugin_decoder>JSON_Decoder</plugin_decoder>
      </decoder>
      
      <decoder name="atracker">
        <program_name>atracker</program_name>
        <plugin_decoder>JSON_Decoder</plugin_decoder>
      </decoder>
      
      <decoder name="logdna">
        <program_name>logdna</program_name>
        <plugin_decoder>JSON_Decoder</plugin_decoder>
      </decoder>
      

    2. In the file called /var/ossec/etc/rules/local_rules.xml, put the following contents:

      <group name="flowlogs,">
        <rule id="100072" level="0">
          <program_name>flowlogs</program_name>
          <description>Flowlog message</description>
        </rule>
         
        <rule id="100073" level="5">
          <if_sid>100072</if_sid>
          <field name="message.flow_logs.action">rejected</field>
          <description>Flowlog rejected.</description>
        </rule>
      </group>
      
      <group name="activity-tracker,">
        <rule id="100080" level="0">
          <program_name>atracker</program_name>
          <description>Activity Tracker message</description>
        </rule>
         
        <rule id="100081" level="0">
          <if_sid>100080</if_sid>
          <field name="message.line.logSourceCRN">\.+</field>
          <description>Activity Tracker message</description>
        </rule>
      
        <rule id="100082" level="5">
          <if_sid>100081</if_sid>
          <field name="message.line.severity">warning+</field>
          <description>Activity Tracker message - severity warning</description>
        </rule>
      
        <rule id="100083" level="5">
          <if_sid>100081</if_sid>
          <field name="message.line.severity">critical+</field>
          <description>Activity Tracker message - severity critical</description>
        </rule>
      </group>
      
      <group name="at-logdna,">
        <rule id="100090" level="0">
          <program_name>logdna</program_name>
          <description>Activity Tracker LogDNA message</description>
        </rule>
      
        <rule id="100091" level="0">
          <if_sid>100090</if_sid>
          <field name="observer.name">ActivityTracker</field>
          <description>Activity Tracker message</description>
        </rule>
      
        <rule id="100092" level="5">
          <if_sid>100091</if_sid>
          <field name="severity">warning</field>
          <description>Activity Tracker warning message</description>
        </rule>
      
        <rule id="100093" level="5">
          <if_sid>100091</if_sid>
          <field name="severity">critical</field>
          <description>Activity Tracker critical message</description>
        </rule>
      </group>
      

    These are example decoders and rules. The rules above trigger an alert if a flow log action is "rejected" or if the severity of an Activity Tracker Event Routing event is set to either "warning" or "critical". You can modify these rules based on your needs. You are responsible for adding all of the rules needed to meet your control requirements.

    3. Restart the Wazuh server manager by issuing systemctl restart wazuh-manager.

    Configure the Wazuh server manager

    1. Configure the Wazuh server to listen for events from the agents by adding the following configuration to /var/ossec/etc/ossec.conf, replacing the user-dependent values:

      <remote>
        <connection>syslog</connection>
        <port>514</port>
        <protocol>tcp</protocol>
        <local_ip><IP ADDRESS OF YOUR VIRTUAL SERVER></local_ip>
        <allowed-ips><ADDRESS PREFIX of your VPC></allowed-ips>
      </remote>
      
    2. Restart the Wazuh server manager by issuing systemctl restart wazuh-manager.
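
    As a quick check, you can confirm that the listener is up and push a test event through it. This is a sketch: logger comes from util-linux, and the flowlogs tag is chosen to match the program_name in the example rules above.

    ```shell
    # Confirm the Wazuh manager is now listening on TCP/514
    ss -tlnp | grep ':514'

    # Send a test syslog message over TCP; the tag becomes the syslog program name,
    # so "flowlogs" exercises the example flow log decoder and rules
    logger --server "<IP ADDRESS OF YOUR VIRTUAL SERVER>" --tcp --port 514 --tag flowlogs '{"test": "message"}'
    ```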

    Set up and configure Logstash

    Logstash is used to stream files stored in IBM Cloud Object Storage to the Wazuh server. In this configuration, Logstash pulls the Activity Tracker Event Routing and Flow Logs for VPC output from Cloud Object Storage.

    1. Install Logstash for the OS running on the Wazuh server node that is set as master.

    2. Install the Logstash plugins:

      ## Uninstall the base aws plugin
      <PATH TO LOGSTASH>/bin/logstash-plugin remove logstash-integration-aws
      
      ## Install the s3 plugin
      <PATH TO LOGSTASH>/bin/logstash-plugin install logstash-input-s3-cos
      
      ## Install the syslog plugin
      <PATH TO LOGSTASH>/bin/logstash-plugin install logstash-output-syslog
      

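    You can confirm that the plugin changes took effect by listing the installed plugins; <PATH TO LOGSTASH> is the same placeholder used above.

    ```shell
    # The COS input and syslog output plugins should both appear in the list
    <PATH TO LOGSTASH>/bin/logstash-plugin list | grep -E 'logstash-input-s3-cos|logstash-output-syslog'
    ```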

    3. In the config directory of Logstash, create a file called flowlogs.conf with the following content:

       input {
             s3 {
               access_key_id => "<HMAC ACCESS KEY>"
               secret_access_key => "<HMAC_SECRET_ACCESS_KEY>"
               bucket => "<BUCKET THAT CONTAINS YOUR FLOWLOG FILES>"
               endpoint => "https://<S3 ENDPOINT OF THE COS BUCKET>"
               interval => 300
             }
       }
      
       filter {
         json {
           source => "message"
           target => "message"
         }
         json {
           source => "[message][line]"
           target => "[message][line]"
         }
         split { field => "[message][flow_logs]" }
         mutate {
           remove_field => ["[event][original]"]
         }
       }
      
       output {
         syslog {
           id => "flow_logs"
           host => "<IP address of the Wazuh server master node>"
           port => 514
           protocol => "tcp"
           sourcehost => "logstash"
           codec => "json"
           appname => "flowlogs"
           procid => "1"
       }
      }
      

    4. If you are using Activity Tracker Event Routing to send logs to Cloud Object Storage, in the config directory of Logstash, create a file called atracker.conf with the following content:

       input {
             s3 {
               access_key_id => "<HMAC ACCESS KEY>"
               secret_access_key => "<HMAC_SECRET_ACCESS_KEY>"
               bucket => "<BUCKET THAT CONTAINS YOUR ACTIVITY TRACKER EVENT ROUTING FILES>"
               endpoint => "<PRIVATE ENDPOINT OF THE COS BUCKET>"
               interval => 300
             }
       }
      
       filter {
         json {
           source => "message"
           target => "message"
           skip_on_invalid_json => true
         }
         json {
           source => "[message][line]"
           target => "[message][line]"
         }
         mutate {
           remove_field => ["[event][original]"]
         }
       }
      
       output {
         syslog {
           id => "atracker"
           host => "<IP address of the Wazuh server master node>"
           port => 514
           protocol => "tcp"
           sourcehost => "logstash"
           codec => "json"
           appname => "atracker"
           procid => "1"
       }
      }
      

    5. If you are using Activity Tracker with LogDNA, in the config directory of Logstash, create a file called atracker-logdna.conf with the following content and replace <PATH TO LOGSTASH CONFIG DIR> with the proper location:

      input {
         exec {
           command => "<PATH TO LOGSTASH CONFIG DIR>/logdna.sh"
           interval => 30
         }
       }
       filter {
         split {
           field => "message"
         }
         json {
           source => "message"
         }
         mutate {
           remove_field => ["[event][original]"]
         }
         mutate {
           rename => ["message", "log_message" ]
         }
       }
      
       output {
         syslog {
           id => "logdna"
           host => "<IP address of the Wazuh server master node>"
           port => 514
           protocol => "tcp"
           sourcehost => "logstash"
           codec => "json"
           appname => "logdna"
           procid => "2"
         }
       }
      

    6. If you are using Activity Tracker with LogDNA, in the config directory of Logstash, create a file called logdna.sh with the following content. Create a service key, set the value of API_KEY, and set REGION to the region of your provisioned Activity Tracker instance:

      #!/bin/bash
      
      ###### USER DEFINED VARIABLES ###########
      API_KEY="<USER DEFINED>"
      REGION="<USER DEFINED>"
      
      TIMESTAMP_FILE_DIR=$( dirname -- "$0"; )
      TIMESTAMP_FILE="${TIMESTAMP_FILE_DIR}/logdna_timestamp"
      
      TO_TIMESTAMP=$(date +%s)
      
      FROM_TIMESTAMP=$TO_TIMESTAMP
      if [[ -f $TIMESTAMP_FILE ]]; then 
        FROM_TIMESTAMP=$(cat $TIMESTAMP_FILE)
      fi
      
      curl --request GET --url "https://api.${REGION}.logging.cloud.ibm.com/v1/export?from=${FROM_TIMESTAMP}&to=${TO_TIMESTAMP}" --header "Authorization: Basic ${API_KEY}" 
      
      if [[ $? -eq 0 ]]; then
        echo $TO_TIMESTAMP > $TIMESTAMP_FILE
      fi
      

    7. Under the config directory of Logstash, add the following to pipelines.yml based on which services you are using. Uncomment and replace the appropriate values:

      # Uncomment if using Activity Tracker with Event Routing with Cloud Object Storage
      #- pipeline.id: atracker-events
      #  path.config: "<PATH TO FILE>/atracker.conf"
      #
      # Uncomment if using Activity Tracker with LogDNA
      #- pipeline.id: atracker-events
      #  path.config: "<PATH TO FILE>/atracker-logdna.conf"
      #
      - pipeline.id: flowlogs
        path.config: "<PATH TO FILE>/flowlogs.conf"
      

    8. Restart Logstash.
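
    After the restart, you can verify that the pipelines defined in pipelines.yml came up. This sketch assumes Logstash runs as a systemd service named logstash.

    ```shell
    # Restart Logstash and confirm it is running
    systemctl restart logstash
    systemctl status logstash --no-pager

    # Look for pipeline startup messages for the IDs defined in pipelines.yml
    journalctl -u logstash --since "5 minutes ago" | grep -E 'Pipeline started|flowlogs|atracker'
    ```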

    Install Wazuh agents

    The Wazuh agent is multi-platform and runs on the nodes you want to monitor. It communicates with the Wazuh server, sending data in near real-time through an encrypted and authenticated channel.

    1. For each of your virtual server instances that are not part of the Wazuh cluster, do the following:
      1. Install the Wazuh agent for the proper host OS.
        • Set the WAZUH_MANAGER environment variable to the fully qualified domain name of the ALB from the previous section.
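
    On a Debian or Ubuntu instance, for example, the install and registration can be sketched as follows; the package version is an example, and <ALB FQDN> stands for the load balancer's DNS name.

    ```shell
    # Download a Wazuh agent package (version is an example; match your server version)
    curl -sO https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.7.2-1_amd64.deb

    # WAZUH_MANAGER is read at install time and written into the agent configuration;
    # point it at the ALB so agent traffic is distributed across the server nodes
    WAZUH_MANAGER="<ALB FQDN>" dpkg -i ./wazuh-agent_4.7.2-1_amd64.deb

    # Enable and start the agent
    systemctl daemon-reload
    systemctl enable --now wazuh-agent
    ```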


