Analytics parallel offload pipelines: Robust integration with Noname Security

By Mark Taylor posted Mon November 11, 2024 10:48 AM


API Connect analytics supports integration with Noname Security (now known as Akamai API Security) using the “offload” mechanism, but for those already offloading elsewhere, adding Noname as a second offload target historically came with limitations that impacted flexibility and resiliency:

  1. Noname’s endpoint must receive full API payloads, which might not be desirable for existing offload targets given the increase in the volume of data flowing over the wire or being stored. Payload data can also be confidential and inappropriate to store internally or in an external offload target, yet that same data is what gives Noname Security’s traffic analysis the most value.

  2. The flow of data to Noname is impacted by the availability of the existing external offload target.

With the release of API Connect v10.0.8.1, these problems have been addressed with support for a second data pipeline for offloading your analytics data.
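
In short, the analytics custom resource gains an offload2 section that is configured just like offload but runs as an independent pipeline. As a minimal sketch (complete examples follow below):

spec:
  external:
    offload:       # first pipeline, for example Splunk
      enabled: true
      ...
    offload2:      # second, independent pipeline, for example Noname
      enabled: true
      ...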

Let’s examine an existing scenario where analytics is configured to store data both internally and offload it to Splunk over HTTP. The analytics ingestion and offload configuration might look something like this:

spec:
  external:
    offload:
      enabled: true
      output: |
        http {
          url => "https://my.splunk.domain.org"
          http_method => "post"
          codec => json
          content_type => "application/json"
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<certname>"
        }
      secretName: splunk-certificates 
  ingestion:
    ...
  storage:
    enabled: true
    ...

Historically, configuring an output for Noname meant adding a second http output plugin to the same offload data pipeline:

spec:
  external:
    offload:
      enabled: true
      output: |
        http {
          id => "splunk-offload"
          url => "https://my.splunk.domain.org"
          http_method => "post"
          codec => json
          content_type => "application/json"
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<splunk-certname>"
        }
        http {
          id => "noname-offload"
          url => "https://<ENGINE_URL>/engine?structure=ibm-apiconnect"
          http_method => "post"
          codec => "json"
          content_type => "application/json"
          headers => ["x-nn-source-index", "<INDEX>", "x-nn-source-key", "<KEY>"]
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<noname-certname>"
        }
      secretName: external-certificates
  ingestion:
    ...
  storage:
    enabled: true
    ...

If an offload filter were added, it would apply to both outputs, meaning that the data sent to Splunk and Noname would be modified in the same way. The outputs are also actioned in order, and any failure stops all pipeline processing; in our example, if the Splunk output fails, the Noname output is never attempted.
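
For example, a single filter like the following (a minimal sketch, using the same payload fields configured later in this post) would strip payload data from the events that both Splunk and Noname receive:

filter: |
  mutate {
    # One filter per pipeline: with a single offload pipeline,
    # every output sees the same filtered events.
    remove_field => ["request_body", "response_body"]
  }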

With the new support for two parallel offload pipelines, these shortcomings are overcome: data flows are now independent, and the offload pipeline for Noname is completely separate from the pipeline that is offloading to Splunk. Built-in monitoring within analytics ensures that if either pipeline starts to back up, it does not impact the others.

You can achieve this scenario using the following configuration:

spec:
  external:
    offload:
      enabled: true
      filter: |
        mutate {
          remove_field => ["request_body", "response_body"]
        }
      output: |
        http {
          id => "splunk-offload"
          url => "https://my.splunk.domain.org"
          http_method => "post"
          codec => json
          content_type => "application/json"
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<certname>"
        }
      secretName: splunk-certificates
    offload2:
      enabled: true
      output: |
        http {
          id => "noname-offload"
          url => "https://<ENGINE_URL>/engine?structure=ibm-apiconnect"
          http_method => "post"
          codec => "json"
          content_type => "application/json"
          headers => ["x-nn-source-index", "<INDEX>", "x-nn-source-key", "<KEY>"]
          ssl_certificate_authorities => "/etc/velox/external_certs/offload2/<certname>"
        }
      secretName: noname-certificates
  ingestion:
    filter: |
      mutate {
        remove_field => ["request_body", "response_body"]
      }
    ...
  storage:
    enabled: true
    ...
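
These settings live in the analytics subsystem custom resource. On a native Kubernetes installation you would typically apply them with something like the following (an assumption about your environment; on OpenShift and Cloud Pak deployments the equivalent settings sit under the top-level APIConnectCluster resource):

kubectl edit analyticscluster <name> -n <namespace>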

This configuration also adds pipeline filters to ensure that only the Noname offload target receives potentially sensitive API payload data: the offload filter removes the request_body and response_body fields before events reach Splunk, the ingestion filter removes them before events are stored internally, and the unfiltered offload2 pipeline sends complete events to Noname. This also helps keep analytics storage space requirements to a minimum and reduces the amount of data sent to Splunk.
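
If you would rather leave a visible marker in Splunk and internal storage that payload data was present, one alternative (a sketch, not taken from the product documentation) is to replace the fields instead of removing them:

filter: |
  mutate {
    # Overwrite rather than drop the payload fields, so events
    # still show that a payload existed.
    replace => {
      "request_body" => "[REDACTED]"
      "response_body" => "[REDACTED]"
    }
  }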

Armed with this information and the release of API Connect v10.0.8.1, you can now take full advantage of the Noname Security feature set with none of the drawbacks. Feedback, as always, is welcome and we’d love to hear about any interesting scenarios out there!

#analytics #offload #noname
