API Connect analytics supports integration with Noname Security (now known as Akamai API Security) using the “offload” mechanism. However, for those already offloading analytics data elsewhere, adding Noname as a second offload target came with limitations that reduced flexibility and resiliency:
- The Noname endpoint must be sent API payloads, which might not be desirable for existing offload targets, given the increase in the amount of data flowing over the wire or being stored. The payload data might also be confidential and inappropriate to store internally or in an external offload target, yet that data is important for getting the most value out of Noname Security’s traffic analysis.
- The flow of data to Noname is affected by the availability of the existing external offload target.
With the release of API Connect v10.0.8.1, these problems have been addressed by adding support for a second data pipeline for offloading your analytics data.
Let’s examine an existing scenario where analytics is configured to both store data internally and offload it to Splunk over HTTP. The analytics ingestion and offload configuration might look something like this:
spec:
  external:
    offload:
      enabled: true
      output: |
        http {
          url => "https://my.splunk.domain.org"
          http_method => "post"
          codec => json
          content_type => "application/json"
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<certname>"
        }
      secretName: splunk-certificates
    ingestion:
      ...
  storage:
    enabled: true
    ...
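The secretName property references a Kubernetes secret holding the CA certificate used to verify the Splunk endpoint. As a minimal sketch, assuming the secret’s keys are mounted as files under /etc/velox/external_certs/offload/ (so the key name must match the <certname> used in the ssl_certificate_authorities path), that secret might look like this:
apiVersion: v1
kind: Secret
metadata:
  name: splunk-certificates
type: Opaque
data:
  # The key name must match the <certname> referenced by
  # ssl_certificate_authorities; the value is the base64-encoded CA certificate.
  <certname>: <base64-encoded CA certificate>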
Historically, configuring an output for Noname resulted in a second http output plugin being added to the same offload data pipeline:
spec:
  external:
    offload:
      enabled: true
      output: |
        http {
          id => "splunk-offload"
          url => "https://my.splunk.domain.org"
          http_method => "post"
          codec => json
          content_type => "application/json"
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<splunk-certname>"
        }
        http {
          id => "noname-offload"
          url => "https://<ENGINE_URL>/engine?structure=ibm-apiconnect"
          http_method => "post"
          codec => json
          content_type => "application/json"
          headers => ["x-nn-source-index", "<INDEX>", "x-nn-source-key", "<KEY>"]
          ssl_certificate_authorities => "/etc/velox/external_certs/offload/<noname-certname>"
        }
      secretName: external-certificates
    ingestion:
      ...
  storage:
    enabled: true
    ...
If an offload filter is added, it applies to both outputs, meaning that the data sent to Splunk and to Noname is modified in the same way. The outputs are also actioned in order, and any failure stops all further pipeline processing: in this example, if the Splunk output fails, the Noname output is never attempted.
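For illustration, an offload filter is defined once, alongside the outputs, so it acts on the single shared pipeline. A minimal sketch of a filter that strips payload data before offload (the field names are illustrative) might look like this, and it would change the data sent to both Splunk and Noname:
spec:
  external:
    offload:
      enabled: true
      filter: |
        # This filter runs once per event, before any output,
        # so both Splunk and Noname receive events without these fields.
        mutate {
          remove_field => ["request_body", "response_body"]
        }
      output: |
        ...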
With the new support for two parallel offload pipelines, these shortcomings are overcome. Data flows are now independent and look something like this: