Exporting API Connect Logs to Event Streams

By Piers Walter posted Wed August 23, 2023 04:50 AM

Introduction

This post explains how to connect API Connect (APIC) analytics to an IBM Event Streams (Kafka) broker. Once configured, every record from the APIC analytics logs is sent to Kafka, allowing you to process it with any system or application that can read from Kafka. APIC also supports other offload targets such as Syslog, Elasticsearch and HTTP, but those aren't covered by this guide.

Requirements

The instructions in this guide were written for APIC v10.0.5.3 and Event Streams 11.1.3 running on CP4I on OpenShift 4.12.24, but they should be valid for other versions too, since the functionality used is fairly simple. While Event Streams is recommended, any flavour of Kafka should work with this guide, with some tweaks as necessary. You will need the OpenShift oc CLI installed locally, as well as the keytool utility (shipped with the JDK) for managing JKS keystores.

How To

Setting Up Event Streams

The first step is to set up Event Streams. We need to create a topic into which we will write messages, as well as a user who is allowed to write to that topic.

  1. Log in to Event Streams and select “Create Topic” from the home screen
  2. Select “Create topic” from the blue button at the top right
  3. Enter a topic name such as “apic-analytics” and set the number of partitions to at least 3. Message retention can be left at the default of one month, and the default replica setting can also be left as-is
  4. Press “Create topic” at the top right
  5. Navigate into your new topic and select “Connect to this topic” located at the top right
  6. Select the internal option, as the traffic will stay inside the cluster. Make a note of the provided URL, then select “Generate TLS credentials”

  7. Fill in a credential name, select “Produce messages only, and read schemas”, and then “Next”. This means that these credentials can only write to topics

  8. Give access to a specific topic, in this case “apic-analytics” (or whatever you named your topic earlier), then press “Next”. This restricts the credentials to writing only to that topic

  9. The remaining options can be left as the defaults

  10. Download the produced certificates

  11. Download the certificates for the cluster in PEM format. This lets us easily add the certificate to a JKS keystore later, so that APIC can trust Kafka's certificate when connecting over TLS.
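As an alternative to the UI steps above, Event Streams manages topics through KafkaTopic custom resources, which is convenient if you keep cluster configuration in Git. The following is a minimal sketch, assuming an Event Streams instance named es-prod in the event-streams namespace; check the apiVersion and cluster label against your Event Streams version before applying:

```yaml
apiVersion: eventstreams.ibm.com/v1beta2
kind: KafkaTopic
metadata:
  name: apic-analytics
  namespace: event-streams                 # namespace of the Event Streams instance (assumption)
  labels:
    eventstreams.ibm.com/cluster: es-prod  # name of your Event Streams instance (assumption)
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: "2678400000"             # roughly one month, matching the UI default
```

Apply it with oc apply -f apic-analytics-topic.yaml. The credentials and certificates still need to be generated and downloaded as described above.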

Creating the Secret for APIC

With the user credentials and certificates downloaded, we can now move on to creating a Secret which APIC can use to communicate with our Kafka cluster.

  1. In the directory with the downloaded es-cert.pem, run the following command to create a keystore containing the certificate, with “password” as the store password:

    keytool -keystore ./cert.jks -import -trustcacerts -file ./es-cert.pem -noprompt -storepass password

  2. We now have all of the files that we need to create the secret. Login to your OpenShift cluster and run the following command to create the secret in the apic namespace using the files:

    oc create secret generic apic-analytics-kafka -n apic --from-file=cert.jks --from-file=user.crt --from-file=user.key --from-file=user.p12 --from-file=user.password 
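Before creating the secret, it can help to confirm that all five files are present and non-empty; a missing download or a typo in a --from-file path otherwise only surfaces later as a TLS failure. A small sketch (the check_files function is our own helper, not part of oc):

```shell
# check_files: succeed only if every named file exists and is non-empty.
# Run it before "oc create secret" to fail fast on an incomplete file set.
check_files() {
  for f in "$@"; do
    if [ ! -s "$f" ]; then
      echo "missing or empty: $f" >&2
      return 1
    fi
  done
  echo "all credential files present"
}
```

For example: check_files cert.jks user.crt user.key user.p12 user.password && oc create secret generic apic-analytics-kafka -n apic --from-file=cert.jks ... only creates the secret once every file checks out.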

Configuring APIC to use the Secret

Now that the secret is loaded into OpenShift, we can configure APIC to use it and send the analytics data to our Kafka cluster.

  1. Open OpenShift, navigate to the Installed Operators section, open IBM API Connect and then open the “Analytics cluster” tab

  2. Open the AnalyticsCluster resource that appears and switch to the YAML view

  3. Navigate to the .spec.offload section and add the following text, replacing the values in <angle brackets> with the appropriate values for your environment:

    offload:
      enabled: true
      output: |
        kafka {
          topic_id => "<topic name you created earlier>"
          bootstrap_servers => "<internal address from Event Streams>"
          codec => "json"
          id => "kafka_offload"
          security_protocol => "SSL"
          ssl_truststore_location => "/etc/velox/credentials/offload/cert.jks"
          ssl_truststore_type => "JKS"
          ssl_truststore_password => "password"
          ssl_keystore_location => "/etc/velox/credentials/offload/user.p12"
          ssl_keystore_password => "<user password, found in user.password file>"
        }
      passwordSecretName: apic-analytics-kafka
      secretName: apic-analytics-kafka

  4. Press “Save” to apply the changes

  5. Wait for the APIC ingestion pod to restart, then view the pod’s logs to make sure the Kafka connection has worked; if it has, there should be no errors. The pod should be named something like apic1-a7s-ingestion-0 and can be found in the apic namespace.

  6. Send some requests to APIC, or use the test tab in the API design functionality to generate some requests.

  7. View the topic in Event Streams and you should now see the analytics messages coming through. If your version of Event Streams doesn’t have the topic viewer, you can use another utility to browse the topic and see the messages.
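If you need to browse the topic outside the Event Streams UI, the standard Kafka console consumer works. Note that the credentials created earlier are produce-only, so you will need a second set of TLS credentials with consume permission. The property names below are standard Apache Kafka client settings; the file paths assume the files from the earlier steps:

```properties
security.protocol=SSL
ssl.truststore.location=./cert.jks
ssl.truststore.password=password
ssl.keystore.location=./user.p12
ssl.keystore.type=PKCS12
ssl.keystore.password=<contents of the user.password file>
```

Save this as consumer.properties and run, for example, kafka-console-consumer.sh --bootstrap-server <address> --topic apic-analytics --from-beginning --consumer.config consumer.properties to read from the start of the topic.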

Conclusion

Now that you’re at the end of this guide, you should have an APIC to Event Streams connection set up which you can use to build custom analytics and metrics. One use case could be automatically disabling applications that you identify as acting maliciously, or identifying which APIs your users consume most regularly. Further information can be found in the APIC documentation and the Logstash Kafka output plugin documentation.
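As a sketch of the “most-consumed APIs” use case, the one-JSON-document-per-line messages written by the json codec can be tallied with ordinary shell tools. The api_name field below is illustrative; the actual field names in the offloaded records depend on your APIC version:

```shell
# Three hypothetical analytics events, one JSON document per line
# (the shape produced by the "json" codec; field names are illustrative).
events='{"api_name":"orders","status_code":200}
{"api_name":"orders","status_code":500}
{"api_name":"inventory","status_code":200}'

# Extract the api_name value from each line, then count occurrences,
# most frequent API first.
printf '%s\n' "$events" \
  | sed -n 's/.*"api_name":"\([^"]*\)".*/\1/p' \
  | sort | uniq -c | sort -rn
```

In practice you would pipe the output of a Kafka consumer into the same pipeline instead of a fixed variable.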


#automation-spotlight