In this tutorial, we will go over the steps needed to get the Kubernetes API server logs into QRadar.
A high-level view of the steps:
- Create an audit policy.
- Configure the Kubernetes API server to save its audit logs to a local file.
- Forward the logs from the local file to QRadar over syslog.
First, we need to create the audit policy file, which defines which events are logged and at what level of detail.
Kubernetes offers the following audit levels:
None - don’t log events that match this rule.
Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request - log event metadata and request body but not the response body.
RequestResponse - log event metadata, request, and response bodies.
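As a minimal sketch of how these levels are used in a policy file (the rules here are illustrative, not a complete production policy):

```yaml
# audit-policy.yaml -- minimal illustrative example
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't log read-only requests to endpoints
  - level: None
    verbs: ["get", "watch", "list"]
    resources:
      - group: ""
        resources: ["endpoints"]
  # Log pod changes with the full request body
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
  # Log everything else at the Metadata level
  - level: Metadata
```

Rules are evaluated top to bottom, and the first matching rule decides the level for a given event.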
There are several ready-made Kubernetes audit policies available online, but some of them don't cover essential activities, such as creating a role, adding a user to a role, or requests arriving on the unsecured Kubernetes API port.
If we want to detect a request that creates a privileged pod, the HTTP metadata is not enough; we need to log the request body to see the request details, e.g., the Pod name, its privilege level, and so on.
Also, some audit configurations enable role auditing only at the Metadata level. In that case, if someone adds a user to the cluster-admin role, it won't be detected, because the role name and the added user name are not included in the metadata.
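To catch RBAC changes like the cluster-admin example above, a rule along these lines (an illustrative fragment, not a complete policy) logs the full request and response bodies for role-related resources:

```yaml
  # Log RBAC changes with full request and response bodies,
  # so the role name and the added subjects (users) are captured
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
```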
Here you can find the audit policy that I have used on my cluster, and you can use it as a starting point.
Kubernetes offers several backends for exporting the API server audit events:
- Log backend, which writes events to a local file.
- Webhook backend, which buffers the events and sends them to an external API.
- Dynamic Webhook backend (AuditSink), which configures Kubernetes on the fly to send the events to a remote API.
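For reference, the webhook backend is enabled with a kube-apiserver flag similar to the log backend's, pointing at a kubeconfig-format file that describes the remote API (the path here is illustrative):

```yaml
    - --audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
```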
Configuring the API server to log its output to a local file
In this tutorial we will cover the first option: logging the events to a local file. On several Kubernetes-as-a-service offerings, this option might already be enabled for you, as we have seen in this tutorial:
Ingesting Kubernetes Logs from Amazon Elastic Kubernetes Service (Amazon EKS)
But in case it's not enabled, or you are using your own cluster, you can enable local file logging by adding a few flags to the kube-apiserver config file, as follows (note: you will need to SSH into every master node and apply the following changes):
vim /etc/kubernetes/manifests/kube-apiserver.yaml
1) Add the following entries under spec -> containers -> command -> kube-apiserver
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes-apiserver.log
- --audit-log-format=json
As the kube-apiserver runs as a container, we need to share the audit policy file (audit-policy.yaml) and the log path (kubernetes-apiserver.log) from the host into the container, which can be done with the next two steps.
2) Add the following entries under volumeMounts:
- mountPath: /etc/kubernetes/
  name: policies
  readOnly: false
- mountPath: /var/log/
  name: logs
  readOnly: false
3) Add the following entries under volumes:
- hostPath:
    path: /etc/kubernetes/
    type: DirectoryOrCreate
  name: policies
- hostPath:
    path: /var/log/
    type: DirectoryOrCreate
  name: logs
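Putting the three steps together, the relevant parts of kube-apiserver.yaml end up looking roughly like this (a trimmed excerpt; your file will contain many other existing flags and fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    # ...existing flags...
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes-apiserver.log
    - --audit-log-format=json
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: policies
      readOnly: false
    - mountPath: /var/log/
      name: logs
      readOnly: false
  volumes:
  - hostPath:
      path: /etc/kubernetes/
      type: DirectoryOrCreate
    name: policies
  - hostPath:
      path: /var/log/
      type: DirectoryOrCreate
    name: logs
```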
After saving and exiting the config file, the kube-apiserver will restart automatically, but in case of any issues you can restart it by restarting the kubelet:
systemctl restart kubelet
To check whether Kubernetes has started saving the logs to the specified file, check if the file has new content:
head /var/log/kubernetes-apiserver.log
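Each line in that file is one JSON audit event. As a quick sanity check you can extract fields such as the verb and the requesting user with jq (the event below is a hand-made illustrative sample, and jq must be installed):

```shell
# In practice you would pipe from the real log instead, e.g.:
#   head -n 1 /var/log/kubernetes-apiserver.log | jq .
sample='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","verb":"create","user":{"username":"kubernetes-admin"}}'

# Print "<verb> by <username>" for the event
echo "$sample" | jq -r '"\(.verb) by \(.user.username)"'
```

This prints `create by kubernetes-admin` for the sample event; the same filter works on real audit lines and is a handy way to confirm the log format before wiring up syslog forwarding to QRadar.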