Integration of IBM B2BI deployed on EKS with Amazon CloudWatch service
You can use the Amazon CloudWatch service to collect, analyze, and visualize IBM B2Bi application logs.
CloudWatch Logs lets you view all of your logs, regardless of their source, as a single, consistent flow of events ordered by time. You can query and sort them on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.
The CloudWatch agent can also collect cluster metrics, which can then be used to monitor various aspects of the cluster and the health of the application.
For more information on Amazon CloudWatch Logs, see:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
Note: This blog assumes that you have already deployed the IBM B2Bi application on an Amazon EKS cluster. We use Amazon CloudWatch as both the logging and the monitoring service.
Amazon CloudWatch as a logging service:
CloudWatch uses FluentD, deployed as a DaemonSet, to send logs to CloudWatch Logs.
To set up FluentD for CloudWatch, follow the steps below:
Step 1 : Create a Namespace for CloudWatch
Follow the procedure below to create a Kubernetes namespace called amazon-cloudwatch for CloudWatch. You can skip this step if you have already created this namespace.
Enter the following command to create a namespace for CloudWatch.
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/cloudwatch-namespace.yaml
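For reference, the manifest applied above is minimal; per the AWS samples repository it resembles the following (check the repository for the authoritative version):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: amazon-cloudwatch
  labels:
    name: amazon-cloudwatch
```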
Step 2 : Install FluentD
Start this process by downloading FluentD. When you finish these steps, the deployment creates the following resources on the cluster:
- A service account named fluentd in the amazon-cloudwatch namespace. This service account is used to run the FluentD DaemonSet.
- A cluster role named fluentd. This cluster role grants get, list, and watch permissions on pod logs to the fluentd service account.
- A ConfigMap named fluentd-config in the amazon-cloudwatch namespace. This ConfigMap contains the configuration used by FluentD.
To install FluentD:
a. Create a ConfigMap named cluster-info with the cluster name and the AWS Region that the logs will be sent to. Run the following command, replacing the placeholders with your cluster name and Region.
kubectl create configmap cluster-info \
--from-literal=cluster.name=cluster_name \
--from-literal=logs.region=region_name -n amazon-cloudwatch
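The FluentD DaemonSet downloaded in the next step consumes this ConfigMap through environment variables. The relevant container spec in the AWS sample fluentd.yaml looks roughly like the following excerpt (shown for context, not to be applied on its own), which is why the FluentD configuration can later reference #{ENV.fetch('REGION')} and #{ENV.fetch('CLUSTER_NAME')}:

```yaml
env:
  - name: REGION
    valueFrom:
      configMapKeyRef:
        name: cluster-info
        key: logs.region
  - name: CLUSTER_NAME
    valueFrom:
      configMapKeyRef:
        name: cluster-info
        key: cluster.name
```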
b. Download the FluentD DaemonSet by running the following command.
wget https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/fluentd/fluentd.yaml
Once downloaded, edit the fluentd.yaml file and replace the ConfigMap resource definition with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: amazon-cloudwatch
  labels:
    k8s-app: fluentd-cloudwatch
data:
  fluent.conf: |
    @include containers.conf
    <match fluent.**>
      @type null
    </match>
  containers.conf: |
    <source>
      @type tail
      @id in_tail_container_api_logs
      path "/var/log/pods/*b2bi-api*/**/*"
      pos_file /var/log/api-containers.pos
      tag "b2bi.console.api.*"
      read_from_head true
      <parse>
        @type none
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_container_ac_logs
      path "/var/log/pods/*b2bi-ac*/**/*"
      pos_file /var/log/ac-containers.pos
      tag "b2bi.console.ac.*"
      read_from_head true
      <parse>
        @type none
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_container_asi_logs
      path "/var/log/pods/*b2bi-asi*/**/*"
      pos_file /var/log/asi-containers.pos
      tag "b2bi.console.asi.*"
      read_from_head true
      <parse>
        @type none
      </parse>
    </source>
    <match b2bi.console.api.**>
      @type cloudwatch_logs
      @id out_cloudwatch_logs_containers_api
      region "#{ENV.fetch('REGION')}"
      log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application/api"
      log_stream_name "b2bi_api_console"
      remove_log_stream_name_key true
      auto_create_stream true
      <buffer>
        flush_interval 5
        chunk_limit_size 2m
        queued_chunks_limit_size 32
        retry_forever true
      </buffer>
    </match>
    <match b2bi.console.ac.**>
      @type cloudwatch_logs
      @id out_cloudwatch_logs_containers_ac
      region "#{ENV.fetch('REGION')}"
      log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application/ac"
      log_stream_name "b2bi_ac_console"
      remove_log_stream_name_key true
      auto_create_stream true
      <buffer>
        flush_interval 5
        chunk_limit_size 2m
        queued_chunks_limit_size 32
        retry_forever true
      </buffer>
    </match>
    <match b2bi.console.asi.**>
      @type cloudwatch_logs
      @id out_cloudwatch_logs_containers_asi
      region "#{ENV.fetch('REGION')}"
      log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application/asi"
      log_stream_name "b2bi_asi_console"
      remove_log_stream_name_key true
      auto_create_stream true
      <buffer>
        flush_interval 5
        chunk_limit_size 2m
        queued_chunks_limit_size 32
        retry_forever true
      </buffer>
    </match>
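To make the naming concrete, here is a small standalone sketch of how the match blocks above derive log group names from the pod's environment (the cluster name my-eks-cluster is illustrative):

```shell
# Simulate how log_group_name is expanded from the CLUSTER_NAME
# environment variable injected via the cluster-info ConfigMap.
CLUSTER_NAME="my-eks-cluster"   # illustrative value
for component in api ac asi; do
  echo "/aws/containerinsights/${CLUSTER_NAME}/application/${component}"
done
```

Each B2Bi component (api, ac, asi) therefore gets its own log group, which keeps the logs separable in the CloudWatch console.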
You can add further configuration to fetch additional cluster information or metrics. See the Amazon CloudWatch Logs documentation for more information.
c. Create the FluentD DaemonSet resources by running the following command.
kubectl apply -f /path/to/file/fluentd.yaml
d. Validate the deployment by running the following command. Each node should have one pod named fluentd-cloudwatch-*.
kubectl get pods -n amazon-cloudwatch
Step 3: Verify the FluentD Setup
To verify your FluentD setup, use the following steps.
- Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
- In the navigation pane, choose Logs. Make sure that you're in the Region where you deployed FluentD to your containers.
- In the list of log groups in the Region, you should see the following:
/aws/containerinsights/<cluster-name>/application/asi
/aws/containerinsights/<cluster-name>/application/ac
/aws/containerinsights/<cluster-name>/application/api
If you see these log groups, the FluentD setup is verified.
Troubleshooting
If you don't see these log groups and you are looking in the correct Region, check the logs of the FluentD DaemonSet pods for errors.
Run the following command and make sure that the status is Running.
kubectl get pods -n amazon-cloudwatch
In the results of the previous command, note the pod name that starts with fluentd-cloudwatch. Use this pod name in the following command.
kubectl logs pod_name -n amazon-cloudwatch
If the logs show errors related to IAM permissions, check the IAM role attached to the cluster nodes, and see Amazon EKS IAM Policies, Roles, and Permissions in the Amazon EKS User Guide:
https://docs.aws.amazon.com/eks/latest/userguide/security-iam.html#security_iam_access-manage
If the pod status is CreateContainerConfigError, get the exact error by running the following command.
kubectl describe pod pod_name -n amazon-cloudwatch
Once your log groups are available and log streams are visible, you can select any log group and export its log data to Amazon OpenSearch Service (formerly Amazon Elasticsearch Service), Amazon S3, or another supported destination.
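Exports can also be scripted. The sketch below computes a 24-hour export window; create-export-task is a real CloudWatch Logs CLI operation, but the log group and S3 bucket names here are placeholders, so the command itself is left commented out:

```shell
# Compute a 24-hour window in milliseconds since the epoch, as
# required by the --from/--to parameters of create-export-task.
TO_MS=$(( $(date +%s) * 1000 ))
FROM_MS=$(( TO_MS - 24 * 60 * 60 * 1000 ))
echo "export window: ${FROM_MS} -> ${TO_MS}"

# Hypothetical invocation (the S3 bucket must have a policy that
# allows CloudWatch Logs to write to it):
# aws logs create-export-task \
#   --log-group-name "/aws/containerinsights/my-eks-cluster/application/asi" \
#   --from "$FROM_MS" --to "$TO_MS" \
#   --destination "my-export-bucket"
```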
Amazon CloudWatch as a monitoring service:
To use CloudWatch as a monitoring service, set up the CloudWatch agent to collect cluster metrics. In the following steps, you configure the agent to collect metrics from your cluster.
Step 1: Create a Namespace for CloudWatch
Use the following procedure to create a Kubernetes namespace called amazon-cloudwatch for CloudWatch. You can skip this step if you have already created this namespace.
To create a namespace for CloudWatch, enter the following command:
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/cloudwatch-namespace.yaml
Step 2: Create a Service Account in the Cluster
Use the following step to create a service account for the CloudWatch agent, if you do not already have one.
To create a service account for the CloudWatch agent, enter the following command.
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/cwagent-kubernetes-monitoring/cwagent-serviceaccount.yaml
Step 3: Create a ConfigMap for the CloudWatch Agent
Use the following steps to create a ConfigMap for the CloudWatch agent.
- Download the ConfigMap YAML to your kubectl client host by running the following command:
curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/cwagent-kubernetes-monitoring/cwagent-configmap.yaml
- Edit the downloaded YAML file, as follows:
cluster_name – In the kubernetes section, replace {{cluster-name}} with the name of your cluster, removing the {{}} characters. Alternatively, if you're using an Amazon EKS cluster, you can delete the cluster_name field and value; the CloudWatch agent then detects the cluster name from the Amazon EC2 tags.
- Create the ConfigMap in the cluster by running the following command.
kubectl apply -f cwagent-configmap.yaml
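For reference, after the edit in the previous step the kubernetes section of cwagent-configmap.yaml should resemble the following excerpt (the cluster name is illustrative, and the surrounding structure follows the AWS sample template; verify against the file you downloaded):

```yaml
data:
  cwagentconfig.json: |
    {
      "logs": {
        "metrics_collected": {
          "kubernetes": {
            "cluster_name": "my-eks-cluster",
            "metrics_collection_interval": 60
          }
        },
        "force_flush_interval": 5
      }
    }
```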
Step 4: Deploy the CloudWatch Agent as a DaemonSet
To finish the installation of the CloudWatch agent and begin collecting container metrics, use the following steps.
a. If you do not want to use StatsD on the cluster, enter the following command.
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/cwagent-kubernetes-monitoring/cwagent-daemonset.yaml
If you do want to use StatsD, follow these steps:
1. Download the DaemonSet YAML to your kubectl client host by running the following command.
curl -O https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/cwagent-kubernetes-monitoring/cwagent-daemonset.yaml
2. Uncomment the port section in the cwagent-daemonset.yaml file as in the following:
ports:
  - containerPort: 8125
    hostPort: 8125
    protocol: UDP
3. Deploy the CloudWatch agent in your cluster by running the following command.
kubectl apply -f cwagent-daemonset.yaml
b. Validate that the agent is deployed by running the following command:
kubectl get pods -n amazon-cloudwatch
When complete, the CloudWatch agent creates a log group named /aws/containerinsights/Cluster_Name/performance and sends the performance log events to this log group. If you also set up the agent as a StatsD listener, the agent also listens for StatsD metrics on port 8125 with the IP address of the node where the application pod is scheduled.
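If you enabled the StatsD listener, applications can push custom metrics to it over UDP using the StatsD line protocol. A minimal sketch in bash (the metric name is made up, and NODE_IP is an assumed variable, e.g. populated from the Kubernetes downward API):

```shell
# StatsD line protocol: <name>:<value>|<type>  (c = counter, g = gauge, ms = timer)
METRIC="b2bi.asi.active_sessions:12|g"
echo "datagram: $METRIC"
# In bash, a single datagram can be sent to the agent's listener with the
# /dev/udp pseudo-device (bash-specific; NODE_IP assumed to be set):
# echo -n "$METRIC" > "/dev/udp/${NODE_IP}/8125"
```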
You can create a custom dashboard to monitor various cluster metrics by writing queries against the log group created above in the AWS console. For details on creating dashboards, see:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_dashboard.html
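As a starting point, a CloudWatch Logs Insights query against the performance log group might look like the following (the field names come from the Container Insights performance log schema as we understand it; confirm them against the fields shown in your own log events):

```
fields @timestamp, PodName, pod_cpu_utilization
| filter Type = "Pod"
| stats avg(pod_cpu_utilization) as avg_cpu by PodName
| sort avg_cpu desc
```

The result can be added to a dashboard as a widget to track per-pod CPU utilization over time.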
Troubleshooting
If the agent doesn't deploy correctly, try the following:
- Run the following command to get the list of pods.
kubectl get pods -n amazon-cloudwatch
- Run the following command and check the events at the bottom of the output.
kubectl describe pod pod-name -n amazon-cloudwatch
- Run the following command to check the logs.
kubectl logs pod-name -n amazon-cloudwatch
If the logs show errors related to IAM permissions, check the IAM role attached to the cluster nodes, and see Amazon EKS IAM Policies, Roles, and Permissions in the Amazon EKS User Guide:
https://docs.aws.amazon.com/eks/latest/userguide/security-iam.html#security_iam_access-manage