IBM Spectrum Scale object storage combines the benefits of IBM Spectrum Scale with OpenStack Swift to manage data as objects, which can be accessed over the network using RESTful HTTP-based APIs. For more information on IBM Spectrum Scale Object, refer to:
https://www.redbooks.ibm.com/redpapers/pdfs/redp5113.pdf
To obtain object metrics for metering and billing purposes, OpenStack provides Ceilometer, which collects metrics on a per-account basis. While that is one recommended approach, customers can also record Spectrum Scale object statistics by leveraging the proxy-server logs with an ELK stack (for log analysis) and Kibana (to create useful, customized usage reports).
OpenStack Swift has data-rich INFO-level logging, provided by the proxy-logging middleware, which can be used for cluster monitoring, utilization calculations, audit records, and more. The proxy-server logs contain a raw record of every external API request made to the proxy server. We can leverage these logs, extract the relevant GET/PUT/DELETE/HEAD request information with Elasticsearch, and use Kibana to create reports based on OpenStack account/tenant/project statistics.
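As an illustration only (the exact fields depend on the Swift release and the proxy-logging configuration), an INFO-level proxy-server access log entry is roughly of the following shape, carrying the client IP, timestamp, HTTP method, object path including the AUTH_<account> component, status code, bytes transferred, and transaction ID:
Apr 26 10:15:01 cesnode1 proxy-server: 192.168.10.21 192.168.10.21 26/Apr/2018/10/15/01 PUT /v1/AUTH_test/container1/object1 HTTP/1.0 201 ... 1024 ... tx9f0e8a12c34 ... 0.0123
These are the fields we will later extract and aggregate per account/tenant/project.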
Here are the configuration steps:
1. The object protocol must be enabled.
(For more details on installing and configuring the object protocol, refer to
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1ins_quickrefobjectstorage.htm)
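To confirm that the object (OBJ) service is enabled and running on the CES nodes, you can list the CES services, for example:
mmces service list -a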
2. Turn on INFO-level logging in proxy-server.conf with the command:
mmobj config change --ccrfile proxy-server.conf --section DEFAULT --property log_level --value INFO
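You can read the value back to verify the change, for example:
mmobj config list --ccrfile proxy-server.conf --section DEFAULT --property log_level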
Note: Please ensure that the proxy-logging middleware is in the proxy-server.conf pipeline, as below:
pipeline = healthcheck cache formpost tempurl authtoken swift3 s3token keystoneauth container-quotas account-quotas staticweb bulk slo dlo proxy-logging sofConstraints sofDirCr proxy-server
3. Set up an ELK server to monitor and meter the usage
Before diving into configuring the ELK server to monitor the proxy-server logs, let's quickly go through the basics of ELK.
a) Key components of the ELK stack
Elasticsearch, Logstash, and Kibana, when used together, are known as an ELK stack.
There are four key components of the ELK stack that we are using to achieve this task.
i. Elasticsearch
Elasticsearch is an open source, distributed search and analytics engine based on Apache Lucene. It stores data in the form of documents and adds a searchable reference to each document in the cluster's index. It is popular for running analytics on large volumes of log data.
ii. Logstash
Logstash is an open source tool that parses data and ingests the formatted data into Elasticsearch for further analytics.
iii. Kibana
Kibana is an open source web interface that can be used to search and visualize content indexed in an Elasticsearch cluster.
iv. Filebeat
Filebeat (one of the Beats components) is a lightweight shipper used to ship log files. Filebeat monitors log directories or specific log files, tails the files, and forwards the events to either Elasticsearch or Logstash.
b) ELK Component Installation
Installing the ELK stack is a straightforward process. You can download the packages based on your system architecture and operating system.
Here is the link to the ELK package download page:
https://www.elastic.co/downloads
For the given use case, you need to install the packages elasticsearch-6.2.4.rpm, kibana-6.2.4.rpm, and logstash-6.2.4.rpm on the ELK server, and install the filebeat-6.2.4-x86_64.rpm package on every Spectrum Scale CES node where the object service is running.
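For example, on RHEL-based systems the packages can be installed with rpm (note that Elasticsearch and Logstash require a Java 8 runtime):
# on the ELK server
rpm -ivh elasticsearch-6.2.4.rpm kibana-6.2.4.rpm logstash-6.2.4.rpm
# on every CES node running the object service
rpm -ivh filebeat-6.2.4-x86_64.rpm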
Start the Elasticsearch and Kibana services on the ELK server to test the installation.
service elasticsearch start
service kibana start
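As a quick check (assuming the default ports, 9200 for Elasticsearch and 5601 for Kibana), verify that both services respond:
curl -XGET 'http://localhost:9200/?pretty'     # should return the cluster name and version details
curl -I 'http://localhost:5601'                # Kibana should answer; the UI is at http://<ELK-server>:5601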
Here is the logical diagram of the setup:
Note: For test purposes you can use a single-node ELK setup. In a production environment, you should use a separate ELK cluster.
You can refer to the diagram below for a multi-cluster setup.
c) Pipelining Logs from the Scale Cluster to the ELK Server
As stated above, proxy-server logs are collected from the CES nodes in order to track all operations for metering and billing purposes. Once Filebeat is installed on the CES nodes, you just need to make a few changes to the Filebeat configuration.
Note: On Linux, you can find the configuration file for each component in its respective directory under /etc.
To change the Filebeat configuration, edit the /etc/filebeat/filebeat.yml file on every CES node:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/swift/proxy-server.log

output.logstash:
  hosts: ["<ELK-server-IP>:5044"]   # replace with your ELK server address; 5044 is the default Beats port
Note: Comment out the "Elasticsearch output" section, since we are using Logstash as the output for Filebeat. By default, the Elasticsearch output is enabled in filebeat.yml.
Start Filebeat service.
service filebeat start
The Logstash configuration needs to be done on the ELK server:
Create a configuration file in the conf.d directory under the Logstash directory (/etc/logstash/conf.d) so that the Logstash service listens for Filebeat requests.
Example: proxyserver.conf
input {
  beats {
    host => "0.0.0.0"   # listen address (assumed); restrict to a specific interface if needed
    port => 5044        # default Beats port; must match the Filebeat output.logstash setting
  }
}