Log monitoring
This document describes how to construct a log monitoring system with Hatohol.
Zabbix's log monitoring feature has the following problems:
- High CPU usage because all logs collected by Zabbix agent are evaluated on Zabbix server.
- High database load for processing many logs.
- Zabbix agent sends logs to Zabbix server over an insecure connection.
Hatohol uses Fluentd to build a better log monitoring system.
Project Hatohol develops a Fluentd plugin, the Hatohol output plugin, which pushes received messages to an AMQP broker. Hatohol [HAPI JSON](HAPI JSON) pulls messages from the AMQP broker and registers each message as an event. Hatohol HAPI JSON uses RabbitMQ as the AMQP broker.
Zabbix uses Zabbix agent to collect logs. Hatohol uses Fluentd plugins to collect logs. For example, Hatohol uses tail input plugin for collecting Apache logs.
Zabbix agent sends logs over an insecure connection. If you want a secure connection, you need to create a tunnel with stunnel by hand. Hatohol can send logs over a secure connection with the secure_forward input/output plugin.
Zabbix monitors logs on Zabbix server. Hatohol monitors logs on Fluentd with some Fluentd plugins. For example, Hatohol uses the grep output plugin to select target logs by regular expression.
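As an illustrative sketch only (the log path, tag, and parameter values below are assumptions, not taken from a shipped configuration), selecting Apache error lines with the tail input plugin and the grep output plugin could look like this:

```
<source>
  type tail
  path /var/log/httpd/error_log
  pos_file /var/log/td-agent/httpd-error.pos
  tag apache.error
  format none
</source>

# Keep only records whose "message" field contains "error",
# and re-tag them with the "hatohol" prefix for later routing.
<match apache.error>
  type grep
  input_key message
  regexp error
  add_tag_prefix hatohol
</match>
```

Because the selection runs inside Fluentd on the collecting node, only matching records travel onward, which is the load advantage over Zabbix described above.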
You can't choose the log monitoring system architecture with Zabbix; logs are always selected on Zabbix server.
Here is the log monitoring system architecture with Zabbix:
+------------+ Monitoring
|Zabbix agent| target
+------------+ node
collects and sends logs
|
| non secure connection
|
\/
+-------------+
|Zabbix server|
+-------------+
receives logs and selects target logs by keywords
With Hatohol, you can choose the log monitoring system architecture from several patterns, because Fluentd supports forwarding logs.
Here is the simplest log monitoring architecture with Hatohol:
+-------------+ +-------------+ +-------------+ Monitoring
|Fluentd | |Fluentd | |Fluentd | target
|with plugins | |with plugins | |with plugins | nodes
+-------------+ +-------------+ +-------------+
collects, collects, collects,
selects and selects and selects and
pushes logs pushes logs pushes logs
| | |
| secure connection | |
| | |
\/ \/ \/
+--------+
|RabbitMQ|
+--------+
|
| secure connection
|
\/
+---------+
|Hatohol |
|HAPI JSON|
+---------+
pulls logs
Here is a log monitoring system architecture with Hatohol for systems in which each monitoring target node has idle CPU capacity:
+-------------+ +-------------+ +-------------+ Monitoring
|Fluentd | |Fluentd | |Fluentd | target
|with plugins | |with plugins | |with plugins | nodes
+-------------+ +-------------+ +-------------+
collects, collects, collects,
selects and selects and selects and
forwards logs forwards logs forwards logs
| | |
| secure connection | |
| | |
\/ \/ \/
+--------------------+
|Fluentd | AMQP producer node
|with Hatohol plugin |
+--------------------+
receives all selected logs and pushes them
|
| secure connection
|
\/
+--------+
|RabbitMQ|
+--------+
|
| secure connection
|
\/
+---------+
|Hatohol |
|HAPI JSON|
+---------+
pulls logs
Here is a log monitoring system architecture with Hatohol for systems in which monitoring target nodes don't have idle CPU capacity:
+-------------+ +-------------+ +-------------+ Monitoring
|Fluentd | |Fluentd | |Fluentd | target
|with plugins | |with plugins | |with plugins | nodes
+-------------+ +-------------+ +-------------+
collects and collects and collects and
forwards logs forwards logs forwards logs
| | |
| secure connection | |
| | |
\/ \/ \/
+-------------+ +-------------+
|Fluentd | |Fluentd | Log select nodes
|with plugins | |with plugins |
+-------------+ +-------------+
selects and selects and
forwards logs forwards logs
| |
| secure connection |
| |
\/ \/
+--------------------+
|Fluentd | AMQP producer node
|with Hatohol plugin |
+--------------------+
receives all selected logs and pushes them
|
| secure connection
|
\/
+--------+
|RabbitMQ|
+--------+
|
| secure connection
|
\/
+---------+
|Hatohol |
|HAPI JSON|
+---------+
pulls logs
This section describes the following architectures:
- Monitoring target nodes only architecture
- Monitoring target nodes + AMQP producer node architecture
- Monitoring target nodes + log select nodes + AMQP producer node architecture
They are architectures described in the previous section.
In this section, we assume that all nodes use CentOS 6.
In the monitoring-target-nodes-only architecture, you need to set up the following node types:
- Certificate authority (CA)
- AMQP broker (RabbitMQ)
- Hatohol HAPI JSON
- Monitoring target nodes
The following subsections describe how to set up each node type.
Set up TLS for security. Hatohol provides convenience scripts to set up TLS-enabled RabbitMQ, Hatohol HAPI JSON and Fluentd. You can generate the TLS-related files with them.
You need the following hosts:
- ca.example.com: Hosts the certificate authority (CA) of your system. It must be the most secure host in your system.
- rabbitmq.example.com: Runs RabbitMQ. It uses a server certificate.
- hatohol.example.com: Runs Hatohol. It uses a client certificate.
- node1.example.com: Runs a service and the Fluentd instance that collects logs for the service. It uses a client certificate.
- node2.example.com: Ditto.
First, you need to install Hatohol:
[ca.example.com]% sudo -H yum install hatohol
Now you can use hatohol-ca-initialize, which creates a new CA for you. Initialize the CA on ca.example.com:
[ca.example.com]% sudo -H hatohol-ca-initialize
Generating a 2048 bit RSA private key
.........................................+++
...............................+++
writing new private key to '/var/lib/hatohol/CA/private/cakey.pem'
-----
You will need to log in to the CA host to sign the server certificate and client certificates in the later instructions.
You need to install RabbitMQ as an AMQP broker. You can install RabbitMQ from EPEL.
First, enable EPEL:
% sudo -H rpm -Uvh http://ftp.jaist.ac.jp/pub/Linux/Fedora/epel/6/i386/epel-release-6-8.noarch.rpm
Install RabbitMQ:
% sudo -H yum install -y rabbitmq-server
% sudo -H chkconfig rabbitmq-server on
% sudo -H service rabbitmq-server start
Set up authentication information for security. You need to add the following two users:
- hatohol
  - Purpose: To connect to RabbitMQ from Hatohol HAPI JSON.
  - Password: hatohol-password
  - URI: amqps://hatohol:hatohol-password@rabbitmq-node/hatohol
- fluentd
  - Purpose: To connect to RabbitMQ from Fluentd.
  - Password: fluentd-password
  - URI: amqps://fluentd:fluentd-password@rabbitmq-node/hatohol
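An amqps:// URI packs the user, password, host, and vhost into one string; the path component (/hatohol) names the vhost. A quick sanity check of such a URI with Python's standard library (illustrative only; "rabbitmq-node" is a placeholder host name):

```python
from urllib.parse import urlparse

# URI for the "fluentd" user defined above.
uri = "amqps://fluentd:fluentd-password@rabbitmq-node/hatohol"
parts = urlparse(uri)

print(parts.scheme)    # amqps
print(parts.username)  # fluentd
print(parts.hostname)  # rabbitmq-node
print(parts.path)      # /hatohol  -> vhost "hatohol"
```

If a password contains characters such as "@" or "/", it must be percent-encoded in the URI.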
Here are command lines to set up the above authentication information:
% sudo -u rabbitmq -H /usr/sbin/rabbitmqctl add_user hatohol hatohol-password
% sudo -u rabbitmq -H /usr/sbin/rabbitmqctl add_user fluentd fluentd-password
% sudo -u rabbitmq -H /usr/sbin/rabbitmqctl add_vhost hatohol
% sudo -u rabbitmq -H /usr/sbin/rabbitmqctl set_permissions -p hatohol hatohol '^gate\..*' '' '.*'
% sudo -u rabbitmq -H /usr/sbin/rabbitmqctl set_permissions -p hatohol fluentd '^gate\..*' '.*' ''
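The three patterns given to set_permissions are the configure, write, and read permissions, each a regular expression matched against resource (queue and exchange) names. The ^gate\..* pattern therefore covers queue names such as gate.1, the queue name used in the Fluentd configuration later. A quick illustration in Python:

```python
import re

# Same pattern as in the set_permissions commands above.
pattern = re.compile(r"^gate\..*")

print(bool(pattern.match("gate.1")))       # True
print(bool(pattern.match("gate.2")))       # True
print(bool(pattern.match("other.queue")))  # False
```

An empty pattern ('') grants no permission at all, which is why the hatohol user cannot write and the fluentd user cannot read in the commands above.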
Create a server certificate for RabbitMQ on rabbitmq.example.com. Don't forget to specify the correct host name:
[rabbitmq.example.com]% cd /etc/rabbitmq
[rabbitmq.example.com]% sudo -H curl -O https://raw.githubusercontent.com/project-hatohol/hatohol/master/server/data/tls/hatohol-server-certificate-create
[rabbitmq.example.com]% sudo -H chmod +x hatohol-server-certificate-create
[rabbitmq.example.com]% sudo -H ./hatohol-server-certificate-create --host-name rabbitmq.example.com
Generating RSA private key, 2048 bit long modulus
.+++
..............+++
e is 65537 (0x10001)
The next action:
Copy <./req.pem> to CA host and run the following command:
% hatohol-ca-sign-server-certificate req.pem
Copy req.pem to ca.example.com and sign the certificate request with your CA:
[rabbitmq.example.com]% scp req.pem ca.example.com:
[ca.example.com]% sudo -H hatohol-ca-sign-server-certificate req.pem
Using configuration from ./openssl.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
commonName :ASN.1 12:'rabbitmq.example.com'
organizationName :ASN.1 12:'server'
Certificate is to be certified until Aug 23 09:22:09 2024 GMT (3650 days)
Write out database with 1 new entries
Data Base Updated
The next action:
Copy <server-cert.pem> to server host and
use it in your application such as RabbitMQ.
Copy /var/lib/hatohol/CA/ca-cert.pem and server-cert.pem to rabbitmq.example.com and use them in RabbitMQ:
[ca.example.com]% scp /var/lib/hatohol/CA/ca-cert.pem server-cert.pem rabbitmq.example.com:
[rabbitmq.example.com]% sudo -H mv ca-cert.pem server-cert.pem /etc/rabbitmq/
[rabbitmq.example.com]% sudo -H chown -R root:root /etc/rabbitmq/
Edit /etc/rabbitmq/rabbitmq.config:
[
{rabbit, [
{ssl_listeners, [5671]},
{ssl_options, [
{cacertfile, "/etc/rabbitmq/ca-cert.pem"},
{certfile, "/etc/rabbitmq/server-cert.pem"},
{keyfile, "/etc/rabbitmq/key.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, false}
]}
]}
].
Don't forget the last "." at the end of the file!
Restart RabbitMQ:
% sudo -H service rabbitmq-server restart
Now, you can access RabbitMQ with secure connection.
Install MySQL:
% sudo -H yum install -y mysql-server
% sudo -H chkconfig mysqld on
% sudo -H service mysqld start
Install Hatohol:
% sudo -H yum install -y hatohol
Set up MySQL for Hatohol:
% mysql -u root < /usr/share/hatohol/sql/create-db.sql
Start Hatohol:
% sudo -H chkconfig hatohol on
% sudo -H service hatohol start
Create a client certificate for Hatohol on hatohol.example.com. Don't forget to specify the correct host name:
[hatohol.example.com]% sudo mkdir -p /etc/hatohol
[hatohol.example.com]% cd /etc/hatohol
[hatohol.example.com]% sudo -H hatohol-client-certificate-create --host-name hatohol.example.com
Generating RSA private key, 2048 bit long modulus
................+++
......................................+++
e is 65537 (0x10001)
The next action:
Copy <./req.pem> to CA host and run the following command:
% hatohol-ca-sign-client-certificate req.pem
Copy req.pem to ca.example.com and sign the certificate request with your CA:
[hatohol.example.com]% scp req.pem ca.example.com:
[ca.example.com]% sudo -H hatohol-ca-sign-client-certificate req.pem
Using configuration from /var/hatohol/openssl.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
commonName :ASN.1 12:'hatohol.example.com'
organizationName :ASN.1 12:'client'
Certificate is to be certified until Sep 7 05:26:57 2024 GMT (3650 days)
Write out database with 1 new entries
Data Base Updated
The next action:
Copy </var/lib/hatohol/CA/ca-cert.pem> and <client-cert.pem> to client host and
use it in your application such as Hatohol and Fluentd.
Copy /var/lib/hatohol/CA/ca-cert.pem and client-cert.pem to hatohol.example.com and use them in Hatohol:
[ca.example.com]% scp /var/lib/hatohol/CA/ca-cert.pem client-cert.pem hatohol.example.com:
[hatohol.example.com]% sudo -H mv ca-cert.pem client-cert.pem /etc/hatohol/
[hatohol.example.com]% sudo -H chown -R root:root /etc/hatohol/
TODO: Register HAPI JSON entry with TLS configuration on Web UI
% sudo mkdir -p /etc/hatohol-hapi-json
% cd /etc/hatohol-hapi-json
% sudo /usr/share/hatohol/ssl/create-certificate.sh
Here are the important files created by the convenience script:
- CA/cacert.pem: CA certificate. All nodes (RabbitMQ, Hatohol HAPI JSON and Fluentd) should have it. It is a public file.
- server/key.pem: Server key. Only RabbitMQ should have it. It is a secret file. You should set 600 permission.
- server/cert.pem: Server certificate. Only RabbitMQ should have it. It is a public file.
- client/key.pem: Client key. Hatohol HAPI JSON and Fluentd should have it. It is a secret file. You should set 600 permission.
- client/cert.pem: Client certificate. Hatohol HAPI JSON and Fluentd should have it. It is a public file.
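For example, restricting a key file to 600 permission and checking the result could look like this (the file name here is a stand-in for server/key.pem or client/key.pem):

```shell
# Create a stand-in key file and restrict it to owner read/write only.
touch key.pem
chmod 600 key.pem
stat -c %a key.pem   # prints: 600
```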
Copy the following files to the node that runs the RabbitMQ server:
CA/cacert.pem
server/key.pem
server/cert.pem
Here are commands for copying the above files:
[ca-node]% cd /etc
[ca-node]% sudo tar cfz hatohol-hapi-json.tar.gz \
hatohol-hapi-json/CA/cacert.pem \
hatohol-hapi-json/server/key.pem \
hatohol-hapi-json/server/cert.pem
[ca-node]% scp hatohol-hapi-json.tar.gz rabbitmq-node:
[ca-node]% sudo rm hatohol-hapi-json.tar.gz
[rabbitmq-node]% sudo tar xfz hatohol-hapi-json.tar.gz -C /etc
[rabbitmq-node]% sudo chown rabbitmq /etc/hatohol-hapi-json/server/key.pem
Create /etc/rabbitmq/rabbitmq.config with the following content:
[
{rabbit, [
{ssl_listeners, [5671]},
{ssl_options, [
{cacertfile, "/etc/hatohol-hapi-json/CA/cacert.pem"},
{certfile, "/etc/hatohol-hapi-json/server/cert.pem"},
{keyfile, "/etc/hatohol-hapi-json/server/key.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, false}
]}
]}
].
Restart RabbitMQ:
% sudo service rabbitmq-server restart
TODO
- URI for Hatohol HAPI JSON: amqps://hatohol:[email protected]/hatohol
TODO
- URI for Fluentd: amqps://fluentd:[email protected]/hatohol
You need to set up Fluentd on all monitoring target nodes.
The Fluentd project recommends installing ntpd so that logs have valid timestamps.
See also: Before Installing Fluentd | Fluentd
Install and run ntpd:
% sudo yum install -y ntp
% sudo chkconfig ntpd on
% sudo service ntpd start
Install and run Fluentd:
% curl -L http://toolbelt.treasuredata.com/sh/install-redhat.sh | sh
% sudo chkconfig td-agent on
% sudo service td-agent start
Note: td-agent is a Fluentd distribution provided by Treasure Data, Inc. It ships with an init script, so it is suitable for server use.
Configuration:
<source>
type forward
</source>
<match hatohol.**>
type hatohol
buffer_path /var/spool/fluentd/buffer
flush_interval 1
url "amqps://fluentd:fluentd-password@rabbitmq-node/hatohol"
tls_cert "/etc/hatohol-hapi-json/client/cert.pem"
tls_key "/etc/hatohol-hapi-json/client/key.pem"
tls_ca_certificates ["/etc/hatohol-hapi-json/CA/cacert.pem"]
queue_name "gate.1"
</match>
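The configuration above accepts records with the forward input plugin and pushes everything tagged hatohol.** to RabbitMQ. On a node that only collects and forwards logs (as in the multi-node architectures shown earlier), the sending side could be sketched like this; the host name, log path, and tag below are assumptions for illustration:

```
<source>
  type tail
  path /var/log/httpd/access_log
  pos_file /var/log/td-agent/httpd-access.pos
  tag hatohol.apache.access
  format apache2
</source>

# Forward everything tagged hatohol.** to the node running
# the Hatohol output plugin (placeholder host name).
<match hatohol.**>
  type forward
  <server>
    host producer.example.com
    port 24224
  </server>
</match>
```

For a secure connection between nodes, the secure_forward input/output plugin mentioned earlier can be used in place of plain forward.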