Structure
The elk-tls-docker project contains several subfolders for each main service. This enables you to modify any of the containers using the provided Dockerfile and change any additional configuration objects you choose.
There are several folders in the root of this repository; I will explain each one individually, along with its configuration files and options.
The initial directory structure will look like the following:
📦elk-tls-docker
┣ 📂bin
┃ ┣ 📜modify_index_mappings.py
┃ ┣ 📜send_data_to_filebeat.py
┃ ┗ 📜send_document_to_elasticsearch.py
┣ 📂elastic-agent
┃ ┣ 📜install.py
┃ ┗ 📜Dockerfile
┣ 📂elasticsearch
┃ ┣ 📂config
┃ ┃ ┗ 📜elasticsearch.yml
┃ ┗ 📜Dockerfile
┣ 📂filebeat
┃ ┣ 📂config
┃ ┃ ┗ 📜filebeat.yml
┃ ┗ 📜Dockerfile
┣ 📂kibana
┃ ┣ 📂config
┃ ┃ ┗ 📜kibana.yml
┃ ┗ 📜Dockerfile
┣ 📂logstash
┃ ┣ 📂config
┃ ┃ ┣ 📜logstash.yml
┃ ┃ ┗ 📜pipelines.yml
┃ ┣ 📂pipeline
┃ ┃ ┣ 📜logstash.conf
┃ ┃ ┗ 📜metricbeat.conf
┃ ┗ 📜Dockerfile
┣ 📂packetbeat
┃ ┣ 📂config
┃ ┃ ┗ 📜packetbeat.yml
┃ ┗ 📜Dockerfile
┣ 📂secrets
┣ 📂setup
┃ ┣ 📜instances.yml
┃ ┗ 📜setup.sh
┣ 📜.gitignore
┣ 📜CERTIFICATES.md
┣ 📜COMMON_ISSUES.md
┣ 📜LICENSE.md
┣ 📜README.md
┣ 📜docker-compose.setup.yml
┗ 📜docker-compose.yml
Within each folder named after an Elastic service there is a provided Dockerfile. The docker-compose.yml file points directly at this Dockerfile when building that service. By default the Dockerfile contains only a reference to the specific Elasticsearch version you specified in the .env file you created.
Please see the Environment Variables page for details about .env settings.
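As a rough sketch of how that version reaches each Dockerfile, the compose file typically passes it in as a build argument; the service and variable names below (such as ELK_VERSION) are illustrative, so check your own docker-compose.yml and .env for the names actually used:

# Illustrative docker-compose.yml excerpt -- names are assumptions
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: ${ELK_VERSION}   # value supplied by the .env file

The Dockerfile can then declare ARG ELK_VERSION and use it in its FROM line to pull the matching Elastic image.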
The elastic-agent folder contains the following:
📦elastic-agent
┣ 📜Dockerfile
┗ 📜install.py
The Dockerfile creates an Ubuntu 20.04 container and copies the install.py file into it. When the container runs, it executes the install.py script, which handles setting up and installing the Elastic Agent.
The elasticsearch folder contains the following:
┣ 📂elasticsearch
┃ ┣ 📂config
┃ ┃ ┗ 📜elasticsearch.yml
┃ ┗ 📜Dockerfile
Within the config folder there is one configuration file named elasticsearch.yml. This file is where all of your Elasticsearch configuration properties are set within the container itself.
It contains the settings for your cluster, licensing, security, and anything else defined by Elasticsearch (e.g. x-pack). You can find additional settings by digging through the Elasticsearch documentation.
The main settings enabled here are the xpack.security settings: enabling security and configuring the transport and http certificates.
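As a hedged sketch (the certificate paths below are assumptions, not necessarily the exact values in this repository's elasticsearch.yml), those security settings typically look like this:

# Illustrative excerpt -- adjust paths to match the certificates generated into the secrets folder
xpack.security.enabled: true

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/elasticsearch/elasticsearch.key
xpack.security.http.ssl.certificate: certs/elasticsearch/elasticsearch.crt
xpack.security.http.ssl.certificate_authorities: certs/ca/ca.crt

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/elasticsearch/elasticsearch.key
xpack.security.transport.ssl.certificate: certs/elasticsearch/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca/ca.crt
xpack.security.transport.ssl.verification_mode: certificate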
The filebeat folder contains the following:
┣ 📂filebeat
┃ ┣ 📂config
┃ ┃ ┗ 📜filebeat.yml
┃ ┗ 📜Dockerfile
Within the config folder there is one configuration file named filebeat.yml. This file is where all of your Filebeat configuration properties are set within the container itself.
With filebeat we need to define one or more inputs and a single output. In the provided default configuration, filebeat is configured with a single input of type tcp.
filebeat.inputs:
- type: tcp
  enabled: true
  max_message_size: 10MiB
  host: "filebeat:9000"
This input is enabled, has a maximum message size, and specifies the host and port. If you are not very familiar with Docker and docker-compose this may seem strange: inside a container, any setting that references the name of a service (as defined in your container or docker-compose file) is automatically resolved to the actual IP of that service. So here we are telling the filebeat tcp input to receive uploads of files/data on port 9000.
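For completeness, a filebeat.yml also needs an output section. The block below is only an illustration of what that could look like; this repository's filebeat.yml may ship with a different output, and the CA path is an assumption:

# Illustrative output -- check the repository's filebeat.yml for the real one
output.logstash:
  hosts: ["logstash:5044"]
  ssl.certificate_authorities: ["/usr/share/filebeat/ca.crt"]  # assumed CA path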
The kibana folder contains the following:
┣ 📂kibana
┃ ┣ 📂config
┃ ┃ ┗ 📜kibana.yml
┃ ┗ 📜Dockerfile
Within the kibana/config folder is another configuration file called kibana.yml. This file, just like all the other files, contains configuration options for the kibana service.
In this file we configure Kibana to point to our elasticsearch host over https and specify the port. Additionally, we tell kibana to use a specific username and password to authenticate to elasticsearch. We also enable ssl and give Kibana a certificate and key.
Finally, in order to use certain features in Kibana, such as the Security services (e.g. Detections, Signals, Cases, etc.), you must set an xpack.encryptedSavedObjects.encryptionKey value. If you provided this in the .env file then you have nothing to worry about, but if you run into any issues this may be one of the reasons.
NOTE: Currently we are only using certificates for encrypting communications and not for authentication. This will be added in the future, time permitting.
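Put together, a minimal kibana.yml along those lines might look like the sketch below; the paths and values are placeholders, with the real values coming from your .env file and the generated certificates:

# Illustrative kibana.yml excerpt -- paths and variable names are placeholders
elasticsearch.hosts: ["https://elasticsearch:9200"]
elasticsearch.username: "${ELASTIC_USERNAME}"
elasticsearch.password: "${ELASTIC_PASSWORD}"
elasticsearch.ssl.certificateAuthorities: ["/usr/share/kibana/config/ca.crt"]

server.ssl.enabled: true
server.ssl.certificate: /usr/share/kibana/config/kibana.crt
server.ssl.key: /usr/share/kibana/config/kibana.key

xpack.encryptedSavedObjects.encryptionKey: "a-string-of-at-least-32-characters"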
The logstash folder contains the following:
📦logstash
┣ 📂config
┃ ┣ 📜logstash.yml
┃ ┗ 📜pipelines.yml
┣ 📂pipeline
┃ ┣ 📜logstash.conf
┃ ┗ 📜metricbeat.conf
┗ 📜Dockerfile
The logstash service has additional settings that the previous services did not. There are four different configuration files:
- logstash.yml
- pipelines.yml
- logstash.conf
- metricbeat.conf
The logstash.yml file contains the traditional settings we have seen in the other services. We have certificates, and we also specify the path of our logstash.conf file, which defines our logstash pipelines.
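The pipelines.yml file maps pipeline ids to the .conf files in the pipeline folder. A hedged sketch of what that mapping can look like (the ids and paths here are assumptions, not necessarily this repository's exact values):

# Illustrative pipelines.yml -- ids and paths are assumptions
- pipeline.id: logstash
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
- pipeline.id: metricbeat
  path.config: "/usr/share/logstash/pipeline/metricbeat.conf"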
The logstash.conf file is a different type of configuration file than you may be used to. There are three main "sections" in this configuration file:
- input
- filter
- output
In the provided configuration file I have added two different inputs. One is for beats (e.g. filebeat, winlogbeat, etc.) on port 5044, and the other is a tcp input running on port 5045 that uses the json codec, so it expects json events.
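To make those two inputs concrete, here is a sketch of what they could look like in logstash.conf; the port numbers follow the description above, while the TLS settings and certificate paths are illustrative assumptions:

input {
  beats {
    port => 5044
    ssl => true                                      # illustrative TLS settings
    ssl_certificate => "${CONFIG_DIR}/logstash.crt"
    ssl_key => "${CONFIG_DIR}/logstash.pkcs8.key"    # the beats input expects a pkcs8 key
  }
  tcp {
    port => 5045
    codec => json                                    # parse each event as json
  }
}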
I have not defined any filters, but you can add some if needed. Filters are used to transform the inputs into whatever format you need or to otherwise manipulate the data.
The output that is defined goes to elasticsearch. You can add additional or different outputs, but for this project I wanted everything to go to elasticsearch. In the configuration values below we set the host to our elasticsearch container over https with its port, set how we will authenticate, force the use of ssl, and provide a certificate authority certificate.
output {
  elasticsearch {
    hosts => [ "https://elasticsearch:9200" ]
    user => "${ELASTIC_USERNAME}"
    password => "${ELASTIC_PASSWORD}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "${CONFIG_DIR}/ca.crt"
  }
}
The metricbeat folder contains the following:
📦metricbeat
┣ 📂config
┃ ┗ 📜metricbeat.yml
┗ 📜Dockerfile
metricbeat collects metrics about each running container and passes them along to logstash, which then adds them to elasticsearch. Additionally, the metricbeat container installs all of the pre-built dashboards into Kibana.
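As a hedged sketch of how that can be wired up in metricbeat.yml (the module settings, hosts, and ports below are assumptions rather than the repository's exact values):

# Illustrative metricbeat.yml excerpt -- settings are assumptions
metricbeat.modules:
  - module: docker
    metricsets: ["container", "cpu", "memory", "network"]
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s

setup.dashboards.enabled: true        # load the pre-built dashboards
setup.kibana:
  host: "https://kibana:5601"

output.logstash:
  hosts: ["logstash:5044"]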
The packetbeat folder contains the following:
┣ 📂packetbeat
┃ ┣ 📂config
┃ ┃ ┗ 📜packetbeat.yml
┃ ┗ 📜Dockerfile
packetbeat is more complex because there are so many different configuration options, but I have tried to set the most common ones, or at least enough examples that you can understand how it works.
The configuration file defines our Kibana settings, but the most important parts are the packetbeat settings themselves.
These settings range from the interface device, protocols, and flows to many other options. Additionally, closer to the bottom of the file I have specified a single output: logstash on port 5044. At this time I believe you can only define one output for packetbeat, but I have commented out how you might send the data to elasticsearch instead.
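A rough illustration of those packetbeat.yml sections follows; the interface, protocol list, and ports are examples only, and the shipped configuration may differ:

# Illustrative packetbeat.yml excerpt -- values are examples only
packetbeat.interfaces.device: any

packetbeat.flows:
  timeout: 30s
  period: 10s

packetbeat.protocols:
  - type: dns
    ports: [53]
  - type: http
    ports: [80, 8080, 8000]
  - type: tls
    ports: [443]

output.logstash:
  hosts: ["logstash:5044"]

#output.elasticsearch:                 # alternative output, left commented out
#  hosts: ["https://elasticsearch:9200"]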
The secrets folder is empty by default, but after you run docker-compose.setup.yml it will contain:
- An elasticsearch.keystore file
- A self-signed Certificate Authority
- Certificates and keys for elasticsearch, kibana, and logstash
- The logstash key converted to a pkcs8 key file
The folder should look similar to this when done:
┃ ┣ 📂ca
┃ ┃ ┣ 📜ca.crt
┃ ┃ ┗ 📜ca.key
┃ ┣ 📂elasticsearch
┃ ┃ ┣ 📜elasticsearch.crt
┃ ┃ ┗ 📜elasticsearch.key
┃ ┣ 📂kibana
┃ ┃ ┣ 📜kibana.crt
┃ ┃ ┗ 📜kibana.key
┃ ┣ 📂logstash
┃ ┃ ┣ 📜logstash.crt
┃ ┃ ┣ 📜logstash.key
┃ ┃ ┗ 📜logstash.pkcs8.key
┃ ┣ 📜bundle.zip
┃ ┣ 📜ca.zip
┃ ┗ 📜elasticsearch.keystore
The .zip files are simply archives of the certificates and keys, kept as a backup or for whatever other use you need.
The bin folder contains scripts that help you configure and interact with the running containers. These are mostly used to add data and to set different configuration options, such as your index data mappings.
┣ 📂bin
┃ ┣ 📜modify_index_mappings.py
┃ ┣ 📜send_data_to_filebeat.py
┃ ┗ 📜send_document_to_elasticsearch.py
The setup folder contains two files related to setup and the generation of certificates: a shell script setup.sh and a configuration file instances.yml.
The instances.yml file looks like the following and is used when generating certificates for all main services. If you want to generate additional certificates for other defined services, you can add them to this file and they will be generated when you run docker-compose.setup.yml. Just follow the same patterns shown in this file:
instances:
  - name: elasticsearch
    dns:
      - elasticsearch.fqdn.tld
      - localhost
      - elasticsearch
      - 0.0.0.0
    ip:
      - 0.0.0.0
  - name: kibana
    dns:
      - kibana
      - localhost
    ip:
      - 0.0.0.0
  - name: logstash
    dns:
      - logstash
      - localhost
    ip:
      - 0.0.0.0
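For example, to generate a certificate for another service you would append one more entry following the same pattern (filebeat here is purely illustrative):

  - name: filebeat        # hypothetical additional service
    dns:
      - filebeat
      - localhost
    ip:
      - 0.0.0.0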