add hint of forward plugin not allowing tag assignments to forward input section #453

Open
wants to merge 54 commits into base: master
Commits
bb7b592
GitBook: [master] 135 pages modified
edsiper Oct 9, 2020
563a004
GitBook: [master] no pages and one asset modified
edsiper Oct 9, 2020
386f919
installation: update upgrade notes
edsiper Oct 9, 2020
e3b67c5
input: tail: document new db.locking option
edsiper Oct 9, 2020
1a2dcc2
filter: lua: document 'time_as_table'
edsiper Oct 9, 2020
faef444
administration: update buffering and storage
edsiper Oct 12, 2020
a408aeb
output: kafka: document 'queue_full_retries' option
edsiper Oct 12, 2020
b4724df
output: slack: document slack connector
edsiper Oct 12, 2020
080a3be
output: document 'loki' plugin
edsiper Oct 15, 2020
4b3ad1e
pipeline: filters: rewrite-tag: clarify emitted records restart pipel…
briend Nov 6, 2020
daf2b0d
Doc for setting up http proxy via `HTTP_PROXY` (#414)
erain Nov 6, 2020
29125bd
out_s3: fix reliability section, add use_put_object (#412)
PettitWesley Nov 6, 2020
8d43b50
network: Add information about `net.keepalive_max_recycle` (#408)
worr Nov 6, 2020
36bf8d2
output: loki: document new options line_format, http_user, http_passw…
edsiper Nov 22, 2020
73f432f
Merge branch 'master' of github.com:fluent/fluent-bit-docs
edsiper Nov 22, 2020
e09a871
configuration: allowed log level is warn and not warning (#342)
LionelCons Nov 24, 2020
7bdc708
administration: monitoring: add note on Windows support (#405)
fujimotos Nov 27, 2020
3a18c5c
pipeline: output: forward: document compress option (#417)
jim-minter Nov 27, 2020
0187092
input: tail: clarify multiline parser requirement (#340)
DavidWittman Nov 27, 2020
e754358
output: es: improve wording (#332)
angristan Nov 27, 2020
58bda48
Fix broken links about outputs section (#299)
nokute78 Nov 27, 2020
bf3f7ed
installation: windows: Document how to compile from source code (#352)
fujimotos Nov 27, 2020
c5077df
core: logs by default seem to go to standard error (#312)
davide-bolcioni Nov 27, 2020
b5f7e02
toc: add docker input plugin docs (#317)
championshuttler Nov 27, 2020
6e8d8bd
Add note on CRI for kubernetes configmap (#318)
egernst Nov 27, 2020
55c9ab7
input: forward: fix the docs for forward input plugin (#344)
championshuttler Nov 27, 2020
d891b30
Fix the link to the stream processor image (#345)
championshuttler Nov 27, 2020
d56f016
input: tail: Added missing documentation for exit_on_eof option (#347)
sxd Nov 27, 2020
0425d1b
output: es: Add Trace_Error (#418)
DavidWittman Nov 27, 2020
e26c421
input: tail: fix default value of Buffer_Max_Size (#407)
l2dy Nov 27, 2020
2bfe91d
Typo/spelling fixes (#400)
thezackm Nov 27, 2020
83a48c3
build: document FLB_TLS as on by default. (#403)
jkschulz Nov 27, 2020
a3c6b7b
input: tail: document read_from_header
edsiper Dec 3, 2020
746c4a3
Merge branch 'master' of github.com:fluent/fluent-bit-docs
edsiper Dec 3, 2020
e5bf64e
filter: geoip2: Document GeoIP2 filter plugin (#440)
fujimotos Dec 24, 2020
b015738
Fix the link to the Unit Sizes (#437)
mizukmb Dec 26, 2020
05f6268
input: add missing statsd
edsiper Jan 9, 2021
b0c7d7f
GitBook: [master] 25 pages and 19 assets modified
agup006 Jan 13, 2021
5e480ee
output: splunk: document new 'compress' option
edsiper Jan 18, 2021
3387edb
GitBook: [master] 140 pages and 22 assets modified
agup006 Jan 20, 2021
b0d42b9
out_cloudwatch_logs: add docs for ECR Public
zhonghui12 Jan 20, 2021
ae4df8f
out_kinesis_firehose: add docs for ECR Public
zhonghui12 Jan 20, 2021
b3c1f57
out_es: update for new elastic cloud options (#430)
ChrsMark Jan 21, 2021
5c395ba
out_websocket: add documentation (#337)
ginobiliwang Jan 22, 2021
54590ea
GitBook: [master] 141 pages and 7 assets modified
agup006 Jan 24, 2021
73e11c4
input: fix forward documentation
edsiper Jan 26, 2021
a48f287
Merge branch 'master' of github.com:fluent/fluent-bit-docs
edsiper Jan 26, 2021
cb6aabc
summary: wip
edsiper Jan 26, 2021
17c5eb1
GitBook: [master] 4 pages and 6 assets modified
edsiper Jan 26, 2021
17b7c79
GitBook: [master] one page modified
edsiper Jan 26, 2021
879b863
input: forward: fix
edsiper Jan 26, 2021
b5f7cd0
filter: kubernetes: add documentation for use_docker_id (#364)
charlesmcchan Feb 3, 2021
db8c6c9
Fix the loki output default label (#452)
akihiro Feb 3, 2021
05fa441
add hint of forward plugin not allowing tag assignments to forward in…
benjamin-hofmann-mw Feb 4, 2021
2 changes: 1 addition & 1 deletion .gitbook.yaml
@@ -7,7 +7,7 @@ redirects:
input/collectd: ./pipeline/inputs/
input/cpu: ./pipeline/inputs/cpu-metrics.md
input/disk: ./pipeline/inputs/disk-io-metrics.md
#inputs/docker: ./pipeline/inputs/
inputs/docker: ./pipeline/inputs/docker.md
input/dummy: ./pipeline/inputs/dummy.md
input/exec: ./pipeline/inputs/exec.md
input/forward: ./pipeline/inputs/forward.md
Binary file added .gitbook/assets/azureloganalytics_small.png
Binary file added .gitbook/assets/fluentbit_kube_logging (2).png
Binary file added .gitbook/assets/fluentbit_kube_logging (3).png
Binary file added .gitbook/assets/image (1).png
Binary file added .gitbook/assets/image (2).png
Binary file added .gitbook/assets/image (3) (2) (1).png
Binary file added .gitbook/assets/image (3) (2) (2).png
Binary file added .gitbook/assets/image (3) (2).png
Binary file added .gitbook/assets/image (4).png
Binary file added .gitbook/assets/image (5).png
Binary file added .gitbook/assets/image (6).png
Binary file added .gitbook/assets/image (7).png
Binary file added .gitbook/assets/image (8).png
Binary file added .gitbook/assets/image (9) (1).png
Binary file added .gitbook/assets/image (9).png
Binary file added .gitbook/assets/image.png
Binary file added .gitbook/assets/logo_documentation_1.6.png
7 changes: 7 additions & 0 deletions SUMMARY.md
@@ -75,6 +75,7 @@
* [Disk I/O Metrics](pipeline/inputs/disk-io-metrics.md)
* [Docker Events](pipeline/inputs/docker-events.md)
* [Dummy](pipeline/inputs/dummy.md)
* [Docker](pipeline/inputs/docker.md)
* [Exec](pipeline/inputs/exec.md)
* [Forward](pipeline/inputs/forward.md)
* [Head](pipeline/inputs/head.md)
@@ -87,6 +88,7 @@
* [Random](pipeline/inputs/random.md)
* [Serial Interface](pipeline/inputs/serial-interface.md)
* [Standard Input](pipeline/inputs/standard-input.md)
* [StatsD](pipeline/inputs/statsd.md)
* [Syslog](pipeline/inputs/syslog.md)
* [Systemd](pipeline/inputs/systemd.md)
* [Tail](pipeline/inputs/tail.md)
@@ -105,6 +107,7 @@
* [Grep](pipeline/filters/grep.md)
* [Kubernetes](pipeline/filters/kubernetes.md)
* [Lua](pipeline/filters/lua.md)
* [GeoIP2](pipeline/filters/geoip2.md)
* [Parser](pipeline/filters/parser.md)
* [Record Modifier](pipeline/filters/record-modifier.md)
* [Rewrite Tag](pipeline/filters/rewrite-tag.md)
@@ -132,16 +135,19 @@
* [Kafka](pipeline/outputs/kafka.md)
* [Kafka REST Proxy](pipeline/outputs/kafka-rest-proxy.md)
* [LogDNA](pipeline/outputs/logdna.md)
* [Loki](pipeline/outputs/loki.md)
* [NATS](pipeline/outputs/nats.md)
* [New Relic](pipeline/outputs/new-relic.md)
* [NULL](pipeline/outputs/null.md)
* [PostgreSQL](pipeline/outputs/postgresql.md)
* [Slack](pipeline/outputs/slack.md)
* [Stackdriver](pipeline/outputs/stackdriver.md)
* [Standard Output](pipeline/outputs/standard-output.md)
* [Splunk](pipeline/outputs/splunk.md)
* [Syslog](pipeline/outputs/syslog.md)
* [TCP & TLS](pipeline/outputs/tcp-and-tls.md)
* [Treasure Data](pipeline/outputs/treasure-data.md)
* [WebSocket](pipeline/outputs/websocket.md)

## Stream Processing

@@ -159,3 +165,4 @@
* [Ingest Records Manually](development/ingest-records-manually.md)
* [Golang Output Plugins](development/golang-output-plugins.md)
* [Developer guide for beginners on contributing to Fluent Bit](development/developer-guide.md)

87 changes: 81 additions & 6 deletions administration/buffering-and-storage.md
@@ -2,18 +2,61 @@

The end-goal of [Fluent Bit](https://fluentbit.io) is to collect, parse, filter and ship logs to a central place. In this workflow there are many phases, and one of the critical pieces is the ability to do _buffering_: a mechanism to place processed data into a temporary location until it is ready to be shipped.

By default when Fluent Bit process data, it uses Memory as a primary and temporal place to store the record logs, but there are certain scenarios where would be ideal to have a persistent buffering mechanism based in the filesystem to provide aggregation and data safety capabilities.
By default, when Fluent Bit processes data, it uses memory as the primary and temporary place to store the records, but there are certain scenarios where it would be ideal to have a persistent buffering mechanism based on the filesystem to provide aggregation and data safety capabilities.

Starting with Fluent Bit v1.0, we introduced a new _storage layer_ that can either work in memory or in the file system. Input plugins can be configured to use one or the other upon demand at start time.
Choosing the right configuration is critical, and the behavior of the service can be conditioned by the backpressure settings. Before jumping into the configuration properties, let's understand the relationship between _Chunks_, _Memory_, _Filesystem_ and _Backpressure_.

## Chunks, Memory, Filesystem and Backpressure

Understanding the chunks, buffering and backpressure concepts is critical for a proper configuration. Let's do a recap of the meaning of these concepts.

#### Chunks

When an input plugin \(source\) emits records, the engine groups the records together in a _Chunk_. A Chunk's size is usually around 2MB. By configuration, the engine decides where to place this Chunk; the default is that all chunks are created only in memory.
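To make the grouping concrete, here is a conceptual sketch in Python. This is illustrative only, not Fluent Bit's engine; the threshold constant mirrors the ~2MB figure mentioned above:

```python
CHUNK_LIMIT = 2 * 1024 * 1024  # ~2MB, as described above

def group_into_chunks(records, limit=CHUNK_LIMIT):
    """Append records to the active chunk until adding one more
    would exceed the size limit, then start a new chunk."""
    chunks, current, size = [], [], 0
    for rec in records:
        rec_size = len(rec.encode("utf-8"))
        if current and size + rec_size > limit:
            chunks.append(current)
            current, size = [], 0
        current.append(rec)
        size += rec_size
    if current:
        chunks.append(current)
    return chunks

# Example with a tiny limit so the effect is visible:
logs = ["a" * 600, "b" * 600, "c" * 600]
print([len(c) for c in group_into_chunks(logs, limit=1000)])  # [1, 1, 1]
```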

#### Buffering and Memory

As mentioned above, the Chunks generated by the engine are placed in memory but this is configurable.

If memory is the only mechanism set for the input plugin, it will simply store as much data as it can in memory. This is the fastest mechanism with the least system overhead, but if the service is not able to deliver the records fast enough because of a slow network or an unresponsive remote service, Fluent Bit's memory usage will increase as it accumulates more data than it can deliver.

In a high-load environment with backpressure, the risk of high memory usage is the chance of getting killed by the Kernel \(OOM Killer\). A workaround for this backpressure scenario is to limit the amount of memory in records that an input plugin can register; this configuration property is called `mem_buf_limit`. If a plugin has enqueued more than `mem_buf_limit`, it won't be able to ingest more until that data can be delivered or flushed properly. In this scenario the input plugin in question is paused.

The `mem_buf_limit` workaround is good for certain scenarios and environments; it helps to control the memory usage of the service, but at the cost that, if a file gets rotated while the plugin is paused, you might lose that data since the plugin won't be able to register new records. This can happen with any input source plugin. The goal of `mem_buf_limit` is memory control and survival of the service.

For full data safety guarantee, use filesystem buffering.

#### Filesystem buffering to the rescue

Enabling filesystem buffering helps with backpressure and overall memory control.

Behind the scenes, Memory and Filesystem buffering mechanisms are **not** mutually exclusive; indeed, when enabling filesystem buffering for your input plugin \(source\) you are getting the best of both worlds: performance and data safety.

When Filesystem buffering is enabled, the behavior of the engine is different: upon Chunk creation, it stores the content in memory but also maps a copy on disk \(through [mmap\(2\)](https://man7.org/linux/man-pages/man2/mmap.2.html)\). A Chunk that is active in memory and backed up on disk is said to be `up`, which means "the chunk content is up in memory".

How does this Filesystem buffering mechanism deal with high memory usage and backpressure? Fluent Bit controls the number of Chunks that are `up` in memory.

By default, the engine allows up to 128 Chunks `up` in memory in total \(considering all Chunks\); this value is controlled by the service property `storage.max_chunks_up`. The Chunks that are `up` are the active ones: those ready for delivery and those still receiving records. Any other Chunk is in a `down` state, which means it exists only in the filesystem and won't be `up` in memory unless it is ready to be delivered.

If the input plugin has enabled `mem_buf_limit` and `storage.type` as `filesystem`, when reaching the `mem_buf_limit` threshold, instead of the plugin being paused, all new data will go to Chunks that are `down` in the filesystem. This controls the memory usage of the service while also providing a guarantee that the service won't lose any data.
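As a sketch, combining both settings for a hypothetical tail input could look like this \(the path and the 50MB limit are illustrative values\):

```text
[SERVICE]
    flush        1
    storage.path /var/log/flb-storage/

[INPUT]
    name          tail
    path          /var/log/app/*.log
    mem_buf_limit 50MB
    storage.type  filesystem
```

Once `mem_buf_limit` is reached, new data lands in `down` chunks on disk rather than pausing the input.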

**Limiting Filesystem space for Chunks**

Fluent Bit implements the concept of logical queues: based on its Tag, a Chunk can be routed to multiple destinations, so internally we keep a reference from where a Chunk was created and where it needs to go.

It's common to find cases where, if we have multiple destinations for a Chunk, one destination might be slower than the others, and maybe only one of the destinations is generating backpressure. In this scenario, how do we limit the amount of filesystem Chunks that we are logically queueing?

Starting from Fluent Bit v1.6, we introduced the new configuration property for output plugins called `storage.total_limit_size`, which limits the size of the Chunks that exist in the filesystem for a certain logical output destination. If a destination reaches the `storage.total_limit_size` limit, the oldest Chunk from its queue for that logical output destination will be discarded.
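The eviction policy can be sketched as a size-bounded queue that drops the oldest chunk first \(a conceptual model only, not the actual implementation\):

```python
from collections import deque

class LogicalQueue:
    """Size-bounded queue of chunks; the oldest chunk is evicted
    when the total size exceeds the configured limit."""
    def __init__(self, total_limit_size):
        self.total_limit_size = total_limit_size
        self.chunks = deque()  # oldest chunk on the left

    def size(self):
        return sum(len(c) for c in self.chunks)

    def enqueue(self, chunk):
        self.chunks.append(chunk)
        # Evict oldest chunks until we are back under the limit
        while self.size() > self.total_limit_size and len(self.chunks) > 1:
            self.chunks.popleft()

q = LogicalQueue(total_limit_size=100)
for _ in range(5):
    q.enqueue(b"x" * 40)          # each chunk is 40 bytes
print(len(q.chunks), q.size())    # 2 80
```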

## Configuration

The storage layer configuration takes place in two areas:
The storage layer configuration takes place in three areas:

* Service Section
* Input Section
* Output Section

The known Service section configure a global environment for the storage layer, and then in the Input sections defines which mechanism to use.
The known Service section configures a global environment for the storage layer, the Input sections define which buffering mechanism to use, and the Output sections define the limits for the logical queues.

### Service Section Configuration

@@ -24,12 +67,13 @@ The Service section refers to the section defined in the main [configuration fil
| storage.path | Set an optional location in the file system to store streams and chunks of data. If this parameter is not set, Input plugins can only use in-memory buffering. | |
| storage.sync | Configure the synchronization mode used to store the data into the file system. It can take the values _normal_ or _full_. | normal |
| storage.checksum | Enable the data integrity check when writing and reading data from the filesystem. The storage layer uses the CRC32 algorithm. | Off |
| storage.max\_chunks\_up | If the input plugin has enabled `filesystem` storage type, this property sets the maximum number of Chunks that can be `up` in memory. This helps to control memory usage. | 128 |
| storage.backlog.mem\_limit | If _storage.path_ is set, Fluent Bit will look for data chunks that were not delivered and are still in the storage layer; these are called _backlog_ data. This option configures a hint of the maximum amount of memory to use when processing these records. | 5M |
| storage.metrics | If the `http_server` option has been enabled in the main `[SERVICE]` section, this option registers a new endpoint where internal metrics of the storage layer can be consumed. For more details refer to the [Monitoring](monitoring.md) section. | off |

A Service section will look like this:

```text
```python
[SERVICE]
flush 1
log_Level info
@@ -51,7 +95,7 @@ Optionally, any Input plugin can configure their storage preference, the followi

The following example configures a service that offers filesystem buffering capabilities and two Input plugins, the first based on the filesystem and the second with memory only.

```text
```python
[SERVICE]
flush 1
log_Level info
@@ -69,3 +113,34 @@ The following example configure a service that offers filesystem buffering capab
storage.type memory
```

### Output Section Configuration

If certain chunks are filesystem `storage.type` based, it's possible to control the size of the logical queue for an output plugin. The following table describes the options available:

| Key | Description | Default |
| :--- | :--- | :--- |
| storage.total\_limit\_size | Limit the maximum size of Chunks in the filesystem for the current output logical destination. | |

The following example creates records with CPU usage samples in the filesystem, which are then delivered to the Google Stackdriver service while limiting the logical queue \(buffering\) to 5M:

```text
[SERVICE]
flush 1
log_Level info
storage.path /var/log/flb-storage/
storage.sync normal
storage.checksum off
storage.backlog.mem_limit 5M

[INPUT]
name cpu
storage.type filesystem

[OUTPUT]
name stackdriver
match *
storage.total_limit_size 5M
```

If for some reason Fluent Bit goes offline because of a network issue, it will continue buffering CPU samples, keeping a maximum of 5M of the newest data.

6 changes: 3 additions & 3 deletions administration/configuring-fluent-bit/configuration-file.md
@@ -37,11 +37,11 @@ The _Service_ section defines global properties of the service, the keys availab
| :--- | :--- | :--- |


| Log\_File | Absolute path for an optional log file. By default all logs are redirected to the standard output interface \(stdout\). | |
| Log\_File | Absolute path for an optional log file. By default all logs are redirected to standard error \(stderr\). | |
| :--- | :--- | :--- |


| Log\_Level | Set the logging verbosity level. Allowed values are: error, warning, info, debug and trace. Values are accumulative, e.g: if 'debug' is set, it will include error, warning, info and debug. Note that _trace_ mode is only available if Fluent Bit was built with the _WITH\_TRACE_ option enabled. | info |
| Log\_Level | Set the logging verbosity level. Allowed values are: error, warn, info, debug and trace. Values are accumulative, e.g.: if 'debug' is set, it will include error, warn, info and debug. Note that _trace_ mode is only available if Fluent Bit was built with the _WITH\_TRACE_ option enabled. | info |
| :--- | :--- | :--- |


@@ -102,7 +102,7 @@ An _INPUT_ section defines a source \(related to an input plugin\), here we will
| Key | Description |
| :--- | :--- |
| Name | Name of the input plugin. |
| Tag | Tag name associated to all records comming from this plugin. |
| Tag | Tag name associated to all records coming from this plugin. |

The _Name_ is mandatory and it lets Fluent Bit know which input plugin should be loaded. The _Tag_ is mandatory for all plugins except for the _input forward_ plugin \(as it provides dynamic tags\).
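A minimal sketch of an INPUT section using both keys \(the tag value is illustrative\); note that, per the paragraph above, an _input forward_ section would omit `Tag`:

```text
[INPUT]
    Name cpu
    Tag  my_cpu
```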

30 changes: 15 additions & 15 deletions administration/configuring-fluent-bit/record-accessor.md
@@ -4,27 +4,27 @@ description: A full feature set to access content of your records

# Record Accessor

Fluent Bit works internally with structured records and it can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map.
Fluent Bit works internally with structured records, which can be composed of an unlimited number of keys and values. Values can be anything like a number, string, array, or a map.

Having a way to select a specific part of the record is critical for certain core functionalities or plugins, this feature is called _Record Accessor._
Having a way to select a specific part of the record is critical for certain core functionalities or plugins, this feature is called _Record Accessor._

> consider Record Accessor a simple grammar to specify record content and other miscellaneus values.
> consider Record Accessor a simple grammar to specify record content and other miscellaneous values.

### Format
## Format

A _record accessor_ rule starts with the character `$`. Using the structured content below as an example, the following table describes how to access a record:

```javascript
{
"log": "some message",
"stream": "stdout",
"labels": {
"color": "blue",
"unset": null,
"project": {
"env": "production"
}
}
"log": "some message",
"stream": "stdout",
"labels": {
"color": "blue",
"unset": null,
"project": {
"env": "production"
}
}
}
```

@@ -38,9 +38,9 @@ The following table describe some accessing rules and the expected returned valu
| $labels\['unset'\] | null |
| $labels\['undefined'\] | |

If the accessor key does not exist in the record like the last example `$labels['undefined']` , the operation is simply omitted, no exception will occur.
If the accessor key does not exist in the record, as in the last example `$labels['undefined']`, the operation is simply omitted; no exception will occur.
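The lookup semantics can be modeled with a short Python sketch \(illustrative only; Fluent Bit's real implementation is in C\):

```python
import json

# The sample record from the documentation above
record = json.loads("""
{
  "log": "some message",
  "stream": "stdout",
  "labels": {"color": "blue", "unset": null,
             "project": {"env": "production"}}
}
""")

def accessor(record, *keys):
    """Resolve a record-accessor-like path; e.g. accessor(r, 'labels', 'color')
    models $labels['color']. Missing keys yield None instead of raising."""
    node = record
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return None  # the rule is simply omitted, no exception
        node = node[key]
    return node

print(accessor(record, "log"))                       # some message
print(accessor(record, "labels", "color"))           # blue
print(accessor(record, "labels", "project", "env"))  # production
print(accessor(record, "labels", "undefined"))       # None
```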

### Usage Example
## Usage Example

The feature is enabled on a per-plugin basis; not all plugins enable this feature. As an example, consider a configuration that aims to filter records using [grep](../../pipeline/filters/grep.md) that only matches where labels have a color blue:
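A sketch of such a configuration, pairing the grep filter's `Regex` rule with a record accessor pattern:

```text
[FILTER]
    name  grep
    match *
    regex $labels['color'] ^blue$
```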

14 changes: 14 additions & 0 deletions administration/http-proxy.md
@@ -0,0 +1,14 @@
---
description: Use an HTTP proxy via the HTTP\_PROXY environment variable.
---

# HTTP Proxy

Fluent Bit supports setting up an HTTP proxy for all egress HTTP/HTTPS traffic by setting the `HTTP_PROXY` environment variable:

- You can set `HTTP_PROXY=http://username:[email protected]:port` to use a `username` and `password` when connecting to the proxy.
- You can also set `HTTP_PROXY=http://your-proxy.com:port` to omit the `username` and `password` if there are none.

The `HTTP_PROXY` environment variable is a standard way of setting an HTTP proxy in a containerized environment ([reference](https://docs.docker.com/network/proxy/#use-environment-variables)), and it is also natively supported by any application written in Go. Therefore, we follow and implement the same convention for Fluent Bit.
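For example, in a container deployment this could be wired up as follows \(the proxy address is a placeholder\):

```text
docker run -e HTTP_PROXY=http://my-proxy.example.com:8080 fluent/fluent-bit
```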

**Note**: there is also an older way to enable HTTP proxy support in specific output plugins via each plugin's own configuration. That configuration continues to work; however, it _should not_ be used together with the `HTTP_PROXY` environment variable. This is because, under the hood, the `HTTP_PROXY` based proxy support is implemented by setting up a TCP connection tunnel via [HTTP CONNECT](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT). Both HTTP and HTTPS egress traffic can work this way, which is different from the current plugins' implementation.
2 changes: 2 additions & 0 deletions administration/monitoring.md
@@ -8,6 +8,8 @@ Fluent Bit comes with a built-in HTTP Server that can be used to query internal

The monitoring interface can be easily integrated with Prometheus since we support its native format.

NOTE: The Windows version does not support the HTTP monitoring feature yet as of v1.6.0.

## Getting Started <a id="getting_started"></a>

To get started, the first step is to enable the HTTP Server from the configuration file:
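A minimal sketch of that configuration \(the listen address and port shown are the commonly documented defaults\):

```text
[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020
```

Once running, internal metrics can be queried with, for example, `curl -s http://127.0.0.1:2020/api/v1/metrics`.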
5 changes: 5 additions & 0 deletions administration/networking.md
@@ -32,6 +32,10 @@ If a TCP connection is keepalive enabled, there might be scenarios where the con

In order to control how long a keepalive connection can be idle, we expose the configuration property called `net.keepalive_idle_timeout`.

### TCP Keepalive Recycling

If a TCP connection is keepalive enabled and has very high traffic, the connection may _never_ be killed. In a situation where the remote endpoint is load-balanced in some way, this may lead to an unequal distribution of traffic. Setting `net.keepalive_max_recycle` causes keepalive connections to be recycled after a number of messages are sent over that connection. Once this limit is reached, the connection is terminated gracefully, and a new connection will be created for subsequent messages.
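A sketch of an output section using this property \(the host, port, and recycle count are illustrative\):

```text
[OUTPUT]
    name  http
    match *
    host  my-collector.example.com
    port  80
    net.keepalive             on
    net.keepalive_max_recycle 2000
```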

## Configuration Options

For plugins that rely on networking I/O, the following section describes the network configuration properties available and how they can be used to optimize performance or adjust to different configuration needs:
@@ -42,6 +46,7 @@ For plugins that relies on networking I/O, the following section describes the n
| `net.source_address` | Specify network address \(interface\) to use for connection and data traffic. | |
| `net.keepalive` | Enable or disable TCP keepalive support. Accepts a boolean value: on / off. | on |
| `net.keepalive_idle_timeout` | Set maximum time expressed in seconds for an idle keepalive connection. | 30 |
| `net.keepalive_max_recycle` | Set the maximum number of times a keepalive connection can be used before it is destroyed. | 0 |

## Example

3 changes: 2 additions & 1 deletion administration/security.md
@@ -27,7 +27,7 @@ The following **output** plugins can take advantage of the TLS feature:
* [BigQuery](../pipeline/outputs/bigquery.md)
* [Datadog](../pipeline/outputs/datadog.md)
* [Elasticsearch](../pipeline/outputs/elasticsearch.md)
* [Forward]()
* [Forward](security.md)
* [GELF](../pipeline/outputs/gelf.md)
* [HTTP](../pipeline/outputs/http.md)
* [InfluxDB](../pipeline/outputs/influxdb.md)
@@ -93,3 +93,4 @@ Fluent Bit supports [TLS server name indication](https://en.wikipedia.org/wiki/S
tls.ca_file /etc/certs/fluent.crt
tls.vhost fluent.example.com
```
