diff --git a/docs/sources/flow/reference/components/_index.md b/docs/sources/flow/reference/components/_index.md index 74d21678c179..5d16c7eb219d 100644 --- a/docs/sources/flow/reference/components/_index.md +++ b/docs/sources/flow/reference/components/_index.md @@ -12,8 +12,7 @@ weight: 300 # Components reference -This section contains reference documentation for all recognized -[components][]. +This section contains reference documentation for all recognized [components][]. {{< section >}} diff --git a/docs/sources/flow/reference/components/discovery.azure.md b/docs/sources/flow/reference/components/discovery.azure.md index 5192299a1eeb..b5f09b4f5539 100644 --- a/docs/sources/flow/reference/components/discovery.azure.md +++ b/docs/sources/flow/reference/components/discovery.azure.md @@ -26,25 +26,24 @@ discovery.azure "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required -------------------- | ---------- | ---------------------------------------------------------------------- | -------------------- | -------- -`environment` | `string` | Azure environment. | `"AzurePublicCloud"` | no -`port` | `number` | Port to be appended to the `__address__` label for each target. | `80` | no -`subscription_id` | `string` | Azure subscription ID. | | no -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `5m` | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +-------------------|------------|-----------------------------------------------------------------|----------------------|--------- +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`environment` | `string` | Azure environment. 
| `"AzurePublicCloud"` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`port` | `number` | Port to be appended to the `__address__` label for each target. | `80` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `5m` | no +`subscription_id` | `string` | Azure subscription ID. | | no ## Blocks -The following blocks are supported inside the definition of -`discovery.azure`: +The following blocks are supported inside the definition of `discovery.azure`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -oauth | [oauth][] | OAuth configuration for Azure API. | no -managed_identity | [managed_identity][] | Managed Identity configuration for Azure API. | no -tls_config | [tls_config][] | TLS configuration for requests to the Azure API. | no +Hierarchy | Block | Description | Required +-----------------|----------------------|--------------------------------------------------|--------- +managed_identity | [managed_identity][] | Managed Identity configuration for Azure API. | no +oauth | [oauth][] | OAuth configuration for Azure API. | no +tls_config | [tls_config][] | TLS configuration for requests to the Azure API. | no Exactly one of the `oauth` or `managed_identity` blocks must be specified. @@ -52,32 +51,34 @@ Exactly one of the `oauth` or `managed_identity` blocks must be specified. [managed_identity]: #managed_identity-block [tls_config]: #tls_config-block -### oauth block -The `oauth` block configures OAuth authentication for the Azure API. - -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`client_id` | `string` | OAuth client ID. | | yes -`client_secret` | `string` | OAuth client secret. | | yes -`tenant_id` | `string` | OAuth tenant ID. 
| | yes +### managed_identity -### managed_identity block The `managed_identity` block configures Managed Identity authentication for the Azure API. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`client_id` | `string` | Managed Identity client ID. | | yes +Name | Type | Description | Default | Required +------------|----------|-----------------------------|---------|--------- +`client_id` | `string` | Managed Identity client ID. | | yes + +### oauth + +The `oauth` block configures OAuth authentication for the Azure API. + +Name | Type | Description | Default | Required +----------------|----------|----------------------|---------|--------- +`client_id` | `string` | OAuth client ID. | | yes +`client_secret` | `string` | OAuth client secret. | | yes +`tenant_id` | `string` | OAuth tenant ID. | | yes -### tls_config block +### tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|-------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Azure API. Each target includes the following labels: @@ -88,40 +89,39 @@ Each target includes the following labels: * `__meta_azure_machine_resource_group`: The name of the resource group the VM is in. * `__meta_azure_machine_name`: The name of the VM. * `__meta_azure_machine_computer_name`: The host OS name of the VM. -* `__meta_azure_machine_os_type`: The OS the VM is running (either `Linux` or `Windows`). +* `__meta_azure_machine_os_type`: The OS the VM is running, either `Linux` or `Windows`. * `__meta_azure_machine_location`: The region the VM is in. 
* `__meta_azure_machine_private_ip`: The private IP address of the VM. * `__meta_azure_machine_public_ip`: The public IP address of the VM. -* `__meta_azure_machine_tag_*`: A tag on the VM. There will be one label per tag. +* `__meta_azure_machine_tag_*`: A tag on the VM. There is one label per tag. * `__meta_azure_machine_scale_set`: The name of the scale set the VM is in. * `__meta_azure_machine_size`: The size of the VM. -Each discovered VM maps to a single target. The `__address__` label is set to the `private_ip:port` (`[private_ip]:port` if the private IP is an IPv6 address) of the VM. +Each discovered VM maps to a single target. The `__address__` label is set to the `private_ip:port` or `[private_ip]:port` if the private IP is an IPv6 address. ## Component health -`discovery.azure` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.azure` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.azure` does not expose any component-specific debug information. +`discovery.azure` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.azure` does not expose any component-specific debug metrics. +`discovery.azure` doesn't expose any component-specific debug metrics. 
## Example

```river
discovery.azure "example" {
  port            = 80
-  subscription_id = AZURE_SUBSCRIPTION_ID
+  subscription_id = <AZURE_SUBSCRIPTION_ID>
  oauth {
-    client_id     = AZURE_CLIENT_ID
-    client_secret = AZURE_CLIENT_SECRET
-    tenant_id     = AZURE_TENANT_ID
+    client_id     = <AZURE_CLIENT_ID>
+    client_secret = <AZURE_CLIENT_SECRET>
+    tenant_id     = <AZURE_TENANT_ID>
  }
}
@@ -132,20 +132,20 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>
    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```

Replace the following:
- - `AZURE_SUBSCRIPTION_ID`: Your Azure subscription ID.
- - `AZURE_CLIENT_ID`: Your Azure client ID.
- - `AZURE_CLIENT_SECRET`: Your Azure client secret.
- - `AZURE_TENANT_ID`: Your Azure tenant ID.
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<AZURE_SUBSCRIPTION_ID>`_: Your Azure subscription ID.
+- _`<AZURE_CLIENT_ID>`_: Your Azure client ID.
+- _`<AZURE_CLIENT_SECRET>`_: Your Azure client secret.
+- _`<AZURE_TENANT_ID>`_: Your Azure tenant ID.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.consul.md b/docs/sources/flow/reference/components/discovery.consul.md
index 884fa1fe602f..ba4ffd6032bc 100644
--- a/docs/sources/flow/reference/components/discovery.consul.md
+++ b/docs/sources/flow/reference/components/discovery.consul.md
@@ -27,116 +27,113 @@ discovery.consul "LABEL" {
The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`server` | `string` | Host and port of the Consul API. | `localhost:8500` | no
-`token` | `secret` | Secret token used to access the Consul API. 
| | no -`datacenter` | `string` | Datacenter to query. If not provided, the default is used. | | no -`namespace` | `string` | Namespace to use (only supported in Consul Enterprise). | | no -`partition` | `string` | Admin partition to use (only supported in Consul Enterprise). | | no -`tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no -`scheme` | `string` | The scheme to use when talking to Consul. | `http` | no -`username` | `string` | The username to use (deprecated in favor of the basic_auth configuration). | | no -`password` | `secret` | The password to use (deprecated in favor of the basic_auth configuration). | | no -`allow_stale` | `bool` | Allow stale Consul results (see [official documentation][consistency documentation]). Will reduce load on Consul. | `true` | no -`services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no -`tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no -`node_meta` | `map(string)` | Node metadata key/value pairs to filter nodes for a given service. | | no -`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. 
| `true` | no +Name | Type | Description | Default | Required +--------------------|----------------|----------------------------------------------------------------------------------------------------------------|------------------|--------- +`allow_stale` | `bool` | Allow stale Consul results. Refer to the [Consul documentation][]. Reduces the load on Consul. | `true` | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`datacenter` | `string` | Datacenter to query. If not provided, the default is used. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`namespace` | `string` | Namespace to use. Only supported in Consul Enterprise. | | no +`node_meta` | `map(string)` | Node metadata key/value pairs to filter nodes for a given service. | | no +`partition` | `string` | Admin partition to use. Only supported in Consul Enterprise. | | no +`password` | `secret` | The password to use. Deprecated in favor of the basic_auth configuration. | | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no +`scheme` | `string` | The scheme to use when talking to Consul. | `http` | no +`server` | `string` | Host and port of the Consul API. | `localhost:8500` | no +`services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no +`tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no +`tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no +`token` | `secret` | Secret token used to access the Consul API. 
| | no +`username` | `string` | The username to use. Deprecated in favor of the basic_auth configuration. | | no At most one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. -[consistency documentation]: https://www.consul.io/api/features/consistency.html +[Consul documentation]: https://www.consul.io/api/features/consistency.html [arguments]: #arguments ## Blocks -The following blocks are supported inside the definition of -`discovery.consul`: +The following blocks are supported inside the definition of `discovery.consul`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. 
For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. [basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|----------------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Consul catalog API. Each target includes the following labels: -* `__meta_consul_address`: the address of the target. -* `__meta_consul_dc`: the datacenter name for the target. -* `__meta_consul_health`: the health status of the service. 
-* `__meta_consul_partition`: the admin partition name where the service is registered. -* `__meta_consul_metadata_`: each node metadata key value of the target. -* `__meta_consul_node`: the node name defined for the target. -* `__meta_consul_service_address`: the service address of the target. -* `__meta_consul_service_id`: the service ID of the target. -* `__meta_consul_service_metadata_`: each service metadata key value of the target. -* `__meta_consul_service_port`: the service port of the target. -* `__meta_consul_service`: the name of the service the target belongs to. -* `__meta_consul_tagged_address_`: each node tagged address key value of the target. -* `__meta_consul_tags`: the list of tags of the target joined by the tag separator. +* `__meta_consul_address`: The address of the target. +* `__meta_consul_dc`: The datacenter name for the target. +* `__meta_consul_health`: The health status of the service. +* `__meta_consul_metadata_`: Each node metadata key value of the target. +* `__meta_consul_node`: The node name defined for the target. +* `__meta_consul_partition`: The admin partition name where the service is registered. +* `__meta_consul_service_address`: The service address of the target. +* `__meta_consul_service_id`: The service ID of the target. +* `__meta_consul_service_metadata_`: Each service metadata key value of the target. +* `__meta_consul_service_port`: The service port of the target. +* `__meta_consul_service`: The name of the service the target belongs to. +* `__meta_consul_tagged_address_`: Each node tagged address key value of the target. +* `__meta_consul_tags`: The list of tags of the target joined by the tag separator. ## Component health -`discovery.consul` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.consul` is only reported as unhealthy when given an invalid configuration. 
+In those cases, exported fields retain their last healthy values.

## Debug information

-`discovery.consul` does not expose any component-specific debug information.
+`discovery.consul` doesn't expose any component-specific debug information.

## Debug metrics

-`discovery.consul` does not expose any component-specific debug metrics.
+`discovery.consul` doesn't expose any component-specific debug metrics.

## Example

-This example discovers targets from Consul for the specified list of services:
+The following example discovers targets from Consul for the specified list of services:

```river
discovery.consul "example" {
@@ -154,16 +151,16 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>
    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```

Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.consulagent.md b/docs/sources/flow/reference/components/discovery.consulagent.md index 6ff793d1933e..530e6836ac8c 100644 --- a/docs/sources/flow/reference/components/discovery.consulagent.md +++ b/docs/sources/flow/reference/components/discovery.consulagent.md @@ -27,23 +27,22 @@ discovery.consulagent "LABEL" { The following arguments are supported: -| Name | Type | Description | Default | Required | -| ------------------ | -------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | -------- | -| `server` | `string` | Host and port of the Consul Agent API. | `localhost:8500` | no | -| `token` | `secret` | Secret token used to access the Consul Agent API. | | no | -| `datacenter` | `string` | Datacenter in which the Consul Agent is configured to run. If not provided, the datacenter will be retrieved from the local Consul Agent. | | no | -| `tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no | -| `scheme` | `string` | The scheme to use when talking to the Consul Agent. | `http` | no | -| `username` | `string` | The username to use. | | no | -| `password` | `secret` | The password to use. | | no | -| `services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no | -| `tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no | -| `refresh_interval` | `duration` | Frequency to refresh list of containers. 
| `"30s"` | no | +| Name | Type | Description | Default | Required | +|--------------------|----------------|----------------------------------------------------------------------------------------------------------------|------------------|----------| +| `datacenter` | `string` | Datacenter for the Consul Agent. If not provided, the datacenter is retrieved from the local Consul Agent. | | no | +| `password` | `secret` | The password to use. | | no | +| `refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no | +| `scheme` | `string` | The scheme to use when talking to the Consul Agent. | `http` | no | +| `server` | `string` | Host and port of the Consul Agent API. | `localhost:8500` | no | +| `services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no | +| `tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no | +| `tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no | +| `token` | `secret` | Secret token used to access the Consul Agent API. | | no | +| `username` | `string` | The username to use. 
| | no | ## Blocks -The following blocks are supported inside the definition of -`discovery.consulagent`: +The following blocks are supported inside the definition of `discovery.consulagent`: | Hierarchy | Block | Description | Required | | ---------- | -------------- | ------------------------------------------------------ | -------- | @@ -51,9 +50,9 @@ The following blocks are supported inside the definition of [tls_config]: #tls_config-block -### tls_config block +### tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -65,28 +64,27 @@ The following fields are exported and can be referenced by other components: Each target includes the following labels: -- `__meta_consulagent_address`: the address of the target. -- `__meta_consulagent_dc`: the datacenter name for the target. -- `__meta_consulagent_health`: the health status of the service. -- `__meta_consulagent_metadata_`: each node metadata key value of the target. -- `__meta_consulagent_node`: the node name defined for the target. -- `__meta_consulagent_service`: the name of the service the target belongs to. -- `__meta_consulagent_service_address`: the service address of the target. -- `__meta_consulagent_service_id`: the service ID of the target. -- `__meta_consulagent_service_metadata_`: each service metadata key value of the target. -- `__meta_consulagent_service_port`: the service port of the target. -- `__meta_consulagent_tagged_address_`: each node tagged address key value of the target. -- `__meta_consulagent_tags`: the list of tags of the target joined by the tag separator. +- `__meta_consulagent_address`: The address of the target. +- `__meta_consulagent_dc`: The datacenter name for the target. +- `__meta_consulagent_health`: The health status of the service. 
+- `__meta_consulagent_metadata_`: Each node metadata key value of the target.
+- `__meta_consulagent_node`: The node name defined for the target.
+- `__meta_consulagent_service_address`: The service address of the target.
+- `__meta_consulagent_service_id`: The service ID of the target.
+- `__meta_consulagent_service_metadata_`: Each service metadata key value of the target.
+- `__meta_consulagent_service_port`: The service port of the target.
+- `__meta_consulagent_service`: The name of the service the target belongs to.
+- `__meta_consulagent_tagged_address_`: Each node tagged address key value of the target.
+- `__meta_consulagent_tags`: The list of tags of the target joined by the tag separator.

## Component health

-`discovery.consulagent` is only reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`discovery.consulagent` is only reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

## Debug information

-`discovery.consulagent` does not expose any component-specific debug information.
+`discovery.consulagent` doesn't expose any component-specific debug information.

## Debug metrics

@@ -96,7 +94,7 @@ values.

## Example

-This example discovers targets from a Consul Agent for the specified list of services:
+The following example discovers targets from a Consul Agent for the specified list of services:

```river
discovery.consulagent "example" {
@@ -114,18 +112,17 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>
    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```

Replace the following:
-
-- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-- `USERNAME`: The username to use for authentication to the remote_write API.
-- `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.digitalocean.md b/docs/sources/flow/reference/components/discovery.digitalocean.md
index 18b42714b421..4489f1e53cb3 100644
--- a/docs/sources/flow/reference/components/discovery.digitalocean.md
+++ b/docs/sources/flow/reference/components/discovery.digitalocean.md
@@ -31,46 +31,46 @@ The following arguments are supported:

Name | Type | Description | Default | Required
------------------- | ---------- | ---------------------------------------------------------------------- | ------- | --------
-`port` | `number` | Port to be appended to the `__address__` label for each Droplet. | `80` | no
-`refresh_interval` | `duration` | Frequency to refresh list of Droplets. | `"1m"` | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
+`bearer_token` | `secret` | Bearer token to authenticate with. | | no
`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
+`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
+`port` | `number` | Port to be appended to the `__address__` label for each Droplet. | `80` | no
+`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no
+`refresh_interval` | `duration` | Frequency to refresh list of Droplets. 
| `"1m"` | no -The DigitalOcean API uses bearer tokens for authentication, see more about it in the [DigitalOcean API documentation](https://docs.digitalocean.com/reference/api/api-reference/#section/Authentication). +The DigitalOcean API uses bearer tokens for authentication. +Refer to the [DigitalOcean API documentation](https://docs.digitalocean.com/reference/api/api-reference/#section/Authentication) for more information. Exactly one of the [`bearer_token`](#arguments) and [`bearer_token_file`](#arguments) arguments must be specified to authenticate against DigitalOcean. [arguments]: #arguments ## Blocks -The `discovery.digitalocean` component does not support any blocks, and is configured -fully through arguments. +The `discovery.digitalocean` component doesn't support any blocks, and is configured fully through arguments. ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|--------------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the DigitalOcean API. Each target includes the following labels: * `__meta_digitalocean_droplet_id`: ID of the Droplet. * `__meta_digitalocean_droplet_name`: Name of the Droplet. -* `__meta_digitalocean_image`: The image slug (unique text identifier of the image) used to create the Droplet. +* `__meta_digitalocean_features`: Optional properties configured for the Droplet, such as IPV6 networking, private networking, or backups. * `__meta_digitalocean_image_name`: Name of the image used to create the Droplet. +* `__meta_digitalocean_image`: The image slug (unique text identifier of the image) used to create the Droplet. * `__meta_digitalocean_private_ipv4`: The private IPv4 address of the Droplet. * `__meta_digitalocean_public_ipv4`: The public IPv4 address of the Droplet. 
* `__meta_digitalocean_public_ipv6`: The public IPv6 address of the Droplet. * `__meta_digitalocean_region`: The region the Droplet is running in. * `__meta_digitalocean_size`: The size of the Droplet. * `__meta_digitalocean_status`: The current status of the Droplet. -* `__meta_digitalocean_features`: Optional properties configured for the Droplet, such as IPV6 networking, private networking, or backups. * `__meta_digitalocean_tags`: The tags assigned to the Droplet. * `__meta_digitalocean_vpc`: The ID of the VPC where the Droplet is located. @@ -78,21 +78,21 @@ Each discovered Droplet maps to one target. ## Component health -`discovery.digitalocean` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.digitalocean` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.digitalocean` does not expose any component-specific debug information. +`discovery.digitalocean` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.digitalocean` does not expose any component-specific debug metrics. +`discovery.digitalocean` doesn't expose any component-specific debug metrics. ## Example -This would result in targets with `__address__` labels like: `192.0.2.1:8080`: +The following example results in targets with `__address__` labels like: `192.0.2.1:8080`: + ```river discovery.digitalocean "example" { port = 8080 @@ -107,16 +107,16 @@ prometheus.scrape "demo" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = basic_auth { - username = USERNAME - password = PASSWORD + username = + password = } } } ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. 
-  - `USERNAME`: The username to use for authentication to the remote_write API.
-  - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _<PROMETHEUS_REMOTE_WRITE_URL>_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _<USERNAME>_: The username to use for authentication to the remote_write API.
+- _<PASSWORD>_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.dns.md b/docs/sources/flow/reference/components/discovery.dns.md
index 3a2615d5df29..bff5a135f108 100644
--- a/docs/sources/flow/reference/components/discovery.dns.md
+++ b/docs/sources/flow/reference/components/discovery.dns.md
@@ -26,42 +26,40 @@ discovery.dns "LABEL" {
 The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`names` | `list(string)` | DNS names to look up. | | yes
-`port` | `number` | Port to use for collecting metrics. Not used for SRV records. | `0` | no
-`refresh_interval` | `duration` | How often to query DNS for updates. | `"30s"` | no
-`type` | `string` | Type of DNS record to query. Must be one of SRV, A, AAAA, or MX. | `"SRV"` | no
+Name | Type | Description | Default | Required
+-------------------|----------------|------------------------------------------------------------------|---------|---------
+`names` | `list(string)` | DNS names to look up. | | yes
+`port` | `number` | Port to use for collecting metrics. Not used for SRV records. | `0` | no
+`refresh_interval` | `duration` | How often to query DNS for updates. | `"30s"` | no
+`type` | `string` | Type of DNS record to query. Must be one of SRV, A, AAAA, or MX. 
| `"SRV"` | no

 ## Exported fields

 The following field is exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
+Name | Type | Description
+----------|---------------------|---------------------------------------------------
-`targets` | `list(map(string))` | The set of targets discovered from the docker API.
+`targets` | `list(map(string))` | The set of targets discovered from DNS queries.

 Each target includes the following labels:

+* `__meta_dns_mx_record_target`: Target field of the MX record.
 * `__meta_dns_name`: Name of the record that produced the discovered target.
-* `__meta_dns_srv_record_target`: Target field of the SRV record.
 * `__meta_dns_srv_record_port`: Port field of the SRV record.
-* `__meta_dns_mx_record_target`: Target field of the MX record.
-
+* `__meta_dns_srv_record_target`: Target field of the SRV record.

 ## Component health

-`discovery.dns` is only reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`discovery.dns` is only reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

 ## Debug information

-`discovery.dns` does not expose any component-specific debug information.
+`discovery.dns` doesn't expose any component-specific debug information.

 ## Debug metrics

-`discovery.dns` does not expose any component-specific debug metrics.
+`discovery.dns` doesn't expose any component-specific debug metrics.

 ## Example

@@ -81,16 +79,16 @@ prometheus.scrape "demo" {

 prometheus.remote_write "demo" {
   endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

     basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
     }
   }
 }
 ```

 Replace the following:
-  - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-  - `USERNAME`: The username to use for authentication to the remote_write API.
-  - `PASSWORD`: The password to use for authentication to the remote_write API.
\ No newline at end of file
+- _<PROMETHEUS_REMOTE_WRITE_URL>_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _<USERNAME>_: The username to use for authentication to the remote_write API.
+- _<PASSWORD>_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.docker.md b/docs/sources/flow/reference/components/discovery.docker.md
index 076f00f75b21..6d1083b00100 100644
--- a/docs/sources/flow/reference/components/discovery.docker.md
+++ b/docs/sources/flow/reference/components/discovery.docker.md
@@ -27,43 +27,41 @@ discovery.docker "LABEL" {
 The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`host` | `string` | Address of the Docker Daemon to connect to. | | yes
-`port` | `number` | Port to use for collecting metrics when containers don't have any port mappings. | `80` | no
-`host_networking_host` | `string` | Host to use if the container is in host networking mode. | `"localhost"` | no
-`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"1m"` | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
+Name | Type | Description | Default | Required
+-----------------------|------------|----------------------------------------------------------------------------------|---------------|---------
+`host` | `string` | Address of the Docker Daemon to connect to. | | yes
+`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
+`bearer_token` | `secret` | Bearer token to authenticate with. 
| | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`host_networking_host` | `string` | Host to use if the container is in host networking mode. | `"localhost"` | no +`port` | `number` | Port to use for collecting metrics when containers don't have any port mappings. | `80` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"1m"` | no At most one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. [arguments]: #arguments ## Blocks -The following blocks are supported inside the definition of -`discovery.docker`: +The following blocks are supported inside the definition of `discovery.docker`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -filter | [filter][] | Filters discoverable resources. | no -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. 
| no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +filter | [filter][] | Filters discoverable resources. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. [filter]: #filter-block [basic_auth]: #basic_auth-block @@ -71,91 +69,81 @@ an `oauth2` block. [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### filter block +### authorization -The `filter` block configures a filter to pass to the Docker Engine to limit -the amount of containers returned. The `filter` block can be specified multiple -times to provide more than one filter. +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | Filter name to use. | | yes -`values` | `list(string)` | Values to pass to the filter. | | yes +### basic_auth -Refer to [List containers][List containers] from the Docker Engine API -documentation for the list of supported filters and their meaning. +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -[List containers]: https://docs.docker.com/engine/api/v1.41/#tag/Container/operation/ContainerList +### filter -### basic_auth block +The `filter` block configures a filter to pass to the Docker Engine to limit the amount of containers returned. +The `filter` block can be specified multiple times to provide more than one filter. 
-{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +Name | Type | Description | Default | Required +---------|----------------|-------------------------------|---------|--------- +`name` | `string` | Filter name to use. | | yes +`values` | `list(string)` | Values to pass to the filter. | | yes -### authorization block +Refer to [List containers][List containers] from the Docker Engine API documentation for the list of supported filters and their meaning. -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +[List containers]: https://docs.docker.com/engine/api/v1.41/#tag/Container/operation/ContainerList -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|--------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the docker API. Each target includes the following labels: * `__meta_docker_container_id`: ID of the container. +* `__meta_docker_container_label_`: Each label from the container. * `__meta_docker_container_name`: Name of the container. * `__meta_docker_container_network_mode`: Network mode of the container. -* `__meta_docker_container_label_`: Each label from the container. * `__meta_docker_network_id`: ID of the Docker network the container is in. 
+* `__meta_docker_network_ingress`: Set to `true` if the Docker network is an ingress network. +* `__meta_docker_network_internal`: Set to `true` if the Docker network is an internal network. +* `__meta_docker_network_ip`: The IP of the container in the network. +* `__meta_docker_network_label_`: Each label from the network the container is in. * `__meta_docker_network_name`: Name of the Docker network the container is in. -* `__meta_docker_network_ingress`: Set to `true` if the Docker network is an - ingress network. -* `__meta_docker_network_internal`: Set to `true` if the Docker network is an - internal network. -* `__meta_docker_network_label_`: Each label from the network the - container is in. * `__meta_docker_network_scope`: The scope of the network the container is in. -* `__meta_docker_network_ip`: The IP of the container in the network. * `__meta_docker_port_private`: The private port on the container. -* `__meta_docker_port_public`: The publicly exposed port from the container, - if a port mapping exists. -* `__meta_docker_port_public_ip`: The public IP of the container, if a port - mapping exists. +* `__meta_docker_port_public_ip`: The public IP of the container, if a port mapping exists. +* `__meta_docker_port_public`: The publicly exposed port from the container, if a port mapping exists. -Each discovered container maps to one target per unique combination of networks -and port mappings used by the container. +Each discovered container maps to one target per unique combination of networks and port mappings used by the container. ## Component health -`discovery.docker` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.docker` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.docker` does not expose any component-specific debug information. 
+`discovery.docker` doesn't expose any component-specific debug information.

 ## Debug metrics

-`discovery.docker` does not expose any component-specific debug metrics.
+`discovery.docker` doesn't expose any component-specific debug metrics.

 ## Examples

 ### Linux or macOS hosts

-This example discovers Docker containers when the host machine is macOS or
-Linux:
+The following example discovers Docker containers when the host machine is Linux or macOS:

 ```river
 discovery.docker "containers" {
@@ -169,23 +157,24 @@ prometheus.scrape "demo" {

 prometheus.remote_write "demo" {
   endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

     basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
     }
   }
 }
 ```
+
 Replace the following:
-  - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-  - `USERNAME`: The username to use for authentication to the remote_write API.
-  - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _<PROMETHEUS_REMOTE_WRITE_URL>_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _<USERNAME>_: The username to use for authentication to the remote_write API.
+- _<PASSWORD>_: The password to use for authentication to the remote_write API.

 ### Windows hosts

-This example discovers Docker containers when the host machine is Windows:
+The following example discovers Docker containers when the host machine is Windows:

 ```river
 discovery.docker "containers" {
@@ -199,19 +188,21 @@ prometheus.scrape "demo" {

 prometheus.remote_write "demo" {
   endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

     basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
     }
   }
 }
 ```
+
 Replace the following:
-  - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-  - `USERNAME`: The username to use for authentication to the remote_write API.
-  - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _<PROMETHEUS_REMOTE_WRITE_URL>_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _<USERNAME>_: The username to use for authentication to the remote_write API.
+- _<PASSWORD>_: The password to use for authentication to the remote_write API.

-> **NOTE**: This example requires the "Expose daemon on tcp://localhost:2375
-> without TLS" setting to be enabled in the Docker Engine settings.
+{{% admonition type="note" %}}
+This example requires the "Expose daemon on tcp://localhost:2375 without TLS" setting to be enabled in the Docker Engine settings.
+{{% /admonition %}}
diff --git a/docs/sources/flow/reference/components/discovery.dockerswarm.md b/docs/sources/flow/reference/components/discovery.dockerswarm.md
index bf4eef2074e8..0ce3b36c144d 100644
--- a/docs/sources/flow/reference/components/discovery.dockerswarm.md
+++ b/docs/sources/flow/reference/components/discovery.dockerswarm.md
@@ -30,28 +30,26 @@ The following arguments are supported:

| ------------------ | -------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------- | -------- |
| `host` | `string` | Address of the Docker daemon. | | yes |
| `role` | `string` | Role of the targets to retrieve. Must be `services`, `tasks`, or `nodes`. | | yes |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
| `port` | `number` | The port to scrape metrics from, when `role` is nodes, and for discovered tasks and services that don't have published ports. | `80` | no |
-| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"60s"` | no |
| `proxy_url` | `string` | HTTP proxy to proxy requests through. 
| | no | -| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | -| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"60s"` | no | ## Blocks -The following blocks are supported inside the definition of -`discovery.dockerswarm`: +The following blocks are supported inside the definition of `discovery.dockerswarm`: | Hierarchy | Block | Description | Required | | ------------------- | ----------------- | ---------------------------------------------------------------------------------- | -------- | -| filter | [filter][] | Optional filter to limit the discovery process to a subset of available resources. | no | -| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | | authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| filter | [filter][] | Optional filter to limit the discovery process to a subset of available resources. | no | | oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | | oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. [filter]: #filter-block [basic_auth]: #basic_auth-block @@ -59,15 +57,23 @@ an `oauth2` block. 
[oauth2]: #oauth2-block [tls_config]: #tls_config-block -### filter block +### authorization + +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} + +### basic_auth + +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} + +### filter Filters can be used to limit the discovery process to a subset of available resources. -It is possible to define multiple `filter` blocks within the `discovery.dockerswarm` block. +It's possible to define multiple `filter` blocks within the `discovery.dockerswarm` block. The list of available filters depends on the `role`: +- [nodes filters](https://docs.docker.com/engine/api/v1.40/#operation/NodeList) - [services filters](https://docs.docker.com/engine/api/v1.40/#operation/ServiceList) - [tasks filters](https://docs.docker.com/engine/api/v1.40/#operation/TaskList) -- [nodes filters](https://docs.docker.com/engine/api/v1.40/#operation/NodeList) The following arguments can be used to configure a filter. @@ -76,21 +82,13 @@ The following arguments can be used to configure a filter. | `name` | `string` | Name of the filter. | | yes | | `values` | `list(string)` | List of values associated with the filter. 
| | yes | -### basic_auth block +### oauth2 -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### authorization block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} - -### oauth2 block - -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} - -### tls_config block - -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -110,21 +108,21 @@ The `services` role discovers all [Swarm services](https://docs.docker.com/engin Available meta labels: -- `__meta_dockerswarm_service_id`: the ID of the service. -- `__meta_dockerswarm_service_name`: the name of the service. -- `__meta_dockerswarm_service_mode`: the mode of the service. -- `__meta_dockerswarm_service_endpoint_port_name`: the name of the endpoint port, if available. -- `__meta_dockerswarm_service_endpoint_port_publish_mode`: the publish mode of the endpoint port. -- `__meta_dockerswarm_service_label_`: each label of the service. -- `__meta_dockerswarm_service_task_container_hostname`: the container hostname of the target, if available. -- `__meta_dockerswarm_service_task_container_image`: the container image of the target. -- `__meta_dockerswarm_service_updating_status`: the status of the service, if available. -- `__meta_dockerswarm_network_id`: the ID of the network. -- `__meta_dockerswarm_network_name`: the name of the network. -- `__meta_dockerswarm_network_ingress`: whether the network is ingress. -- `__meta_dockerswarm_network_internal`: whether the network is internal. -- `__meta_dockerswarm_network_label_`: each label of the network. 
-- `__meta_dockerswarm_network_scope`: the scope of the network.
+- `__meta_dockerswarm_network_id`: The ID of the network.
+- `__meta_dockerswarm_network_ingress`: Whether the network is ingress.
+- `__meta_dockerswarm_network_internal`: Whether the network is internal.
+- `__meta_dockerswarm_network_label_`: Each label of the network.
+- `__meta_dockerswarm_network_name`: The name of the network.
+- `__meta_dockerswarm_network_scope`: The scope of the network.
+- `__meta_dockerswarm_service_endpoint_port_name`: The name of the endpoint port, if available.
+- `__meta_dockerswarm_service_endpoint_port_publish_mode`: The publish mode of the endpoint port.
+- `__meta_dockerswarm_service_id`: The ID of the service.
+- `__meta_dockerswarm_service_label_`: Each label of the service.
+- `__meta_dockerswarm_service_mode`: The mode of the service.
+- `__meta_dockerswarm_service_name`: The name of the service.
+- `__meta_dockerswarm_service_task_container_hostname`: The container hostname of the target, if available.
+- `__meta_dockerswarm_service_task_container_image`: The container image of the target.
+- `__meta_dockerswarm_service_updating_status`: The status of the service, if available.

 ### tasks

@@ -132,33 +130,32 @@ The `tasks` role discovers all [Swarm tasks](https://docs.docker.com/engine/swar

 Available meta labels:

-- `__meta_dockerswarm_container_label_`: each label of the container.
-- `__meta_dockerswarm_task_id`: the ID of the task.
-- `__meta_dockerswarm_task_container_id`: the container ID of the task.
-- `__meta_dockerswarm_task_desired_state`: the desired state of the task.
-- `__meta_dockerswarm_task_slot`: the slot of the task.
-- `__meta_dockerswarm_task_state`: the state of the task.
-- `__meta_dockerswarm_task_port_publish_mode`: the publish mode of the task port.
-- `__meta_dockerswarm_service_id`: the ID of the service.
-- `__meta_dockerswarm_service_name`: the name of the service.
-- `__meta_dockerswarm_service_mode`: the mode of the service.
-- `__meta_dockerswarm_service_label_`: each label of the service.
-- `__meta_dockerswarm_network_id`: the ID of the network.
-- `__meta_dockerswarm_network_name`: the name of the network.
-- `__meta_dockerswarm_network_ingress`: whether the network is ingress.
-- `__meta_dockerswarm_network_internal`: whether the network is internal.
-- `__meta_dockerswarm_network_label_`: each label of the network.
-- `__meta_dockerswarm_network_label`: each label of the network.
-- `__meta_dockerswarm_network_scope`: the scope of the network.
-- `__meta_dockerswarm_node_id`: the ID of the node.
-- `__meta_dockerswarm_node_hostname`: the hostname of the node.
-- `__meta_dockerswarm_node_address`: the address of the node.
-- `__meta_dockerswarm_node_availability`: the availability of the node.
-- `__meta_dockerswarm_node_label_`: each label of the node.
-- `__meta_dockerswarm_node_platform_architecture`: the architecture of the node.
-- `__meta_dockerswarm_node_platform_os`: the operating system of the node.
-- `__meta_dockerswarm_node_role`: the role of the node.
-- `__meta_dockerswarm_node_status`: the status of the node.
+- `__meta_dockerswarm_container_label_`: Each label of the container.
+- `__meta_dockerswarm_network_id`: The ID of the network.
+- `__meta_dockerswarm_network_ingress`: Whether the network is ingress.
+- `__meta_dockerswarm_network_internal`: Whether the network is internal.
+- `__meta_dockerswarm_network_label_`: Each label of the network.
+- `__meta_dockerswarm_network_name`: The name of the network.
+- `__meta_dockerswarm_network_scope`: The scope of the network.
+- `__meta_dockerswarm_node_address`: The address of the node.
+- `__meta_dockerswarm_node_availability`: The availability of the node.
+- `__meta_dockerswarm_node_hostname`: The hostname of the node.
+- `__meta_dockerswarm_node_id`: The ID of the node.
+- `__meta_dockerswarm_node_label_`: Each label of the node.
+- `__meta_dockerswarm_node_platform_architecture`: The architecture of the node. +- `__meta_dockerswarm_node_platform_os`: The operating system of the node. +- `__meta_dockerswarm_node_role`: The role of the node. +- `__meta_dockerswarm_node_status`: The status of the node. +- `__meta_dockerswarm_service_id`: The ID of the service. +- `__meta_dockerswarm_service_label_`: Each label of the service. +- `__meta_dockerswarm_service_mode`: The mode of the service. +- `__meta_dockerswarm_service_name`: The name of the service. +- `__meta_dockerswarm_task_container_id`: The container ID of the task. +- `__meta_dockerswarm_task_desired_state`: The desired state of the task. +- `__meta_dockerswarm_task_id`: The ID of the task. +- `__meta_dockerswarm_task_port_publish_mode`: The publish mode of the task port. +- `__meta_dockerswarm_task_slot`: The slot of the task. +- `__meta_dockerswarm_task_state`: The state of the task. The `__meta_dockerswarm_network_*` meta labels are not populated for ports which are published with mode=host. @@ -168,37 +166,36 @@ The `nodes` role is used to discover [Swarm nodes](https://docs.docker.com/engin Available meta labels: -- `__meta_dockerswarm_node_address`: the address of the node. -- `__meta_dockerswarm_node_availability`: the availability of the node. -- `__meta_dockerswarm_node_engine_version`: the version of the node engine. -- `__meta_dockerswarm_node_hostname`: the hostname of the node. -- `__meta_dockerswarm_node_id`: the ID of the node. -- `__meta_dockerswarm_node_label_`: each label of the node. -- `__meta_dockerswarm_node_manager_address`: the address of the manager component of the node. -- `__meta_dockerswarm_node_manager_leader`: the leadership status of the manager component of the node (true or false). -- `__meta_dockerswarm_node_manager_reachability`: the reachability of the manager component of the node. -- `__meta_dockerswarm_node_platform_architecture`: the architecture of the node. 
-- `__meta_dockerswarm_node_platform_os`: the operating system of the node. -- `__meta_dockerswarm_node_role`: the role of the node. -- `__meta_dockerswarm_node_status`: the status of the node. +- `__meta_dockerswarm_node_address`: The address of the node. +- `__meta_dockerswarm_node_availability`: The availability of the node. +- `__meta_dockerswarm_node_engine_version`: The version of the node engine. +- `__meta_dockerswarm_node_hostname`: The hostname of the node. +- `__meta_dockerswarm_node_id`: The ID of the node. +- `__meta_dockerswarm_node_label_`: Each label of the node. +- `__meta_dockerswarm_node_manager_address`: The address of the manager component of the node. +- `__meta_dockerswarm_node_manager_leader`: The leadership status of the manager component of the node (true or false). +- `__meta_dockerswarm_node_manager_reachability`: The reachability of the manager component of the node. +- `__meta_dockerswarm_node_platform_architecture`: The architecture of the node. +- `__meta_dockerswarm_node_platform_os`: The operating system of the node. +- `__meta_dockerswarm_node_role`: The role of the node. +- `__meta_dockerswarm_node_status`: The status of the node. ## Component health -`discovery.dockerswarm` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.dockerswarm` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.dockerswarm` does not expose any component-specific debug information. +`discovery.dockerswarm` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.dockerswarm` does not expose any component-specific debug metrics. +`discovery.dockerswarm` doesn't expose any component-specific debug metrics. 
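+
+## Filter example
+
+The `filter` block accepts a filter `name` and a list of `values`, as described in the filter section above.
+The following sketch is illustrative only: the Docker socket address and the stack label value are assumptions, not values documented on this page.
+
+```river
+discovery.dockerswarm "filtered" {
+  host = "unix:///var/run/docker.sock"
+  role = "tasks"
+
+  // Only keep tasks whose service carries this (hypothetical) stack label.
+  filter {
+    name   = "label"
+    values = ["com.docker.stack.namespace=my-stack"]
+  }
+}
+```
+
+Because `role` is `tasks`, the filters that apply here are the Docker Engine task filters linked in the filter section above.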
## Example

-This example discovers targets from Docker Swarm tasks:
+The following example discovers targets from Docker Swarm tasks:

 ```river
 discovery.dockerswarm "example" {
@@ -223,18 +220,17 @@ prometheus.scrape "demo" {

 prometheus.remote_write "demo" {
   endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

     basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
     }
   }
 }
 ```

 Replace the following:
-
-- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-- `USERNAME`: The username to use for authentication to the remote_write API.
-- `PASSWORD`: The password to use for authentication to the remote_write API.
+- _<PROMETHEUS_REMOTE_WRITE_URL>_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _<USERNAME>_: The username to use for authentication to the remote_write API.
+- _<PASSWORD>_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.ec2.md b/docs/sources/flow/reference/components/discovery.ec2.md
index 63a4cfc802f4..be14e8e3f9e5 100644
--- a/docs/sources/flow/reference/components/discovery.ec2.md
+++ b/docs/sources/flow/reference/components/discovery.ec2.md
@@ -26,37 +26,36 @@ discovery.ec2 "LABEL" {
 The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`endpoint` | `string` | Custom endpoint to be used.| | no
-`region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no
-`access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no
-`secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no
-`profile` | `string` | Named AWS profile used to connect to the API. | | no
-`role_arn` | `string` | AWS Role Amazon Resource Name (ARN), an alternative to using AWS API keys. 
| | no -`refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no -`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no +Name | Type | Description | Default | Required +-------------------|----------|-------------------------------------------------------------------------------------------------------------------------|---------|--------- +`access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no +`endpoint` | `string` | Custom endpoint to be used. | | no +`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no +`profile` | `string` | Named AWS profile used to connect to the API. | | no +`refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no +`region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no +`role_arn` | `string` | AWS Role Amazon Resource Name (ARN), an alternative to using AWS API keys. | | no +`secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no ## Blocks -The following blocks are supported inside the definition of -`discovery.ec2`: +The following blocks are supported inside the definition of `discovery.ec2`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -filter | [filter][] | Filters discoverable resources. | no +Hierarchy | Block | Description | Required +----------|------------|---------------------------------|--------- +filter | [filter][] | Filters discoverable resources. | no [filter]: #filter-block -### filter block +### filter Filters can be used optionally to filter the instance list by other criteria. 
Available filter criteria can be found in the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html). -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | Filter name to use. | | yes -`values` | `list(string)` | Values to pass to the filter. | | yes +Name | Type | Description | Default | Required +---------|----------------|-------------------------------|---------|--------- +`name` | `string` | Filter name to use. | | yes +`values` | `list(string)` | Values to pass to the filter. | | yes Refer to the [Filter API AWS EC2 documentation][filter api] for the list of supported filters and their descriptions. @@ -66,16 +65,16 @@ Refer to the [Filter API AWS EC2 documentation][filter api] for the list of supp The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|----------------------------------- `targets` | `list(map(string))` | The set of discovered EC2 targets. Each target includes the following labels: * `__meta_ec2_ami`: The EC2 Amazon Machine Image. * `__meta_ec2_architecture`: The architecture of the instance. +* `__meta_ec2_availability_zone_id`: The availability zone ID in which the instance is running. Requires `ec2:DescribeAvailabilityZones`. * `__meta_ec2_availability_zone`: The availability zone in which the instance is running. -* `__meta_ec2_availability_zone_id`: The availability zone ID in which the instance is running (requires `ec2:DescribeAvailabilityZones`). * `__meta_ec2_instance_id`: The EC2 instance ID. * `__meta_ec2_instance_lifecycle`: The lifecycle of the EC2 instance, set only for 'spot' or 'scheduled' instances, absent otherwise. * `__meta_ec2_instance_state`: The state of the EC2 instance. 
@@ -95,17 +94,16 @@ Each target includes the following labels: ## Component health -`discovery.ec2` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.ec2` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.ec2` does not expose any component-specific debug information. +`discovery.ec2` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.ec2` does not expose any component-specific debug metrics. +`discovery.ec2` doesn't expose any component-specific debug metrics. ## Example @@ -121,16 +119,17 @@ prometheus.scrape "demo" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
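To make the `filter` block described above concrete, the following sketch shows two filters combined in one `discovery.ec2` block. The filter names come from the EC2 `DescribeInstances` API; the region and tag value are illustrative placeholders:

```river
discovery.ec2 "filtered" {
  region = "us-east-1"

  // Only discover instances that are currently running.
  filter {
    name   = "instance-state-name"
    values = ["running"]
  }

  // Only discover instances tagged environment=production.
  filter {
    name   = "tag:environment"
    values = ["production"]
  }
}
```

Multiple `filter` blocks are combined, so an instance must match every filter to be discovered.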
diff --git a/docs/sources/flow/reference/components/discovery.eureka.md b/docs/sources/flow/reference/components/discovery.eureka.md index b2f7e73ad85c..7b29cc66130e 100644 --- a/docs/sources/flow/reference/components/discovery.eureka.md +++ b/docs/sources/flow/reference/components/discovery.eureka.md @@ -30,45 +30,43 @@ The following arguments are supported: Name | Type | Description | Default | Required ------------------- | ---------- | ---------------------------------------------------------------------- | -------------------- | -------- `server` | `string` | Eureka server URL. | | yes -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `30s` | no `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `30s` | no ## Blocks -The following blocks are supported inside the definition of -`discovery.eureka`: +The following blocks are supported inside the definition of `discovery.eureka`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. 
| no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. -[basic_auth]: #basic_auth-block -[authorization]: #authorization-block -[oauth2]: #oauth2-block -[tls_config]: #tls_config-block +[basic_auth]: #basic_auth +[authorization]: #authorization +[oauth2]: #oauth2 +[tls_config]: #oauth2--tls_config -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -80,38 +78,37 @@ Name | Type | Description Each target includes the following labels: -* `__meta_eureka_app_name` -* `__meta_eureka_app_instance_hostname` -* `__meta_eureka_app_instance_homepage_url` -* `__meta_eureka_app_instance_statuspage_url` +* `__meta_eureka_app_instance_country_id` +* `__meta_eureka_app_instance_datacenterinfo_metadata_` +*
`__meta_eureka_app_instance_datacenterinfo_name` * `__meta_eureka_app_instance_healthcheck_url` +* `__meta_eureka_app_instance_homepage_url` +* `__meta_eureka_app_instance_hostname` +* `__meta_eureka_app_instance_id` * `__meta_eureka_app_instance_ip_addr` -* `__meta_eureka_app_instance_vip_address` -* `__meta_eureka_app_instance_secure_vip_address` -* `__meta_eureka_app_instance_status` -* `__meta_eureka_app_instance_port` +* `__meta_eureka_app_instance_metadata_` * `__meta_eureka_app_instance_port_enabled` -* `__meta_eureka_app_instance_secure_port` +* `__meta_eureka_app_instance_port` * `__meta_eureka_app_instance_secure_port_enabled` -* `__meta_eureka_app_instance_datacenterinfo_name` -* `__meta_eureka_app_instance_datacenterinfo_metadata_` -* `__meta_eureka_app_instance_country_id` -* `__meta_eureka_app_instance_id` -* `__meta_eureka_app_instance_metadata_` +* `__meta_eureka_app_instance_secure_port` +* `__meta_eureka_app_instance_secure_vip_address` +* `__meta_eureka_app_instance_status` +* `__meta_eureka_app_instance_statuspage_url` +* `__meta_eureka_app_instance_vip_address` +* `__meta_eureka_app_name` ## Component health -`discovery.eureka` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.eureka` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.eureka` does not expose any component-specific debug information. +`discovery.eureka` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.eureka` does not expose any component-specific debug metrics. +`discovery.eureka` doesn't expose any component-specific debug metrics. 
## Example @@ -127,16 +124,16 @@ prometheus.scrape "demo" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.file.md b/docs/sources/flow/reference/components/discovery.file.md index 402406ee32fd..c467be240843 100644 --- a/docs/sources/flow/reference/components/discovery.file.md +++ b/docs/sources/flow/reference/components/discovery.file.md @@ -11,15 +11,13 @@ title: discovery.file # discovery.file -> **NOTE:** In `v0.35.0` of the Grafana Agent, the `discovery.file` component was renamed to [local.file_match][], -> and `discovery.file` was repurposed to discover scrape targets from one or more files. -> ->
-> -> If you are trying to discover files on the local filesystem rather than scrape -> targets within a set of files, you should use [local.file_match][] instead. +{{% admonition type="note" %}} +In `v0.35.0` of the Grafana Agent, the `discovery.file` component was renamed to [local.file_match][], and `discovery.file` was repurposed to discover scrape targets from one or more files. + +If you are trying to discover files on the local filesystem rather than scrape targets within a set of files, you should use [local.file_match][] instead. [local.file_match]: {{< relref "./local.file_match.md" >}} +{{% /admonition %}} `discovery.file` discovers targets from a set of files, similar to the [Prometheus file_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config). @@ -37,17 +35,17 @@ The following arguments are supported: Name | Type | Description | Default | Required ------------------ | ------------------- | ------------------------------------------ |---------| -------- -`files` | `list(string)` | Files to read and discover targets from. | | yes +`files` | `list(string)` | Files to read and discover targets from. | | yes `refresh_interval` | `duration` | How often to sync targets. | "5m" | no -The last path segment of each element in `files` may contain a single * that matches any character sequence, e.g. `my/path/tg_*.json`. +The last path segment of each element in `files` may contain a single * that matches any character sequence, for example, `my/path/tg_*.json`. ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|--------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the filesystem. 
Each target includes the following labels: @@ -56,17 +54,16 @@ Each target includes the following labels: ## Component health -`discovery.file` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.file` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.file` does not expose any component-specific debug information. +`discovery.file` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.file` does not expose any component-specific debug metrics. +`discovery.file` doesn't expose any component-specific debug metrics. ## Examples @@ -102,8 +99,7 @@ values. ### Basic file discovery -This example discovers targets from a single file, scrapes them, and writes metrics -to a Prometheus remote write endpoint. +The following example discovers targets from a single file, scrapes them, and writes metrics to a Prometheus remote write endpoint. ```river discovery.file "example" { @@ -117,26 +113,24 @@ prometheus.scrape "default" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
### File discovery with retained file path label -This example discovers targets from a wildcard file path, scrapes them, and writes metrics -to a Prometheus remote write endpoint. - +The following example discovers targets from a wildcard file path, scrapes them, and writes metrics to a Prometheus remote write endpoint. It also uses a relabeling rule to retain the file path as a label on each target. ```river @@ -159,17 +153,17 @@ prometheus.scrape "default" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. \ No newline at end of file +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. \ No newline at end of file diff --git a/docs/sources/flow/reference/components/discovery.gce.md b/docs/sources/flow/reference/components/discovery.gce.md index b7ca49aaf0e3..e92efdec4c72 100644 --- a/docs/sources/flow/reference/components/discovery.gce.md +++ b/docs/sources/flow/reference/components/discovery.gce.md @@ -11,16 +11,17 @@ title: discovery.gce # discovery.gce -`discovery.gce` allows retrieving scrape targets from [Google Compute Engine](https://cloud.google.com/compute) (GCE) instances. The private IP address is used by default, but may be changed to the public IP address with relabeling. +`discovery.gce` allows retrieving scrape targets from [Google Compute Engine](https://cloud.google.com/compute) (GCE) instances.
+The private IP address is used by default, but may be changed to the public IP address with relabeling. Credentials are discovered by the Google Cloud SDK default client by looking in the following places, preferring the first location found: -1. a JSON file specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. -2. a JSON file in the well-known path `$HOME/.config/gcloud/application_default_credentials.json`. -3. fetched from the GCE metadata server. - -If the Agent is running within GCE, the service account associated with the instance it is running on should have at least read-only permissions to the compute resources. If running outside of GCE make sure to create an appropriate service account and place the credential file in one of the expected locations. +1. A JSON file specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. +1. A JSON file in the well-known path `$HOME/.config/gcloud/application_default_credentials.json`. +1. Fetched from the GCE metadata server. +If the Agent is running within GCE, the service account associated with the instance it's running on should have at least read-only permissions to the compute resources. +If running outside of GCE, make sure to create an appropriate service account and place the credential file in one of the expected locations. ## Usage @@ -35,14 +36,14 @@ discovery.gce "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`project` | `string` | The GCP Project.| | yes -`zone` | `string` | The zone of the scrape targets. | | yes -`filter` | `string` | Filter can be used optionally to filter the instance list by other criteria. | | no -`refresh_interval` | `duration` | Refresh interval to re-read the instance list. | `"60s"`| no -`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule.
| `80`| no -`tag_separator` | `string` | The tag separator is used to separate the tags on concatenation. | `","`| no +Name | Type | Description | Default | Required +-------------------|------------|-------------------------------------------------------------------------------------------------------------------------|---------|--------- +`project` | `string` | The GCP Project. | | yes +`zone` | `string` | The zone of the scrape targets. | | yes +`filter` | `string` | Filter can be used optionally to filter the instance list by other criteria. | | no +`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | `80` | no +`refresh_interval` | `duration` | Refresh interval to re-read the instance list. | `"60s"` | no +`tag_separator` | `string` | The tag separator is used to separate the tags on concatenation. | `","` | no For more information on the syntax of the `filter` argument, refer to Google's `filter` documentation for [Method: instances.list](https://cloud.google.com/compute/docs/reference/latest/instances/list). @@ -50,40 +51,39 @@ For more information on the syntax of the `filter` argument, refer to Google's ` The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|----------------------------------- `targets` | `list(map(string))` | The set of discovered GCE targets. 
Each target includes the following labels: -* `__meta_gce_instance_id`: the numeric id of the instance -* `__meta_gce_instance_name`: the name of the instance -* `__meta_gce_label_LABEL_NAME`: each GCE label of the instance -* `__meta_gce_machine_type`: full or partial URL of the machine type of the instance -* `__meta_gce_metadata_NAME`: each metadata item of the instance -* `__meta_gce_network`: the network URL of the instance -* `__meta_gce_private_ip`: the private IP address of the instance -* `__meta_gce_interface_ipv4_NAME`: IPv4 address of each named interface -* `__meta_gce_project`: the GCP project in which the instance is running -* `__meta_gce_public_ip`: the public IP address of the instance, if present -* `__meta_gce_subnetwork`: the subnetwork URL of the instance -* `__meta_gce_tags`: comma separated list of instance tags -* `__meta_gce_zone`: the GCE zone URL in which the instance is running +* `__meta_gce_instance_id`: The numeric ID of the instance. +* `__meta_gce_instance_name`: The name of the instance. +* `__meta_gce_interface_ipv4_NAME`: IPv4 address of each named interface. +* `__meta_gce_label_LABEL_NAME`: Each GCE label of the instance. +* `__meta_gce_machine_type`: Full or partial URL of the machine type of the instance. +* `__meta_gce_metadata_NAME`: Each metadata item of the instance. +* `__meta_gce_network`: The network URL of the instance. +* `__meta_gce_private_ip`: The private IP address of the instance. +* `__meta_gce_project`: The GCP project in which the instance is running. +* `__meta_gce_public_ip`: The public IP address of the instance, if present. +* `__meta_gce_subnetwork`: The subnetwork URL of the instance. +* `__meta_gce_tags`: Comma-separated list of instance tags. +* `__meta_gce_zone`: The GCE zone URL in which the instance is running. ## Component health -`discovery.gce` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values.
+`discovery.gce` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.gce` does not expose any component-specific debug information. +`discovery.gce` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.gce` does not expose any component-specific debug metrics. +`discovery.gce` doesn't expose any component-specific debug metrics. ## Example @@ -100,16 +100,17 @@ prometheus.scrape "demo" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. \ No newline at end of file +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.hetzner.md b/docs/sources/flow/reference/components/discovery.hetzner.md index 1cf7f5c6ff72..b75e5e3c7595 100644 --- a/docs/sources/flow/reference/components/discovery.hetzner.md +++ b/docs/sources/flow/reference/components/discovery.hetzner.md @@ -29,64 +29,62 @@ discovery.hetzner "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`role` | `string` | Hetzner role of entities that should be discovered. | | yes -`port` | `int` | The port to scrape metrics from. | `80` | no -`refresh_interval` | `duration` | The time after which the servers are refreshed.
| `"60s"` | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|------------|--------------------------------------------------------------|---------|--------- +`role` | `string` | Hetzner role of entities that should be discovered. | | yes +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`port` | `int` | The port to scrape metrics from. | `80` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | The time after which the servers are refreshed. | `"60s"` | no `role` must be one of `robot` or `hcloud`. - You can provide one of the following arguments for authentication: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +You can provide one of the following arguments for authentication: +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. 
[arguments]: #arguments ## Blocks -The following blocks are supported inside the definition of -`discovery.hetzner`: +The following blocks are supported inside the definition of `discovery.hetzner`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. 
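The `oauth2 > tls_config` nesting described above can be sketched as follows. The client ID, secret, token URL, and CA path are illustrative placeholders:

```river
discovery.hetzner "example" {
  role = "hcloud"

  oauth2 {
    client_id     = "<CLIENT_ID>"
    client_secret = "<CLIENT_SECRET>"
    token_url     = "<TOKEN_URL>"

    // The tls_config block sits inside oauth2, matching the
    // `oauth2 > tls_config` hierarchy in the table above.
    tls_config {
      ca_file = "/etc/agent/ca.pem"
    }
  }
}
```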
-[basic_auth]: #basic_auth-block -[authorization]: #authorization-block -[oauth2]: #oauth2-block -[tls_config]: #tls_config-block +[basic_auth]: #basic_auth +[authorization]: #authorization +[oauth2]: #oauth2 +[tls_config]: #oauth2--tls_config -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -98,61 +96,60 @@ Name | Type | Description Each target includes the following labels: -* `__meta_hetzner_server_id`: the ID of the server -* `__meta_hetzner_server_name`: the name of the server -* `__meta_hetzner_server_status`: the status of the server -* `__meta_hetzner_public_ipv4`: the public ipv4 address of the server -* `__meta_hetzner_public_ipv6_network`: the public ipv6 network (/64) of the server -* `__meta_hetzner_datacenter`: the datacenter of the server +* `__meta_hetzner_datacenter`: The datacenter of the server. +* `__meta_hetzner_public_ipv4`: The public ipv4 address of the server. +* `__meta_hetzner_public_ipv6_network`: The public ipv6 network (/64) of the server. +* `__meta_hetzner_server_id`: The ID of the server. +* `__meta_hetzner_server_name`: The name of the server. +* `__meta_hetzner_server_status`: The status of the server.
### `hcloud` The labels below are only available for targets with `role` set to `hcloud`: -* `__meta_hetzner_hcloud_image_name`: the image name of the server -* `__meta_hetzner_hcloud_image_description`: the description of the server image -* `__meta_hetzner_hcloud_image_os_flavor`: the OS flavor of the server image -* `__meta_hetzner_hcloud_image_os_version`: the OS version of the server image -* `__meta_hetzner_hcloud_datacenter_location`: the location of the server -* `__meta_hetzner_hcloud_datacenter_location_network_zone`: the network zone of the server -* `__meta_hetzner_hcloud_server_type`: the type of the server -* `__meta_hetzner_hcloud_cpu_cores`: the CPU cores count of the server -* `__meta_hetzner_hcloud_cpu_type`: the CPU type of the server (shared or dedicated) -* `__meta_hetzner_hcloud_memory_size_gb`: the amount of memory of the server (in GB) -* `__meta_hetzner_hcloud_disk_size_gb`: the disk size of the server (in GB) -* `__meta_hetzner_hcloud_private_ipv4_`: the private ipv4 address of the server within a given network -* `__meta_hetzner_hcloud_label_`: each label of the server -* `__meta_hetzner_hcloud_labelpresent_`: `true` for each label of the server +* `__meta_hetzner_hcloud_cpu_cores`: The CPU cores count of the server. +* `__meta_hetzner_hcloud_cpu_type`: The CPU type of the server (shared or dedicated). +* `__meta_hetzner_hcloud_datacenter_location_network_zone`: The network zone of the server. +* `__meta_hetzner_hcloud_datacenter_location`: The location of the server. +* `__meta_hetzner_hcloud_disk_size_gb`: The disk size of the server (in GB). +* `__meta_hetzner_hcloud_image_description`: The description of the server image. +* `__meta_hetzner_hcloud_image_name`: The image name of the server. +* `__meta_hetzner_hcloud_image_os_flavor`: The OS flavor of the server image. +* `__meta_hetzner_hcloud_image_os_version`: The OS version of the server image. +* `__meta_hetzner_hcloud_label_`: Each label of the server. 
+* `__meta_hetzner_hcloud_labelpresent_`: `true` for each label of the server.
+* `__meta_hetzner_hcloud_memory_size_gb`: The amount of memory in the server (in GB).
+* `__meta_hetzner_hcloud_private_ipv4_`: The private ipv4 address of the server within a given network.
+* `__meta_hetzner_hcloud_server_type`: The type of the server.

### `robot`

The labels below are only available for targets with `role` set to `robot`:

-* `__meta_hetzner_robot_product`: the product of the server
* `__meta_hetzner_robot_cancelled`: the server cancellation status
+* `__meta_hetzner_robot_product`: the product of the server

## Component health

-`discovery.hetzner` is only reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`discovery.hetzner` is only reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

## Debug information

-`discovery.hetzner` does not expose any component-specific debug information.
+`discovery.hetzner` doesn't expose any component-specific debug information.

## Debug metrics

-`discovery.hetzner` does not expose any component-specific debug metrics.
+`discovery.hetzner` doesn't expose any component-specific debug metrics.

## Example

-This example discovers targets from Hetzner:
+The following example discovers targets from Hetzner:

```river
discovery.hetzner "example" {
-  role = HETZNER_ROLE
+  role = <HETZNER_ROLE>
}

prometheus.scrape "demo" {
@@ -162,17 +159,18 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:

- - `HETZNER_ROLE`: The role of the entities that should be discovered.
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<HETZNER_ROLE>`_: The role of the entities that should be discovered.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.

diff --git a/docs/sources/flow/reference/components/discovery.http.md b/docs/sources/flow/reference/components/discovery.http.md
index c8fd0aa348f4..960535d2897a 100644
--- a/docs/sources/flow/reference/components/discovery.http.md
+++ b/docs/sources/flow/reference/components/discovery.http.md
@@ -13,7 +13,8 @@ title: discovery.http

`discovery.http` provides a flexible way to define targets by querying an external http endpoint.

-It fetches targets from an HTTP endpoint containing a list of zero or more target definitions. The target must reply with an HTTP 200 response. The HTTP header Content-Type must be application/json, and the body must be valid JSON.
+It fetches targets from an HTTP endpoint containing a list of zero or more target definitions. The target must reply with an HTTP 200 response.
+The HTTP header Content-Type must be application/json, and the body must be valid JSON.

Example response body:

@@ -29,11 +30,11 @@
]
```

-It is possible to use additional fields in the JSON to pass parameters to [prometheus.scrape][] such as the `metricsPath` and `scrape_interval`.
+It's possible to use additional fields in the JSON to pass parameters to [prometheus.scrape][] such as the `metricsPath` and `scrape_interval`.
[prometheus.scrape]: {{< relref "./prometheus.scrape.md#technical-details" >}} -As an example, the following will provide a target with a custom `metricsPath`, scrape interval, and timeout value: +The following example provides a target with a custom `metricsPath`, scrape interval, and timeout value: ```json [ @@ -53,9 +54,9 @@ As an example, the following will provide a target with a custom `metricsPath`, ``` -It is also possible to append query parameters to the metrics path with the `__param_` syntax. +It's also possible to append query parameters to the metrics path with the `__param_` syntax. -For example, the following will call a metrics path of `/health?target_data=prometheus`: +The following example calls a metrics path of `/health?target_data=prometheus`: ```json [ @@ -76,7 +77,7 @@ For example, the following will call a metrics path of `/health?target_data=prom ``` -For more information on the potential labels you can use, see the [prometheus.scrape technical details][prometheus.scrape] section, or the [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) documentation. +For more information on the potential labels you can use, refer to the [prometheus.scrape technical details][prometheus.scrape] section, or the [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) documentation. ## Usage @@ -90,54 +91,52 @@ discovery.http "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ---------------- | ------------------- | ------------------------------------------------------------------------------------------ |---------| -------- -`url` | string | URL to scrape | | yes -`refresh_interval` | `duration` | How often to refresh targets. 
| `"60s"` | no
+Name | Type | Description | Default | Required
+-------------------|------------|-------------------------------|---------|---------
+`url` | `string` | URL to scrape. | | yes
+`refresh_interval` | `duration` | How often to refresh targets. | `"60s"` | no

## Blocks

-The following blocks are supported inside the definition of
-`discovery.http`:
+The following blocks are supported inside the definition of `discovery.http`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+Hierarchy | Block | Description | Required
+--------------------|-------------------|----------------------------------------------------------|---------
+authorization | [authorization][] | Configure generic authorization to the endpoint. | no
+basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
+oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
+oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no

-The `>` symbol indicates deeper levels of nesting. For example,
-`oauth2 > tls_config` refers to a `tls_config` block defined inside
-an `oauth2` block.
+The `>` symbol indicates deeper levels of nesting.
+For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block.
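As a sketch of the nesting described above, a `tls_config` block sits inside the `oauth2` block. The URL and credential values here are placeholders, not a real service:

```river
discovery.http "example" {
  url = "https://discovery.example.com/targets"

  oauth2 {
    client_id     = "my-client-id"
    client_secret = "my-client-secret"
    token_url     = "https://auth.example.com/oauth2/token"

    // Nested block: oauth2 > tls_config
    tls_config {
      insecure_skip_verify = false
    }
  }
}
```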
[basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|--------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the filesystem. Each target includes the following labels: @@ -146,13 +145,12 @@ Each target includes the following labels: ## Component health -`discovery.http` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.http` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.http` does not expose any component-specific debug information. 
+`discovery.http` doesn't expose any component-specific debug information.

## Debug metrics

@@ -160,7 +158,7 @@ values.

## Examples

-This example will query a url every 15 seconds and expose targets that it finds:
+The following example queries a URL every 15 seconds and exposes targets that it finds:

```river
discovery.http "dynamic_targets" {

diff --git a/docs/sources/flow/reference/components/discovery.ionos.md b/docs/sources/flow/reference/components/discovery.ionos.md
index 0017b5ea95e8..518037454249 100644
--- a/docs/sources/flow/reference/components/discovery.ionos.md
+++ b/docs/sources/flow/reference/components/discovery.ionos.md
@@ -30,48 +30,46 @@ The following arguments are supported:

| Name | Type | Description | Default | Required |
| ------------------ | ---------- | ------------------------------------------------------------ | ------- | -------- |
| `datacenter_id` | `string` | The unique ID of the data center. | | yes |
-| `refresh_interval` | `duration` | The time after which the servers are refreshed. | `60s` | no |
-| `port` | `int` | The port to scrap metrics from. | 80 | no |
-| `proxy_url` | `string` | HTTP proxy to proxy requests through. | | no |
| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `port` | `int` | The port to scrape metrics from. | 80 | no |
+| `proxy_url` | `string` | HTTP proxy to proxy requests through. | | no |
+| `refresh_interval` | `duration` | The time after which the servers are refreshed.
| `60s` | no | ## Blocks -The following blocks are supported inside the definition of -`discovery.ionos`: +The following blocks are supported inside the definition of `discovery.ionos`: | Hierarchy | Block | Description | Required | | ------------------- | ----------------- | -------------------------------------------------------- | -------- | -| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | | authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | | oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | | oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. 
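A minimal sketch of `discovery.ionos` with one of the authentication blocks above, `basic_auth`. The datacenter ID and credentials are placeholders:

```river
discovery.ionos "example" {
  // Required: the unique ID of the IONOS data center.
  datacenter_id = "11111111-1111-1111-1111-111111111111"

  // One authentication block, nested directly in the component.
  basic_auth {
    username = "my-ionos-user"
    password = "my-ionos-password"
  }
}
```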
[basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -83,33 +81,32 @@ The following fields are exported and can be referenced by other components: Each target includes the following labels: -- `__meta_ionos_server_availability_zone`: the availability zone of the server. -- `__meta_ionos_server_boot_cdrom_id`: the ID of the CD-ROM the server is booted from. -- `__meta_ionos_server_boot_image_id`: the ID of the boot image or snapshot the server is booted from. -- `__meta_ionos_server_boot_volume_id`: the ID of the boot volume. -- `__meta_ionos_server_cpu_family`: the CPU family of the server to. -- `__meta_ionos_server_id`: the ID of the server. -- `__meta_ionos_server_ip`: comma separated list of all IPs assigned to the server. -- `__meta_ionos_server_lifecycle`: the lifecycle state of the server resource. -- `__meta_ionos_server_name`: the name of the server. 
-- `__meta_ionos_server_nic_ip_`: comma separated list of IPs, grouped by the name of each NIC attached to the server.
-- `__meta_ionos_server_servers_id`: the ID of the servers the server belongs to.
-- `__meta_ionos_server_state`: the execution state of the server.
-- `__meta_ionos_server_type`: the type of the server.
+- `__meta_ionos_server_availability_zone`: The availability zone of the server.
+- `__meta_ionos_server_boot_cdrom_id`: The ID of the CD-ROM the server is booted from.
+- `__meta_ionos_server_boot_image_id`: The ID of the boot image or snapshot the server is booted from.
+- `__meta_ionos_server_boot_volume_id`: The ID of the boot volume.
+- `__meta_ionos_server_cpu_family`: The CPU family of the server.
+- `__meta_ionos_server_id`: The ID of the server.
+- `__meta_ionos_server_ip`: Comma separated list of all IPs assigned to the server.
+- `__meta_ionos_server_lifecycle`: The lifecycle state of the server resource.
+- `__meta_ionos_server_name`: The name of the server.
+- `__meta_ionos_server_nic_ip_`: Comma separated list of IPs, grouped by the name of each NIC attached to the server.
+- `__meta_ionos_server_servers_id`: The ID of the servers the server belongs to.
+- `__meta_ionos_server_state`: The execution state of the server.
+- `__meta_ionos_server_type`: The type of the server.

## Component health

-`discovery.ionos` is only reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`discovery.ionos` is only reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

## Debug information

-`discovery.ionos` does not expose any component-specific debug information.
+`discovery.ionos` doesn't expose any component-specific debug information.

## Debug metrics

-`discovery.ionos` does not expose any component-specific debug metrics.
+`discovery.ionos` doesn't expose any component-specific debug metrics.
## Example

@@ -125,18 +122,17 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```

Replace the following:
-
-- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-- `USERNAME`: The username to use for authentication to the remote_write API.
-- `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.

diff --git a/docs/sources/flow/reference/components/discovery.kubelet.md b/docs/sources/flow/reference/components/discovery.kubelet.md
index a99fdffa9739..dfe239018b92 100644
--- a/docs/sources/flow/reference/components/discovery.kubelet.md
+++ b/docs/sources/flow/reference/components/discovery.kubelet.md
@@ -26,110 +26,97 @@ discovery.kubelet "LABEL" {

## Requirements

* The Kubelet must be reachable from the `grafana-agent` pod network.
-* Follow the [Kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization)
-  documentation to configure authentication to the Kubelet API.
+* Follow the [Kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization) documentation to configure authentication to the Kubelet API.

## Arguments

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`url` | `string` | URL of the Kubelet server. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with.
| | no
-`refresh_interval` | `duration` | How often the Kubelet should be polled for scrape targets | `5s` | no
-`namespaces` | `list(string)` | A list of namespaces to extract target pods from | | no
+Name | Type | Description | Default | Required
+--------------------|----------------|------------------------------------------------------------|---------|---------
+`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
+`bearer_token` | `secret` | Bearer token to authenticate with. | | no
+`namespaces` | `list(string)` | A list of namespaces to extract target pods from. | | no
+`refresh_interval` | `duration` | How often the Kubelet should be polled for scrape targets. | `5s` | no
+`url` | `string` | URL of the Kubelet server. | | no

-One of the following authentication methods must be provided if kubelet authentication is enabled
- - [`bearer_token` argument](#arguments).
- - [`bearer_token_file` argument](#arguments).
- - [`authorization` block][authorization].
+One of the following authentication methods must be provided if Kubelet authentication is enabled:

-The `namespaces` list limits the namespaces to discover resources in. If
-omitted, all namespaces are searched.
+- [`authorization` block][authorization].
+- [`bearer_token_file` argument](#arguments).
+- [`bearer_token` argument](#arguments).
+
+The `namespaces` list limits the namespaces to discover resources in. If omitted, all namespaces are searched.

## Blocks

-The following blocks are supported inside the definition of
-`discovery.kubelet`:
+The following blocks are supported inside the definition of `discovery.kubelet`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint.
| no +Hierarchy | Block | Description | Required +--------------|-------------------|--------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no [authorization]: #authorization-block [tls_config]: #tls_config-block -### authorization block +### authorization -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### tls_config block +### tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|---------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Kubelet API. Each target includes the following labels: * `__address__`: The target address to scrape derived from the pod IP and container port. * `__meta_kubernetes_namespace`: The namespace of the pod object. -* `__meta_kubernetes_pod_name`: The name of the pod object. -* `__meta_kubernetes_pod_ip`: The pod IP of the pod object. -* `__meta_kubernetes_pod_label_`: Each label from the pod object. -* `__meta_kubernetes_pod_labelpresent_`: `true` for each label from - the pod object. -* `__meta_kubernetes_pod_annotation_`: Each annotation from the - pod object. -* `__meta_kubernetes_pod_annotationpresent_`: `true` for each - annotation from the pod object. -* `__meta_kubernetes_pod_container_init`: `true` if the container is an - `InitContainer`. 
-* `__meta_kubernetes_pod_container_name`: Name of the container the target - address points to. -* `__meta_kubernetes_pod_container_id`: ID of the container the target address - points to. The ID is in the form `://`. +* `__meta_kubernetes_pod_annotation_`: Each annotation from the pod object. +* `__meta_kubernetes_pod_annotationpresent_`: `true` for each annotation from the pod object. +* `__meta_kubernetes_pod_container_id`: ID of the container the target address points to. The ID is in the form `://`. * `__meta_kubernetes_pod_container_image`: The image the container is using. +* `__meta_kubernetes_pod_container_init`: `true` if the container is an `InitContainer`. +* `__meta_kubernetes_pod_container_name`: Name of the container the target address points to. * `__meta_kubernetes_pod_container_port_name`: Name of the container port. * `__meta_kubernetes_pod_container_port_number`: Number of the container port. -* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container - port. -* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready - state. -* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or - `Unknown` in the lifecycle. -* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled - onto. -* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. -* `__meta_kubernetes_pod_uid`: The UID of the pod object. +* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port. * `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller. * `__meta_kubernetes_pod_controller_name`: Name of the pod controller. +* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. +* `__meta_kubernetes_pod_ip`: The pod IP of the pod object. +* `__meta_kubernetes_pod_label_`: Each label from the pod object. +* `__meta_kubernetes_pod_labelpresent_`: `true` for each label from the pod object. 
+* `__meta_kubernetes_pod_name`: The name of the pod object.
+* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto.
+* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle.
+* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state.
+* `__meta_kubernetes_pod_uid`: The UID of the pod object.

-> **Note**: The Kubelet API used by this component is an internal API and therefore the
-> data in the response returned from the API cannot be guaranteed between different versions
-> of the Kubelet.
+{{% admonition type="note" %}}
+The Kubelet API used by this component is an internal API and therefore the data in the response returned from the API can't be guaranteed between different versions of the Kubelet.
+{{% /admonition %}}

## Component health

-`discovery.kubelet` is reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`discovery.kubelet` is reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

## Debug information

-`discovery.kubelet` does not expose any component-specific debug information.
+`discovery.kubelet` doesn't expose any component-specific debug information.

## Debug metrics

-`discovery.kubelet` does not expose any component-specific debug metrics.
+`discovery.kubelet` doesn't expose any component-specific debug metrics.

## Examples

@@ -149,23 +136,24 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:

- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.

### Limit searched namespaces

-This example limits the namespaces where pods are discovered using the `namespaces` argument:
+The following example limits the namespaces where pods are discovered using the `namespaces` argument:

```river
discovery.kubelet "k8s_pods" {
@@ -180,16 +168,17 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:

- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.

diff --git a/docs/sources/flow/reference/components/discovery.kubernetes.md b/docs/sources/flow/reference/components/discovery.kubernetes.md
index 5b8cd870af6e..598944196b39 100644
--- a/docs/sources/flow/reference/components/discovery.kubernetes.md
+++ b/docs/sources/flow/reference/components/discovery.kubernetes.md
@@ -11,13 +11,11 @@ title: discovery.kubernetes

# discovery.kubernetes

-`discovery.kubernetes` allows you to find scrape targets from Kubernetes
-resources. It watches cluster state, and ensures targets are continually synced
-with what is currently running in your cluster.
+`discovery.kubernetes` allows you to find scrape targets from Kubernetes resources. +It watches cluster state, and ensures targets are continually synced with what is currently running in your cluster. -If you supply no connection information, this component defaults to an -in-cluster config. A kubeconfig file or manual connection settings can be used -to override the defaults. +If you supply no connection information, this component defaults to an in-cluster configuration. +A kubeconfig file or manual connection settings can be used to override the defaults. ## Usage @@ -31,238 +29,189 @@ discovery.kubernetes "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`api_server` | `string` | URL of Kubernetes API server. | | no -`role` | `string` | Type of Kubernetes resource to query. | | yes -`kubeconfig_file` | `string` | Path of kubeconfig file to use for connecting to Kubernetes. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|----------|--------------------------------------------------------------|---------|--------- +`role` | `string` | Type of Kubernetes resource to query. | | yes +`api_server` | `string` | URL of Kubernetes API server. | | no +`kubeconfig_file` | `string` | Path of kubeconfig file to use for connecting to Kubernetes. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. 
| | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no At most one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. + +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. [arguments]: #arguments The `role` argument is required to specify what type of targets to discover. -`role` must be one of `node`, `pod`, `service`, `endpoints`, `endpointslice`, -or `ingress`. +`role` must be one of `node`, `pod`, `service`, `endpoints`, `endpointslice`, or `ingress`. ### node role -The `node` role discovers one target per cluster node with the address -defaulting to the HTTP port of the Kubelet daemon. The target address defaults -to the first existing address of the Kubernetes node object in the address type -order of `NodeInternalIP`, `NodeExternalIP`, `NodeLegacyHostIP`, and -`NodeHostName`. +The `node` role discovers one target per cluster node with the address defaulting to the HTTP port of the Kubelet daemon. +The target address defaults to the first existing address of the Kubernetes node object in the address type order of `NodeInternalIP`, `NodeExternalIP`, `NodeLegacyHostIP`, and `NodeHostName`. The following labels are included for discovered nodes: -* `__meta_kubernetes_node_name`: The name of the node object. -* `__meta_kubernetes_node_provider_id`: The cloud provider's name for the node object. +* `__meta_kubernetes_node_address_`: The first address for each node address type, if it exists. 
+* `__meta_kubernetes_node_annotation_`: Each annotation from the node object. +* `__meta_kubernetes_node_annotationpresent_`: Set to `true` for each annotation from the node object. * `__meta_kubernetes_node_label_`: Each label from the node object. * `__meta_kubernetes_node_labelpresent_`: Set to `true` for each label from the node object. -* `__meta_kubernetes_node_annotation_`: Each annotation from the node object. -* `__meta_kubernetes_node_annotationpresent_`: Set to `true` - for each annotation from the node object. -* `__meta_kubernetes_node_address_`: The first address for each - node address type, if it exists. +* `__meta_kubernetes_node_name`: The name of the node object. +* `__meta_kubernetes_node_provider_id`: The cloud provider's name for the node object. -In addition, the `instance` label for the node will be set to the node name as -retrieved from the API server. +In addition, the `instance` label for the node is set to the node name as retrieved from the API server. ### service role The `service` role discovers a target for each service port for each service. -This is generally useful for externally monitoring a service. The address will -be set to the Kubernetes DNS name of the service and respective service port. +This is generally useful for externally monitoring a service. +The address will be set to the Kubernetes DNS name of the service and respective service port. The following labels are included for discovered services: * `__meta_kubernetes_namespace`: The namespace of the service object. -* `__meta_kubernetes_service_annotation_`: Each annotation from - the service object. -* `__meta_kubernetes_service_annotationpresent_`: `true` for - each annotation of the service object. -* `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the - service. This does not apply to services of type `ExternalName`. -* `__meta_kubernetes_service_external_name`: The DNS name of the service. 
- This only applies to services of type `ExternalName`.
-* `__meta_kubernetes_service_label_<labelname>`: Each label from the service
- object.
-* `__meta_kubernetes_service_labelpresent_<labelname>`: `true` for each label
- of the service object.
+* `__meta_kubernetes_service_annotation_<annotationname>`: Each annotation from the service object.
+* `__meta_kubernetes_service_annotationpresent_<annotationname>`: `true` for each annotation of the service object.
+* `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the service. This doesn't apply to services of type `ExternalName`.
+* `__meta_kubernetes_service_external_name`: The DNS name of the service. This only applies to services of type `ExternalName`.
+* `__meta_kubernetes_service_label_<labelname>`: Each label from the service object.
+* `__meta_kubernetes_service_labelpresent_<labelname>`: `true` for each label of the service object.
* `__meta_kubernetes_service_name`: The name of the service object.
-* `__meta_kubernetes_service_port_name`: Name of the service port for the
- target.
-* `__meta_kubernetes_service_port_number`: Number of the service port for the
- target.
-* `__meta_kubernetes_service_port_protocol`: Protocol of the service port for
- the target.
+* `__meta_kubernetes_service_port_name`: Name of the service port for the target.
+* `__meta_kubernetes_service_port_number`: Number of the service port for the target.
+* `__meta_kubernetes_service_port_protocol`: Protocol of the service port for the target.
* `__meta_kubernetes_service_type`: The type of the service.

### pod role

-The `pod` role discovers all pods and exposes their containers as targets. For
-each declared port of a container, a single target is generated.
+The `pod` role discovers all pods and exposes their containers as targets.
+For each declared port of a container, a single target is generated.

-If a container has no specified ports, a port-free target per container is
-created.
These targets must have a port manually injected using a
-[`discovery.relabel` component][discovery.relabel] before metrics can be
-collected from them.
+If a container has no specified ports, a port-free target per container is created.
+These targets must have a port manually injected using a [`discovery.relabel` component][discovery.relabel] before metrics can be collected from them.

The following labels are included for discovered pods:

* `__meta_kubernetes_namespace`: The namespace of the pod object.
-* `__meta_kubernetes_pod_name`: The name of the pod object.
-* `__meta_kubernetes_pod_ip`: The pod IP of the pod object.
-* `__meta_kubernetes_pod_label_<labelname>`: Each label from the pod object.
-* `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from
- the pod object.
-* `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the
- pod object.
-* `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each
- annotation from the pod object.
-* `__meta_kubernetes_pod_container_init`: `true` if the container is an
- `InitContainer`.
-* `__meta_kubernetes_pod_container_name`: Name of the container the target
- address points to.
-* `__meta_kubernetes_pod_container_id`: ID of the container the target address
- points to. The ID is in the form `<type>://<container_id>`.
+* `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the pod object.
+* `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each annotation from the pod object.
+* `__meta_kubernetes_pod_container_id`: ID of the container the target address points to. The ID is in the form `<type>://<container_id>`.
* `__meta_kubernetes_pod_container_image`: The image the container is using.
+* `__meta_kubernetes_pod_container_init`: `true` if the container is an `InitContainer`.
+* `__meta_kubernetes_pod_container_name`: Name of the container the target address points to.
* `__meta_kubernetes_pod_container_port_name`: Name of the container port.
* `__meta_kubernetes_pod_container_port_number`: Number of the container port.
-* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container
- port.
-* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready
- state.
-* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or
- `Unknown` in the lifecycle.
-* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled
- onto.
-* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object.
-* `__meta_kubernetes_pod_uid`: The UID of the pod object.
+* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port.
* `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller.
* `__meta_kubernetes_pod_controller_name`: Name of the pod controller.
+* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object.
+* `__meta_kubernetes_pod_ip`: The pod IP of the pod object.
+* `__meta_kubernetes_pod_label_<labelname>`: Each label from the pod object.
+* `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from the pod object.
+* `__meta_kubernetes_pod_name`: The name of the pod object.
+* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto.
+* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle.
+* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state.
+* `__meta_kubernetes_pod_uid`: The UID of the pod object.

### endpoints role

-The `endpoints` role discovers targets from listed endpoints of a service. For
-each endpoint address one target is discovered per port. If the endpoint is
-backed by a pod, all container ports of a pod are discovered as targets even if
-they are not bound to an endpoint port.
+The `endpoints` role discovers targets from listed endpoints of a service.
+For each endpoint address, one target is discovered per port.
+If the endpoint is backed by a pod, all container ports of a pod are discovered as targets even if they are not bound to an endpoint port.

The following labels are included for discovered endpoints:

-* `__meta_kubernetes_namespace:` The namespace of the endpoints object.
+* `__meta_kubernetes_endpoints_label_<labelname>`: Each label from the endpoints object.
+* `__meta_kubernetes_endpoints_labelpresent_<labelname>`: `true` for each label from the endpoints object.
* `__meta_kubernetes_endpoints_name`: The names of the endpoints object.
-* `__meta_kubernetes_endpoints_label_<labelname>`: Each label from the
- endpoints object.
-* `__meta_kubernetes_endpoints_labelpresent_<labelname>`: `true` for each label
- from the endpoints object.
-* The following labels are attached for all targets discovered directly from
- the endpoints list:
- * `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint.
- * `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the
- endpoint.
- * `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the
- endpoint's ready state.
- * `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port.
- * `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port.
- * `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint
- address target.
- * `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint
- address target.
-* If the endpoints belong to a service, all labels of the `service` role
- discovery are attached.
-* For all targets backed by a pod, all labels of the `pod` role discovery are
- attached.
+* `__meta_kubernetes_namespace`: The namespace of the endpoints object.
+
+The following labels are attached for all targets discovered directly from the endpoints list:
+
+* `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint address target.
+* `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint address target.
+* `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint. +* `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the endpoint. +* `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port. +* `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port. +* `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the endpoint's ready state. + +If the endpoints belong to a service, all labels of the `service` role discovery are attached. + +For all targets backed by a pod, all labels of the `pod` role discovery are attached. ### endpointslice role -The endpointslice role discovers targets from existing Kubernetes endpoint -slices. For each endpoint address referenced in the `EndpointSlice` object, one -target is discovered. If the endpoint is backed by a pod, all container ports -of a pod are discovered as targets even if they are not bound to an endpoint -port. +The endpointslice role discovers targets from existing Kubernetes endpoint slices. +For each endpoint address referenced in the `EndpointSlice` object, one target is discovered. +If the endpoint is backed by a pod, all container ports of a pod are discovered as targets even if they are not bound to an endpoint port. The following labels are included for discovered endpoint slices: -* `__meta_kubernetes_namespace`: The namespace of the endpoints object. * `__meta_kubernetes_endpointslice_name`: The name of endpoint slice object. -* The following labels are attached for all targets discovered directly from - the endpoint slice list: - * `__meta_kubernetes_endpointslice_address_target_kind`: Kind of the - referenced object. - * `__meta_kubernetes_endpointslice_address_target_name`: Name of referenced - object. - * `__meta_kubernetes_endpointslice_address_type`: The IP protocol family of - the address of the target. - * `__meta_kubernetes_endpointslice_endpoint_conditions_ready`: Set to `true` - or `false` for the referenced endpoint's ready state. 
- * `__meta_kubernetes_endpointslice_endpoint_topology_kubernetes_io_hostname`: - Name of the node hosting the referenced endpoint. - * `__meta_kubernetes_endpointslice_endpoint_topology_present_kubernetes_io_hostname`: - `true` if the referenced object has a `kubernetes.io/hostname` annotation. - * `__meta_kubernetes_endpointslice_port`: Port of the referenced endpoint. - * `__meta_kubernetes_endpointslice_port_name`: Named port of the referenced - endpoint. - * `__meta_kubernetes_endpointslice_port_protocol`: Protocol of the referenced - endpoint. -* If the endpoints belong to a service, all labels of the `service` role - discovery are attached. -* For all targets backed by a pod, all labels of the `pod` role discovery are - attached. +* `__meta_kubernetes_namespace`: The namespace of the endpoints object. + +The following labels are attached for all targets discovered directly from the endpoint slice list: + +* `__meta_kubernetes_endpointslice_address_target_kind`: Kind of the referenced object. +* `__meta_kubernetes_endpointslice_address_target_name`: Name of referenced object. +* `__meta_kubernetes_endpointslice_address_type`: The IP protocol family of the address of the target. +* `__meta_kubernetes_endpointslice_endpoint_conditions_ready`: Set to `true` or `false` for the referenced endpoint's ready state. +* `__meta_kubernetes_endpointslice_endpoint_topology_kubernetes_io_hostname`: Name of the node hosting the referenced endpoint. +* `__meta_kubernetes_endpointslice_endpoint_topology_present_kubernetes_io_hostname`: `true` if the referenced object has a `kubernetes.io/hostname` annotation. +* `__meta_kubernetes_endpointslice_port_name`: Named port of the referenced endpoint. +* `__meta_kubernetes_endpointslice_port_protocol`: Protocol of the referenced endpoint. +* `__meta_kubernetes_endpointslice_port`: Port of the referenced endpoint. + +If the endpoints belong to a service, all labels of the `service` role discovery are attached. 
+
+For all targets backed by a pod, all labels of the `pod` role discovery are attached.

### ingress role

-The `ingress` role discovers a target for each path of each ingress. This is
-generally useful for externally monitoring an ingress. The address will be set
-to the host specified in the Kubernetes `Ingress`'s `spec` block.
+The `ingress` role discovers a target for each path of each ingress.
+This is generally useful for externally monitoring an ingress.
+The address will be set to the host specified in the Kubernetes `Ingress`'s `spec` block.

The following labels are included for discovered ingress objects:

-* `__meta_kubernetes_namespace`: The namespace of the ingress object.
+* `__meta_kubernetes_ingress_annotation_<annotationname>`: Each annotation from the ingress object.
+* `__meta_kubernetes_ingress_annotationpresent_<annotationname>`: `true` for each annotation from the ingress object.
+* `__meta_kubernetes_ingress_class_name`: Class name from ingress spec, if present.
+* `__meta_kubernetes_ingress_label_<labelname>`: Each label from the ingress object.
+* `__meta_kubernetes_ingress_labelpresent_<labelname>`: `true` for each label from the ingress object.
* `__meta_kubernetes_ingress_name`: The name of the ingress object.
-* `__meta_kubernetes_ingress_label_<labelname>`: Each label from the ingress
- object.
-* `__meta_kubernetes_ingress_labelpresent_<labelname>`: `true` for each label
- from the ingress object.
-* `__meta_kubernetes_ingress_annotation_<annotationname>`: Each annotation from
- the ingress object.
-* `__meta_kubernetes_ingress_annotationpresent_<annotationname>`: `true` for each
- annotation from the ingress object.
-* `__meta_kubernetes_ingress_class_name`: Class name from ingress spec, if
- present.
-* `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS
- config is set. Defaults to `http`.
* `__meta_kubernetes_ingress_path`: Path from ingress spec. Defaults to `/`.
+* `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS config is set. Defaults to `http`.
+* `__meta_kubernetes_namespace`: The namespace of the ingress object.

## Blocks

The following blocks are supported inside the definition of
`discovery.kubernetes`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-namespaces | [namespaces][] | Information about which Kubernetes namespaces to search. | no
-selectors | [selectors][] | Information about which Kubernetes namespaces to search. | no
-attach_metadata | [attach_metadata][] | Optional metadata to attach to discovered targets. | no
-basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+Hierarchy | Block | Description | Required
+--------------------|---------------------|----------------------------------------------------------|---------
+attach_metadata | [attach_metadata][] | Optional metadata to attach to discovered targets. | no
+authorization | [authorization][] | Configure generic authorization to the endpoint. | no
+basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
+namespaces | [namespaces][] | Information about which Kubernetes namespaces to search. | no
+oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
+oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+selectors | [selectors][] | Selectors to limit the discovery process to a subset of resources. | no

-The `>` symbol indicates deeper levels of nesting. For example,
-`oauth2 > tls_config` refers to a `tls_config` block defined inside
-an `oauth2` block.
+The `>` symbol indicates deeper levels of nesting.
+For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block.

[namespaces]: #namespaces-block
[selectors]: #selectors-block
@@ -272,97 +221,92 @@ an `oauth2` block.
[oauth2]: #oauth2-block
[tls_config]: #tls_config-block

-### namespaces block
+### attach_metadata

-The `namespaces` block limits the namespaces to discover resources in. If
-omitted, all namespaces are searched.
+The `attach_metadata` block lets you attach node metadata to discovered targets. It's valid for the `pod`, `endpoints`, and `endpointslice` roles.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`own_namespace` | `bool` | Include the namespace the agent is running in. | | no
-`names` | `list(string)` | List of namespaces to search. | | no
+Name | Type | Description | Default | Required
+-------|--------|-----------------------|---------|---------
+`node` | `bool` | Attach node metadata. | | no

-### selectors block
+### authorization

-The `selectors` block contains optional label and field selectors to limit the
-discovery process to a subset of resources.
+{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`role` | `string` | Role of the selector. | | yes
-`label`| `string` | Label selector string. | | no
-`field` | `string` | Field selector string. | | no
+### basic_auth

-See Kubernetes' documentation for [Field selectors][] and [Labels and
-selectors][] to learn more about the possible filters that can be used.
+{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}

-The endpoints role supports pod, service, and endpoints selectors.
-The pod role supports node selectors when configured with `attach_metadata: {node: true}`.
-Other roles only support selectors matching the role itself (e.g.
node role can only contain node selectors). +### namespaces -> **Note**: Using multiple `discovery.kubernetes` components with different -> selectors may result in a bigger load against the Kubernetes API. -> -> Selectors are recommended for retrieving a small set of resources in a very -> large cluster. Smaller clusters are recommended to avoid selectors in favor -> of filtering with [a `discovery.relabel` component][discovery.relabel] -> instead. +The `namespaces` block limits the namespaces to discover resources in. If omitted, all namespaces are searched. -[Field selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ -[Labels and selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ -[discovery.relabel]: {{< relref "./discovery.relabel.md" >}} +Name | Type | Description | Default | Required +----------------|----------------|------------------------------------------------|---------|--------- +`names` | `list(string)` | List of namespaces to search. | | no +`own_namespace` | `bool` | Include the namespace the agent is running in. | | no -### attach_metadata block -The `attach_metadata` block allows to attach node metadata to discovered -targets. Valid for roles: pod, endpoints, endpointslice. +### oauth2 -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`node` | `bool` | Attach node metadata. 
| | no
+{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}}

-### basic_auth block
+### oauth2 > tls_config

-{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}}

-### authorization block
+### selectors

-{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}
+The `selectors` block contains optional label and field selectors to limit the discovery process to a subset of resources.

-### oauth2 block
+Name | Type | Description | Default | Required
+--------|----------|------------------------|---------|---------
+`field` | `string` | Field selector string. | | no
+`label` | `string` | Label selector string. | | no
+`role` | `string` | Role of the selector. | | yes

-{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}}
+See Kubernetes' documentation for [Field selectors][] and [Labels and selectors][] to learn more about the possible filters that can be used.

-### tls_config block
+The endpoints role supports pod, service, and endpoints selectors.
+The pod role supports node selectors when configured with `attach_metadata: {node: true}`.
+Other roles only support selectors matching the role itself. For example, the node role can only contain node selectors.
+
+{{% admonition type="note" %}}
+Using multiple `discovery.kubernetes` components with different selectors can increase the load on the Kubernetes API.
+
+Selectors are recommended for retrieving a small set of resources in a very large cluster. In smaller clusters, avoid selectors in favor of filtering with [a `discovery.relabel` component][discovery.relabel] instead.
+ +[discovery.relabel]: {{< relref "./discovery.relabel.md" >}} +{{% /admonition %}} -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +[Field selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ +[Labels and selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|------------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Kubernetes API. ## Component health -`discovery.kubernetes` is reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.kubernetes` is reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.kubernetes` does not expose any component-specific debug information. +`discovery.kubernetes` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.kubernetes` does not expose any component-specific debug metrics. +`discovery.kubernetes` doesn't expose any component-specific debug metrics. 
## Examples

### In-cluster discovery

-This example uses in-cluster authentication to discover all pods:
+The following example uses in-cluster authentication to discover all pods:

```river
discovery.kubernetes "k8s_pods" {
@@ -376,23 +320,24 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.

### Kubeconfig authentication

-This example uses a kubeconfig file to authenticate to the Kubernetes API:
+The following example uses a kubeconfig file to authenticate to the Kubernetes API:

```river
discovery.kubernetes "k8s_pods" {
@@ -407,23 +352,24 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
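+
+### Inject a port for port-free pod targets
+
+As a hedged sketch (the component labels and port `8080` are assumptions, not part of the original examples), a `discovery.relabel` component can add the port that port-free pod targets are missing before metrics are collected:
+
+```river
+discovery.kubernetes "k8s_pods" {
+  role = "pod"
+}
+
+// Append an assumed metrics port to every discovered address.
+discovery.relabel "pods_with_port" {
+  targets = discovery.kubernetes.k8s_pods.targets
+
+  rule {
+    source_labels = ["__address__"]
+    regex         = "(.+)"
+    target_label  = "__address__"
+    replacement   = "${1}:8080"
+  }
+}
+```
+
+Components can then scrape `discovery.relabel.pods_with_port.output` instead of the raw discovery targets.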
### Limit searched namespaces and filter by labels value

-This example limits the searched namespaces and only selects pods with a specific label value attached to them:
+The following example limits the searched namespaces and only selects pods with a specific label value attached to them:

```river
discovery.kubernetes "k8s_pods" {
@@ -446,23 +392,24 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.

### Limit to only pods on the same node

-This example limits the search to pods on the same node as this Grafana Agent. This configuration could be useful if you are running the Agent as a DaemonSet:
+The following example limits the search to pods on the same node as this Grafana Agent. This configuration could be useful if you are running the Agent as a DaemonSet:

```river
discovery.kubernetes "k8s_pods" {
@@ -480,16 +427,17 @@ prometheus.scrape "demo" {

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.kuma.md b/docs/sources/flow/reference/components/discovery.kuma.md
index 6e799a6147a8..47ba33a4e1f8 100644
--- a/docs/sources/flow/reference/components/discovery.kuma.md
+++ b/docs/sources/flow/reference/components/discovery.kuma.md
@@ -27,88 +27,84 @@ discovery.kuma "LABEL" {

The following arguments are supported:
- -The following blocks are supported inside the definition of -`discovery.kuma`: - -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no - -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +Name | Type | Description | Default | Required +--------------------|------------|----------------------------------------------------------------|---------|--------- +`server` | `string` | Address of the Kuma Control Plane's MADS xDS server. | | yes +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`fetch_timeout` | `duration` | The time after which the monitoring assignments are refreshed. | `"2m"` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | The time to wait between polling update requests. | `"30s"` | no + +You can provide one of the following arguments for authentication: + +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. 
+ +The following blocks are supported inside the definition of `discovery.kuma`: + +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no + +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. [basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block - -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +### oauth2 > tls_config +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | 
------------------- | -----------
+----------|---------------------|-------------------------------------------------
`targets` | `list(map(string))` | The set of targets discovered from the Kuma API.

-The following meta labels are available on targets and can be used by the
-discovery.relabel component:
-* `__meta_kuma_mesh`: the name of the proxy's Mesh
-* `__meta_kuma_dataplane`: the name of the proxy
-* `__meta_kuma_service`: the name of the proxy's associated Service
-* `__meta_kuma_label_<tagname>`: each tag of the proxy
+The following meta labels are available on targets and can be used by the `discovery.relabel` component:
+* `__meta_kuma_dataplane`: The name of the proxy.
+* `__meta_kuma_label_<tagname>`: Each tag of the proxy.
+* `__meta_kuma_mesh`: The name of the proxy's Mesh.
+* `__meta_kuma_service`: The name of the proxy's associated Service.

## Component health

-`discovery.kuma` is only reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`discovery.kuma` is only reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

## Debug information

-`discovery.kuma` does not expose any component-specific debug information.
+`discovery.kuma` doesn't expose any component-specific debug information.

## Debug metrics

-`discovery.kuma` does not expose any component-specific debug metrics.
+`discovery.kuma` doesn't expose any component-specific debug metrics.

## Example

@@ -122,16 +118,16 @@ prometheus.scrape "demo" {
}

prometheus.remote_write "demo" {
  endpoint {
-    url = PROMETHEUS_REMOTE_WRITE_URL
+    url = <PROMETHEUS_REMOTE_WRITE_URL>

    basic_auth {
-      username = USERNAME
-      password = PASSWORD
+      username = <USERNAME>
+      password = <PASSWORD>
    }
  }
}
```

-Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API. +Replace the following: +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.lightsail.md b/docs/sources/flow/reference/components/discovery.lightsail.md index a2b47841217d..1f7258d1c132 100644 --- a/docs/sources/flow/reference/components/discovery.lightsail.md +++ b/docs/sources/flow/reference/components/discovery.lightsail.md @@ -11,7 +11,8 @@ title: discovery.lightsail # discovery.lightsail -`discovery.lightsail` allows retrieving scrape targets from Amazon Lightsail instances. The private IP address is used by default, but may be changed to the public IP address with relabeling. +`discovery.lightsail` allows retrieving scrape targets from Amazon Lightsail instances. +The private IP address is used by default, but may be changed to the public IP address with relabeling. ## Usage @@ -24,23 +25,23 @@ discovery.lightsail "LABEL" { ``` ## Arguments The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | Custom endpoint to be used.| | no -`region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no -`access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no -`secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no -`profile` | `string` | Named AWS profile used to connect to the API. | | no -`role_arn` | `string` | AWS Role ARN, an alternative to using AWS API keys. | | no -`refresh_interval` | `string` | Refresh interval to re-read the instance list. 
| 60s | no -`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no +Name | Type | Description | Default | Required +-------------------|----------|-------------------------------------------------------------------------------------------------------------------------|---------|--------- +`access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no +`endpoint` | `string` | Custom endpoint to be used. | | no +`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no +`profile` | `string` | Named AWS profile used to connect to the API. | | no +`refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no +`region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no +`role_arn` | `string` | AWS Role ARN, an alternative to using AWS API keys. | | no +`secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|----------------------------------------- `targets` | `list(map(string))` | The set of discovered Lightsail targets. Each target includes the following labels: @@ -60,17 +61,16 @@ Each target includes the following labels: ## Component health -`discovery.lightsail` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.lightsail` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. 
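The switch from private to public IP addresses mentioned in the component description can be sketched with a hypothetical `discovery.relabel` pairing. The region, port `9100`, and component labels are illustrative assumptions, not values from this document:

```river
discovery.lightsail "example" {
  region = "us-east-1"
}

// Rewrite __address__ to the public IP. The scrape port must be set in the
// relabel rule, because the `port` argument only applies to the default
// (private) address.
discovery.relabel "public_ip" {
  targets = discovery.lightsail.example.targets

  rule {
    source_labels = ["__meta_lightsail_public_ip"]
    regex         = "(.+)"
    replacement   = "$1:9100"
    target_label  = "__address__"
  }
}
```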
## Debug information -`discovery.lightsail` does not expose any component-specific debug information. +`discovery.lightsail` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.lightsail` does not expose any component-specific debug metrics. +`discovery.lightsail` doesn't expose any component-specific debug metrics. ## Example @@ -86,16 +86,17 @@ prometheus.scrape "demo" { } prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.linode.md b/docs/sources/flow/reference/components/discovery.linode.md index 5bad40f86174..070a9afd1567 100644 --- a/docs/sources/flow/reference/components/discovery.linode.md +++ b/docs/sources/flow/reference/components/discovery.linode.md @@ -21,60 +21,57 @@ discovery.linode "LABEL" { ``` {{% admonition type="note" %}} -The linode APIv4 Token must be created with the scopes: `linodes:read_only`, `ips:read_only`, and `events:read_only`. +You must create the Linode APIv4 Token with the scopes: `linodes:read_only`, `ips:read_only`, and `events:read_only`. 
{{% /admonition %}} ## Arguments The following arguments are supported: -Name | Type | Description | Default | Required ------------------- | -------------- | -------------------------------------------------------------- | ------------- | -------- -`refresh_interval` | `duration` | The time to wait between polling update requests. | `"60s"` | no -`port` | `int` | Port that metrics are scraped from. | `80` | no -`tag_separator` | `string` | The string by which Linode Instance tags are joined into the tag label. | `,` | no - -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|------------|-------------------------------------------------------------------------|---------|--------- +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`port` | `int` | Port that metrics are scraped from. | `80` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | The time to wait between polling update requests. | `"60s"` | no +`tag_separator` | `string` | The string by which Linode Instance tags are joined into the tag label. | `,` | no You can provide one of the following arguments for authentication: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). 
- - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. -The following blocks are supported inside the definition of -`discovery.linode`: +- [`authorization` block][authorization]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. + +The following blocks are supported inside the definition of `discovery.linode`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|--------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. 
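The `oauth2 > tls_config` nesting described above can be sketched in a `discovery.linode` block. This is a hypothetical example: the client ID, secret, and token URL are placeholder values, and the port is an arbitrary choice:

```river
discovery.linode "example" {
  port = 9100

  // OAuth2 settings used to authenticate requests.
  oauth2 {
    client_id     = "example-client-id"
    client_secret = "example-client-secret"
    token_url     = "https://login.example.com/oauth/token"

    // tls_config sits inside oauth2, matching the
    // `oauth2 > tls_config` hierarchy in the table above.
    tls_config {
      insecure_skip_verify = false
    }
  }
}
```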
[authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### authorization block - -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +### authorization -### oauth2 block +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +### oauth2 -### tls_config block +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +### oauth2 > tls_config +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -87,38 +84,37 @@ Name | Type | Description The following meta labels are available on targets and can be used by the discovery.relabel component: -* `__meta_linode_instance_id`: the id of the Linode instance -* `__meta_linode_instance_label`: the label of the Linode instance -* `__meta_linode_image`: the slug of the Linode instance's image -* `__meta_linode_private_ipv4`: the private IPv4 of the Linode instance -* `__meta_linode_public_ipv4`: the public IPv4 of the Linode instance -* `__meta_linode_public_ipv6`: the public IPv6 of the Linode instance -* `__meta_linode_region`: the region of the Linode instance -* `__meta_linode_type`: the type of the Linode instance -* `__meta_linode_status`: the status of the Linode instance -* `__meta_linode_tags`: a list of tags of the Linode instance joined by the tag separator -* `__meta_linode_group`: the display group a Linode instance is a member of -* `__meta_linode_hypervisor`: the virtualization software powering the Linode instance -* `__meta_linode_backups`: the backup service status of the Linode instance -* `__meta_linode_specs_disk_bytes`: the amount of storage space the Linode 
instance has access to -* `__meta_linode_specs_memory_bytes`: the amount of RAM the Linode instance has access to -* `__meta_linode_specs_vcpus`: the number of VCPUS this Linode has access to -* `__meta_linode_specs_transfer_bytes`: the amount of network transfer the Linode instance is allotted each month -* `__meta_linode_extra_ips`: a list of all extra IPv4 addresses assigned to the Linode instance joined by the tag separator +* `__meta_linode_backups`: The backup service status of the Linode instance. +* `__meta_linode_extra_ips`: A list of all extra IPv4 addresses assigned to the Linode instance joined by the tag separator. +* `__meta_linode_group`: The display group a Linode instance is a member of. +* `__meta_linode_hypervisor`: The virtualization software powering the Linode instance. +* `__meta_linode_image`: The slug of the Linode instance's image. +* `__meta_linode_instance_id`: The ID of the Linode instance. +* `__meta_linode_instance_label`: The label of the Linode instance. +* `__meta_linode_private_ipv4`: The private IPv4 of the Linode instance. +* `__meta_linode_public_ipv4`: The public IPv4 of the Linode instance. +* `__meta_linode_public_ipv6`: The public IPv6 of the Linode instance. +* `__meta_linode_region`: The region of the Linode instance. +* `__meta_linode_specs_disk_bytes`: The amount of storage space the Linode instance has access to. +* `__meta_linode_specs_memory_bytes`: The amount of RAM the Linode instance has access to. +* `__meta_linode_specs_transfer_bytes`: The amount of network transfer the Linode instance is allotted each month. +* `__meta_linode_specs_vcpus`: The number of VCPUS this Linode has access to. +* `__meta_linode_status`: The status of the Linode instance. +* `__meta_linode_tags`: A list of tags of the Linode instance joined by the tag separator. +* `__meta_linode_type`: The type of the Linode instance. ## Component health -`discovery.linode` is only reported as unhealthy when given an invalid -configuration. 
In those cases, exported fields retain their last healthy -values. +`discovery.linode` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.linode` does not expose any component-specific debug information. +`discovery.linode` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.linode` does not expose any component-specific debug metrics. +`discovery.linode` doesn't expose any component-specific debug metrics. ## Example @@ -133,20 +129,20 @@ prometheus.scrape "demo" { } prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. -### Using private IP address: +### Use a private IP address ``` discovery.linode "example" { @@ -167,11 +163,16 @@ prometheus.scrape "demo" { } prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } -``` \ No newline at end of file +``` + +Replace the following: +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. 
diff --git a/docs/sources/flow/reference/components/discovery.marathon.md b/docs/sources/flow/reference/components/discovery.marathon.md index 194e8ca24107..996c83b28b7b 100644 --- a/docs/sources/flow/reference/components/discovery.marathon.md +++ b/docs/sources/flow/reference/components/discovery.marathon.md @@ -28,12 +28,12 @@ The following arguments are supported: | Name | Type | Description | Default | Required | | ------------------ | -------------- | ------------------------------------------------------------ | ------- | -------- | | `servers` | `list(string)` | List of Marathon servers. | | yes | -| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"30s"` | no | -| `auth_token` | `secret` | Auth token to authenticate with. | | no | | `auth_token_file` | `string` | File containing an auth token to authenticate with. | | no | -| `proxy_url` | `string` | HTTP proxy to proxy requests through. | | no | -| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `auth_token` | `secret` | Auth token to authenticate with. | | no | | `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to proxy requests through. | | no | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"30s"` | no | You can provide one of the following arguments for authentication: @@ -52,35 +52,34 @@ The following blocks are supported inside the definition of | Hierarchy | Block | Description | Required | | ------------------- | ----------------- | -------------------------------------------------------- | -------- | -| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. 
| no | | authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | | oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | | oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. [basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields @@ -92,31 +91,30 @@ The following fields are exported and can be referenced by other components: Each target includes the following labels: -- `__meta_marathon_app`: the name of the app 
(with slashes replaced by dashes). -- `__meta_marathon_image`: the name of the Docker image used (if available). -- `__meta_marathon_task`: the ID of the Mesos task. -- `__meta_marathon_app_label_`: any Marathon labels attached to the app. -- `__meta_marathon_port_definition_label_`: the port definition labels. -- `__meta_marathon_port_mapping_label_`: the port mapping labels. -- `__meta_marathon_port_index`: the port index number (e.g. 1 for PORT1). +- `__meta_marathon_app_label_`: Any Marathon labels attached to the app. +- `__meta_marathon_app`: The name of the app, with slashes replaced by dashes. +- `__meta_marathon_image`: The name of the Docker image used, if available. +- `__meta_marathon_port_definition_label_`: The port definition labels. +- `__meta_marathon_port_index`: The port index number, for example, 1 for PORT1. +- `__meta_marathon_port_mapping_label_`: The port mapping labels. +- `__meta_marathon_task`: The ID of the Mesos task. ## Component health -`discovery.marathon` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.marathon` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.marathon` does not expose any component-specific debug information. +`discovery.marathon` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.marathon` does not expose any component-specific debug metrics. +`discovery.marathon` doesn't expose any component-specific debug metrics. 
## Example -This example discovers targets from a Marathon server: +The following example discovers targets from a Marathon server: ```river discovery.marathon "example" { @@ -130,18 +128,17 @@ prometheus.scrape "demo" { } prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` Replace the following: - -- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.nerve.md b/docs/sources/flow/reference/components/discovery.nerve.md index e9cd8afcbd60..eb3c09e8ec61 100644 --- a/docs/sources/flow/reference/components/discovery.nerve.md +++ b/docs/sources/flow/reference/components/discovery.nerve.md @@ -28,47 +28,44 @@ The following arguments are supported: Name | Type | Description | Default | Required ------------------ | -------------- | ------------------------------------ | ------------- | -------- -`servers` | `list(string)` | The Zookeeper servers. | | yes `paths` | `list(string)` | The paths to look for targets at. | | yes +`servers` | `list(string)` | The Zookeeper servers. | | yes `timeout` | `duration` | The timeout to use. | `"10s"` | no -Each element in the `path` list can either point to a single service, or to the -root of a tree of services. +Each element in the `paths` list can either point to a single service, or to the root of a tree of services. 
## Blocks -The `discovery.nerve` component does not support any blocks, and is configured -fully through arguments. +The `discovery.nerve` component doesn't support any blocks and is configured fully through arguments. ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|------------------------------------------------ `targets` | `list(map(string))` | The set of targets discovered from Nerve's API. -The following meta labels are available on targets and can be used by the -discovery.relabel component -* `__meta_nerve_path`: the full path to the endpoint node in Zookeeper -* `__meta_nerve_endpoint_host`: the host of the endpoint -* `__meta_nerve_endpoint_port`: the port of the endpoint -* `__meta_nerve_endpoint_name`: the name of the endpoint +The following meta labels are available on targets and can be used by the discovery.relabel component: + +* `__meta_nerve_endpoint_host`: The host of the endpoint. +* `__meta_nerve_endpoint_name`: The name of the endpoint. +* `__meta_nerve_endpoint_port`: The port of the endpoint. +* `__meta_nerve_path`: The full path to the endpoint node in Zookeeper. ## Component health -`discovery.nerve` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.nerve` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.nerve` does not expose any component-specific debug information. +`discovery.nerve` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.nerve` does not expose any component-specific debug metrics. +`discovery.nerve` doesn't expose any component-specific debug metrics. 
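The single-service versus tree distinction for `paths` can be illustrated with a hypothetical `discovery.nerve` configuration. The Zookeeper addresses and paths below are made-up values:

```river
discovery.nerve "example" {
  servers = ["zk1.example.com:2181", "zk2.example.com:2181"]

  // The first path points at a single registered service; the second points
  // at the root of a tree, so every service below it is discovered.
  paths   = ["/nerve/services/monitoring/services", "/nerve/services"]
  timeout = "10s"
}
```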
## Example @@ -84,16 +81,16 @@ prometheus.scrape "demo" { } prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.nomad.md b/docs/sources/flow/reference/components/discovery.nomad.md index 7df1466081fb..3643e1b0d98f 100644 --- a/docs/sources/flow/reference/components/discovery.nomad.md +++ b/docs/sources/flow/reference/components/discovery.nomad.md @@ -24,26 +24,26 @@ discovery.nomad "LABEL" { ``` ## Arguments The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`server` | `string` | Address of nomad server. | `http://localhost:4646` | no -`namespace` | `string` | Nomad namespace to use. | `default` | no -`region` | `string` | Nomad region to use. | `global` | no -`allow_stale` | `bool` | Allow reading from non-leader nomad instances. | `true` | no -`tag_separator` | `string` | Seperator to join nomad tags into Prometheus labels. | `,` | no -`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. 
| | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|------------|--------------------------------------------------------------|-------------------------|--------- +`allow_stale` | `bool` | Allow reading from non-leader Nomad instances. | `true` | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`namespace` | `string` | Nomad namespace to use. | `default` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no +`region` | `string` | Nomad region to use. | `global` | no +`server` | `string` | Address of Nomad server. | `http://localhost:4646` | no +`tag_separator` | `string` | Separator to join Nomad tags into Prometheus labels. | `,` | no You can provide one of the following arguments for authentication: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2]. 
[arguments]: #arguments @@ -52,75 +52,73 @@ Name | Type | Description | Default | Required The following blocks are supported inside the definition of `discovery.nomad`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. 
[basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|----------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the nomad server. Each target includes the following labels: -* `__meta_nomad_address`: the service address of the target. -* `__meta_nomad_dc`: the datacenter name for the target. -* `__meta_nomad_namespace`: the namespace of the target. -* `__meta_nomad_node_id`: the node name defined for the target. -* `__meta_nomad_service`: the name of the service the target belongs to. -* `__meta_nomad_service_address`: the service address of the target. -* `__meta_nomad_service_id`: the service ID of the target. 
-* `__meta_nomad_service_port`: the service port of the target. -* `__meta_nomad_tags`: the list of tags of the target joined by the tag separator. +* `__meta_nomad_address`: The service address of the target. +* `__meta_nomad_dc`: The datacenter name for the target. +* `__meta_nomad_namespace`: The namespace of the target. +* `__meta_nomad_node_id`: The node name defined for the target. +* `__meta_nomad_service_address`: The service address of the target. +* `__meta_nomad_service_id`: The service ID of the target. +* `__meta_nomad_service_port`: The service port of the target. +* `__meta_nomad_service`: The name of the service the target belongs to. +* `__meta_nomad_tags`: The list of tags of the target joined by the tag separator. ## Component health -`discovery.nomad` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.nomad` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.nomad` does not expose any component-specific debug information. +`discovery.nomad` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.nomad` does not expose any component-specific debug metrics. +`discovery.nomad` doesn't expose any component-specific debug metrics. ## Example -This example discovers targets from a Nomad server: +The following example discovers targets from a Nomad server: ```river discovery.nomad "example" { @@ -133,16 +131,17 @@ prometheus.scrape "demo" { } prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. 
- - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.openstack.md b/docs/sources/flow/reference/components/discovery.openstack.md index 984fbf2fa4b4..c099c4c2b2e1 100644 --- a/docs/sources/flow/reference/components/discovery.openstack.md +++ b/docs/sources/flow/reference/components/discovery.openstack.md @@ -28,104 +28,105 @@ discovery.openstack "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required -------------------- | ---------- | ---------------------------------------------------------------------- | -------------------- | -------- -`role` | `string` | Role of the discovered targets. | | yes -`region` | `string` | OpenStack region. | | yes -`identity_endpoint` | `string` | Specifies the HTTP endpoint that is required to work with te Identity API of the appropriate version | | no -`username` | `string` | OpenStack username for the Identity V2 and V3 APIs. | | no -`userid` | `string` | OpenStack userid for the Identity V2 and V3 APIs. | | no -`password` | `secret` | Password for the Identity V2 and V3 APIs. | | no -`domain_name` | `string` | OpenStack domain name for the Identity V2 and V3 APIs. | | no -`domain_id` | `string` | OpenStack domain ID for the Identity V2 and V3 APIs. | | no -`project_name` | `string` | OpenStack project name for the Identity V2 and V3 APIs. | | no -`project_id` | `string` | OpenStack project ID for the Identity V2 and V3 APIs. | | no -`application_credential_name` | `string` | OpenStack application credential name for the Identity V2 and V3 APIs.
| | no -`application_credential_id` | `string` | OpenStack application credential ID for the Identity V2 and V3 APIs. | | no -`application_credential_secret` | `secret` | OpenStack application credential secret for the Identity V2 and V3 APIs. | | no -`all_tenants` | `bool` | Whether the service discovery should list all instances for all projects. | `false` | no -`refresh_interval` | `duration`| Refresh interval to re-read the instance list. | `60s` | no -`port` | `int` | The port to scrape metrics from. | `80` | no -`availability` | `string` | The availability of the endpoint to connect to. | `public` | no +Name | Type | Description | Default | Required +--------------------------------|------------|-------------------------------------------------------------------------------------------------------|----------|--------- +`role` | `string` | Role of the discovered targets. | | yes +`region` | `string` | OpenStack region. | | yes +`all_tenants` | `bool` | Whether the service discovery should list all instances for all projects. | `false` | no +`application_credential_id` | `string` | OpenStack application credential ID for the Identity V2 and V3 APIs. | | no +`application_credential_name` | `string` | OpenStack application credential name for the Identity V2 and V3 APIs. | | no +`application_credential_secret` | `secret` | OpenStack application credential secret for the Identity V2 and V3 APIs. | | no +`availability` | `string` | The availability of the endpoint to connect to. | `public` | no +`domain_id` | `string` | OpenStack domain ID for the Identity V2 and V3 APIs. | | no +`domain_name` | `string` | OpenStack domain name for the Identity V2 and V3 APIs. | | no +`identity_endpoint` | `string` | Specifies the HTTP endpoint that is required to work with the Identity API of the appropriate version | | no +`password` | `secret` | Password for the Identity V2 and V3 APIs. | | no +`port` | `int` | The port to scrape metrics from. 
| `80` | no +`project_id` | `string` | OpenStack project ID for the Identity V2 and V3 APIs. | | no +`project_name` | `string` | OpenStack project name for the Identity V2 and V3 APIs. | | no +`refresh_interval` | `duration` | Refresh interval to re-read the instance list. | `60s` | no +`userid` | `string` | OpenStack userid for the Identity V2 and V3 APIs. | | no +`username` | `string` | OpenStack username for the Identity V2 and V3 APIs. | | no `role` must be one of `hypervisor` or `instance`. `username` is required if using Identity V2 API. In Identity V3, either `userid` or a combination of `username` and `domain_id` or `domain_name` are needed. -`project_id` and `project_name` fields are optional for the Identity V2 API. Some providers allow you to specify a `project_name` instead of the `project_id`. Some require both. +`project_id` and `project_name` fields are optional for the Identity V2 API. +Some providers allow you to specify a `project_name` instead of the `project_id`. Some require both. -`application_credential_id` or `application_credential_name` fields are required if using an application credential to authenticate. Some providers allow you to create an application credential to authenticate rather than a password. +`application_credential_id` or `application_credential_name` fields are required if using an application credential to authenticate. +Some providers allow you to create an application credential to authenticate rather than a password. `application_credential_secret` field is required if using an application credential to authenticate. -`all_tenants` is only relevant for the `instance` role and usually requires admin permissions. +`all_tenants` is only relevant for the `instance` role and usually requires administrator permissions. `availability` must be one of `public`, `admin`, or `internal`. 
## Blocks + The following blocks are supported inside the definition of `discovery.openstack`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- +Hierarchy | Block | Description | Required +-----------|----------------|------------------------------------------------------|--------- tls_config | [tls_config][] | TLS configuration for requests to the OpenStack API. | no [tls_config]: #tls_config-block ### tls_config block -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|------------------------------------------------------ `targets` | `list(map(string))` | The set of targets discovered from the OpenStack API. #### `hypervisor` -The `hypervisor` role discovers one target per Nova hypervisor node. The target -address defaults to the `host_ip` attribute of the hypervisor. +The `hypervisor` role discovers one target per Nova hypervisor node. +The target address defaults to the `host_ip` attribute of the hypervisor. -* `__meta_openstack_hypervisor_host_ip`: the hypervisor node's IP address. -* `__meta_openstack_hypervisor_hostname`: the hypervisor node's name. -* `__meta_openstack_hypervisor_id`: the hypervisor node's ID. -* `__meta_openstack_hypervisor_state`: the hypervisor node's state. -* `__meta_openstack_hypervisor_status`: the hypervisor node's status. -* `__meta_openstack_hypervisor_type`: the hypervisor node's type. +* `__meta_openstack_hypervisor_host_ip`: The hypervisor node's IP address. +* `__meta_openstack_hypervisor_hostname`: The hypervisor node's name. +* `__meta_openstack_hypervisor_id`: The hypervisor node's ID. 
+* `__meta_openstack_hypervisor_state`: The hypervisor node's state. +* `__meta_openstack_hypervisor_status`: The hypervisor node's status. +* `__meta_openstack_hypervisor_type`: The hypervisor node's type. #### `instance` -The `instance` role discovers one target per network interface of Nova -instance. The target address defaults to the private IP address of the network -interface. - -* `__meta_openstack_address_pool`: the pool of the private IP. -* `__meta_openstack_instance_flavor`: the flavor of the OpenStack instance. -* `__meta_openstack_instance_id`: the OpenStack instance ID. -* `__meta_openstack_instance_image`: the ID of the image the OpenStack instance is using. -* `__meta_openstack_instance_name`: the OpenStack instance name. -* `__meta_openstack_instance_status`: the status of the OpenStack instance. -* `__meta_openstack_private_ip`: the private IP of the OpenStack instance. -* `__meta_openstack_project_id`: the project (tenant) owning this instance. -* `__meta_openstack_public_ip`: the public IP of the OpenStack instance. -* `__meta_openstack_tag_<tagkey>`: each tag value of the instance. -* `__meta_openstack_user_id`: the user account owning the tenant. +The `instance` role discovers one target per network interface of a Nova instance. +The target address defaults to the private IP address of the network interface. + +* `__meta_openstack_address_pool`: The pool of the private IP. +* `__meta_openstack_instance_flavor`: The flavor of the OpenStack instance. +* `__meta_openstack_instance_id`: The OpenStack instance ID. +* `__meta_openstack_instance_image`: The ID of the image the OpenStack instance is using. +* `__meta_openstack_instance_name`: The OpenStack instance name. +* `__meta_openstack_instance_status`: The status of the OpenStack instance. +* `__meta_openstack_private_ip`: The private IP of the OpenStack instance. +* `__meta_openstack_project_id`: The project (tenant) owning this instance.
+* `__meta_openstack_public_ip`: The public IP of the OpenStack instance. +* `__meta_openstack_tag_<tagkey>`: Each tag value of the instance. +* `__meta_openstack_user_id`: The user account owning the tenant. ## Component health -`discovery.openstack` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.openstack` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.openstack` does not expose any component-specific debug information. +`discovery.openstack` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.openstack` does not expose any component-specific debug metrics. +`discovery.openstack` doesn't expose any component-specific debug metrics. ## Example @@ -142,18 +143,19 @@ prometheus.scrape "demo" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` + Replace the following: - - `OPENSTACK_ROLE`: Your OpenStack role. - - `OPENSTACK_REGION`: Your OpenStack region. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<OPENSTACK_ROLE>`_: Your OpenStack role. +- _`<OPENSTACK_REGION>`_: Your OpenStack region. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API.
diff --git a/docs/sources/flow/reference/components/discovery.puppetdb.md b/docs/sources/flow/reference/components/discovery.puppetdb.md index 3886b7d79726..c9a91a10846b 100644 --- a/docs/sources/flow/reference/components/discovery.puppetdb.md +++ b/docs/sources/flow/reference/components/discovery.puppetdb.md @@ -13,7 +13,7 @@ title: discovery.puppetdb `discovery.puppetdb` allows you to retrieve scrape targets from [PuppetDB](https://www.puppet.com/docs/puppetdb/7/overview.html) resources. -This SD discovers resources and will create a target for each resource returned by the API. +This SD discovers resources and creates a target for each resource returned by the API. The resource address is the `certname` of the resource, and can be changed during relabeling. @@ -31,103 +31,101 @@ discovery.puppetdb "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`url` | `string` | The URL of the PuppetDB root query endpoint. | | yes -`query` | `string` | Puppet Query Language (PQL) query. Only resources are supported. | | yes -`include_parameters` | `bool` | Whether to include the parameters as meta labels. Due to the differences between parameter types and Prometheus labels, some parameters might not be rendered. The format of the parameters might also change in future releases. Make sure that you don't have secrets exposed as parameters if you enable this. | `false` | no -`port` | `int` | The port to scrape metrics from.. | `80` | no -`refresh_interval` | `duration` | Frequency to refresh targets. | `"30s"` | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. 
| `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +---------------------|------------|------------------------------------------------------------------|---------|--------- +`query` | `string` | Puppet Query Language (PQL) query. Only resources are supported. | | yes +`url` | `string` | The URL of the PuppetDB root query endpoint. | | yes +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`include_parameters` | `bool` | Whether to include the parameters as meta labels. Due to the differences between parameter types and Prometheus labels, some parameters might not be rendered. The format of the parameters might also change in future releases. Make sure that you don't have secrets exposed as parameters if you enable this. | `false` | no +`port` | `int` | The port to scrape metrics from. | `80` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | Frequency to refresh targets. | `"30s"` | no You can provide one of the following arguments for authentication: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. + +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#arguments). +- [`bearer_token` argument](#arguments). +- [`oauth2` block][oauth2].
[arguments]: #arguments ## Blocks -The following blocks are supported inside the definition of -`discovery.puppetdb`: +The following blocks are supported inside the definition of `discovery.puppetdb`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|----------------------------------------------------------|--------- +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. 
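The `oauth2 > tls_config` nesting notation described above can be sketched in river as follows. This is a minimal sketch only; the endpoint URL, query, and credentials are hypothetical placeholders, not values from this document:

```river
discovery.puppetdb "example" {
  url   = "https://puppetdb.example.com"   // hypothetical endpoint
  query = "resources { type = \"Package\" and title = \"httpd\" }"

  oauth2 {
    client_id     = "example-client-id"       // hypothetical credential
    client_secret = "example-client-secret"   // hypothetical credential
    token_url     = "https://auth.example.com/oauth2/token"

    // A tls_config block nested inside oauth2 — this is what the
    // `oauth2 > tls_config` hierarchy in the table refers to.
    tls_config {
      insecure_skip_verify = false
    }
  }
}
```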
[basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### basic_auth block +### authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from puppetdb. +Name | Type | Description +----------|---------------------|--------------------------------------------- +`targets` | `list(map(string))` | The set of targets discovered from PuppetDB. Each target includes the following labels: -* `__meta_puppetdb_query`: the Puppet Query Language (PQL) query. -* `__meta_puppetdb_certname`: the name of the node associated with the resourcet. -* `__meta_puppetdb_resource`: a SHA-1 hash of the resource’s type, title, and parameters, for identification. -* `__meta_puppetdb_type`: the resource type. -* `__meta_puppetdb_title`: the resource title. -* `__meta_puppetdb_exported`: whether the resource is exported ("true" or "false"). 
-* `__meta_puppetdb_tags`: comma separated list of resource tags. -* `__meta_puppetdb_file`: the manifest file in which the resource was declared. -* `__meta_puppetdb_environment`: the environment of the node associated with the resource. -* `__meta_puppetdb_parameter_<parametername>`: the parameters of the resource. +* `__meta_puppetdb_certname`: The name of the node associated with the resource. +* `__meta_puppetdb_environment`: The environment of the node associated with the resource. +* `__meta_puppetdb_exported`: Whether the resource is exported ("true" or "false"). +* `__meta_puppetdb_file`: The manifest file in which the resource was declared. +* `__meta_puppetdb_parameter_<parametername>`: The parameters of the resource. +* `__meta_puppetdb_query`: The Puppet Query Language (PQL) query. +* `__meta_puppetdb_resource`: A SHA-1 hash of the resource’s type, title, and parameters, for identification. +* `__meta_puppetdb_tags`: Comma separated list of resource tags. +* `__meta_puppetdb_title`: The resource title. +* `__meta_puppetdb_type`: The resource type. ## Component health -`discovery.puppetdb` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.puppetdb` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.puppetdb` does not expose any component-specific debug information. +`discovery.puppetdb` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.puppetdb` does not expose any component-specific debug metrics. +`discovery.puppetdb` doesn't expose any component-specific debug metrics.
## Example -This example discovers targets from puppetdb for all the servers that have a specific package defined: +The following example discovers targets from PuppetDB for all the servers that have a specific package defined: ```river discovery.puppetdb "example" { @@ -143,16 +141,17 @@ prometheus.scrape "demo" { prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = <PROMETHEUS_REMOTE_WRITE_URL> basic_auth { - username = USERNAME - password = PASSWORD + username = <USERNAME> + password = <PASSWORD> } } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.relabel.md b/docs/sources/flow/reference/components/discovery.relabel.md index 71b71fea5ef5..b6d7dc157027 100644 --- a/docs/sources/flow/reference/components/discovery.relabel.md +++ b/docs/sources/flow/reference/components/discovery.relabel.md @@ -13,29 +13,20 @@ title: discovery.relabel In Flow, targets are defined as sets of key-value pairs called _labels_. -`discovery.relabel` rewrites the label set of the input targets by applying one -or more relabeling rules. If no rules are defined, then the input targets are -exported as-is. - -The most common use of `discovery.relabel` is to filter targets or standardize -the target label set that is passed to a downstream component. The `rule` -blocks are applied to the label set of each target in order of their appearance -in the configuration file. The configured rules can be retrieved by calling the -function in the `rules` export field.
- -Target labels which start with a double underscore `__` are considered -internal, and may be removed by other Flow components prior to telemetry -collection. To retain any of these labels, use a `labelmap` action to remove -the prefix, or remap them to a different name. Service discovery mechanisms -usually group their labels under `__meta_*`. For example, the -discovery.kubernetes component populates a set of `__meta_kubernetes_*` labels -to provide information about the discovered Kubernetes resources. If a -relabeling rule needs to store a label value temporarily, for example as the -input to a subsequent step, use the `__tmp` label name prefix, as it is -guaranteed to never be used. - -Multiple `discovery.relabel` components can be specified by giving them -different labels. +`discovery.relabel` rewrites the label set of the input targets by applying one or more relabeling rules. +If no rules are defined, then the input targets are exported as-is. + +The most common use of `discovery.relabel` is to filter targets or standardize the target label set that is passed to a downstream component. +The `rule` blocks are applied to the label set of each target in order of their appearance in the configuration file. +The configured rules can be retrieved by calling the function in the `rules` export field. + +Target labels which start with a double underscore `__` are considered internal, and may be removed by other Flow components prior to telemetry collection. +To retain any of these labels, use a `labelmap` action to remove the prefix, or remap them to a different name. +Service discovery mechanisms usually group their labels under `__meta_*`. +For example, the `discovery.kubernetes` component populates a set of `__meta_kubernetes_*` labels to provide information about the discovered Kubernetes resources.
+If a relabeling rule needs to store a label value temporarily, for example as the input to a subsequent step, use the `__tmp` label name prefix, as it's guaranteed to never be used. + +Multiple `discovery.relabel` components can be specified by giving them different labels. ## Usage @@ -55,47 +46,45 @@ discovery.relabel "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`targets` | `list(map(string))` | Targets to relabel | | yes +Name | Type | Description | Default | Required +----------|---------------------|--------------------|---------|--------- +`targets` | `list(map(string))` | Targets to relabel | | yes ## Blocks -The following blocks are supported inside the definition of -`discovery.relabel`: +The following blocks are supported inside the definition of `discovery.relabel`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -rule | [rule][] | Relabeling rules to apply to targets. | no +Hierarchy | Block | Description | Required +----------|----------|---------------------------------------|--------- +rule | [rule][] | Relabeling rules to apply to targets. | no [rule]: #rule-block -### rule block +### rule -{{< docs/shared lookup="flow/reference/components/rule-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/rule-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +---------|---------------------|---------------------------------------------- `output` | `list(map(string))` | The set of targets after applying relabeling. -`rules` | `RelabelRules` | The currently configured relabeling rules. +`rules` | `RelabelRules` | The currently configured relabeling rules. 
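The `labelmap` and `__tmp` handling described earlier can be sketched as follows. This is a minimal sketch with a hypothetical upstream component; the first rule promotes every `__meta_kubernetes_*` label by stripping the prefix so the labels survive past discovery, and the second rule stashes an intermediate value under the reserved `__tmp` prefix for a later rule to consume:

```river
discovery.relabel "example" {
  targets = discovery.kubernetes.pods.targets   // hypothetical input component

  // Remove the `__meta_kubernetes_` prefix so these labels are retained.
  rule {
    action = "labelmap"
    regex  = "__meta_kubernetes_(.+)"
  }

  // Copy `__address__` into a temporary label for use by a subsequent rule.
  rule {
    source_labels = ["__address__"]
    target_label  = "__tmp_address"
  }
}
```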
## Component health -`discovery.relabel` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.relabel` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.relabel` does not expose any component-specific debug information. +`discovery.relabel` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.relabel` does not expose any component-specific debug metrics. +`discovery.relabel` doesn't expose any component-specific debug metrics. ## Example @@ -121,5 +110,3 @@ discovery.relabel "keep_backend_only" { } } ``` - - diff --git a/docs/sources/flow/reference/components/discovery.scaleway.md b/docs/sources/flow/reference/components/discovery.scaleway.md index 59263435ab1f..24bed7b9b0db 100644 --- a/docs/sources/flow/reference/components/discovery.scaleway.md +++ b/docs/sources/flow/reference/components/discovery.scaleway.md @@ -9,8 +9,7 @@ title: discovery.scaleway # discovery.scaleway -`discovery.scaleway` discovers targets from [Scaleway instances][instance] and -[baremetal services][baremetal]. +`discovery.scaleway` discovers targets from [Scaleway instances][instance] and [baremetal services][baremetal]. [instance]: https://www.scaleway.com/en/virtual-instances/ [baremetal]: https://www.scaleway.com/en/bare-metal-servers/ @@ -30,70 +29,65 @@ discovery.scaleway "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`project_id` | `string` | Scaleway project ID of targets. | | yes -`role` | `string` | Role of targets to retrieve. | | yes -`api_url` | `string` | Scaleway API URL. | `"https://api.scaleway.com"` | no -`zone` | `string` | Availability zone of targets. | `"fr-par-1"` | no -`access_key` | `string` | Access key for the Scaleway API. 
| | yes -`secret_key` | `secret` | Secret key for the Scaleway API. | | conditional -`secret_key_file` | `string` | Path to file containing secret key for the Scaleway API. | | conditional -`name_filter` | `string` | Name filter to apply against the listing request. | | no -`tags_filter` | `list(string)` | List of tags to search for. | | no -`refresh_interval` | `duration` | Frequency to rediscover targets. | `"60s"` | no -`port` | `number` | Default port on servers to associate with generated targets. | `80` | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no - -The `role` argument determines what type of Scaleway machines to discover. It -must be set to one of the following: +Name | Type | Description | Default | Required +-------------------|----------------|--------------------------------------------------------------|------------------------------|------------ +`access_key` | `string` | Access key for the Scaleway API. | | yes +`project_id` | `string` | Scaleway project ID of targets. | | yes +`role` | `string` | Role of targets to retrieve. | | yes +`secret_key_file` | `string` | Path to file containing secret key for the Scaleway API. | | conditional +`secret_key` | `secret` | Secret key for the Scaleway API. | | conditional +`api_url` | `string` | Scaleway API URL. | `"https://api.scaleway.com"` | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`name_filter` | `string` | Name filter to apply against the listing request. | | no +`port` | `number` | Default port on servers to associate with generated targets. | `80` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. 
| | no +`refresh_interval` | `duration` | Frequency to rediscover targets. | `"60s"` | no +`tags_filter` | `list(string)` | List of tags to search for. | | no +`zone` | `string` | Availability zone of targets. | `"fr-par-1"` | no + +The `role` argument determines what type of Scaleway machines to discover. It must be set to one of the following: * `"baremetal"`: Discover [baremetal][] Scaleway machines. * `"instance"`: Discover virtual Scaleway [instances][instance]. -The `name_filter` and `tags_filter` arguments can be used to filter the set of -discovered servers. `name_filter` returns machines matching a specific name, -while `tags_filter` returns machines who contain _all_ the tags listed in the -`tags_filter` argument. +The `name_filter` and `tags_filter` arguments can be used to filter the set of discovered servers. +`name_filter` returns machines matching a specific name, while `tags_filter` returns machines that contain _all_ the tags listed in the `tags_filter` argument. ## Blocks -The following blocks are supported inside the definition of -`discovery.scaleway`: +The following blocks are supported inside the definition of `discovery.scaleway`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- +Hierarchy | Block | Description | Required +-----------|----------------|--------------------------------------------------------|--------- tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, -`oauth2 > tls_config` refers to a `tls_config` block defined inside -an `oauth2` block. +The `>` symbol indicates deeper levels of nesting. +For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block.
[tls_config]: #tls_config-block -### tls_config block +### tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +----------|---------------------|----------------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Consul catalog API. When `role` is `baremetal`, discovered targets include the following labels: * `__meta_scaleway_baremetal_id`: ID of the server. -* `__meta_scaleway_baremetal_public_ipv4`: Public IPv4 address of the server. -* `__meta_scaleway_baremetal_public_ipv6`: Public IPv6 address of the server. * `__meta_scaleway_baremetal_name`: Name of the server. * `__meta_scaleway_baremetal_os_name`: Operating system name of the server. -* `__meta_scaleway_baremetal_os_version`: Operation system version of the server. +* `__meta_scaleway_baremetal_os_version`: Operating system version of the server. * `__meta_scaleway_baremetal_project_id`: Project ID the server belongs to. +* `__meta_scaleway_baremetal_public_ipv4`: Public IPv4 address of the server. +* `__meta_scaleway_baremetal_public_ipv6`: Public IPv6 address of the server. * `__meta_scaleway_baremetal_status`: Current status of the server. * `__meta_scaleway_baremetal_tags`: The list of tags associated with the server concatenated with a `,`. * `__meta_scaleway_baremetal_type`: Commercial type of the server. @@ -126,26 +120,25 @@ When `role` is `instance`, discovered targets include the following labels: ## Component health -`discovery.scaleway` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. 
+`discovery.scaleway` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.scaleway` does not expose any component-specific debug information. +`discovery.scaleway` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.scaleway` does not expose any component-specific debug metrics. +`discovery.scaleway` doesn't expose any component-specific debug metrics. ## Example ```river discovery.scaleway "example" { - project_id = "SCALEWAY_PROJECT_ID" - role = "SCALEWAY_PROJECT_ROLE" - access_key = "SCALEWAY_ACCESS_KEY" - secret_key = "SCALEWAY_SECRET_KEY" + project_id = "<SCALEWAY_PROJECT_ID>" + role = "<SCALEWAY_PROJECT_ROLE>" + access_key = "<SCALEWAY_ACCESS_KEY>" + secret_key = "<SCALEWAY_SECRET_KEY>" } prometheus.scrape "demo" { @@ -155,22 +148,21 @@ prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = "<PROMETHEUS_REMOTE_WRITE_URL>" basic_auth { - username = USERNAME - password = PASSWORD + username = "<USERNAME>" + password = "<PASSWORD>" } } } ``` Replace the following: - -* `SCALEWAY_PROJECT_ID`: The project ID of your Scaleway machines. -* `SCALEWAY_PROJECT_ROLE`: Set to `baremetal` to discover [baremetal][] machines or `instance` to discover [virtual instances][instance]. -* `SCALEWAY_ACCESS_KEY`: Your Scaleway API access key. -* `SCALEWAY_SECRET_KEY`: Your Scaleway API secret key. -* `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -* `USERNAME`: The username to use for authentication to the remote_write API. -* `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<SCALEWAY_PROJECT_ID>`_: The project ID of your Scaleway machines. +- _`<SCALEWAY_PROJECT_ROLE>`_: Set to `baremetal` to discover [baremetal][] machines or `instance` to discover [virtual instances][instance]. +- _`<SCALEWAY_ACCESS_KEY>`_: Your Scaleway API access key. +- _`<SCALEWAY_SECRET_KEY>`_: Your Scaleway API secret key. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.serverset.md b/docs/sources/flow/reference/components/discovery.serverset.md index c0ab1f130b6d..d036af63b050 100644 --- a/docs/sources/flow/reference/components/discovery.serverset.md +++ b/docs/sources/flow/reference/components/discovery.serverset.md @@ -25,7 +25,7 @@ discovery.serverset "LABEL" { } ``` -Serverset data stored in Zookeeper must be in JSON format. The Thrift format is not supported. +Serverset data stored in Zookeeper must be in JSON format. The Thrift format isn't supported. ## Arguments The following arguments are supported: | Name | Type | Description | Default | Required | |-----------|----------------|--------------------------------------------------|---------|----------| -| `servers` | `list(string)` | The Zookeeper servers to connect to. | | yes | +| `servers` | `list(string)` | The Zookeeper servers to connect to. | | yes | | `paths` | `list(string)` | The Zookeeper paths to discover Serversets from. | | yes | -| `timeout` | `duration` | The Zookeeper session timeout | `10s` | no | +| `timeout` | `duration` | The Zookeeper session timeout. | `10s` | no | ## Exported fields @@ -46,36 +46,32 @@ Name | Type | Description `targets` | `list(map(string))` | The set of targets discovered.
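+
+As a minimal sketch of consuming this exported field (the component labels here match the example later in this page), another component references the discovered targets directly:
+
+```river
+prometheus.scrape "default" {
+  // discovery.serverset.zookeeper.targets is the exported list(map(string)).
+  targets    = discovery.serverset.zookeeper.targets
+  forward_to = [prometheus.remote_write.default.receiver]
+}
+```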
The following metadata labels are available on targets during relabeling: -* `__meta_serverset_path`: the full path to the serverset member node in Zookeeper -* `__meta_serverset_endpoint_host`: the host of the default endpoint -* `__meta_serverset_endpoint_port`: the port of the default endpoint -* `__meta_serverset_endpoint_host_`: the host of the given endpoint -* `__meta_serverset_endpoint_port_`: the port of the given endpoint -* `__meta_serverset_shard`: the shard number of the member -* `__meta_serverset_status`: the status of the member + +* `__meta_serverset_endpoint_host_`: The host of the given endpoint. +* `__meta_serverset_endpoint_host`: The host of the default endpoint. +* `__meta_serverset_endpoint_port_`: The port of the given endpoint. +* `__meta_serverset_endpoint_port`: The port of the default endpoint. +* `__meta_serverset_path`: The full path to the serverset member node in Zookeeper. +* `__meta_serverset_shard`: The shard number of the member. +* `__meta_serverset_status`: The status of the member. ## Component health -`discovery.serverset` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.serverset` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.serverset` does not expose any component-specific debug information. +`discovery.serverset` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.serverset` does not expose any component-specific debug metrics. +`discovery.serverset` doesn't expose any component-specific debug metrics. ## Example -The configuration below will connect to one of the Zookeeper servers -(either `zk1`, `zk2`, or `zk3`) and discover JSON Serversets at paths -`/path/to/znode1` and `/path/to/znode2`. 
The discovered targets are scraped -by the `prometheus.scrape.default` component and forwarded to -the `prometheus.remote_write.default` component, which will send the samples to -specified remote_write URL. +The configuration below will connect to one of the Zookeeper servers (either `zk1`, `zk2`, or `zk3`) and discover JSON Serversets at paths `/path/to/znode1` and `/path/to/znode2`. +The discovered targets are scraped by the `prometheus.scrape.default` component and forwarded to the `prometheus.remote_write.default` component, which will send the samples to the specified remote_write URL. ```river discovery.serverset "zookeeper" { diff --git a/docs/sources/flow/reference/components/discovery.triton.md b/docs/sources/flow/reference/components/discovery.triton.md index 1b449010aef5..df1a934d3947 100644 --- a/docs/sources/flow/reference/components/discovery.triton.md +++ b/docs/sources/flow/reference/components/discovery.triton.md @@ -32,41 +32,39 @@ The following arguments are supported: Name | Type | Description | Default | Required ------------------ | -------------- | --------------------------------------------------- | ------------- | -------- `account` | `string` | The account to use for discovering new targets. | | yes -`role` | `string` | The type of targets to discover. | `"container"` | no `dns_suffix` | `string` | The DNS suffix that is applied to the target. | | yes `endpoint` | `string` | The Triton discovery endpoint. | | yes `groups` | `list(string)` | A list of groups to retrieve targets from. | | no `port` | `int` | The port to use for discovery and metrics scraping. | `9163` | no `refresh_interval` | `duration` | The refresh interval for the list of targets. | `60s` | no +`role` | `string` | The type of targets to discover. | `"container"` | no `version` | `int` | The Triton discovery API version.
| `1` | no `role` can be set to: -* `"container"` to discover virtual machines (SmartOS zones, lx/KVM/bhyve branded zones) running on Triton -* `"cn"` to discover compute nodes (servers/global zones) making up the Triton infrastructure +* `"cn"` to discover compute nodes (servers/global zones) making up the Triton infrastructure. +* `"container"` to discover virtual machines (SmartOS zones, lx/KVM/bhyve branded zones) running on Triton. -`groups` is only supported when `role` is set to `"container"`. If omitted all -containers owned by the requesting account are scraped. +`groups` is only supported when `role` is set to `"container"`. If omitted, all containers owned by the requesting account are scraped. ## Blocks -The following blocks are supported inside the definition of -`discovery.triton`: +The following blocks are supported inside the definition of `discovery.triton`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- +Hierarchy | Block | Description | Required +-----------|----------------|---------------------------------------------------|--------- tls_config | [tls_config][] | TLS configuration for requests to the Triton API. | no [tls_config]: #tls_config-block -### tls_config block +### tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|--------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Triton API. 
When `role` is set to `"container"`, each target includes the following labels: @@ -85,25 +83,24 @@ When `role` is set to `"cn"` each target includes the following labels: ## Component health -`discovery.triton` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.triton` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.triton` does not expose any component-specific debug information. +`discovery.triton` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.triton` does not expose any component-specific debug metrics. +`discovery.triton` doesn't expose any component-specific debug metrics. ## Example ```river discovery.triton "example" { - account = TRITON_ACCOUNT - dns_suffix = TRITON_DNS_SUFFIX - endpoint = TRITON_ENDPOINT + account = "<TRITON_ACCOUNT>" + dns_suffix = "<TRITON_DNS_SUFFIX>" + endpoint = "<TRITON_ENDPOINT>" } prometheus.scrape "demo" { @@ -113,19 +110,20 @@ prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = "<PROMETHEUS_REMOTE_WRITE_URL>" basic_auth { - username = USERNAME - password = PASSWORD + username = "<USERNAME>" + password = "<PASSWORD>" } } } ``` + Replace the following: - - `TRITON_ACCOUNT`: Your Triton account. - - `TRITON_DNS_SUFFIX`: Your Triton DNS suffix. - - `TRITON_ENDPOINT`: Your Triton endpoint. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<TRITON_ACCOUNT>`_: Your Triton account. +- _`<TRITON_DNS_SUFFIX>`_: Your Triton DNS suffix. +- _`<TRITON_ENDPOINT>`_: Your Triton endpoint. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API.
+- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.uyuni.md b/docs/sources/flow/reference/components/discovery.uyuni.md index 5f6d415b472e..429cf0dd2ecb 100644 --- a/docs/sources/flow/reference/components/discovery.uyuni.md +++ b/docs/sources/flow/reference/components/discovery.uyuni.md @@ -19,9 +19,9 @@ title: discovery.uyuni ```river discovery.uyuni "LABEL" { - server = SERVER - username = USERNAME - password = PASSWORD + server = "<SERVER>" + username = "<USERNAME>" + password = "<PASSWORD>" } ``` @@ -31,75 +31,72 @@ The following arguments are supported: Name | Type | Description | Default | Required --------------------- | ---------- | ---------------------------------------------------------------------- | ------------------------ | -------- +`password` | `Secret` | The password to use for authentication to the Uyuni API. | | yes `server` | `string` | The primary Uyuni Server. | | yes `username` | `string` | The username to use for authentication to the Uyuni API. | | yes -`password` | `Secret` | The password to use for authentication to the Uyuni API. | | yes +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no `entitlement` | `string` | The entitlement to filter on when listing targets. | `"monitoring_entitled"` | no -`separator` | `string` | The separator to use when building the `__meta_uyuni_groups` label. | `","` | no -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `1m` | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`refresh_interval` | `duration` | Interval at which to refresh the list of targets.
| `1m` | no +`separator` | `string` | The separator to use when building the `__meta_uyuni_groups` label. | `","` | no ## Blocks -The following blocks are supported inside the definition of -`discovery.uyuni`: +The following blocks are supported inside the definition of `discovery.uyuni`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- +Hierarchy | Block | Description | Required +-----------|----------------|--------------------------------------------------|--------- tls_config | [tls_config][] | TLS configuration for requests to the Uyuni API. | no [tls_config]: #tls_config-block -### tls_config block +### tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: Name | Type | Description ---------- | ------------------- | ----------- +----------|---------------------|-------------------------------------------------- `targets` | `list(map(string))` | The set of targets discovered from the Uyuni API. Each target includes the following labels: -* `__meta_uyuni_minion_hostname`: The hostname of the Uyuni Minion. -* `__meta_uyuni_primary_fqdn`: The FQDN of the Uyuni primary. -* `__meta_uyuni_system_id`: The system ID of the Uyuni Minion. -* `__meta_uyuni_groups`: The groups the Uyuni Minion belongs to. * `__meta_uyuni_endpoint_name`: The name of the endpoint. * `__meta_uyuni_exporter`: The name of the exporter. -* `__meta_uyuni_proxy_module`: The name of the Uyuni module. +* `__meta_uyuni_groups`: The groups the Uyuni Minion belongs to. * `__meta_uyuni_metrics_path`: The path to the metrics endpoint. +* `__meta_uyuni_minion_hostname`: The hostname of the Uyuni Minion. +* `__meta_uyuni_primary_fqdn`: The FQDN of the Uyuni primary. 
+* `__meta_uyuni_proxy_module`: The name of the Uyuni module. * `__meta_uyuni_scheme`: `https` if TLS is enabled on the endpoint, `http` otherwise. +* `__meta_uyuni_system_id`: The system ID of the Uyuni Minion. -These labels are largely derived from a [listEndpoints](https://www.uyuni-project.org/uyuni-docs-api/uyuni/api/system.monitoring.html) -API call to the Uyuni Server. +These labels are largely derived from a [listEndpoints](https://www.uyuni-project.org/uyuni-docs-api/uyuni/api/system.monitoring.html) API call to the Uyuni Server. ## Component health -`discovery.uyuni` is only reported as unhealthy when given an invalid -configuration. In those cases, exported fields retain their last healthy -values. +`discovery.uyuni` is only reported as unhealthy when given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.uyuni` does not expose any component-specific debug information. +`discovery.uyuni` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.uyuni` does not expose any component-specific debug metrics. +`discovery.uyuni` doesn't expose any component-specific debug metrics. ## Example ```river discovery.uyuni "example" { server = "https://127.0.0.1/rpc/api" - username = UYUNI_USERNAME - password = UYUNI_PASSWORD + username = "<UYUNI_USERNAME>" + password = "<UYUNI_PASSWORD>" } prometheus.scrape "demo" { @@ -109,18 +106,19 @@ prometheus.remote_write "demo" { endpoint { - url = PROMETHEUS_REMOTE_WRITE_URL + url = "<PROMETHEUS_REMOTE_WRITE_URL>" basic_auth { - username = USERNAME - password = PASSWORD + username = "<USERNAME>" + password = "<PASSWORD>" } } } ``` + Replace the following: - - `UYUNI_USERNAME`: The username to use for authentication to the Uyuni server. - - `UYUNI_PASSWORD`: The password to use for authentication to the Uyuni server. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- _`<UYUNI_USERNAME>`_: The username to use for authentication to the Uyuni server. +- _`<UYUNI_PASSWORD>`_: The password to use for authentication to the Uyuni server. +- _`<PROMETHEUS_REMOTE_WRITE_URL>`_: The URL of the Prometheus remote_write-compatible server to send metrics to. +- _`<USERNAME>`_: The username to use for authentication to the remote_write API. +- _`<PASSWORD>`_: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/faro.receiver.md b/docs/sources/flow/reference/components/faro.receiver.md index 99d23c2b0842..7157d837fe0e 100644 --- a/docs/sources/flow/reference/components/faro.receiver.md +++ b/docs/sources/flow/reference/components/faro.receiver.md @@ -11,8 +11,7 @@ title: faro.receiver # faro.receiver -`faro.receiver` accepts web application telemetry data from the [Grafana Faro Web SDK][faro-sdk] -and forwards it to other components for future processing. +`faro.receiver` accepts web application telemetry data from the [Grafana Faro Web SDK][faro-sdk] and forwards it to other components for future processing. [faro-sdk]: https://github.com/grafana/faro-web-sdk @@ -31,21 +30,21 @@ The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`extra_log_labels` | `map(string)` | Extra labels to attach to emitted log lines. | `{}` | no +Name | Type | Description | Default | Required +-------------------|---------------|----------------------------------------------|---------|--------- +`extra_log_labels` | `map(string)` | Extra labels to attach to emitted log lines. | `{}` | no ## Blocks The following blocks are supported inside the definition of `faro.receiver`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -server | [server][] | Configures the HTTP server.
| no -server > rate_limiting | [rate_limiting][] | Configures rate limiting for the HTTP server. | no -sourcemaps | [sourcemaps][] | Configures sourcemap retrieval. | no -sourcemaps > location | [location][] | Configures on-disk location for sourcemap retrieval. | no -output | [output][] | Configures where to send collected telemetry data. | yes +Hierarchy | Block | Description | Required +-----------------------|-------------------|------------------------------------------------------|--------- +output | [output][] | Configures where to send collected telemetry data. | yes +server | [server][] | Configures the HTTP server. | no +server > rate_limiting | [rate_limiting][] | Configures rate limiting for the HTTP server. | no +sourcemaps | [sourcemaps][] | Configures sourcemap retrieval. | no +sourcemaps > location | [location][] | Configures on-disk location for sourcemap retrieval. | no [server]: #server-block [rate_limiting]: #rate_limiting-block @@ -53,104 +52,95 @@ output | [output][] | Configures where to send collected telemetry data. | yes [location]: #location-block [output]: #output-block -### server block +### output -The `server` block configures the HTTP server managed by the `faro.receiver` -component. Clients using the [Grafana Faro Web SDK][faro-sdk] forward telemetry -data to this HTTP server for processing. +The `output` block specifies where to forward collected logs and traces. + +Name | Type | Description | Default | Required +---------|--------------------------|------------------------------------------------------|---------|--------- +`logs` | `list(LogsReceiver)` | A list of `loki` components to forward logs to. | `[]` | no +`traces` | `list(otelcol.Consumer)` | A list of `otelcol` components to forward traces to. | `[]` | no + +### server -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`listen_address` | `string` | Address to listen for HTTP traffic on. 
| `127.0.0.1` | no -`listen_port` | `number` | Port to listen for HTTP traffic on. | `12347` | no -`cors_allowed_origins` | `list(string)` | Origins for which cross-origin requests are permitted. | `[]` | no -`api_key` | `secret` | Optional API key to validate client requests with. | `""` | no -`max_allowed_payload_size` | `string` | Maximum size (in bytes) for client requests. | `"5MiB"` | no +The `server` block configures the HTTP server managed by the `faro.receiver` component. +Clients using the [Grafana Faro Web SDK][faro-sdk] forward telemetry data to this HTTP server for processing. -By default, telemetry data is only accepted from applications on the same local -network as the browser. To accept telemetry data from a wider set of clients, -modify the `listen_address` attribute to the IP address of the appropriate -network interface to use. +Name | Type | Description | Default | Required +---------------------------|----------------|--------------------------------------------------------|-------------|--------- +`listen_address` | `string` | Address to listen for HTTP traffic on. | `127.0.0.1` | no +`listen_port` | `number` | Port to listen for HTTP traffic on. | `12347` | no +`cors_allowed_origins` | `list(string)` | Origins for which cross-origin requests are permitted. | `[]` | no +`api_key` | `secret` | Optional API key to validate client requests with. | `""` | no +`max_allowed_payload_size` | `string` | Maximum size (in bytes) for client requests. | `"5MiB"` | no -The `cors_allowed_origins` argument determines what origins browser requests -may come from. The default value, `[]`, disables CORS support. To support -requests from all origins, set `cors_allowed_origins` to `["*"]`. The `*` -character indicates a wildcard. +By default, telemetry data is only accepted from applications on the same local network as the browser. 
+To accept telemetry data from a wider set of clients, modify the `listen_address` attribute to the IP address of the appropriate network interface to use. -When the `api_key` argument is non-empty, client requests must have an HTTP -header called `X-API-Key` matching the value of the `api_key` argument. -Requests that are missing the header or have the wrong value are rejected with -an `HTTP 401 Unauthorized` status code. If the `api_key` argument is empty, no -authentication checks are performed, and the `X-API-Key` HTTP header is -ignored. +The `cors_allowed_origins` argument determines what origins browser requests may come from. +The default value, `[]`, disables CORS support. To support requests from all origins, set `cors_allowed_origins` to `["*"]`. +The `*` character indicates a wildcard. -### rate_limiting block +When the `api_key` argument is non-empty, client requests must have an HTTP header called `X-API-Key` matching the value of the `api_key` argument. +Requests that are missing the header or have the wrong value are rejected with an `HTTP 401 Unauthorized` status code. +If the `api_key` argument is empty, no authentication checks are performed, and the `X-API-Key` HTTP header is ignored. + +### server > rate_limiting The `rate_limiting` block configures rate limiting for client requests. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Whether to enable rate limiting. | `true` | no -`rate` | `number` | Rate of allowed requests per second. | `50` | no -`burst_size` | `number` | Allowed burst size of requests. | `100` | no +Name | Type | Description | Default | Required +-------------|----------|--------------------------------------|---------|--------- +`enabled` | `bool` | Whether to enable rate limiting. | `true` | no +`rate` | `number` | Rate of allowed requests per second. | `50` | no +`burst_size` | `number` | Allowed burst size of requests. 
| `100` | no -Rate limiting functions as a [token bucket algorithm][token-bucket], where -a bucket has a maximum capacity for up to `burst_size` requests and refills at a -rate of `rate` per second. +Rate limiting functions as a [token bucket algorithm][token-bucket], where a bucket has a maximum capacity for up to `burst_size` requests and refills at a rate of `rate` per second. -Each HTTP request drains the capacity of the bucket by one. Once the bucket is -empty, HTTP requests are rejected with an `HTTP 429 Too Many Requests` status -code until the bucket has more available capacity. +Each HTTP request drains the capacity of the bucket by one. +Once the bucket is empty, HTTP requests are rejected with an `HTTP 429 Too Many Requests` status code until the bucket has more available capacity. -Configuring the `rate` argument determines how fast the bucket refills, and -configuring the `burst_size` argument determines how many requests can be -received in a burst before the bucket is empty and starts rejecting requests. +Configuring the `rate` argument determines how fast the bucket refills. +Configuring the `burst_size` argument determines how many requests can be received in a burst before the bucket is empty and starts rejecting requests. [token-bucket]: https://en.wikipedia.org/wiki/Token_bucket -### sourcemaps block +### sourcemaps -The `sourcemaps` block configures how to retrieve sourcemaps. Sourcemaps are -then used to transform file and line information from minified code into the -file and line information from the original source code. +The `sourcemaps` block configures how to retrieve sourcemaps. +Sourcemaps are then used to transform file and line information from minified code into the file and line information from the original source code. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`download` | `bool` | Whether to download sourcemaps. 
| `true` | no +Name | Type | Description | Default | Required +------------------------|----------------|--------------------------------------------|---------|--------- +`download` | `bool` | Whether to download sourcemaps. | `true` | no `download_from_origins` | `list(string)` | Which origins to download sourcemaps from. | `["*"]` | no -`download_timeout` | `duration` | Timeout when downloading sourcemaps. | `"1s"` | no +`download_timeout` | `duration` | Timeout when downloading sourcemaps. | `"1s"` | no -When exceptions are sent to the `faro.receiver` component, it can download -sourcemaps from the web application. You can disable this behavior by setting -the `download` argument to `false`. +When exceptions are sent to the `faro.receiver` component, it can download sourcemaps from the web application. +You can disable this behavior by setting the `download` argument to `false`. -The `download_from_origins` argument determines which origins a sourcemap may -be downloaded from. The origin is attached to the URL that a browser is sending -telemetry data from. The default value, `["*"]`, enables downloading sourcemaps -from all origins. The `*` character indicates a wildcard. +The `download_from_origins` argument determines which origins a sourcemap may be downloaded from. +The origin is attached to the URL that a browser is sending telemetry data from. +The default value, `["*"]`, enables downloading sourcemaps from all origins. The `*` character indicates a wildcard. -By default, sourcemap downloads are subject to a timeout of `"1s"`, specified -by the `download_timeout` argument. Setting `download_timeout` to `"0s"` -disables timeouts. +By default, sourcemap downloads are subject to a timeout of `"1s"`, specified by the `download_timeout` argument. +Setting `download_timeout` to `"0s"` disables timeouts. -To retrieve sourcemaps from disk instead of the network, specify one or more -[`location` blocks][location]. 
When `location` blocks are provided, they are -checked first for sourcemaps before falling back to downloading. +To retrieve sourcemaps from disk instead of the network, specify one or more [`location` blocks][location]. +When `location` blocks are provided, they are checked first for sourcemaps before falling back to downloading. -### location block +### sourcemaps > location -The `location` block declares a location where sourcemaps are stored on the -filesystem. The `location` block can be specified multiple times to declare -multiple locations where sourcemaps are stored. +The `location` block declares a location where sourcemaps are stored on the filesystem. +The `location` block can be specified multiple times to declare multiple locations where sourcemaps are stored. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`path` | `string` | The path on disk where sourcemaps are stored. | | yes -`minified_path_prefix` | `string` | The prefix of the minified path sent from browsers. | | yes +Name | Type | Description | Default | Required +-----------------------|----------|-----------------------------------------------------|---------|--------- +`path` | `string` | The path on disk where sourcemaps are stored. | | yes +`minified_path_prefix` | `string` | The prefix of the minified path sent from browsers. | | yes -The `minified_path_prefix` argument determines the prefix of paths to -Javascript files, such as `http://example.com/`. The `path` argument then -determines where to find the sourcemap for the file. +The `minified_path_prefix` argument determines the prefix of paths to Javascript files, such as `http://example.com/`. +The `path` argument then determines where to find the sourcemap for the file. 
For example, given the following location block: @@ -161,52 +151,39 @@ location { } ``` -To look up the sourcemaps for a file hosted at `http://example.com/foo.js`, the -`faro.receiver` component will: +To look up the sourcemaps for a file hosted at `http://example.com/foo.js`, the `faro.receiver` component will: 1. Remove the minified path prefix to extract the path to the file (`foo.js`). -2. Search for that file path with a `.map` extension (`foo.js.map`) in `path` - (`/var/my-app/build/foo.js.map`). - -Optionally, the value for the `path` argument may contain `{{ .Release }}` as a -template value, such as `/var/my-app/{{ .Release }}/build`. The template value -will be replaced with the release value provided by the [Faro Web App SDK][faro-sdk]. - -### output block +2. Search for that file path with a `.map` extension (`foo.js.map`) in `path` (`/var/my-app/build/foo.js.map`). -The `output` block specifies where to forward collected logs and traces. - -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`logs` | `list(LogsReceiver)` | A list of `loki` components to forward logs to. | `[]` | no -`traces` | `list(otelcol.Consumer)` | A list of `otelcol` components to forward traces to. | `[]` | no +Optionally, the value for the `path` argument may contain `{{ .Release }}` as a template value, such as `/var/my-app/{{ .Release }}/build`. +The template value will be replaced with the release value provided by the [Faro Web App SDK][faro-sdk]. ## Exported fields -`faro.receiver` does not export any fields. +`faro.receiver` doesn't export any fields. ## Component health -`faro.receiver` is reported as unhealthy when the integrated server fails to -start. +`faro.receiver` is reported as unhealthy when the integrated server fails to start. ## Debug information -`faro.receiver` does not expose any component-specific debug information. +`faro.receiver` doesn't expose any component-specific debug information. 
## Debug metrics `faro.receiver` exposes the following metrics for monitoring the component: -* `faro_receiver_logs_total` (counter): Total number of ingested logs. -* `faro_receiver_measurements_total` (counter): Total number of ingested measurements. -* `faro_receiver_exceptions_total` (counter): Total number of ingested exceptions. * `faro_receiver_events_total` (counter): Total number of ingested events. +* `faro_receiver_exceptions_total` (counter): Total number of ingested exceptions. * `faro_receiver_exporter_errors_total` (counter): Total number of errors produced by an internal exporter. +* `faro_receiver_inflight_requests` (gauge): Current number of inflight requests. +* `faro_receiver_logs_total` (counter): Total number of ingested logs. +* `faro_receiver_measurements_total` (counter): Total number of ingested measurements. * `faro_receiver_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. * `faro_receiver_request_message_bytes` (histogram): Size (in bytes) of HTTP requests received from clients. * `faro_receiver_response_message_bytes` (histogram): Size (in bytes) of HTTP responses sent to clients. -* `faro_receiver_inflight_requests` (gauge): Current number of inflight requests. * `faro_receiver_sourcemap_cache_size` (counter): Number of items in sourcemap cache per origin. * `faro_receiver_sourcemap_downloads_total` (counter): Total number of sourcemap downloads performed per origin and status. * `faro_receiver_sourcemap_file_reads_total` (counter): Total number of sourcemap retrievals using the filesystem per origin and status. @@ -216,13 +193,13 @@ start. 
```river
faro.receiver "default" {
    server {
-        listen_address = "NETWORK_ADDRESS"
+        listen_address = <NETWORK_ADDRESS>
    }

    sourcemaps {
        location {
-            path                 = "PATH_TO_SOURCEMAPS"
-            minified_path_prefix = "WEB_APP_PREFIX"
+            path                 = <PATH_TO_SOURCEMAPS>
+            minified_path_prefix = <WEB_APP_PREFIX>
        }
    }

@@ -234,36 +211,26 @@ faro.receiver "default" {

loki.write "default" {
    endpoint {
-        url = "https://LOKI_ADDRESS/api/v1/push"
+        url = "https://<LOKI_ADDRESS>/api/v1/push"
    }
}

otelcol.exporter.otlp "traces" {
    client {
-        endpoint = "OTLP_ADDRESS"
+        endpoint = <OTLP_ADDRESS>
    }
}
```

Replace the following:
-
-* `NETWORK_ADDRESS`: IP address of the network interface to listen to traffic
-  on. This IP address must be reachable by browsers using the web application
-  to instrument.
-
-* `PATH_TO_SOURCEMAPS`: Path on disk where sourcemaps are located.
-
-* `WEB_APP_PREFIX`: Prefix of the web application being instrumented.
-
-* `LOKI_ADDRESS`: Address of the Loki server to send logs to.
-
-  * If authentication is required to send logs to the Loki server, refer to the
-    documentation of [loki.write][] for more information.
-
-* `OTLP_ADDRESS`: The address of the OTLP-compatible server to send traces to.
-
-  * If authentication is required to send logs to the Loki server, refer to the
-    documentation of [otelcol.exporter.otlp][] for more information.
+* _`<NETWORK_ADDRESS>`_: IP address of the network interface to listen to traffic on.
+  This IP address must be reachable by browsers using the web application to instrument.
+* _`<PATH_TO_SOURCEMAPS>`_: Path on disk where sourcemaps are located.
+* _`<WEB_APP_PREFIX>`_: Prefix of the web application being instrumented.
+* _`<LOKI_ADDRESS>`_: Address of the Loki server to send logs to.
+  If authentication is required to send logs to the Loki server, refer to the documentation of [loki.write][] for more information.
+* _`<OTLP_ADDRESS>`_: The address of the OTLP-compatible server to send traces to.
+  If authentication is required to send traces, refer to the documentation of [otelcol.exporter.otlp][] for more information.
[loki.write]: {{< relref "./loki.write.md" >}} [otelcol.exporter.otlp]: {{< relref "./otelcol.exporter.otlp.md" >}} diff --git a/docs/sources/flow/reference/components/local.file.md b/docs/sources/flow/reference/components/local.file.md index 0199a088a71c..851818340042 100644 --- a/docs/sources/flow/reference/components/local.file.md +++ b/docs/sources/flow/reference/components/local.file.md @@ -11,14 +11,12 @@ title: local.file # local.file -`local.file` exposes the contents of a file on disk to other components. The -file will be watched for changes so that its latest content is always exposed. +`local.file` exposes the contents of a file on disk to other components. +The file will be watched for changes so that its latest content is always exposed. -The most common use of `local.file` is to load secrets (e.g., API keys) from -files. +The most common use of `local.file` is to load secrets (e.g., API keys) from files. -Multiple `local.file` components can be specified by giving them different -labels. +Multiple `local.file` components can be specified by giving them different labels. ## Usage @@ -32,47 +30,43 @@ local.file "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`filename` | `string` | Path of the file on disk to watch | | yes -`detector` | `string` | Which file change detector to use (fsnotify, poll) | `"fsnotify"` | no -`poll_frequency` | `duration` | How often to poll for file changes | `"1m"` | no -`is_secret` | `bool` | Marks the file as containing a [secret][] | `false` | no +Name | Type | Description | Default | Required +-----------------|------------|-----------------------------------------------------|--------------|--------- +`filename` | `string` | Path of the file on disk to watch. | | yes +`detector` | `string` | Which file change detector to use (fsnotify, poll). 
| `"fsnotify"` | no
+`is_secret` | `bool` | Marks the file as containing a [secret][]. | `false` | no
+`poll_frequency` | `duration` | How often to poll for file changes. | `"1m"` | no

[secret]: {{< relref "../../config-language/expressions/types_and_values.md#secrets" >}}

-{{< docs/shared lookup="flow/reference/components/local-file-arguments-text.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/local-file-arguments-text.md" source="agent" version="<AGENT_VERSION>" >}}

## Exported fields

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
+Name | Type | Description
+----------|----------------------|---------------------------------------------------
`content` | `string` or `secret` | The contents of the file from the most recent read

-The `content` field will have the `secret` type only if the `is_secret`
-argument was true.
+The `content` field will have the `secret` type only if the `is_secret` argument was true.

## Component health

`local.file` will be reported as healthy whenever the watched file was read
successfully.

-Failing to read the file whenever an update is detected (or after the poll
-period elapses) will cause the component to be reported as unhealthy. When
-unhealthy, exported fields will be kept at the last healthy value. The read
-error will be exposed as a log message and in the debug information for the
-component.
+Failing to read the file whenever an update is detected (or after the poll period elapses) will cause the component to be reported as unhealthy.
+When unhealthy, exported fields will be kept at the last healthy value.
+The read error will be exposed as a log message and in the debug information for the component.

## Debug information

-`local.file` does not expose any component-specific debug information.
+`local.file` doesn't expose any component-specific debug information.
## Debug metrics -* `agent_local_file_timestamp_last_accessed_unix_seconds` (gauge): The - timestamp, in Unix seconds, that the file was last successfully accessed. +* `agent_local_file_timestamp_last_accessed_unix_seconds` (gauge): The timestamp, in Unix seconds, that the file was last successfully accessed. ## Example diff --git a/docs/sources/flow/reference/components/local.file_match.md b/docs/sources/flow/reference/components/local.file_match.md index 72be1310a749..9b97b1f3671e 100644 --- a/docs/sources/flow/reference/components/local.file_match.md +++ b/docs/sources/flow/reference/components/local.file_match.md @@ -27,23 +27,22 @@ local.file_match "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ---------------- | ------------------- | ------------------------------------------------------------------------------------------ |---------| -------- -`path_targets` | `list(map(string))` | Targets to expand; looks for glob patterns on the `__path__` and `__path_exclude__` keys. | | yes -`sync_period` | `duration` | How often to sync filesystem and targets. | `"10s"` | no +Name | Type | Description | Default | Required +---------------|---------------------|--------------------------------------------------------------------------------------------|---------|--------- +`path_targets` | `list(map(string))` | Targets to expand; looks for glob patterns on the `__path__` and `__path_exclude__` keys. | | yes +`sync_period` | `duration` | How often to sync filesystem and targets. | `"10s"` | no `path_targets` uses [doublestar][] style paths. * `/tmp/**/*.log` will match all subfolders of `tmp` and include any files that end in `*.log`. * `/tmp/apache/*.log` will match only files in `/tmp/apache/` that end in `*.log`. * `/tmp/**` will match all subfolders of `tmp`, `tmp` itself, and all files. 
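+The following example (the component label and paths here are hypothetical) is a minimal sketch of these glob rules. It matches every `.log` file under `/var/log/app` and its subfolders while excluding compressed rotations:
+
+```river
+local.file_match "applogs" {
+  // Hypothetical paths; adjust to your environment.
+  path_targets = [{
+    "__path__"         = "/var/log/app/**/*.log",
+    "__path_exclude__" = "/var/log/app/**/*.gz"
+  }]
+}
+```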
-
## Exported fields

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
+Name | Type | Description
+----------|---------------------|---------------------------------------------------
`targets` | `list(map(string))` | The set of targets discovered from the filesystem.

Each target includes the following labels:
@@ -52,24 +51,22 @@ Each target includes the following labels:

## Component health

-`local.file_match` is only reported as unhealthy when given an invalid
-configuration. In those cases, exported fields retain their last healthy
-values.
+`local.file_match` is only reported as unhealthy when given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

## Debug information

-`local.file_match` does not expose any component-specific debug information.
+`local.file_match` doesn't expose any component-specific debug information.

## Debug metrics

-`local.file_match` does not expose any component-specific debug metrics.
+`local.file_match` doesn't expose any component-specific debug metrics.

## Examples

### Send `/tmp/logs/*.log` files to Loki

-This example discovers all files and folders under `/tmp/logs`. The absolute paths are
-used by `loki.source.file.files` targets.
+The following example discovers all files and folders under `/tmp/logs`.
+The absolute paths are used by `loki.source.file.files` targets.

```river
local.file_match "tmp" {
@@ -83,22 +80,23 @@ loki.source.file "files" {

loki.write "endpoint" {
    endpoint {
-        url = LOKI_URL
+        url = <LOKI_URL>

        basic_auth {
-            username = USERNAME
-            password = PASSWORD
+            username = <USERNAME>
+            password = <PASSWORD>
        }
    }
}
```
+
Replace the following:
- - `LOKI_URL`: The URL of the Loki server to send logs to.
- - `USERNAME`: The username to use for authentication to the Loki API.
- - `PASSWORD`: The password to use for authentication to the Loki API.
+- _`<LOKI_URL>`_: The URL of the Loki server to send logs to.
+- _`<USERNAME>`_: The username to use for authentication to the Loki API.
+- _`<PASSWORD>`_: The password to use for authentication to the Loki API.

### Send Kubernetes pod logs to Loki

-This example finds all the logs on pods and monitors them.
+The following example finds all the logs on pods and monitors them.

```river
discovery.kubernetes "k8s" {
@@ -133,15 +131,16 @@ loki.source.file "pods" {

loki.write "endpoint" {
    endpoint {
-        url = LOKI_URL
+        url = <LOKI_URL>

        basic_auth {
-            username = USERNAME
-            password = PASSWORD
+            username = <USERNAME>
+            password = <PASSWORD>
        }
    }
}
```
+
Replace the following:
- - `LOKI_URL`: The URL of the Loki server to send logs to.
- - `USERNAME`: The username to use for authentication to the Loki API.
- - `PASSWORD`: The password to use for authentication to the Loki API.
+- _`<LOKI_URL>`_: The URL of the Loki server to send logs to.
+- _`<USERNAME>`_: The username to use for authentication to the Loki API.
+- _`<PASSWORD>`_: The password to use for authentication to the Loki API.
diff --git a/docs/sources/flow/reference/components/loki.echo.md b/docs/sources/flow/reference/components/loki.echo.md
index 4499bf3efde7..5e1e43944861 100644
--- a/docs/sources/flow/reference/components/loki.echo.md
+++ b/docs/sources/flow/reference/components/loki.echo.md
@@ -13,13 +13,11 @@ title: loki.echo

# loki.echo

-{{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/stability/beta.md" source="agent" version="<AGENT_VERSION>" >}}

-`loki.echo` receives log entries from other `loki` components and prints them
-to the process' standard output (stdout).
+`loki.echo` receives log entries from other `loki` components and prints them to the process' standard output (stdout).

-Multiple `loki.echo` components can be specified by giving them
-different labels.
+Multiple `loki.echo` components can be specified by giving them different labels.
## Usage @@ -35,8 +33,8 @@ loki.echo "LABEL" {} The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +-----------|----------------|-------------------------------------------------------------- `receiver` | `LogsReceiver` | A value that other components can use to send log entries to. ## Component health @@ -45,7 +43,7 @@ Name | Type | Description ## Debug information -`loki.echo` does not expose any component-specific debug information. +`loki.echo` doesn't expose any component-specific debug information. ## Example diff --git a/docs/sources/flow/reference/components/loki.process.md b/docs/sources/flow/reference/components/loki.process.md index 0387bb3c0af8..d21e830f7ca6 100644 --- a/docs/sources/flow/reference/components/loki.process.md +++ b/docs/sources/flow/reference/components/loki.process.md @@ -11,20 +11,13 @@ title: loki.process # loki.process -`loki.process` receives log entries from other loki components, applies one or -more processing _stages_, and forwards the results to the list of receivers -in the component's arguments. +`loki.process` receives log entries from other loki components, applies one or more processing _stages_, and forwards the results to the list of receivers in the component's arguments. -A stage is a multi-purpose tool that can parse, transform, and filter log -entries before they're passed to a downstream component. These stages are -applied to each log entry in order of their appearance in the configuration -file. All stages within a `loki.process` block have access to the log entry's -label set, the log line, the log timestamp, as well as a shared map of -'extracted' values so that the results of one stage can be used in a subsequent -one. +A stage is a multi-purpose tool that can parse, transform, and filter log entries before they're passed to a downstream component. 
+These stages are applied to each log entry in order of their appearance in the configuration file. +All stages within a `loki.process` block have access to the log entry's label set, the log line, the log timestamp, as well as a shared map of 'extracted' values so that the results of one stage can be used in a subsequent one. -Multiple `loki.process` components can be specified by giving them -different labels. +Multiple `loki.process` components can be specified by giving them different labels. ## Usage @@ -44,7 +37,7 @@ loki.process "LABEL" { `loki.process` supports the following arguments: | Name | Type | Description | Default | Required | -| ------------ | -------------------- | ---------------------------------------------- | ------- | -------- | +|--------------|----------------------|------------------------------------------------|---------|----------| | `forward_to` | `list(LogsReceiver)` | Where to forward log entries after processing. | | yes | ## Blocks @@ -110,21 +103,19 @@ file. [stage.timestamp]: #stagetimestamp-block -### stage.cri block +### stage.cri -The `stage.cri` inner block enables a predefined pipeline which reads log lines using -the CRI logging format. +The `stage.cri` inner block enables a predefined pipeline which reads log lines using the CRI logging format. The following arguments are supported: | Name | Type | Description | Default | Required | | -------------------------------- | ---------- | -------------------------------------------------------------------- | -------------- | -------- | -| `max_partial_lines` | `number` | Maximum number of partial lines to hold in memory. | `100` | no | -| `max_partial_line_size` | `number` | Maximum number of characters which a partial line can have. | `0` | no | | `max_partial_line_size_truncate` | `bool` | Truncate partial lines that are longer than `max_partial_line_size`. | `false` | no | +| `max_partial_line_size` | `number` | Maximum number of characters which a partial line can have. 
| `0` | no |
+| `max_partial_lines` | `number` | Maximum number of partial lines to hold in memory. | `100` | no |

-`max_partial_line_size` is only taken into account if
-`max_partial_line_size_truncate` is set to `true`.
+`max_partial_line_size` is only taken into account if `max_partial_line_size_truncate` is set to `true`.

```river
stage.cri {}
@@ -133,13 +124,13 @@ stage.cri {}

CRI specifies log lines as single space-delimited values with the following
components:

-* `time`: The timestamp string of the log
-* `stream`: Either `stdout` or `stderr`
-* `flags`: CRI flags including `F` or `P`
-* `log`: The contents of the log line
+* `flags`: CRI flags including `F` or `P`.
+* `log`: The contents of the log line.
+* `stream`: Either `stdout` or `stderr`.
+* `time`: The timestamp string of the log.
+
+Given the following log line, the subsequent key-value pairs are created in the shared map of extracted data:

-Given the following log line, the subsequent key-value pairs are created in the
-shared map of extracted data:

```
"2019-04-30T02:12:41.8443515Z stdout F message"
@@ -148,20 +139,17 @@
stream: stdout
timestamp: 2019-04-30T02:12:41.8443515
```

-### stage.decolorize block
+### stage.decolorize

-The `stage.decolorize` strips ANSI color codes from the log lines, thus making
-it easier to parse logs further.
+The `stage.decolorize` stage strips ANSI color codes from the log lines, thus making it easier to parse logs further.

-The `stage.decolorize` block does not support any arguments or inner blocks, so
-it is always empty.
+The `stage.decolorize` block doesn't support any arguments or inner blocks, so it's always empty.
```river
stage.decolorize {}
```

-`stage.decolorize` turns each line having a color code into a non-colored one,
-for example:
+`stage.decolorize` turns each line having a color code into a non-colored one, for example:

```
[2022-11-04 22:17:57.811] \033[0;32http\033[0m: GET /_health (0 ms) 204
@@ -173,13 +161,11 @@

is turned into

[2022-11-04 22:17:57.811] http: GET /_health (0 ms) 204
```

-### stage.docker block
+### stage.docker

-The `stage.docker` inner block enables a predefined pipeline which reads log lines in
-the standard format of Docker log files.
+The `stage.docker` inner block enables a predefined pipeline which reads log lines in the standard format of Docker log files.

-The `stage.docker` block does not support any arguments or inner blocks, so it is
-always empty.
+The `stage.docker` block doesn't support any arguments or inner blocks, so it's always empty.

```river
stage.docker {}
@@ -187,12 +173,11 @@ stage.docker {}

Docker log entries are formatted as JSON with the following keys:

-* `log`: The content of log line
-* `stream`: Either `stdout` or `stderr`
-* `time`: The timestamp string of the log line
+* `log`: The content of the log line.
+* `stream`: Either `stdout` or `stderr`.
+* `time`: The timestamp string of the log line.

-Given the following log line, the subsequent key-value pairs are created in the
-shared map of extracted data:
+Given the following log line, the subsequent key-value pairs are created in the shared map of extracted data:

```
{"log":"log message\n","stream":"stderr","time":"2019-04-30T02:12:41.8443515Z"}
@@ -202,48 +187,37 @@
stream: stderr
timestamp: 2019-04-30T02:12:41.8443515
```

-### stage.drop block
+### stage.drop

-The `stage.drop` inner block configures a filtering stage that drops log entries
-based on several options. If multiple options are provided, they're treated
-as AND clauses and must _all_ be true for the log entry to be dropped.
+The `stage.drop` inner block configures a filtering stage that drops log entries based on several options. +If multiple options are provided, they're treated as AND clauses and must _all_ be true for the log entry to be dropped. To drop entries with an OR clause, specify multiple `drop` blocks in sequence. The following arguments are supported: | Name | Type | Description | Default | Required | |-----------------------|------------|------------------------------------------------------------------------------------------------------------------------|----------------|----------| -| `source` | `string` | Name or comma-separated list of names from extracted data to match. If empty or not defined, it uses the log message. | `""` | no | -| `separator` | `string` | When `source` is a comma-separated list of names, this separator is placed between concatenated extracted data values. | `";"` | no | +| `drop_counter_reason` | `string` | A custom reason to report for dropped lines. | `"drop_stage"` | no | | `expression` | `string` | A valid RE2 regular expression. | `""` | no | -| `value` | `string` | If both `source` and `value` are specified, the stage drops lines where `value` exactly matches the source content. | `""` | no | -| `older_than` | `duration` | If specified, the stage drops lines whose timestamp is older than the current time minus this duration. | `""` | no | | `longer_than` | `string` | If specified, the stage drops lines whose size exceeds the configured value. | `""` | no | -| `drop_counter_reason` | `string` | A custom reason to report for dropped lines. | `"drop_stage"` | no | +| `older_than` | `duration` | If specified, the stage drops lines whose timestamp is older than the current time minus this duration. | `""` | no | +| `separator` | `string` | When `source` is a comma-separated list of names, this separator is placed between concatenated extracted data values. 
| `";"` | no | +| `source` | `string` | Name or comma-separated list of names from extracted data to match. If empty or not defined, it uses the log message. | `""` | no | +| `value` | `string` | If both `source` and `value` are specified, the stage drops lines where `value` exactly matches the source content. | `""` | no | The `expression` field must be a RE2 regex string. -* If `source` is empty or not provided, the regex attempts to match the log -line itself. -* If `source` is a single name, the regex attempts to match the corresponding -value from the extracted map. -* If `source` is a comma-separated list of names, the corresponding values from -the extracted map are concatenated using `separator` and the regex attempts to -match the concatenated string. - -The `value` field can only work with values from the extracted map, and must be -specified together with `source`. -* If `source` is a single name, the entries are dropped when there is an exact -match between the corresponding value from the extracted map and the `value`. -* If `source` is a comma-separated list of names, the entries are dropped when -the `value` matches the `source` values from extracted data, concatenated using -the `separator`. - -Whenever an entry is dropped, the metric `loki_process_dropped_lines_total` -is incremented. By default, the reason label is `"drop_stage"`, but you can -provide a custom label using the `drop_counter_reason` argument. - -The following stage drops log entries that contain the word `debug` _and_ are -longer than 1KB. +* If `source` is empty or not provided, the regex attempts to match the log line itself. +* If `source` is a single name, the regex attempts to match the corresponding value from the extracted map. +* If `source` is a comma-separated list of names, the corresponding values from the extracted map are concatenated using `separator` and the regex attempts to match the concatenated string. 
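+As a minimal sketch of the comma-separated case (the label names `app` and `level` are hypothetical), the following stage concatenates the two extracted values with the default `;` separator and drops entries whose concatenation matches the regex:
+
+```river
+stage.drop {
+  // Hypothetical extracted keys; "frontend;debug" matches the
+  // concatenation of the app and level values.
+  source              = "app,level"
+  expression          = "frontend;debug"
+  drop_counter_reason = "frontend_debug"
+}
+```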
+
+The `value` field can only work with values from the extracted map, and must be specified together with `source`.
+* If `source` is a single name, the entries are dropped when there is an exact match between the corresponding value from the extracted map and the `value`.
+* If `source` is a comma-separated list of names, the entries are dropped when the `value` matches the `source` values from extracted data, concatenated using the `separator`.
+
+Whenever an entry is dropped, the metric `loki_process_dropped_lines_total` is incremented.
+By default, the reason label is `"drop_stage"`, but you can provide a custom label using the `drop_counter_reason` argument.
+
+The following stage drops log entries that contain the word `debug` _and_ are longer than 1KB.

```river
stage.drop {
@@ -252,9 +226,7 @@ stage.drop {
}
```

-On the following example, we define multiple `drop` blocks so `loki.process`
-drops entries that are either 24h or older, are longer than 8KB, _or_ the
-extracted value of 'app' is equal to foo.
+In the following example, we define multiple `drop` blocks so `loki.process` drops entries that are either older than 24h, longer than 8KB, _or_ whose extracted value of 'app' is equal to `extracted_value`.

```river
stage.drop {
@@ -269,30 +241,27 @@ stage.drop {

stage.drop {
    source = "app"
-    value = "foo"
+    value = "extracted_value"
}
```

-### stage.eventlogmessage block
+### stage.eventlogmessage

-The `eventlogmessage` stage extracts data from the Message string that appears
-in the Windows Event Log.
+The `eventlogmessage` stage extracts data from the Message string that appears in the Windows Event Log.

The following arguments are supported:

| Name | Type | Description | Default | Required |
|-----------------------|----------|--------------------------------------------------------|-----------|----------|
-| `source` | `string` | Name of the field in the extracted data to parse.
| `message` | no | -| `overwrite_existing` | `bool` | Whether to overwrite existing extracted data fields. | `false` | no | | `drop_invalid_labels` | `bool` | Whether to drop fields that are not valid label names. | `false` | no | +| `overwrite_existing` | `bool` | Whether to overwrite existing extracted data fields. | `false` | no | +| `source` | `string` | Name of the field in the extracted data to parse. | `message` | no | -When `overwrite_existing` is set to `true`, the stage overwrites existing extracted data -fields with the same name. If set to `false`, the `_extracted` suffix will be -appended to an already existing field name. +When `overwrite_existing` is set to `true`, the stage overwrites existing extracted data fields with the same name. +If set to `false`, the `_extracted` suffix will be appended to an already existing field name. -When `drop_invalid_labels` is set to `true`, the stage drops fields that are -not valid label names. If set to `false`, the stage will automatically convert -them into valid labels replacing invalid characters with underscores. +When `drop_invalid_labels` is set to `true`, the stage drops fields that are not valid label names. +If set to `false`, the stage will automatically convert them into valid labels replacing invalid characters with underscores. 
#### Example combined with `stage.json` @@ -311,47 +280,41 @@ stage.eventlogmessage { ``` Given the following log line: + ``` {"event_id": 1, "Overwritten": "old", "message": "Message type:\r\nOverwritten: new\r\nImage: C:\\Users\\User\\agent.exe"} ``` -The first stage would create the following key-value pairs in the set of -extracted data: +The first stage would create the following key-value pairs in the set of extracted data: - `message`: `Message type:\r\nOverwritten: new\r\nImage: C:\Users\User\agent.exe` - `Overwritten`: `old` -The second stage will parse the value of `message` from the extracted data -and append/overwrite the following key-value pairs to the set of extracted data: +The second stage will parse the value of `message` from the extracted data and append/overwrite the following key-value pairs to the set of extracted data: - `Image`: `C:\\Users\\User\\agent.exe` - `Message_type`: (empty string) - `Overwritten`: `new` -### stage.json block +### stage.json -The `stage.json` inner block configures a JSON processing stage that parses incoming -log lines or previously extracted values as JSON and uses -[JMESPath expressions](https://jmespath.org/tutorial.html) to extract new -values from them. +The `stage.json` inner block configures a JSON processing stage that parses incoming log lines or previously extracted values as JSON and uses [JMESPath expressions](https://jmespath.org/tutorial.html) to extract new values from them. The following arguments are supported: | Name | Type | Description | Default | Required | | ---------------- | ------------- | ------------------------------------------------------ | ------- | -------- | | `expressions` | `map(string)` | Key-value pairs of JMESPath expressions. | | yes | -| `source` | `string` | Source of the data to parse as JSON. | `""` | no | | `drop_malformed` | `bool` | Drop lines whose input cannot be parsed as valid JSON. | `false` | no | +| `source` | `string` | Source of the data to parse as JSON. 
| `""` | no | -When configuring a JSON stage, the `source` field defines the source of data to -parse as JSON. By default, this is the log line itself, but it can also be a -previously extracted value. +When configuring a JSON stage, the `source` field defines the source of data to parse as JSON. +By default, this is the log line itself, but it can also be a previously extracted value. -The `expressions` field is the set of key-value pairs of JMESPath expressions to -run. The map key defines the name with which the data is extracted, while the -map value is the expression used to populate the value. +The `expressions` field is the set of key-value pairs of JMESPath expressions to run. +The map key defines the name with which the data is extracted, while the map value is the expression used to populate the value. -Here's a given log line and two JSON stages to run. +The following example shows a log line and two JSON stages to run. ```river {"log":"log message\n","extra":"{\"user\":\"agent\"}"} @@ -368,24 +331,23 @@ loki.process "username" { } ``` -In this example, the first stage uses the log line as the source and populates -these values in the shared map. An empty expression means using the same value -as the key (so `extra="extra"`). +In the following example, the first stage uses the log line as the source and populates these values in the shared map. +An empty expression means using the same value as the key, so `extra="extra"`. + ``` output: log message\n extra: {"user": "agent"} ``` -The second stage uses the value in `extra` as the input and appends the -following key-value pair to the set of extracted data. +The second stage uses the value in `extra` as the input and appends the following key-value pair to the set of extracted data. + ``` username: agent ``` -### stage.label_drop block +### stage.label_drop -The `stage.label_drop` inner block configures a processing stage that drops labels -from incoming log entries. 
+The `stage.label_drop` inner block configures a processing stage that drops labels from incoming log entries. The following arguments are supported: @@ -399,10 +361,9 @@ stage.label_drop { } ``` -### stage.label_keep block +### stage.label_keep -The `stage.label_keep` inner block configures a processing stage that filters the -label set of an incoming log entry down to a subset. +The `stage.label_keep` inner block configures a processing stage that filters the label set of an incoming log entry down to a subset. The following arguments are supported: @@ -417,10 +378,9 @@ stage.label_keep { } ``` -### stage.labels block +### stage.labels -The `stage.labels` inner block configures a labels processing stage that can read -data from the extracted values map and set new labels on incoming log entries. +The `stage.labels` inner block configures a labels processing stage that can read data from the extracted values map and set new labels on incoming log entries. The following arguments are supported: @@ -428,9 +388,8 @@ The following arguments are supported: | -------- | ------------- | --------------------------------------- | ------- | -------- | | `values` | `map(string)` | Configures a `labels` processing stage. | `{}` | no | -In a labels stage, the map's keys define the label to set and the values are -how to look them up. If the value is empty, it is inferred to be the same as -the key. +In a labels stage, the map's keys define the label to set and the values are how to look them up. +If the value is empty, it's inferred to be the same as the key. ```river stage.labels { @@ -441,10 +400,9 @@ stage.labels { } ``` -### stage.structured_metadata block +### stage.structured_metadata -The `stage.structured_metadata` inner block configures a stage that can read -data from the extracted values map and add them to log entries as structured metadata. 
+The `stage.structured_metadata` inner block configures a stage that can read data from the extracted values map and add them to log entries as structured metadata. The following arguments are supported: @@ -452,9 +410,8 @@ The following arguments are supported: | -------- | ------------- |-----------------------------------------------------------------------------| ------- | -------- | | `values` | `map(string)` | Specifies the list of labels to add from extracted values map to log entry. | `{}` | no | -In a structured_metadata stage, the map's keys define the label to set and the values are -how to look them up. If the value is empty, it is inferred to be the same as -the key. +In a structured_metadata stage, the map's keys define the label to set and the values are how to look them up. +If the value is empty, it's inferred to be the same as the key. ```river stage.structured_metadata { @@ -465,25 +422,23 @@ stage.structured_metadata { } ``` -### stage.limit block +### stage.limit -The `stage.limit` inner block configures a rate-limiting stage that throttles logs -based on several options. +The `stage.limit` inner block configures a rate-limiting stage that throttles logs based on several options. The following arguments are supported: | Name | Type | Description | Default | Required | | --------------------- | -------- | -------------------------------------------------------------------------------- | ------- | -------- | -| `rate` | `number` | The maximum rate of lines per second that the stage forwards. | | yes | | `burst` | `number` | The maximum number of burst lines that the stage forwards. | | yes | +| `rate` | `number` | The maximum rate of lines per second that the stage forwards. | | yes | | `by_label_name` | `string` | The label to use when rate-limiting on a label name. | `""` | no | | `drop` | `bool` | Whether to discard or backpressure lines that exceed the rate limit. 
| `false` | no |
| `max_distinct_labels` | `number` | The number of unique values to keep track of when rate-limiting `by_label_name`. | `10000` | no |

-The rate limiting is implemented as a "token bucket" of size `burst`, initially
-full and refilled at `rate` tokens per second. Each received log entry consumes one token from the bucket. When `drop` is set to true, incoming entries
-that exceed the rate-limit are dropped, otherwise they are queued until
-more tokens are available.
+The rate limiting is implemented as a "token bucket" of size `burst`, initially full and refilled at `rate` tokens per second.
+Each received log entry consumes one token from the bucket.
+When `drop` is set to true, incoming entries that exceed the rate-limit are dropped, otherwise they are queued until more tokens are available.

```river
stage.limit {
@@ -492,13 +447,13 @@ stage.limit {
}
```

-If `by_label_name` is set, then `drop` must be set to `true`. This enables the
-stage to rate-limit not by the number of lines but by the number of labels.
+If `by_label_name` is set, then `drop` must be set to `true`.
+This enables the stage to rate-limit not by the number of lines but by the number of labels.
+
+The following example rate-limits entries from each unique `namespace` value independently.
+Any entries without the `namespace` label are not rate-limited.
+The stage keeps track of up to `max_distinct_labels` unique values, defaulting to 10000.

-The following example rate-limits entries from each unique `namespace` value
-independently. Any entries without the `namespace` label are not rate-limited.
-The stage keeps track of up to `max_distinct_labels` unique
-values, defaulting at 10000.
```river
stage.limit {
    rate = 10
@@ -509,10 +464,9 @@ stage.limit {
}
```

-### stage.logfmt block
+### stage.logfmt

-The `stage.logfmt` inner block configures a processing stage that reads incoming log
-lines as logfmt and extracts values from them.
+The `stage.logfmt` inner block configures a processing stage that reads incoming log lines as logfmt and extracts values from them.

The following arguments are supported:

@@ -522,16 +476,13 @@ The following arguments are supported:

| `source` | `string` | Source of the data to parse as logfmt. | `""` | no |

-The `source` field defines the source of data to parse as logfmt. When `source`
-is missing or empty, the stage parses the log line itself, but it can also be
-used to parse a previously extracted value.
+The `source` field defines the source of data to parse as logfmt.
+When `source` is missing or empty, the stage parses the log line itself, but it can also be used to parse a previously extracted value.

-This stage uses the [go-logfmt](https://github.com/go-logfmt/logfmt)
-unmarshaler, so that numeric or boolean types are unmarshalled into their
-correct form. The stage does not perform any other type conversions. If the
-extracted value is a complex type, it is treated as a string.
+This stage uses the [go-logfmt](https://github.com/go-logfmt/logfmt) unmarshaler, so that numeric or boolean types are unmarshalled into their correct form.
+The stage does not perform any other type conversions. If the extracted value is a complex type, it is treated as a string.

-Let's see how this works on the following log line and stages.
+The following example log line and stages show how this works.

```
time=2012-11-01T22:08:41+00:00 app=loki level=WARN duration=125 message="this is a log line" extra="user=foo"
@@ -546,43 +497,37 @@ stage.logfmt {
}
```

-The first stage parses the log line itself and inserts the `extra` key in the
-set of extracted data, with the value of `user=foo`.
+The first stage parses the log line itself and inserts the `extra` key in the set of extracted data, with the value of `user=foo`.

-The second stage parses the contents of `extra` and appends the `username: foo`
-key-value pair to the set of extracted data.
+The second stage parses the contents of `extra` and appends the `username: foo` key-value pair to the set of extracted data. -### stage.match block +### stage.match -The `stage.match` inner block configures a filtering stage that can conditionally -either apply a nested set of processing stages or drop an entry when a log -entry matches a configurable LogQL stream selector and filter expressions. +The `stage.match` inner block configures a filtering stage that can conditionally either apply a nested set of processing stages or drop an entry when a log entry matches a configurable LogQL stream selector and filter expressions. The following arguments are supported: | Name | Type | Description | Default | Required | | --------------------- | -------- | ----------------------------------------------------------------------------------------------------- | --------------- | -------- | | `selector` | `string` | The LogQL stream selector and line filter expressions to use. | | yes | -| `pipeline_name` | `string` | A custom name to use for the nested pipeline. | `""` | no | | `action` | `string` | The action to take when the selector matches the log line. Supported values are `"keep"` and `"drop"` | `"keep"` | no | | `drop_counter_reason` | `string` | A custom reason to report for dropped lines. | `"match_stage"` | no | +| `pipeline_name` | `string` | A custom name to use for the nested pipeline. | `""` | no | {{% admonition type="note" %}} -The filters do not include label filter expressions such as `| label == "foobar"`. +The filters don't include label filter expressions such as `| label == "foo"`. {{% /admonition %}} -The `stage.match` block supports a number of `stage.*` inner blocks, like the top-level -block. These are used to construct the nested set of stages to run if the -selector matches the labels and content of the log entries. It supports all the -same `stage.NAME` blocks as the in the top level of the loki.process component. 
+The `stage.match` block supports a number of `stage.*` inner blocks, like the top-level block.
+These are used to construct the nested set of stages to run if the selector matches the labels and content of the log entries.
+It supports all the same `stage.NAME` blocks as in the top level of the `loki.process` component.
+
+If the specified action is `"drop"`, the metric `loki_process_dropped_lines_total` is incremented with every line dropped.
+By default, the reason label is `"match_stage"`, but a custom reason can be provided by using the `drop_counter_reason` argument.

-If the specified action is `"drop"`, the metric
-`loki_process_dropped_lines_total` is incremented with every line dropped.
-By default, the reason label is `"match_stage"`, but a custom reason can be
-provided by using the `drop_counter_reason` argument.
+The following example log lines and stages show how this works.

-Let's see this in action, with the following log lines and stages
```
{ "time":"2023-01-18T17:08:41+00:00", "app":"foo", "component": ["parser","type"], "level" : "WARN", "message" : "app1 log line" }
{ "time":"2023-01-18T17:08:42+00:00", "app":"bar", "component": ["parser","type"], "level" : "ERROR", "message" : "foo noisy error" }
@@ -622,35 +567,25 @@ stage.output {
}
```

-The first two stages parse the log lines as JSON, decode the `app` value into
-the shared extracted map as `appname`, and use its value as the `applbl` label.
+The first two stages parse the log lines as JSON, decode the `app` value into the shared extracted map as `appname`, and use its value as the `applbl` label.

-The third stage uses the LogQL selector to only execute the nested stages on
-lines where the `applbl="foo"`. So, for the first line, the nested JSON stage
-adds `msg="app1 log line"` into the extracted map.
+The third stage uses the LogQL selector to only execute the nested stages on lines where `applbl="foo"`.
+For the first line, the nested JSON stage adds `msg="app1 log line"` into the extracted map.

-The fourth stage uses the LogQL selector to only execute on lines where
-`applbl="qux"`; that means it won't match any of the input, and the nested
-JSON stage does not run.
+The fourth stage uses the LogQL selector to only execute on lines where `applbl="qux"`; that means it won't match any of the input, and the nested JSON stage doesn't run.

-The fifth stage drops entries from lines where `applbl` is set to 'bar' and the
-line contents matches the regex `.*noisy error.*`. It also increments the
-`loki_process_dropped_lines_total` metric with a label
-`drop_counter_reason="discard_noisy_errors"`.
+The fifth stage drops entries from lines where `applbl` is set to 'bar' and the line contents match the regex `.*noisy error.*`.
+It also increments the `loki_process_dropped_lines_total` metric with a label `drop_counter_reason="discard_noisy_errors"`.

-The final output stage changes the contents of the log line to be the value of
-`msg` from the extracted map. In this case, the first log entry's content is
-changed to `app1 log line`.
+The final output stage changes the contents of the log line to be the value of `msg` from the extracted map.
+In this case, the first log entry's content is changed to `app1 log line`.

### stage.metrics block

-The `stage.metrics` inner block configures stage that allows to define and
-update metrics based on values from the shared extracted map. The created
-metrics are available at the Agent's root /metrics endpoint.
+The `stage.metrics` inner block configures a stage that allows you to define and update metrics based on values from the shared extracted map.
+The created metrics are available at the Agent's root `/metrics` endpoint.

-The `stage.metrics` block does not support any arguments and is only configured via
-a number of nested inner `metric.*` blocks, one for each metric that should be
-generated.
+The `stage.metrics` block doesn't support any arguments and is only configured via a number of nested inner `metric.*` blocks, one for each metric that should be generated.

The following blocks are supported inside the definition of `stage.metrics`:

@@ -665,54 +600,54 @@ The following blocks are supported inside the definition of `stage.metrics`:

[metric.histogram]: #metrichistogram-block

-#### metric.counter block
+#### metric.counter
+
Defines a metric whose value only goes up.

The following arguments are supported:

| Name                | Type       | Description                                                                                               | Default                  | Required |
|---------------------|------------|-----------------------------------------------------------------------------------------------------------|--------------------------|----------|
-| `name`              | `string`   | The metric name.                                                                                          |                          | yes      |
| `action`            | `string`   | The action to take. Valid actions are `set`, `inc`, `dec`,` add`, or `sub`.                               |                          | yes      |
+| `name`              | `string`   | The metric name.                                                                                          |                          | yes      |
+| `count_entry_bytes` | `bool`     | If set to true, counts all log line bytes.                                                                | `false`                  | no       |
| `description`       | `string`   | The metric's description and help text.                                                                   | `""`                     | no       |
-| `source`            | `string`   | Key from the extracted data map to use for the metric. Defaults to the metric name.                       | `""`                     | no       |
-| `prefix`            | `string`   | The prefix to the metric name.                                                                            | `"loki_process_custom_"` | no       |
+| `match_all`         | `bool`     | If set to true, all log lines are counted, without attempting to match the `source` to the extracted map. | `false`                  | no       |
| `max_idle_duration` | `duration` | Maximum amount of time to wait until the metric is marked as 'stale' and removed.                         | `"5m"`                   | no       |
+| `prefix`            | `string`   | The prefix to the metric name.                                                                            | `"loki_process_custom_"` | no       |
+| `source`            | `string`   | Key from the extracted data map to use for the metric. Defaults to the metric name.                       | `""`                     | no       |
| `value`             | `string`   | If set, the metric only changes if `source` exactly matches the `value`.
| `""` | no | -| `match_all` | `bool` | If set to true, all log lines are counted, without attemptng to match the `source` to the extracted map. | `false` | no | -| `count_entry_bytes` | `bool` | If set to true, counts all log lines bytes. | `false` | no | -A counter cannot set both `match_all` to true _and_ a `value`. -A counter cannot set `count_entry_bytes` without also setting `match_all=true` -_or_ `action=add`. -The valid `action` values are `inc` and `add`. The `inc` action increases the -metric value by 1 for each log line that passed the filter. The `add` action -converts the extracted value to a positive float and adds it to the metric. +A counter can't set both `match_all` to true _and_ a `value`. +A counter can't set `count_entry_bytes` without also setting `match_all=true` _or_ `action=add`. +The valid `action` values are `inc` and `add`. The `inc` action increases the metric value by 1 for each log line that passed the filter. +The `add` action converts the extracted value to a positive float and adds it to the metric. + +#### metric.gauge -#### metric.gauge block Defines a gauge metric whose value can go up or down. The following arguments are supported: | Name | Type | Description | Default | Required | |---------------------|------------|-------------------------------------------------------------------------------------|--------------------------|----------| -| `name` | `string` | The metric name. | | yes | | `action` | `string` | The action to take. Valid actions are `inc` and `add`. | | yes | +| `name` | `string` | The metric name. | | yes | | `description` | `string` | The metric's description and help text. | `""` | no | -| `source` | `string` | Key from the extracted data map to use for the metric. Defaults to the metric name. | `""` | no | -| `prefix` | `string` | The prefix to the metric name. 
| `"loki_process_custom_"` | no | | `max_idle_duration` | `duration` | Maximum amount of time to wait until the metric is marked as 'stale' and removed. | `"5m"` | no | +| `prefix` | `string` | The prefix to the metric name. | `"loki_process_custom_"` | no | +| `source` | `string` | Key from the extracted data map to use for the metric. Defaults to the metric name. | `""` | no | | `value` | `string` | If set, the metric only changes if `source` exactly matches the `value`. | `""` | no | The valid `action` values are `inc`, `dec`, `set`, `add`, or `sub`. `inc` and `dec` increment and decrement the metric's value by 1 respectively. -If `set`, `add, or `sub` is chosen, the extracted value must be convertible -to a positive float and is set, added to, or subtracted from the metric's value. +If `set`, `add, or `sub` is chosen, the extracted value must be convertible to a positive float and is set, added to, or subtracted from the metric's value. -#### metric.histogram block +#### metric.histogram + Defines a histogram metric whose values are recorded in predefined buckets. @@ -720,26 +655,23 @@ The following arguments are supported: | Name | Type | Description | Default | Required | |---------------------|---------------|-------------------------------------------------------------------------------------|--------------------------|----------| -| `name` | `string` | The metric name. | | yes | | `buckets` | `list(float)` | The action to take. Valid actions are `set`, `inc`, `dec`,` add`, or `sub`. | | yes | +| `name` | `string` | The metric name. | | yes | | `description` | `string` | The metric's description and help text. | `""` | no | -| `source` | `string` | Key from the extracted data map to use for the metric. Defaults to the metric name. | `""` | no | -| `prefix` | `string` | The prefix to the metric name. | `"loki_process_custom_"` | no | | `max_idle_duration` | `duration` | Maximum amount of time to wait until the metric is marked as 'stale' and removed. 
| `"5m"` | no | +| `prefix` | `string` | The prefix to the metric name. | `"loki_process_custom_"` | no | +| `source` | `string` | Key from the extracted data map to use for the metric. Defaults to the metric name. | `""` | no | | `value` | `string` | If set, the metric only changes if `source` exactly matches the `value`. | `""` | no | #### metrics behavior -If `value` is not present, all incoming log entries match. +If `value` isn't present, all incoming log entries match. -Label values on created metrics can be dynamic, which can cause exported -metrics to explode in cardinality or go stale, for example, when a stream stops -receiving new logs. To prevent unbounded growth of the `/metrics` endpoint, any -metrics which have not been updated within `max_idle_duration` are removed. The -`max_idle_duration` must be greater or equal to `"1s"`, and it defaults to `"5m"`. +Label values on created metrics can be dynamic, which can cause exported metrics to explode in cardinality or go stale, for example, when a stream stops receiving new logs. +To prevent unbounded growth of the `/metrics` endpoint, any metrics which haven't been updated within `max_idle_duration` are removed. +The `max_idle_duration` must be greater or equal to `"1s"`, and it defaults to `"5m"`. -The metric values extracted from the log data are internally converted to -floats. The supported values are the following: +The metric values extracted from the log data are internally converted to floats. The supported values are the following: * integer * floating point number @@ -750,9 +682,12 @@ floats. The supported values are the following: * true is converted to 1. * false is converted to 0. -The following pipeline creates a counter which increments every time any log line is received by using the `match_all` parameter. The pipeline creates a second counter which adds the byte size of these log lines by using the `count_entry_bytes` parameter. 
+The following pipeline creates a counter which increments every time any log line is received by using the `match_all` parameter. +The pipeline creates a second counter which adds the byte size of these log lines by using the `count_entry_bytes` parameter. + +These two metrics disappear after 24 hours if no new entries are received, to avoid building up metrics which no longer serve any use. +These two metrics are a good starting point to track the volume of log streams in both the number of entries and their byte size, to identify sources of high-volume or high-cardinality data. -These two metrics disappear after 24 hours if no new entries are received, to avoid building up metrics which no longer serve any use. These two metrics are a good starting point to track the volume of log streams in both the number of entries and their byte size, to identify sources of high-volume or high-cardinality data. ```river stage.metrics { metric.counter { @@ -779,8 +714,7 @@ stage.metrics { } ``` -Here, the first stage uses a regex to extract text in the format -`order_status=` in the log line. +The first stage uses a regex to extract text in the format `order_status=` in the log line. The second stage, defines a counter which increments the `successful_orders_total` and `failed_orders_total` based on the previously extracted values. ```river @@ -807,7 +741,8 @@ stage.metrics { } ``` -In this example, the first stage extracts text in the format of `retries=`, from the log line. The second stage creates a gauge whose current metric value is increased by the number extracted from the retries field. +In this example, the first stage extracts text in the format of `retries=`, from the log line. +The second stage creates a gauge whose current metric value is increased by the number extracted from the retries field. 
```river
stage.regex {
@@ -823,9 +758,7 @@ stage.metrics {
}
```

-The following example shows a histogram that reads `response_time` from the extracted
-map and places it into a bucket, both increasing the count of the bucket and
-the sum for that particular bucket:
+The following example shows a histogram that reads `response_time` from the extracted map and places it into a bucket, both increasing the count of the bucket and the sum for that particular bucket:

```river
stage.metrics {
@@ -838,30 +771,26 @@ stage.metrics {
}
```

-### stage.multiline block
+### stage.multiline

-The `stage.multiline` inner block merges multiple lines into a single block before
-passing it on to the next stage in the pipeline.
+The `stage.multiline` inner block merges multiple lines into a single block before passing it on to the next stage in the pipeline.

The following arguments are supported:

| Name            | Type       | Description                                        | Default | Required |
| --------------- | ---------- | -------------------------------------------------- | ------- | -------- |
| `firstline`     | `string`   | Name from extracted data to use for the log entry. |         | yes      |
-| `max_wait_time` | `duration` | The maximum time to wait for a multiline block.    | `"3s"`  | no       |
| `max_lines`     | `number`   | The maximum number of lines a block can have.      | `128`   | no       |
+| `max_wait_time` | `duration` | The maximum time to wait for a multiline block.    | `"3s"`  | no       |

A new block is identified by the RE2 regular expression passed in `firstline`.
+Any line that does _not_ match the expression is considered to be part of the block of the previous match.
+If no new logs arrive within `max_wait_time`, the block is sent on.
+The `max_lines` field defines the maximum number of lines a block can have. If this is exceeded, a new block is started.

-Any line that does _not_ match the expression is considered to be part of the
-block of the previous match. If no new logs arrive with `max_wait_time`, the
-block is sent on.
The `max_lines` field defines the maximum number of lines a -block can have. If this is exceeded, a new block is started. - -Let's see how this works in practice with an example stage and a stream of log -entries from a Flask web service. +The following example shows how this works with a stage and a stream of log entries from a Flask web service. ``` stage.multiline { @@ -891,16 +820,12 @@ Exception: Sorry, this route always breaks [2023-01-18 17:42:29] "GET /hello HTTP/1.1" 200 - ``` -All 'blocks' that form log entries of separate web requests start with a -timestamp in square brackets. The stage detects this with the regular -expression in `firstline` to collapse all lines of the traceback into a single -block and thus a single Loki log entry. +All 'blocks' that form log entries of separate web requests start with a timestamp in square brackets. +The stage detects this with the regular expression in `firstline` to collapse all lines of the traceback into a single block and thus a single Loki log entry. -### stage.output block +### stage.output -The `stage.output` inner block configures a processing stage that reads from the -extracted map and changes the content of the log entry that is forwarded -to the next component. +The `stage.output` inner block configures a processing stage that reads from the extracted map and changes the content of the log entry that is forwarded to the next component. The following arguments are supported: @@ -909,7 +834,7 @@ The following arguments are supported: | `source` | `string` | Name from extracted data to use for the log entry. | | yes | -Let's see how this works for the following log line and three-stage pipeline: +The following example shows how this works with a log line and three-stage pipeline: ``` {"user": "John Doe", "message": "hello, world!"} @@ -928,19 +853,17 @@ stage.output { ``` The first stage extracts the following key-value pairs into the shared map: + ``` user: John Doe message: hello, world! 
``` -Then, the second stage adds `user="John Doe"` to the label set of the log -entry, and the final output stage changes the log line from the original -JSON to `hello, world!`. +Then, the second stage adds `user="John Doe"` to the label set of the log entry, and the final output stage changes the log line from the original JSON to `hello, world!`. -### stage.pack block +### stage.pack -The `stage.pack` inner block configures a transforming stage that replaces the log -entry with a JSON object that embeds extracted values and labels with it. +The `stage.pack` inner block configures a transforming stage that replaces the log entry with a JSON object that embeds extracted values and labels with it. The following arguments are supported: @@ -949,30 +872,29 @@ The following arguments are supported: | `labels` | `list(string)` | The values from the extracted data and labels to pack with the log entry. | | yes | | `ingest_timestamp` | `bool` | Whether to replace the log entry timestamp with the time the `pack` stage runs. | `true | no | -This stage lets you embed extracted values and labels together with the log -line, by packing them into a JSON object. The original message is stored under -the `_entry` key, and all other keys retain their values. This is useful in -cases where you _do_ want to keep a certain label or metadata, but you don't -want it to be indexed as a label due to high cardinality. +This stage lets you embed extracted values and labels together with the log line, by packing them into a JSON object. +The original message is stored under the `_entry` key, and all other keys retain their values. +This is useful in cases where you _do_ want to keep a certain label or metadata, but you don't want it to be indexed as a label due to high cardinality. -The querying capabilities of Loki make it easy to still access this data so it can -be filtered and aggregated at query time. 
+The querying capabilities of Loki make it easy to still access this data so it can be filtered and aggregated at query time. For example, consider the following log entry: + ``` log_line: "something went wrong" labels: { "level" = "error", "env" = "dev", "user_id" = "f8fas0r" } ``` and this processing stage: + ```river stage.pack { labels = ["env", "user_id"] } ``` -The stage transforms the log entry into the following JSON object, where the two -embedded labels are removed from the original log entry: +The stage transforms the log entry into the following JSON object, where the two embedded labels are removed from the original log entry: + ```json { "_entry": "something went wrong", @@ -981,19 +903,13 @@ embedded labels are removed from the original log entry: } ``` -At query time, Loki's [`unpack` parser](/docs/loki/latest/logql/log_queries/#unpack) -can be used to access these embedded labels and replace the log line with the -original one stored in the `_entry` field automatically. +At query time, Loki's [`unpack` parser](/docs/loki/latest/logql/log_queries/#unpack) can be used to access these embedded labels and replace the log line with the original one stored in the `_entry` field automatically. -When combining several log streams to use with the `pack` stage, you can set -`ingest_timestamp` to true to avoid interlaced timestamps and -out-of-order ingestion issues. +When combining several log streams to use with the `pack` stage, you can set `ingest_timestamp` to true to avoid interlaced timestamps and out-of-order ingestion issues. -### stage.regex block +### stage.regex -The `stage.regex` inner block configures a processing stage that parses log lines -using regular expressions and uses named capture groups for adding data into -the shared extracted map of values. 
+The `stage.regex` inner block configures a processing stage that parses log lines using regular expressions and uses named capture groups for adding data into the shared extracted map of values.

The following arguments are supported:

@@ -1003,19 +919,16 @@ The following arguments are supported:

| `source`     | `string` | Name from extracted data to parse. If empty, uses the log message. | `""`    | no       |

-The `expression` field needs to be a RE2 regex string. Every matched capture
-group is added to the extracted map, so it must be named like: `(?P<name>re)`.
-The name of the capture group is then used as the key in the extracted map for
-the matched value.
+The `expression` field needs to be a RE2 regex string.
+Every matched capture group is added to the extracted map, so it must be named like: `(?P<name>re)`.
+The name of the capture group is then used as the key in the extracted map for the matched value.

-Because of how River strings work, any backslashes in `expression` must be
-escaped with a double backslash; for example `"\\w"` or `"\\S+"`.
+Because of how River strings work, any backslashes in `expression` must be escaped with a double backslash; for example `"\\w"` or `"\\S+"`.

If the `source` is empty or missing, then the stage parses the log line itself.
If it's set, the stage parses a previously extracted value with the same name.

-Given the following log line and regex stage, the extracted values are shown
-below:
+Given the following log line and regex stage, the extracted values are shown below:

```
2019-01-01T01:00:00.000000001Z stderr P i'm a log message!
@@ -1030,11 +943,10 @@
flags: P,
content: i'm a log message
```

-On the other hand, if the `source` value is set, then the regex is applied to
-the value stored in the shared map under that name.
+On the other hand, if the `source` value is set, then the regex is applied to the value stored in the shared map under that name.
+
+The following example shows what happens when this log line is put through a two-stage pipeline:

```
{"timestamp":"2022-01-01T01:00:00.000000001Z"}
@@ -1048,44 +960,41 @@ stage.regex {
```

The first stage adds the following key-value pair into the extracted map:
+
```
time: 2022-01-01T01:00:00.000000001Z
```

-Then, the regex stage parses the value for time from the shared values and
-appends the subsequent key-value pair back into the extracted values map:
+Then, the regex stage parses the value for time from the shared values and appends the subsequent key-value pair back into the extracted values map:
+
```
year: 2022
```

-### stage.replace block
+### stage.replace

-The `stage.replace` inner block configures a stage that parses a log line using a
-regular expression and replaces the log line contents. Named capture groups in
-the regex also support adding data into the shared extracted map.
+The `stage.replace` inner block configures a stage that parses a log line using a regular expression and replaces the log line contents.
+Named capture groups in the regex also support adding data into the shared extracted map.

The following arguments are supported:

| Name         | Type     | Description                                                     | Default | Required |
| ------------ | -------- | --------------------------------------------------------------- | ------- | -------- |
-| `expression` | `string` | Name from extracted data to use for the log entry.              |         | yes      |
+| `expression` | `string` | The RE2 regular expression, with named capture groups.          |         | yes      |
-| `source`     | `string` | Source of the data to parse. If empty, it uses the log message. |         | no       |
| `replace`    | `string` | Value replaced by the capture group.                            |         | no       |
+| `source`     | `string` | Source of the data to parse. If empty, it uses the log message. |         | no       |

-The `source` field defines the source of data to parse using `expression`.
When
-`source` is missing or empty, the stage parses the log line itself, but it can
-also be used to parse a previously extracted value. The replaced value is
-assigned back to the `source` key.
+The `source` field defines the source of data to parse using `expression`.
+When `source` is missing or empty, the stage parses the log line itself, but it can also be used to parse a previously extracted value.
+The replaced value is assigned back to the `source` key.

-The `expression` must be a valid RE2 regex. Every named capture group
-`(?P<name>re)` is set into the extracted map with its name.
+The `expression` must be a valid RE2 regex. Every named capture group `(?P<name>re)` is set into the extracted map with its name.

-Because of how River treats backslashes in double-quoted strings, note that all
-backslashes in a regex expression must be escaped like `"\\w*"`.
+Because of how River treats backslashes in double-quoted strings, note that all backslashes in a regex expression must be escaped like `"\\w*"`.

-Let's see how this works with the following log line and stage. Since `source`
-is omitted, the replacement occurs on the log line itself.
+The following example shows how this works with the log line and stage below.
+Since `source` is omitted, the replacement occurs on the log line itself.

 ```
 2023-01-01T01:00:00.000000001Z stderr P i'm a log message who has sensitive information with password xyz!
@@ -1097,6 +1006,7 @@ stage.replace {
 ```

 The log line is transformed to
+
 ```
 2023-01-01T01:00:00.000000001Z stderr P i'm a log message who has sensitive information with password *****!
 ```
@@ -1104,6 +1014,7 @@ The log line is transformed to
 If `replace` is empty, then the captured value is omitted instead.

 In the following example, `source` is defined.
+ ``` {"time":"2023-01-01T01:00:00.000000001Z", "level": "info", "msg":"11.11.11.11 - \"POST /loki/api/push/ HTTP/1.1\" 200 932 \"-\" \"Mozilla/5.0\"} @@ -1119,25 +1030,25 @@ stage.replace { ``` The JSON stage adds the following key-value pairs into the extracted map: + ``` time: 2023-01-01T01:00:00.000000001Z level: info msg: "11.11.11.11 - "POST /loki/api/push/ HTTP/1.1" 200 932 "-" "Mozilla/5.0" ``` -The `replace` stage acts on the `msg` value. The capture group matches against -`/loki/api/push` and is replaced by `redacted_url`. +The `replace` stage acts on the `msg` value. The capture group matches against `/loki/api/push` and is replaced by `redacted_url`. The `msg` value is finally transformed into: + ``` msg: "11.11.11.11 - "POST redacted_url HTTP/1.1" 200 932 "-" "Mozilla/5.0" ``` -The `replace` field can use a set of templating functions, by utilizing Go's -[text/template](https://pkg.go.dev/text/template) package. +The `replace` field can use a set of templating functions, by utilizing Go's [text/template](https://pkg.go.dev/text/template) package. + +The following example shows how this works with named capture groups with a sample log line and stage. -Let's see how this works with named capture groups with a sample log line -and stage. ``` 11.11.11.11 - agent [01/Jan/2023:00:00:01 +0200] @@ -1147,9 +1058,9 @@ stage.replace { } ``` -Since `source` is empty, the regex parses the log line itself and extracts the -named capture groups to the shared map of values. The `replace` field acts on -these extracted values and converts them to uppercase: +Since `source` is empty, the regex parses the log line itself and extracts the named capture groups to the shared map of values. 
+The `replace` field acts on these extracted values and converts them to uppercase: + ``` ip: 11.11.11.11 identd: - @@ -1158,6 +1069,7 @@ timestamp: 01/JAN/2023:00:00:01 +0200 ``` and the log line becomes: + ``` 11.11.11.11 - FRANK [01/JAN/2023:00:00:01 +0200] ``` @@ -1171,11 +1083,11 @@ ToLower, ToUpper, Replace, Trim, TrimLeftTrimRight, TrimPrefix, TrimSuffix, Trim "*IP4*{{ .Value | Hash "salt" }}*" ``` -### stage.sampling block +### stage.sampling -The `sampling` stage is used to sample the logs. Configuring the value -`rate = 0.1` means that 10% of the logs will continue to be processed. The -remaining 90% of the logs will be dropped. +The `sampling` stage is used to sample the logs. +Configuring the value `rate = 0.1` means that 10% of the logs will continue to be processed. +The remaining 90% of the logs will be dropped. The following arguments are supported: @@ -1184,9 +1096,8 @@ The following arguments are supported: | `rate` | `float` | The sampling rate in a range of `[0, 1]` | | yes | | `drop_counter_reason` | `string` | The label to add to `loki_process_dropped_lines_total` metric when logs are dropped by this stage. | sampling_stage | no | -For example, the configuration below will sample 25% of the logs and drop the -remaining 75%. When logs are dropped, the `loki_process_dropped_lines_total` -metric is incremented with an additional `reason=logs_sampling` label. +For example, the configuration below will sample 25% of the logs and drop the remaining 75%. +When logs are dropped, the `loki_process_dropped_lines_total` metric is incremented with an additional `reason=logs_sampling` label. ```river stage.sampling { @@ -1195,10 +1106,9 @@ stage.sampling { } ``` -### stage.static_labels block +### stage.static_labels -The `stage.static_labels` inner block configures a static_labels processing stage -that adds a static set of labels to incoming log entries. 
+The `stage.static_labels` inner block configures a static_labels processing stage that adds a static set of labels to incoming log entries. The following arguments are supported: @@ -1206,7 +1116,6 @@ The following arguments are supported: | -------- | ------------- | ---------------------------------------------- | ------- | -------- | | `values` | `map(string)` | Configures a `static_labels` processing stage. | `{}` | no | - ```river stage.static_labels { values = { @@ -1216,15 +1125,11 @@ stage.static_labels { } ``` -### stage.template block +### stage.template -The `stage.template` inner block configures a transforming stage that allows users to -manipulate the values in the extracted map by using Go's `text/template` -[package](https://pkg.go.dev/text/template) syntax. This stage is primarily -useful for manipulating and standardizing data from previous stages before -setting them as labels in a subsequent stage. Example use cases are replacing -spaces with underscores, converting uppercase strings to lowercase, or hashing -a value. +The `stage.template` inner block configures a transforming stage that allows users to manipulate the values in the extracted map by using Go's `text/template` [package](https://pkg.go.dev/text/template) syntax. +This stage is primarily useful for manipulating and standardizing data from previous stages before setting them as labels in a subsequent stage. +Example use cases are replacing spaces with underscores, converting uppercase strings to lowercase, or hashing a value. The template stage can also create new keys in the extracted map. @@ -1235,18 +1140,19 @@ The following arguments are supported: | `source` | `string` | Name from extracted data to parse. If the key doesn't exist, a new entry is created. | | yes | | `template` | `string` | Go template string to use. | | yes | -The template string can be any valid template that can be used by Go's `text/template`. 
It supports all functions from the [sprig package](http://masterminds.github.io/sprig/), as well as the following list of custom functions:
+The template string can be any valid template that can be used by Go's `text/template`.
+It supports all functions from the [sprig package](http://masterminds.github.io/sprig/), as well as the following list of custom functions:
+
 ```
 ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight, TrimPrefix, TrimSuffix, TrimSpace, Hash, Sha2Hash, regexReplaceAll, regexReplaceAllLiteral
 ```

-More details on each of these functions can be found in the [supported
-functions][] section below.
+More details on each of these functions can be found in the [supported functions][] section below.

 [supported functions]: #supported-functions

-Assuming no data is present on the extracted map, the following stage simply
-adds the `new_key: "hello_world"`key-value pair to the shared map.
+Assuming no data is present on the extracted map, the following stage simply adds the `new_key: "hello_world"` key-value pair to the shared map.
+
```river
stage.template {
    source = "new_key"
@@ -1255,8 +1161,8 @@ stage.template {
```

 If the `source` value exists in the extract fields, its value can be referred to as `.Value` in the template.
-The next stage takes the current value of `app` from the extracted map,
-converts it to lowercase, and adds a suffix to its value:
+The next stage takes the current value of `app` from the extracted map, converts it to lowercase, and adds a suffix to its value:
+
```river
stage.template {
    source = "app"
@@ -1265,8 +1171,8 @@ stage.template {
```

 Any previously extracted keys are available for `template` to expand and use.
-The next stage takes the current values for `level`, `app` and `module` and
-creates a new key named `output_message`:
+The next stage takes the current values for `level`, `app` and `module` and creates a new key named `output_message`:
+
```river
stage.template {
    source = "output_msg"
@@ -1274,8 +1180,8 @@ stage.template {
}
```

-A special key named `Entry` can be used to reference the current line; this can
-be useful when you need to append/prepend something to the log line, like this snippet:
+A special key named `Entry` can be used to reference the current line; this can be useful when you need to append/prepend something to the log line, like this snippet:
+
```river
stage.template {
    source = "message"
@@ -1287,11 +1193,12 @@ stage.output {
```

 #### Supported functions
+
 In addition to supporting all functions from the [sprig package](http://masterminds.github.io/sprig/),
 the `template` stage supports the following custom functions.

 ##### ToLower and ToUpper
-`ToLower` and `ToUpper` convert the entire string to lowercase and
-uppercase, respectively.
+
+`ToLower` and `ToUpper` convert the entire string to lowercase and uppercase, respectively.

 Examples:
 ```river
@@ -1306,15 +1213,16 @@ stage.template {
 ```

 ##### Replace
+
 The `Replace` function syntax is defined as `{{ Replace <string> <old> <new> <n> }}`.

-The function returns a copy of the input string, with instances of the `<old>`
-argument being replaced by `<new>`. The function replaces up to `<n>`
-non-overlapping instances of the second argument. If `<n>` is less than zero,
-there is no limit on the number of replacement. Finally, if `<old>` is empty,
-it matches before and after every UTF-8 character in the string.
+The function returns a copy of the input string, with instances of the `<old>` argument being replaced by `<new>`.
+The function replaces up to `<n>` non-overlapping instances of the second argument.
+If `<n>` is less than zero, there is no limit on the number of replacements.
+Finally, if `<old>` is empty, it matches before and after every UTF-8 character in the string.

 This example replaces the first two instances of the `loki` word with `Loki`:
+
```river
stage.template {
    source = "output"
@@ -1323,14 +1231,14 @@ stage.template {
}
```

 ##### Trim, TrimLeft, TrimRight, TrimSpace, TrimPrefix, TrimSuffix
-* `Trim` returns a slice of the string `s` with all leading and trailing Unicode
-  code points contained in `cutset` removed.
-* `TrimLeft` and `TrimRight` are the same as Trim except that they
-  trim only leading and trailing characters, respectively.
-* `TrimSpace` returns a slice of the string s, with all leading and trailing
-white space removed, as defined by Unicode.
+
+* `Trim` returns a slice of the string `s` with all leading and trailing Unicode code points contained in `cutset` removed.
+* `TrimLeft` and `TrimRight` are the same as `Trim`, except that they trim only leading and trailing characters, respectively.
+* `TrimSpace` returns a slice of the string `s`, with all leading and trailing white space removed, as defined by Unicode.
 * `TrimPrefix` and `TrimSuffix` trim the supplied prefix or suffix, respectively.
+
 Examples:
+
```river
stage.template {
    source = "output"
@@ -1347,14 +1255,12 @@ stage.template {
}
```

 ##### Regex
-`regexReplaceAll` returns a copy of the input string, replacing matches of the
-Regexp with the replacement string. Inside the replacement string, `$` characters
-are interpreted as in Expand functions, so for instance, $1 represents the first captured
-submatch.
-`regexReplaceAllLiteral` returns a copy of the input string, replacing matches
-of the Regexp with the replacement string. The replacement string is
-substituted directly, without using Expand.
+`regexReplaceAll` returns a copy of the input string, replacing matches of the regular expression with the replacement string.
+Inside the replacement string, `$` characters are interpreted as in Expand functions, so for example, $1 represents the first captured submatch. + +`regexReplaceAllLiteral` returns a copy of the input string, replacing matches of the regular expression with the replacement string. +The replacement string is substituted directly, without using Expand. ```river stage.template { @@ -1368,10 +1274,14 @@ stage.template { ``` ##### Hash and Sha2Hash -`Hash` returns a `Sha3_256` hash of the string, represented as a hexadecimal number of 64 digits. You can use it to obfuscate sensitive data and PII in the logs. It requires a (fixed) salt value, to add complexity to low input domains (e.g., all possible social security numbers). -`Sha2Hash` returns a `Sha2_256` of the string which is faster and less CPU-intensive than `Hash`, however it is less secure. + +`Hash` returns a `Sha3_256` hash of the string, represented as a hexadecimal number of 64 digits. +You can use it to obfuscate sensitive data and PII in the logs. +It requires a fixed salt value, to add complexity to low input domains, for example, all possible social security numbers. +`Sha2Hash` returns a `Sha2_256` of the string which is faster and less CPU-intensive than `Hash`, however it's less secure. Examples: + ```river stage.template { source = "output" @@ -1385,10 +1295,9 @@ stage.template { We recommend using Hash as it has a stronger hashing algorithm. -### stage.tenant block +### stage.tenant -The `stage.tenant` inner block sets the tenant ID for the log entry by obtaining it from a -field in the extracted data map, a label, or a provided value. +The `stage.tenant` inner block sets the tenant ID for the log entry by obtaining it from a field in the extracted data map, a label, or a provided value. The following arguments are supported: @@ -1401,14 +1310,15 @@ The following arguments are supported: The block expects only one of `label`, `source` or `value` to be provided. 
The following stage assigns the fixed value `team-a` as the tenant ID: + ```river stage.tenant { value = "team-a" } ``` -This stage extracts the tenant ID from the `customer_id` field after -parsing the log entry as JSON in the shared extracted map: +This stage extracts the tenant ID from the `customer_id` field after parsing the log entry as JSON in the shared extracted map: + ```river stage.json { expressions = { "customer_id" = "" } @@ -1419,6 +1329,7 @@ stage.tenant { ``` The final example extracts the tenant ID from a label set by a previous stage: + ```river stage.labels { "namespace" = "k8s_namespace" @@ -1428,30 +1339,27 @@ stage.tenant { } ``` -### stage.timestamp block +### stage.timestamp -The `stage.timestamp` inner block configures a processing stage that sets the -timestamp of log entries before they're forwarded to the next component. When -no timestamp stage is set, the log entry timestamp defaults to the time when -the log entry was scraped. +The `stage.timestamp` inner block configures a processing stage that sets the timestamp of log entries before they're forwarded to the next component. +When no timestamp stage is set, the log entry timestamp defaults to the time when the log entry was scraped. The following arguments are supported: | Name | Type | Description | Default | Required | | ------------------- | -------------- | ----------------------------------------------------------- | --------- | -------- | -| `source` | `string` | Name from extracted values map to use for the timestamp. | | yes | | `format` | `string` | Determines how to parse the source string. | | yes | +| `source` | `string` | Name from extracted values map to use for the timestamp. | | yes | +| `action_on_failure` | `string` | What to do when the timestamp can't be extracted or parsed. | `"fudge"` | no | | `fallback_formats` | `list(string)` | Fallback formats to try if the `format` field fails. 
| `[]`      | no       |
| `location`          | `string`       | IANA Timezone Database location to use when parsing.        | `""`      | no       |
-| `action_on_failure` | `string` | What to do when the timestamp can't be extracted or parsed. | `"fudge"` | no |

-The `source` field defines which value from the shared map of extracted values
-the stage should attempt to parse as a timestamp.
+The `source` field defines which value from the shared map of extracted values the stage should attempt to parse as a timestamp.

 The `format` field defines _how_ that source should be parsed.

-First off, the `format` can be set to one of the following shorthand values for
-commonly-used forms:
+The `format` can be set to one of the following shorthand values for commonly used forms:
+
 ```
 ANSIC: Mon Jan _2 15:04:05 2006
 UnixDate: Mon Jan _2 15:04:05 MST 2006
@@ -1465,8 +1373,8 @@ RFC3339: 2006-01-02T15:04:05-07:00
 RFC3339Nano: 2006-01-02T15:04:05.999999999-07:00
 ```

-Additionally, support for common Unix timestamps is supported with the
-following format values:
+Additionally, common Unix timestamps are supported with the following format values:
+
 ```
 Unix: 1562708916 or with fractions 1562708916.000000123
 UnixMs: 1562708916414
@@ -1474,18 +1382,14 @@ UnixUs: 1562708916414123
 UnixNs: 1562708916000000123
 ```

-Otherwise, the field accepts a custom format string that defines how an
-arbitrary reference point in history should
-be interpreted by the stage. The arbitrary reference point is Mon Jan 2 15:04:05 -0700 MST 2006.
+Otherwise, the field accepts a custom format string that defines how an arbitrary reference point in history should be interpreted by the stage.
+The arbitrary reference point is Mon Jan 2 15:04:05 -0700 MST 2006.

-The string value of the field is passed directly to the layout parameter in
-Go's [`time.Parse`](https://pkg.go.dev/time#Parse) function.
+The string value of the field is passed directly to the layout parameter in Go's [`time.Parse`](https://pkg.go.dev/time#Parse) function.
-If the custom format has no year component, the stage uses the current year, -according to the system's clock. +If the custom format has no year component, the stage uses the current year, according to the system's clock. -The following table shows the supported reference values to use when defining a -custom format. +The following table shows the supported reference values to use when defining a custom format. | Timestamp Component | Format value | | ------------------- | ------------------------------------------------------------------------------------------------------------------------ | @@ -1502,24 +1406,18 @@ custom format. | Timezone offset | -0700, -070000 (with seconds), -07, 07:00, -07:00:00 (with seconds) | | Timezone ISO-8601 | Z0700 (Z for UTC or time offset), Z070000, Z07, Z07:00, Z07:00:00 | -The `fallback_formats` field defines one or more format fields to try and parse -the timestamp with, if parsing with `format` fails. +The `fallback_formats` field defines one or more format fields to try and parse the timestamp with, if parsing with `format` fails. -The `location` field must be a valid IANA Timezone Database location and -determines in which timezone the timestamp value is interpreted to be in. +The `location` field must be a valid IANA Timezone Database location and determines in which timezone the timestamp value is interpreted to be in. -The `action_on_failure` field defines what should happen when the source field -doesn't exist in the shared extracted map, or if the timestamp parsing fails. +The `action_on_failure` field defines what should happen when the source field doesn't exist in the shared extracted map, or if the timestamp parsing fails. The supported actions are: -* fudge (default): Change the timestamp to the last known timestamp, summing up - 1 nanosecond (to guarantee log entries ordering). -* skip: Do not change the timestamp and keep the time when the log entry was - scraped. 
+* fudge (default): Change the timestamp to the last known timestamp, summing up 1 nanosecond (to guarantee log entries ordering). +* skip: Don't change the timestamp and keep the time when the log entry was scraped. -The following stage fetches the `time` value from the shared values map, parses -it as a RFC3339 format, and sets it as the log entry's timestamp. +The following stage fetches the `time` value from the shared values map, parses it as a RFC3339 format, and sets it as the log entry's timestamp. ```river stage.timestamp { @@ -1528,9 +1426,10 @@ stage.timestamp { } ``` -### stage.geoip block +### stage.geoip -The `stage.geoip` inner block configures a processing stage that reads an IP address and populates the shared map with geoip fields. Maxmind’s GeoIP2 database is used for the lookup. +The `stage.geoip` inner block configures a processing stage that reads an IP address and populates the shared map with geoip fields. +Maxmind’s GeoIP2 database is used for the lookup. The following arguments are supported: @@ -1538,9 +1437,8 @@ The following arguments are supported: | ---------------- | ------------- | -------------------------------------------------- | ------- | -------- | | `db` | `string` | Path to the Maxmind DB file. | | yes | | `source` | `string` | IP from extracted data to parse. | | yes | -| `db_type` | `string` | Maxmind DB type. Allowed values are "city", "asn". | | no | | `custom_lookups` | `map(string)` | Key-value pairs of JMESPath expressions. | | no | - +| `db_type` | `string` | Maxmind DB type. Allowed values are "city", "asn". | | no | #### GeoIP with City database example: @@ -1575,8 +1473,9 @@ loki.process "example" { } ``` -The `json` stage extracts the IP address from the `client_ip` key in the log line. -Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the following fields in the shared map which are added as labels using the `labels` stage. 
+The `json` stage extracts the IP address from the `client_ip` key in the log line.
+Then the extracted `ip` value is given as the source to the `geoip` stage.
+The `geoip` stage performs a lookup on the IP and populates the following fields in the shared map, which are added as labels using the `labels` stage.

 The extracted data from the IP used in this example:

@@ -1614,8 +1513,8 @@ loki.process "example" {
     }
 }

-The `json` stage extracts the IP address from the `client_ip` key in the log line.
-Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the shared map.
+The `json` stage extracts the IP address from the `client_ip` key in the log line.
+Then the extracted `ip` value is given as the source to the `geoip` stage. The `geoip` stage performs a lookup on the IP and populates the shared map.

 The extracted data from the IP used in this example:

@@ -1653,8 +1552,10 @@ loki.process "example" {
     }
 }
 ```
-The `json` stage extracts the IP address from the `client_ip` key in the log line.
-Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the shared map with the data from the city database results in addition to the custom lookups. Lastly, the custom lookup fields from the shared map are added as labels.
+
+The `json` stage extracts the IP address from the `client_ip` key in the log line.
+Then the extracted `ip` value is given as the source to the `geoip` stage. The `geoip` stage performs a lookup on the IP and populates the shared map with the data from the city database results in addition to the custom lookups.
+Lastly, the custom lookup fields from the shared map are added as labels.

 ## Exported fields

@@ -1670,16 +1571,15 @@ The following fields are exported and can be referenced by other components:

 ## Debug information

-`loki.process` does not expose any component-specific debug information.
+`loki.process` doesn't expose any component-specific debug information. ## Debug metrics +* `loki_process_dropped_lines_by_label_total` (counter): Number of lines dropped when `by_label_name` is non-empty in [stage.limit][]. * `loki_process_dropped_lines_total` (counter): Number of lines dropped as part of a processing stage. -* `loki_process_dropped_lines_by_label_total` (counter): Number of lines dropped when `by_label_name` is non-empty in [stage.limit][]. ## Example -This example creates a `loki.process` component that extracts the `environment` -value from a JSON log line and sets it as a label named 'env'. +This example creates a `loki.process` component that extracts the `environment` value from a JSON log line and sets it as a label named 'env'. ```river loki.process "local" { diff --git a/docs/sources/flow/reference/components/loki.relabel.md b/docs/sources/flow/reference/components/loki.relabel.md index 14425715d3b2..a85239666c32 100644 --- a/docs/sources/flow/reference/components/loki.relabel.md +++ b/docs/sources/flow/reference/components/loki.relabel.md @@ -11,26 +11,19 @@ title: loki.relabel # loki.relabel -The `loki.relabel` component rewrites the label set of each log entry passed to -its receiver by applying one or more relabeling `rule`s and forwards the -results to the list of receivers in the component's arguments. +The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling `rule`s and forwards the results to the list of receivers in the component's arguments. -If no labels remain after the relabeling rules are applied, then the log -entries are dropped. +If no labels remain after the relabeling rules are applied, then the log entries are dropped. -The most common use of `loki.relabel` is to filter log entries or standardize -the label set that is passed to one or more downstream receivers. 
The `rule` -blocks are applied to the label set of each log entry in order of their -appearance in the configuration file. The configured rules can be retrieved by -calling the function in the `rules` export field. +The most common use of `loki.relabel` is to filter log entries or standardize the label set that is passed to one or more downstream receivers. +The `rule` blocks are applied to the label set of each log entry in order of their appearance in the configuration file. +The configured rules can be retrieved by calling the function in the `rules` export field. -If you're looking for a way to process the log entry contents, take a look at -[the `loki.process` component][loki.process] instead. +If you're looking for a way to process the log entry contents, take a look at [the `loki.process` component][loki.process] instead. [loki.process]: {{< relref "./loki.process.md" >}} -Multiple `loki.relabel` components can be specified by giving them -different labels. +Multiple `loki.relabel` components can be specified by giving them different labels. ## Usage @@ -50,32 +43,32 @@ loki.relabel "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(receiver)` | Where to forward log entries after relabeling. | | yes -`max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache | 10,000 | no +Name | Type | Description | Default | Required +-----------------|------------------|----------------------------------------------------------------|---------|--------- +`forward_to` | `list(receiver)` | Where to forward log entries after relabeling. 
| | yes +`max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache | 10,000 | no ## Blocks The following blocks are supported inside the definition of `loki.relabel`: -Hierarchy | Name | Description | Required ---------- | ---- | ----------- | -------- -rule | [rule][] | Relabeling rules to apply to received log entries. | no +Hierarchy | Name | Description | Required +----------|----------|----------------------------------------------------|--------- +rule | [rule][] | Relabeling rules to apply to received log entries. | no [rule]: #rule-block -### rule block +### rule -{{< docs/shared lookup="flow/reference/components/rule-block-logs.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/rule-block-logs.md" source="agent" version="" >}} ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`receiver` | `receiver` | The input receiver where log lines are sent to be relabeled. +Name | Type | Description +-----------|----------------|------------------------------------------------------------- +`receiver` | `receiver` | The input receiver where log lines are sent to be relabeled. `rules` | `RelabelRules` | The currently configured relabeling rules. ## Component health @@ -89,16 +82,15 @@ In those cases, exported fields are kept at their last healthy values. ## Debug metrics -* `loki_relabel_entries_processed` (counter): Total number of log entries processed. -* `loki_relabel_entries_written` (counter): Total number of log entries forwarded. -* `loki_relabel_cache_misses` (counter): Total number of cache misses. * `loki_relabel_cache_hits` (counter): Total number of cache hits. +* `loki_relabel_cache_misses` (counter): Total number of cache misses. * `loki_relabel_cache_size` (gauge): Total size of relabel cache. +* `loki_relabel_entries_processed` (counter): Total number of log entries processed. 
+* `loki_relabel_entries_written` (counter): Total number of log entries forwarded. ## Example -The following example creates a `loki.relabel` component that only forwards -entries whose 'level' value is set to 'error'. +The following example creates a `loki.relabel` component that only forwards entries whose 'level' value is set to 'error'. ```river loki.relabel "keep_error_only" { @@ -111,4 +103,3 @@ loki.relabel "keep_error_only" { } } ``` - diff --git a/docs/sources/flow/reference/components/loki.source.api.md b/docs/sources/flow/reference/components/loki.source.api.md index 966589bd64a1..2f29f507ec81 100644 --- a/docs/sources/flow/reference/components/loki.source.api.md +++ b/docs/sources/flow/reference/components/loki.source.api.md @@ -13,7 +13,8 @@ title: loki.source.api `loki.source.api` receives log entries over HTTP and forwards them to other `loki.*` components. -The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the `logproto` format. This means that other [`loki.write`][loki.write] components can be used as a client and send requests to `loki.source.api` which enables using the Agent as a proxy for logs. +The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the `logproto` format. +This means that other [`loki.write`][loki.write] components can be used as a client and send requests to `loki.source.api` which enables using the Agent as a proxy for logs. 
[loki.write]: {{< relref "./loki.write.md" >}}
[loki-push-api]: https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki

@@ -24,7 +25,7 @@ The HTTP API exposed is compatible with [Loki push API][loki-push-api] and the `
 loki.source.api "LABEL" {
     http {
         listen_address = "LISTEN_ADDRESS"
-        listen_port = PORT
+        listen_port    = PORT
     }
     forward_to = RECEIVER_LIST
 }
@@ -33,9 +34,9 @@ loki.source.api "LABEL" {

 The component starts an HTTP server on the configured port and address with the following endpoints:

 - `/loki/api/v1/push` - accepting `POST` requests compatible with [Loki push API][loki-push-api], for example, from another Grafana Agent's [`loki.write`][loki.write] component.
-- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in body. This can be used to send NDJSON or plaintext logs. This is compatible with promtail's push API endpoint - see [promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored.
+- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in the body. This can be used to send NDJSON or plaintext logs. This is compatible with Promtail's push API endpoint - see [promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored.
 - `/loki/ready` - accepting `GET` requests - can be used to confirm the server is reachable and healthy.
-- `/api/v1/push` - internally reroutes to `/loki/api/v1/push`
+- `/api/v1/push` - internally reroutes to `/loki/api/v1/push`
- `/api/v1/raw` - internally reroutes to `/loki/api/v1/raw`
 
@@ -45,15 +46,14 @@ The component will start HTTP server on the configured port and address with the
 
 `loki.source.api` supports the following arguments:
 
- Name | Type | Description | Default | Required
---------------------------|----------------------|------------------------------------------------------------|---------|----------
- `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
- `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no
- `labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no
- `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
+Name | Type | Description | Default | Required
+-------------------------|----------------------|------------------------------------------------------------|---------|---------
+`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
+`labels` | `map(string)` | The labels to associate with each received log record. | `{}` | no
+`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
+`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from the request. | `false` | no
 
-The `relabel_rules` field can make use of the `rules` export value from a
-[`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`.
+The `relabel_rules` field can make use of the `rules` export value from a [`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`.
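As a sketch of how `relabel_rules` is typically wired up (the drop rule, the `environment` label, and the downstream `loki.write.local.receiver` are illustrative assumptions, not values mandated by the component):

```river
// Illustrative only: drop entries labeled environment="dev" before
// loki.source.api forwards anything to its receivers.
loki.relabel "filter_dev" {
  forward_to = []

  rule {
    action        = "drop"
    source_labels = ["environment"]
    regex         = "dev"
  }
}

loki.source.api "example" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9999
  }
  forward_to    = [loki.write.local.receiver]
  relabel_rules = loki.relabel.filter_dev.rules
}
```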
[loki.relabel]: {{< relref "./loki.relabel.md" >}} @@ -61,19 +61,19 @@ The `relabel_rules` field can make use of the `rules` export value from a The following blocks are supported inside the definition of `loki.source.api`: - Hierarchy | Name | Description | Required ------------|----------|----------------------------------------------------|---------- - `http` | [http][] | Configures the HTTP server that receives requests. | no +Hierarchy | Name | Description | Required +----------|----------|----------------------------------------------------|--------- +`http` | [http][] | Configures the HTTP server that receives requests. | no [http]: #http ### http -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ## Exported fields -`loki.source.api` does not export any fields. +`loki.source.api` doesn't export any fields. ## Component health @@ -81,7 +81,8 @@ The following blocks are supported inside the definition of `loki.source.api`: ## Debug metrics -The following are some of the metrics that are exposed when this component is used. Note that the metrics include labels such as `status_code` where relevant, which can be used to measure request success rates. +The following are some of the metrics that are exposed when this component is used. +Note that the metrics include labels such as `status_code` where relevant, which can be used to measure request success rates. * `loki_source_api_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. * `loki_source_api_request_message_bytes` (histogram): Size (in bytes) of messages received in the request. @@ -90,7 +91,9 @@ The following are some of the metrics that are exposed when this component is us ## Example -This example starts an HTTP server on `0.0.0.0` address and port `9999`. 
The server receives log entries and forwards them to a `loki.write` component while adding a `forwarded="true"` label. The `loki.write` component will send the logs to the specified loki instance using basic auth credentials provided.
+This example starts an HTTP server on `0.0.0.0` address and port `9999`.
+The server receives log entries and forwards them to a `loki.write` component while adding a `forwarded="true"` label.
+The `loki.write` component will send the logs to the specified Loki instance using the basic auth credentials provided.
 
 ```river
 loki.write "local" {
@@ -116,4 +119,3 @@ loki.source.api "loki_push_api" {
   }
 }
 ```
-
diff --git a/docs/sources/flow/reference/components/loki.source.awsfirehose.md b/docs/sources/flow/reference/components/loki.source.awsfirehose.md
index b080adcaced9..800eb4739787 100644
--- a/docs/sources/flow/reference/components/loki.source.awsfirehose.md
+++ b/docs/sources/flow/reference/components/loki.source.awsfirehose.md
@@ -11,46 +11,42 @@ title: loki.source.awsfirehose
 
 # loki.source.awsfirehose
 
-`loki.source.awsfirehose` receives log entries over HTTP
-from [AWS Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html)
-and forwards them to other `loki.*` components.
+`loki.source.awsfirehose` receives log entries over HTTP from [AWS Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) and forwards them to other `loki.*` components.
 
-The HTTP API exposed is compatible
-with the [Firehose HTTP Delivery API](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).
-Since the API model that AWS Firehose uses to deliver data over HTTP is generic enough, the same component can be used
-to receive data from multiple origins:
+The HTTP API exposed is compatible with the [Firehose HTTP Delivery API](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).
+Since the API model that AWS Firehose uses to deliver data over HTTP is generic enough, the same component can be used to receive data from multiple origins:
 
 - [AWS CloudWatch logs](https://docs.aws.amazon.com/firehose/latest/dev/writing-with-cloudwatch-logs.html)
 - [AWS CloudWatch events](https://docs.aws.amazon.com/firehose/latest/dev/writing-with-cloudwatch-events.html)
 - Custom data through [DirectPUT requests](https://docs.aws.amazon.com/firehose/latest/dev/writing-with-sdk.html)
 
-The component uses a heuristic to try to decode as much information as possible from each log record, and it falls back to writing
-the raw records to Loki. The decoding process goes as follows:
+The component uses a heuristic to try to decode as much information as possible from each log record, and it falls back to writing the raw records to Loki.
+The decoding process goes as follows:
 
-- AWS Firehose sends batched requests
-- Each record is treated individually
+- AWS Firehose sends batched requests.
+- Each record is treated individually.
 - For each `record` received in each request:
-    - If the `record` comes from a [CloudWatch logs subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#DestinationKinesisExample), it is decoded and each logging event is written to Loki
-    - All other records are written raw to Loki
+    - If the `record` comes from a [CloudWatch logs subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#DestinationKinesisExample), it is decoded and each logging event is written to Loki.
+    - All other records are written raw to Loki.
 
-The component exposes some internal labels, available for relabeling. The following tables describes internal labels available
-in records coming from any source.
+The component exposes some internal labels, available for relabeling.
+The following table describes the internal labels available in records coming from any source.
-| Name | Description | Example | -|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------| -| `__aws_firehose_request_id` | Firehose request ID. | `a1af4300-6c09-4916-ba8f-12f336176246` | -| `__aws_firehose_source_arn` | Firehose delivery stream ARN. | `arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream` | +| Name | Description | Example | +|-----------------------------|-------------------------------|--------------------------------------------------------------------------| +| `__aws_firehose_request_id` | Firehose request ID. | `a1af4300-6c09-4916-ba8f-12f336176246` | +| `__aws_firehose_source_arn` | Firehose delivery stream ARN. | `arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream` | If the source of the Firehose record is CloudWatch logs, the request is further decoded and enriched with even more labels, exposed as follows: -| Name | Description | Example | -|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------| -| `__aws_owner` | The AWS Account ID of the originating log data. | `111111111111` | -| `__aws_cw_log_group` | The log group name of the originating log data. | `CloudTrail/logs` | -| `__aws_cw_log_stream` | The log stream name of the originating log data. | `111111111111_CloudTrail/logs_us-east-1` | -| `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. 
| `Destination,Destination2` | -| `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | +| Name | Description | Example | +|----------------------------|--------------------------------------------------|------------------------------------------| +| `__aws_owner` | The AWS Account ID of the originating log data. | `111111111111` | +| `__aws_cw_log_group` | The log group name of the originating log data. | `CloudTrail/logs` | +| `__aws_cw_log_stream` | The log stream name of the originating log data. | `111111111111_CloudTrail/logs_us-east-1` | +| `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. | `Destination,Destination2` | +| `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | See [Examples](#example) for a full example configuration showing how to enrich each log entry with these labels. @@ -60,7 +56,7 @@ See [Examples](#example) for a full example configuration showing how to enrich loki.source.awsfirehose "LABEL" { http { listen_address = "LISTEN_ADDRESS" - listen_port = PORT + listen_port = PORT } forward_to = RECEIVER_LIST } @@ -68,22 +64,19 @@ loki.source.awsfirehose "LABEL" { The component will start an HTTP server on the configured port and address with the following endpoints: -- `/awsfirehose/api/v1/push` - accepting `POST` requests compatible - with [AWS Firehose HTTP Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html). 
+- `/awsfirehose/api/v1/push` - accepting `POST` requests compatible with [AWS Firehose HTTP Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html). ## Arguments `loki.source.awsfirehose` supports the following arguments: -| Name | Type | Description | Default | Required | - |--------------------------|----------------------|------------------------------------------------------------|---------|----------| -| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| Name | Type | Description | Default | Required | +|--------------------------|----------------------|----------------------------------------------------------------|---------|----------| +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | | `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from the request. | `false` | no | -| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | -The `relabel_rules` field can make use of the `rules` export value from a -[`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded -to the list of receivers in `forward_to`. +The `relabel_rules` field can make use of the `rules` export value from a [`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. [loki.relabel]: {{< relref "./loki.relabel.md" >}} @@ -92,25 +85,25 @@ to the list of receivers in `forward_to`. 
The following blocks are supported inside the definition of `loki.source.awsfirehose`: | Hierarchy | Name | Description | Required | - |-----------|----------|----------------------------------------------------|----------| -| `http` | [http][] | Configures the HTTP server that receives requests. | no | +|-----------|----------|----------------------------------------------------|----------| | `grpc` | [grpc][] | Configures the gRPC server that receives requests. | no | +| `http` | [http][] | Configures the HTTP server that receives requests. | no | [http]: #http [grpc]: #grpc -### http +### grpc -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} -### grpc +### http -{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ## Exported fields -`loki.source.awsfirehose` does not export any fields. +`loki.source.awsfirehose` doesn't export any fields. ## Component health @@ -118,21 +111,22 @@ The following blocks are supported inside the definition of `loki.source.awsfire ## Debug metrics -The following are some of the metrics that are exposed when this component is used. +The following are some of the metrics that are exposed when this component is used. + {{% admonition type="note" %}} The metrics include labels such as `status_code` where relevant, which you can use to measure request success rates. {{%/admonition %}} -- `loki_source_awsfirehose_request_errors` (counter): Count of errors while receiving a request. +- `loki_source_awsfirehose_batch_size` (histogram): Size (in units) of the number of records received per request. - `loki_source_awsfirehose_record_errors` (counter): Count of errors while decoding an individual record. 
- `loki_source_awsfirehose_records_received` (counter): Count of records received.
-- `loki_source_awsfirehose_batch_size` (histogram): Size (in units) of the number of records received per request.
+- `loki_source_awsfirehose_request_errors` (counter): Count of errors while receiving a request.
 
 ## Example
 
-This example starts an HTTP server on `0.0.0.0` address and port `9999`. The server receives log entries and forwards
-them to a `loki.write` component. The `loki.write` component will send the logs to the specified loki instance using
-basic auth credentials provided.
+This example starts an HTTP server on `0.0.0.0` address and port `9999`.
+The server receives log entries and forwards them to a `loki.write` component.
+The `loki.write` component will send the logs to the specified Loki instance using the basic auth credentials provided.
 
 ```river
 loki.write "local" {
@@ -156,9 +150,8 @@ loki.source.awsfirehose "loki_fh_receiver" {
 }
 ```
 
-As another example, if you are receiving records that originated from a CloudWatch logs subscription, you can enrich each
-received entry by relabeling internal labels. The following configuration builds upon the one above but keeps the origin
-log stream and group as `log_stream` and `log_group`, respectively.
+As another example, if you are receiving records that originated from a CloudWatch logs subscription, you can enrich each received entry by relabeling internal labels.
+The following configuration builds upon the one above but keeps the origin log stream and group as `log_stream` and `log_group`, respectively.
```river loki.write "local" { diff --git a/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md b/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md index a90320e069ef..c2ff92ad1747 100644 --- a/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md +++ b/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md @@ -42,25 +42,23 @@ loki.source.azure_event_hubs "LABEL" { `loki.source.azure_event_hubs` supports the following arguments: - Name | Type | Description | Default | Required ------------------------------|----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|---------- - `fully_qualified_namespace` | `string` | Event hub namespace. | | yes - `event_hubs` | `list(string)` | Event Hubs to consume. | | yes - `group_id` | `string` | The Kafka consumer group id. | `"loki.source.azure_event_hubs"` | no - `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Azure Event Hub. | `false` | no - `labels` | `map(string)` | The labels to associate with each received event. | `{}` | no - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no - `disallow_custom_messages` | `bool` | Whether to ignore messages that don't match the [schema](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-schema) for Azure resource logs. | `false` | no - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. 
| `{}` | no
+Name | Type | Description | Default | Required
+----------------------------|----------------------|--------------------------------------------------------------------|----------------------------------|---------
+`fully_qualified_namespace` | `string` | Event hub namespace. | | yes
+`event_hubs` | `list(string)` | Event Hubs to consume. | | yes
+`group_id` | `string` | The Kafka consumer group id. | `"loki.source.azure_event_hubs"` | no
+`assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no
+`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Azure Event Hub. | `false` | no
+`labels` | `map(string)` | The labels to associate with each received event. | `{}` | no
+`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
+`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
+`disallow_custom_messages` | `bool` | Whether to ignore messages that don't match the [schema](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-schema) for Azure resource logs. | `false` | no
 
 The `fully_qualified_namespace` argument must refer to a full `HOST:PORT` that points to your event hub, such as `NAMESPACE.servicebus.windows.net:9093`.
 The `assignor` argument must be set to one of `"range"`, `"roundrobin"`, or `"sticky"`.
 
-The `relabel_rules` field can make use of the `rules` export value from a
-`loki.relabel` component to apply one or more relabeling rules to log entries
-before they're forwarded to the list of receivers in `forward_to`.
+The `relabel_rules` field can make use of the `rules` export value from a `loki.relabel` component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`.
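For example, a minimal wiring of this field might look like the sketch below; the namespace, event hub name, relabel rule, and downstream `loki.write.local.receiver` are placeholder assumptions:

```river
// Sketch: copy the internal category label to a regular label before
// entries are forwarded. All concrete values here are placeholders.
loki.relabel "keep_category" {
  forward_to = []

  rule {
    source_labels = ["__azure_event_hubs_category"]
    target_label  = "category"
  }
}

loki.source.azure_event_hubs "example" {
  fully_qualified_namespace = "NAMESPACE.servicebus.windows.net:9093"
  event_hubs                = ["EVENT_HUB"]
  forward_to                = [loki.write.local.receiver]
  relabel_rules             = loki.relabel.keep_category.rules

  authentication {
    mechanism = "oauth"
  }
}
```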
### Labels @@ -68,42 +66,40 @@ The `labels` map is applied to every message that the component reads. The following internal labels prefixed with `__` are available but are discarded if not relabeled: +- `__azure_event_hubs_category` +- `__meta_kafka_group_id` +- `__meta_kafka_member_id` - `__meta_kafka_message_key` -- `__meta_kafka_topic` - `__meta_kafka_partition` -- `__meta_kafka_member_id` -- `__meta_kafka_group_id` -- `__azure_event_hubs_category` +- `__meta_kafka_topic` ## Blocks The following blocks are supported inside the definition of `loki.source.azure_event_hubs`: - Hierarchy | Name | Description | Required -----------------|------------------|----------------------------------------------------|---------- - authentication | [authentication] | Authentication configuration with Azure Event Hub. | yes +Hierarchy | Name | Description | Required +---------------|------------------|----------------------------------------------------|--------- +authentication | [authentication] | Authentication configuration with Azure Event Hub. | yes [authentication]: #authentication-block -### authentication block +### authentication The `authentication` block defines the authentication method when communicating with Azure Event Hub. - Name | Type | Description | Default | Required ----------------------|----------------|---------------------------------------------------------------------------|---------|---------- - `mechanism` | `string` | Authentication mechanism. | | yes - `connection_string` | `string` | Event Hubs ConnectionString for authentication on Azure Cloud. | | no - `scopes` | `list(string)` | Access token scopes. Default is `fully_qualified_namespace` without port. | | no +Name | Type | Description | Default | Required +--------------------|----------------|---------------------------------------------------------------------------|---------|--------- +`mechanism` | `string` | Authentication mechanism. 
| | yes +`connection_string` | `string` | Event Hubs ConnectionString for authentication on Azure Cloud. | | no +`scopes` | `list(string)` | Access token scopes. Default is `fully_qualified_namespace` without port. | | no -`mechanism` supports the values `"connection_string"` and `"oauth"`. If `"connection_string"` is used, -you must set the `connection_string` attribute. If `"oauth"` is used, you must configure one of the supported credential -types as documented -here: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azidentity/README.md#credential-types via environment -variables or Azure CLI. +`mechanism` supports the values `"connection_string"` and `"oauth"`. +If `"connection_string"` is used, you must set the `connection_string` attribute. +If `"oauth"` is used, you must configure one of the [supported credential types](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azidentity/README.md#credential-types) via environment variables or the Azure CLI. ## Exported fields -`loki.source.azure_event_hubs` does not export any fields. +`loki.source.azure_event_hubs` doesn't export any fields. ## Component health @@ -112,7 +108,7 @@ configuration. ## Debug information -`loki.source.azure_event_hubs` does not expose additional debug info. +`loki.source.azure_event_hubs` doesn't expose additional debug info. ## Example @@ -134,4 +130,4 @@ loki.write "example" { url = "loki:3100/api/v1/push" } } -``` \ No newline at end of file +``` diff --git a/docs/sources/flow/reference/components/loki.source.cloudflare.md b/docs/sources/flow/reference/components/loki.source.cloudflare.md index 33d1bf0015a5..9b0f8d453c9f 100644 --- a/docs/sources/flow/reference/components/loki.source.cloudflare.md +++ b/docs/sources/flow/reference/components/loki.source.cloudflare.md @@ -11,12 +11,9 @@ title: loki.source.cloudflare # loki.source.cloudflare -`loki.source.cloudflare` pulls logs from the Cloudflare Logpull API and -forwards them to other `loki.*` components. 
+`loki.source.cloudflare` pulls logs from the Cloudflare Logpull API and forwards them to other `loki.*` components. -These logs contain data related to the connecting client, the request path -through the Cloudflare network, and the response from the origin web server and -can be useful for enriching existing logs on an origin server. +These logs contain data related to the connecting client, the request path through the Cloudflare network, and the response from the origin web server and can be useful for enriching existing logs on an origin server. Multiple `loki.source.cloudflare` components can be specified by giving them different labels. @@ -36,65 +33,60 @@ loki.source.cloudflare "LABEL" { `loki.source.cloudflare` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | -------------------- | -------------------- | ------- | -------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`api_token` | `string` | The API token to authenticate with. | | yes -`zone_id` | `string` | The Cloudflare zone ID to use. | | yes -`labels` | `map(string)` | The labels to associate with incoming log entries. | `{}` | no -`workers` | `int` | The number of workers to use for parsing logs. | `3` | no -`pull_range` | `duration` | The timeframe to fetch for each pull request. | `"1m"` | no -`fields_type` | `string` | The set of fields to fetch for log entries. | `"default"` | no -`additional_fields` | `list(string)` | The additional list of fields to supplement those provided via `fields_type`. | | no +Name | Type | Description | Default | Required +--------------------|----------------------|-------------------------------------------------------------------------------|-------------|--------- +`api_token` | `string` | The API token to authenticate with. | | yes +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes +`zone_id` | `string` | The Cloudflare zone ID to use. | | yes +`additional_fields` | `list(string)` | The additional list of fields to supplement those provided via `fields_type`. | | no +`fields_type` | `string` | The set of fields to fetch for log entries. | `"default"` | no +`labels` | `map(string)` | The labels to associate with incoming log entries. | `{}` | no +`pull_range` | `duration` | The timeframe to fetch for each pull request. | `"1m"` | no +`workers` | `int` | The number of workers to use for parsing logs. | `3` | no -By default `loki.source.cloudflare` fetches logs with the `default` set of -fields. Here are the different sets of `fields_type` available for selection, -and the fields they include: +By default `loki.source.cloudflare` fetches logs with the `default` set of fields. +The following list shows the different sets of `fields_type` available for selection, and the fields they include: * `default` includes: -``` -"ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeRequestHost", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID" -``` -plus any extra fields provided via `additional_fields` argument. + ``` + "ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeRequestHost", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID" + ``` + plus any extra fields provided via `additional_fields` argument. * `minimal` includes all `default` fields and adds: -``` -"ZoneID", "ClientSSLProtocol", "ClientRequestProtocol", "ClientRequestPath", "ClientRequestUserAgent", "ClientRequestReferer", "EdgeColoCode", "ClientCountry", "CacheCacheStatus", "CacheResponseStatus", "EdgeResponseContentType" -``` -plus any extra fields provided via `additional_fields` argument. 
+ ``` + "ZoneID", "ClientSSLProtocol", "ClientRequestProtocol", "ClientRequestPath", "ClientRequestUserAgent", "ClientRequestReferer", "EdgeColoCode", "ClientCountry", "CacheCacheStatus", "CacheResponseStatus", "EdgeResponseContentType" + ``` + plus any extra fields provided via `additional_fields` argument. * `extended` includes all `minimal` fields and adds: -``` -"ClientSSLCipher", "ClientASN", "ClientIPClass", "CacheResponseBytes", "EdgePathingOp", "EdgePathingSrc", "EdgePathingStatus", "ParentRayID", "WorkerCPUTime", "WorkerStatus", "WorkerSubrequest", "WorkerSubrequestCount", "OriginIP", "OriginResponseStatus", "OriginSSLProtocol", "OriginResponseHTTPExpires", "OriginResponseHTTPLastModified" -``` -plus any extra fields provided via `additional_fields` argument. + ``` + "ClientSSLCipher", "ClientASN", "ClientIPClass", "CacheResponseBytes", "EdgePathingOp", "EdgePathingSrc", "EdgePathingStatus", "ParentRayID", "WorkerCPUTime", "WorkerStatus", "WorkerSubrequest", "WorkerSubrequestCount", "OriginIP", "OriginResponseStatus", "OriginSSLProtocol", "OriginResponseHTTPExpires", "OriginResponseHTTPLastModified" + ``` + plus any extra fields provided via `additional_fields` argument. * `all` includes all `extended` fields and adds: -``` - "BotScore", "BotScoreSrc", "BotTags", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources", "FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID", "RequestHeaders", "ResponseHeaders", "ClientRequestSource"` -``` -plus any extra fields provided via `additional_fields` argument (this is still relevant in this case if new fields are made available via Cloudflare API but are not yet included in `all`). 
+  ```
+  "BotScore", "BotScoreSrc", "BotTags", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources", "FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID", "RequestHeaders", "ResponseHeaders", "ClientRequestSource"
+  ```
+  plus any extra fields provided via `additional_fields` argument (this is still relevant in this case if new fields are made available via Cloudflare API but are not yet included in `all`).
 
 * `custom` includes only the fields defined in `additional_fields`.
 
-The component saves the last successfully-fetched timestamp in its positions
-file. If a position is found in the file for a given zone ID, the component
-restarts pulling logs from that timestamp. When no position is found, the
-component starts pulling logs from the current time.
+The component saves the last successfully-fetched timestamp in its positions file.
+If a position is found in the file for a given zone ID, the component restarts pulling logs from that timestamp.
+When no position is found, the component starts pulling logs from the current time.
 
-Logs are fetched using multiple `workers` which request the last available
-`pull_range` repeatedly. It is possible to fall behind due to having too many
-log lines to process for each pull; adding more workers, decreasing the pull
-range, or decreasing the quantity of fields fetched can mitigate this
-performance issue.
+Logs are fetched using multiple `workers` which request the last available `pull_range` repeatedly.
+It's possible to fall behind due to having too many log lines to process for each pull.
+Adding more workers, decreasing the pull range, or decreasing the quantity of fields fetched can mitigate this performance issue.
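Putting those knobs together, a tuning sketch might look like this; the specific values and the downstream `loki.write.local.receiver` are assumptions for illustration, not recommendations:

```river
// Illustrative tuning: more workers and a shorter pull_range help keep up
// with high log volume, and "minimal" fetches fewer fields per entry.
loki.source.cloudflare "tuned" {
  api_token   = "CLOUDFLARE_API_TOKEN"
  zone_id     = "ZONE_ID"
  workers     = 6
  pull_range  = "30s"
  fields_type = "minimal"
  forward_to  = [loki.write.local.receiver]
}
```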
-The last timestamp fetched by the component is recorded in the -`loki_source_cloudflare_target_last_requested_end_timestamp` debug metric. +The last timestamp fetched by the component is recorded in the `loki_source_cloudflare_target_last_requested_end_timestamp` debug metric. + +All incoming Cloudflare log entries are in JSON format. You can make use of the `loki.process` component and a JSON processing stage to extract more labels or change the log line format. +A sample log looks like this: -All incoming Cloudflare log entries are in JSON format. You can make use of the -`loki.process` component and a JSON processing stage to extract more labels or -change the log line format. A sample log looks like this: ```json { "CacheCacheStatus": "miss", @@ -165,15 +157,13 @@ change the log line format. A sample log looks like this: } ``` - ## Exported fields -`loki.source.cloudflare` does not export any fields. +`loki.source.cloudflare` doesn't export any fields. ## Component health -`loki.source.cloudflare` is only reported as unhealthy if given an invalid -configuration. +`loki.source.cloudflare` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -192,8 +182,7 @@ configuration. ## Example -This example pulls logs from Cloudflare's API and forwards them to a -`loki.write` component. +This example pulls logs from Cloudflare's API and forwards them to a `loki.write` component. ```river loki.source.cloudflare "dev" { diff --git a/docs/sources/flow/reference/components/loki.source.docker.md b/docs/sources/flow/reference/components/loki.source.docker.md index 0bb11ddecb17..b50b0ef2121a 100644 --- a/docs/sources/flow/reference/components/loki.source.docker.md +++ b/docs/sources/flow/reference/components/loki.source.docker.md @@ -12,12 +12,10 @@ title: loki.source.docker # loki.source.docker -`loki.source.docker` reads log entries from Docker containers and forwards them -to other `loki.*` components. 
Each component can read from a single Docker -daemon. +`loki.source.docker` reads log entries from Docker containers and forwards them to other `loki.*` components. +Each component can read from a single Docker daemon. -Multiple `loki.source.docker` components can be specified by giving them -different labels. +Multiple `loki.source.docker` components can be specified by giving them different labels. ## Usage @@ -30,38 +28,36 @@ loki.source.docker "LABEL" { ``` ## Arguments -The component starts a new reader for each of the given `targets` and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.file` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | -------------------- | -------------------- | ------- | -------- -`host` | `string` | Address of the Docker daemon. | | yes -`targets` | `list(map(string))` | List of containers to read logs from. | | yes -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`labels` | `map(string)` | The default set of labels to apply on entries. | `"{}"` | no -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `"{}"` | no -`refresh_interval` | `duration` | The refresh interval to use when connecting to the Docker daemon over HTTP(S). | `"60s"` | no +Name | Type | Description | Default | Required +-------------------|----------------------|--------------------------------------------------------------------------------|---------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`host` | `string` | Address of the Docker daemon. | | yes +`targets` | `list(map(string))` | List of containers to read logs from. | | yes +`labels` | `map(string)` | The default set of labels to apply on entries. 
| `"{}"` | no +`refresh_interval` | `duration` | The refresh interval to use when connecting to the Docker daemon over HTTP(S). | `"60s"` | no +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `"{}"` | no ## Blocks The following blocks are supported inside the definition of `loki.source.docker`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | HTTP client settings when connecting to the endpoint. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +-----------------------------|-------------------|----------------------------------------------------------|--------- +client | [client][] | HTTP client settings when connecting to the endpoint. | no +client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no +client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, `client > -basic_auth` refers to an `basic_auth` block defined inside a `client` block. +The `>` symbol indicates deeper levels of nesting. 
+For example, `client > basic_auth` refers to a `basic_auth` block defined inside a `client` block.

-These blocks are only applicable when connecting to a Docker daemon over HTTP
-or HTTPS and has no effect when connecting via a `unix:///` socket
+These blocks are only applicable when connecting to a Docker daemon over HTTP or HTTPS and have no effect when connecting via a `unix:///` socket.

[client]: #client-block
[basic_auth]: #basic_auth-block
@@ -69,49 +65,49 @@ or HTTPS and has no effect when connecting via a `unix:///` socket
[oauth2]: #oauth2-block
[tls_config]: #tls_config-block

-### client block
+### client

-The `client` block configures settings used to connect to HTTP(S) Docker
-daemons.
+The `client` block configures settings used to connect to HTTP(S) Docker daemons.

{{< docs/shared lookup="flow/reference/components/http-client-config-block.md" source="agent" version="" >}}

-### basic_auth block
+### client > authorization

-The `basic_auth` block configures basic authentication for HTTP(S) Docker
-daemons.
+The `authorization` block configures custom authorization to use for the Docker daemon.

-{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}

-### authorization block
+### client > basic_auth

-The `authorization` block configures custom authorization to use for the Docker
-daemon.
+The `basic_auth` block configures basic authentication for HTTP(S) Docker daemons.

-{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}

-### oauth2 block
+### client > oauth2

-The `oauth2` block configures OAuth2 authorization to use for the Docker
-daemon.
+The `oauth2` block configures OAuth2 authorization to use for the Docker daemon.
{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### client > oauth2 > tls_config + +The `tls_config` block configures TLS settings for connecting to HTTPS Docker daemons. + +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} + +### client > tls_config -The `tls_config` block configures TLS settings for connecting to HTTPS Docker -daemons. +The `tls_config` block configures TLS settings for connecting to HTTPS Docker daemons. {{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields -`loki.source.docker` does not export any fields. +`loki.source.docker` doesn't export any fields. ## Component health -`loki.source.docker` is only reported as unhealthy if given an invalid -configuration. +`loki.source.docker` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -126,15 +122,12 @@ configuration. * `loki_source_docker_target_parsing_errors_total` (gauge): Total number of parsing errors while receiving Docker messages. ## Component behavior -The component uses its data path (a directory named after the domain's -fully qualified name) to store its _positions file_. The positions file -stores the read offsets so that if there is a component or Agent restart, -`loki.source.docker` can pick up tailing from the same spot. +The component uses its data path (a directory named after the domain's fully qualified name) to store its _positions file_. +The positions file stores the read offsets so that if there is a component or Agent restart, `loki.source.docker` can pick up tailing from the same spot. ## Example -This example collects log entries from the files specified in the `targets` -argument and forwards them to a `loki.write` component to be written to Loki. 
+This example collects log entries from the files specified in the `targets` argument and forwards them to a `loki.write` component to be written to Loki. ```river discovery.docker "linux" { @@ -143,7 +136,7 @@ discovery.docker "linux" { loki.source.docker "default" { host = "unix:///var/run/docker.sock" - targets = discovery.docker.linux.targets + targets = discovery.docker.linux.targets forward_to = [loki.write.local.receiver] } diff --git a/docs/sources/flow/reference/components/loki.source.file.md b/docs/sources/flow/reference/components/loki.source.file.md index 2e9c8d9f333b..18e87bf520a2 100644 --- a/docs/sources/flow/reference/components/loki.source.file.md +++ b/docs/sources/flow/reference/components/loki.source.file.md @@ -11,14 +11,12 @@ title: loki.source.file # loki.source.file -`loki.source.file` reads log entries from files and forwards them to other -`loki.*` components. +`loki.source.file` reads log entries from files and forwards them to other `loki.*` components. -Multiple `loki.source.file` components can be specified by giving them -different labels. +Multiple `loki.source.file` components can be specified by giving them different labels. {{% admonition type="note" %}} -`loki.source.file` does not handle file discovery. You can use `local.file_match` for file discovery. Refer to the [File Globbing](#file-globbing) example for more information. +`loki.source.file` doesn't handle file discovery. You can use `local.file_match` for file discovery. Refer to the [File Globbing](#file-globbing) example for more information. {{% /admonition %}} ## Usage @@ -32,20 +30,18 @@ loki.source.file "LABEL" { ## Arguments -The component starts a new reader for each of the given `targets` and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. 
`loki.source.file` supports the following arguments: -| Name | Type | Description | Default | Required | -| --------------- | -------------------- | ----------------------------------------------------------------------------------- | ------- | -------- | -| `targets` | `list(map(string))` | List of files to read from. | | yes | -| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | -| `encoding` | `string` | The encoding to convert from when reading files. | `""` | no | -| `tail_from_end` | `bool` | Whether a log file should be tailed from the end if a stored position is not found. | `false` | no | +| Name | Type | Description | Default | Required | +|-----------------|----------------------|------------------------------------------------------------------------------------|---------|----------| +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `targets` | `list(map(string))` | List of files to read from. | | yes | +| `encoding` | `string` | The encoding to convert from when reading files. | `""` | no | +| `tail_from_end` | `bool` | Whether a log file should be tailed from the end if a stored position isn't found. | `false` | no | -The `encoding` argument must be a valid [IANA encoding][] name. If not set, it -defaults to UTF-8. +The `encoding` argument must be a valid [IANA encoding][] name. If not set, it defaults to UTF-8. You can use the `tail_from_end` argument when you want to tail a large file without reading its entire content. When set to true, only new logs will be read, ignoring the existing ones. @@ -62,20 +58,18 @@ The following blocks are supported inside the definition of `loki.source.file`: [decompresssion]: #decompresssion-block [file_watch]: #file_watch-block -### decompresssion block +### decompresssion -The `decompression` block contains configuration for reading logs from -compressed files. 
The following arguments are supported:
+The `decompression` block contains configuration for reading logs from compressed files. The following arguments are supported:

| Name            | Type       | Description                                                     | Default | Required |
| --------------- | ---------- | --------------------------------------------------------------- | ------- | -------- |
| `enabled`       | `bool`     | Whether decompression is enabled.                               |         | yes      |
-| `initial_delay` | `duration` | Time to wait before starting to read from new compressed files. | 0       | no       |
| `format`        | `string`   | Compression format.                                             |         | yes      |
+| `initial_delay` | `duration` | Time to wait before starting to read from new compressed files. | 0       | no       |

-If you compress a file under a folder being scraped, `loki.source.file` might
-try to ingest your file before you finish compressing it. To avoid it, pick
-an `initial_delay` that is enough to avoid it.
+If you compress a file under a folder being scraped, `loki.source.file` might try to ingest your file before you finish compressing it.
+To avoid this, pick an `initial_delay` that is long enough.

Currently supported compression formats are:

@@ -83,8 +77,7 @@ Currently supported compression formats are:

- `z` - for zlib
- `bz2` - for bzip2

-The component can only support one compression format at a time, in order to
-handle multiple formats, you will need to create multiple components.
+The component supports only one compression format at a time. To handle multiple formats, you must create multiple components.

### file_watch block

The following arguments are supported:

| Name                 | Type       | Description                          | Default | Required |
| -------------------- | ---------- | ------------------------------------ | ------- | -------- |
-| `min_poll_frequency` | `duration` | Minimum frequency to poll for files. | 250ms   | no       |
| `max_poll_frequency` | `duration` | Maximum frequency to poll for files. | 250ms   | no       |
+| `min_poll_frequency` | `duration` | Minimum frequency to poll for files.
| 250ms | no | If no file changes are detected, the poll frequency doubles until a file change is detected or the poll frequency reaches the `max_poll_frequency`. @@ -102,12 +95,11 @@ If file changes are detected, the poll frequency is reset to `min_poll_frequency ## Exported fields -`loki.source.file` does not export any fields. +`loki.source.file` doesn't export any fields. ## Component health -`loki.source.file` is only reported as unhealthy if given an invalid -configuration. +`loki.source.file` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -119,45 +111,38 @@ configuration. ## Debug metrics -- `loki_source_file_read_bytes_total` (gauge): Number of bytes read. -- `loki_source_file_file_bytes_total` (gauge): Number of bytes total. -- `loki_source_file_read_lines_total` (counter): Number of lines read. - `loki_source_file_encoding_failures_total` (counter): Number of encoding failures. +- `loki_source_file_file_bytes_total` (gauge): Number of bytes total. - `loki_source_file_files_active_total` (gauge): Number of active files. +- `loki_source_file_read_bytes_total` (gauge): Number of bytes read. +- `loki_source_file_read_lines_total` (counter): Number of lines read. ## Component behavior If the decompression feature is deactivated, the component will continuously monitor and 'tail' the files. -In this mode, upon reaching the end of a file, the component remains active, awaiting and reading new entries in real-time as they are appended. +In this mode, upon reaching the end of a file, the component remains active, awaiting, and reading new entries in real-time as they are appended. -Each element in the list of `targets` as a set of key-value pairs called -_labels_. -The set of targets can either be _static_, or dynamically provided periodically -by a service discovery component. The special label `__path__` _must always_ be -present and must point to the absolute path of the file to read from. 
+Each element in the list of `targets` is a set of key-value pairs called _labels_.
+The set of targets can either be _static_, or dynamically provided periodically by a service discovery component.
+The special label `__path__` _must always_ be present and must point to the absolute path of the file to read from.

-The `__path__` value is available as the `filename` label to each log entry
-the component reads. All other labels starting with a double underscore are
-considered _internal_ and are removed from the log entries before they're
-passed to other `loki.*` components.
+The `__path__` value is available as the `filename` label to each log entry the component reads.
+All other labels starting with a double underscore are considered _internal_ and are removed from the log entries before they're passed to other `loki.*` components.

-The component uses its data path (a directory named after the domain's
-fully qualified name) to store its _positions file_. The positions file is used
-to store read offsets, so that in case of a component or Agent restart,
+The component uses its data path (a directory named after the domain's fully qualified name) to store its _positions file_.
+The positions file is used to store read offsets, so that in case of a component or Agent restart,
`loki.source.file` can pick up tailing from the same spot.

-If a file is removed from the `targets` list, its positions file entry is also
-removed. When it's added back on, `loki.source.file` starts reading it from the
-beginning.
+If a file is removed from the `targets` list, its positions file entry is also removed.
+When it's added back on, `loki.source.file` starts reading it from the beginning.

## Examples

### Static targets

-This example collects log entries from the files specified in the targets
-argument and forwards them to a `loki.write` component to be written to Loki.
+This example collects log entries from the files specified in the targets argument and forwards them to a `loki.write` component to be written to Loki. ```river loki.source.file "tmpfiles" { @@ -178,9 +163,8 @@ loki.write "local" { ### File globbing -This example collects log entries from the files matching `*.log` pattern -using `local.file_match` component. When files appear or disappear, the list of -targets will be updated accordingly. +This example collects log entries from the files matching `*.log` pattern using `local.file_match` component. +When files appear or disappear, the list of targets will be updated accordingly. ```river @@ -204,9 +188,7 @@ loki.write "local" { ### Decompression -This example collects log entries from the compressed files matching `*.gz` -pattern using `local.file_match` component and the decompression configuration -on the `loki.source.file` component. +This example collects log entries from the compressed files matching `*.gz` pattern using `local.file_match` component and the decompression configuration on the `loki.source.file` component. ```river diff --git a/docs/sources/flow/reference/components/loki.source.gcplog.md b/docs/sources/flow/reference/components/loki.source.gcplog.md index 3379a43c32a9..1e38d2f0a568 100644 --- a/docs/sources/flow/reference/components/loki.source.gcplog.md +++ b/docs/sources/flow/reference/components/loki.source.gcplog.md @@ -11,15 +11,11 @@ title: loki.source.gcplog # loki.source.gcplog -`loki.source.gcplog` retrieves logs from cloud resources such as GCS buckets, -load balancers, or Kubernetes clusters running on GCP by making use of Pub/Sub -[subscriptions](https://cloud.google.com/pubsub/docs/subscriber). +`loki.source.gcplog` retrieves logs from cloud resources such as GCS buckets, load balancers, or Kubernetes clusters running on GCP by making use of Pub/Sub [subscriptions](https://cloud.google.com/pubsub/docs/subscriber). 
-The component uses either the 'push' or 'pull' strategy to retrieve log -entries and forward them to the list of receivers in `forward_to`. +The component uses either the 'push' or 'pull' strategy to retrieve log entries and forward them to the list of receivers in `forward_to`. -Multiple `loki.source.gcplog` components can be specified by giving them -different labels. +Multiple `loki.source.gcplog` components can be specified by giving them different labels. ## Usage @@ -52,12 +48,11 @@ The following blocks are supported inside the definition of |-------------|----------|-------------------------------------------------------------------------------|----------| | pull | [pull][] | Configures a target to pull logs from a GCP Pub/Sub subscription. | no | | push | [push][] | Configures a server to receive logs as GCP Pub/Sub push requests. | no | -| push > http | [http][] | Configures the HTTP server that receives requests when using the `push` mode. | no | | push > grpc | [grpc][] | Configures the gRPC server that receives requests when using the `push` mode. | no | +| push > http | [http][] | Configures the HTTP server that receives requests when using the `push` mode. | no | -The `pull` and `push` inner blocks are mutually exclusive; a component must -contain exactly one of the two in its definition. The `http` and `grpc` block -are just used when the `push` block is configured. +The `pull` and `push` inner blocks are mutually exclusive; a component must contain exactly one of the two in its definition. +The `http` and `grpc` block are just used when the `push` block is configured. [pull]: #pull-block [push]: #push-block @@ -66,72 +61,58 @@ are just used when the `push` block is configured. ### pull block -The `pull` block defines which GCP project ID and subscription to read log -entries from. +The `pull` block defines which GCP project ID and subscription to read log entries from. -The following arguments can be used to configure the `pull` block. 
Any omitted -fields take their default values. +The following arguments can be used to configure the `pull` block. Any omitted fields take their default values. | Name | Type | Description | Default | Required | |--------------------------|---------------|---------------------------------------------------------------------------|---------|----------| | `project_id` | `string` | The GCP project id the subscription belongs to. | | yes | | `subscription` | `string` | The subscription to pull logs from. | | yes | | `labels` | `map(string)` | Additional labels to associate with incoming logs. | `"{}"` | no | -| `use_incoming_timestamp` | `bool` | Whether to use the incoming log timestamp. | `false` | no | | `use_full_line` | `bool` | Send the full line from Cloud Logging even if `textPayload` is available. | `false` | no | +| `use_incoming_timestamp` | `bool` | Whether to use the incoming log timestamp. | `false` | no | -To make use of the `pull` strategy, the GCP project must have been -[configured](/docs/loki/next/clients/promtail/gcplog-cloud/) -to forward its cloud resource logs onto a Pub/Sub topic for -`loki.source.gcplog` to consume. +To make use of the `pull` strategy, the GCP project must have been [configured](/docs/loki/next/clients/promtail/gcplog-cloud/) to forward its cloud resource logs onto a Pub/Sub topic for `loki.source.gcplog` to consume. -Typically, the host system also needs to have its GCP -[credentials](https://cloud.google.com/docs/authentication/application-default-credentials) -configured. One way to do it is to point the `GOOGLE_APPLICATION_CREDENTIALS` -environment variable to the location of a credential configuration JSON file or -a service account key. +Typically, the host system also needs to have its GCP [credentials](https://cloud.google.com/docs/authentication/application-default-credentials) configured. 
+One way to do it is to point the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the location of a credential configuration JSON file or a service account key. ### push block -The `push` block defines the configuration of the server that receives -push requests from GCP's Pub/Sub servers. +The `push` block defines the configuration of the server that receives push requests from GCP's Pub/Sub servers. -The following arguments can be used to configure the `push` block. Any omitted -fields take their default values. +The following arguments can be used to configure the `push` block. Any omitted fields take their default values. -| Name | Type | Description | Default | Required | -|-----------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------| -| `graceful_shutdown_timeout` | `duration` | Timeout for servers graceful shutdown. If configured, should be greater than zero. | "30s" | no | -| `push_timeout` | `duration` | Sets a maximum processing time for each incoming GCP log entry. | `"0s"` | no | -| `labels` | `map(string)` | Additional labels to associate with incoming entries. | `"{}"` | no | -| `use_incoming_timestamp` | `bool` | Whether to use the incoming entry timestamp. | `false` | no | +| Name | Type | Description | Default | Required | +|-----------------------------|---------------|------------------------------------------------------------------------------------|---------|----------| +| `graceful_shutdown_timeout` | `duration` | Timeout for servers graceful shutdown. If configured, should be greater than zero. | "30s" | no | +| `labels` | `map(string)` | Additional labels to associate with incoming entries. | `"{}"` | no | +| `push_timeout` | `duration` | Sets a maximum processing time for each incoming GCP log entry. 
| `"0s"` | no | +| `use_incoming_timestamp` | `bool` | Whether to use the incoming entry timestamp. | `false` | no | | `use_full_line` | `bool` | Send the full line from Cloud Logging even if `textPayload` is available. By default, if `textPayload` is present in the line, then it's used as log line | `false` | no | -The server listens for POST requests from GCP's Push subscriptions on -`HOST:PORT/gcp/api/v1/push`. +The server listens for POST requests from GCP's Push subscriptions on `HOST:PORT/gcp/api/v1/push`. -By default, for both strategies the component assigns the log entry timestamp -as the time it was processed, except if `use_incoming_timestamp` is set to -true. +By default, for both strategies the component assigns the log entry timestamp as the time it was processed, except if `use_incoming_timestamp` is set to true. The `labels` map is applied to every entry that passes through the component. ### http -{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}} ### grpc -{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}} ## Exported fields -`loki.source.gcplog` does not export any fields. +`loki.source.gcplog` doesn't export any fields. ## Component health -`loki.source.gcplog` is only reported as unhealthy if given an invalid -configuration. +`loki.source.gcplog` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -142,22 +123,19 @@ configuration. 
## Debug metrics

-When using the `pull` strategy, the component exposes the following debug
-metrics:
+When using the `pull` strategy, the component exposes the following debug metrics:

* `loki_source_gcplog_pull_entries_total` (counter): Number of entries received by the gcplog target.
-* `loki_source_gcplog_pull_parsing_errors_total` (counter): Total number of parsing errors while receiving gcplog messages.
* `loki_source_gcplog_pull_last_success_scrape` (gauge): Timestamp of target's last successful poll.
+* `loki_source_gcplog_pull_parsing_errors_total` (counter): Total number of parsing errors while receiving gcplog messages.

-When using the `push` strategy, the component exposes the following debug
-metrics:
+When using the `push` strategy, the component exposes the following debug metrics:

* `loki_source_gcplog_push_entries_total` (counter): Number of entries received by the gcplog target.
* `loki_source_gcplog_push_entries_total` (counter): Number of parsing errors while receiving gcplog messages.

## Example

-This example listens for GCP Pub/Sub PushRequests on `0.0.0.0:8080` and
-forwards them to a `loki.write` component.
+This example listens for GCP Pub/Sub PushRequests on `0.0.0.0:8080` and forwards them to a `loki.write` component.

```river
loki.source.gcplog "local" {
@@ -173,8 +151,7 @@ loki.write "local" {
}
```

-On the other hand, if we need the server to listen on `0.0.0.0:4040`, and forwards them
-to a `loki.write` component.
+If you instead need the server to listen on `0.0.0.0:4040` and forward entries to a `loki.write` component, use a configuration like the following.
```river
loki.source.gcplog "local" {
diff --git a/docs/sources/flow/reference/components/loki.source.gelf.md b/docs/sources/flow/reference/components/loki.source.gelf.md
index e8544fe0248f..8b2aeb8831b8 100644
--- a/docs/sources/flow/reference/components/loki.source.gelf.md
+++ b/docs/sources/flow/reference/components/loki.source.gelf.md
@@ -11,11 +11,9 @@ title: loki.source.gelf

# loki.source.gelf

-`loki.source.gelf` reads [Graylog Extended Long Format (GELF) logs](https://github.com/Graylog2/graylog2-server) from a UDP listener and forwards them to other
-`loki.*` components.
+`loki.source.gelf` reads [Graylog Extended Long Format (GELF) logs](https://github.com/Graylog2/graylog2-server) from a UDP listener and forwards them to other `loki.*` components.

-Multiple `loki.source.gelf` components can be specified by giving them
-different labels and ports.
+Multiple `loki.source.gelf` components can be specified by giving them different labels and ports.

## Usage

```river
loki.source.gelf "LABEL" {
```

## Arguments

-The component starts a new UDP listener and fans out
-log entries to the list of receivers passed in `forward_to`.
+The component starts a new UDP listener and fans out log entries to the list of receivers passed in `forward_to`.

`loki.source.gelf` supports the following arguments:

-Name | Type | Description | Default | Required
------------- |----------------------|--------------------------------------------------------------------------------|----------------------------| --------
-`listen_address` | `string` | UDP address and port to listen for Graylog messages. | `0.0.0.0:12201` | no
-`use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed | `false` | no
-`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no
+Name                     | Type           | Description                                                                 | Default         | Required
+-------------------------|----------------|-----------------------------------------------------------------------------|-----------------|---------
+`listen_address`         | `string`       | UDP address and port to listen for Graylog messages.                        | `0.0.0.0:12201` | no
+`use_incoming_timestamp` | `bool`         | When false, assigns the current timestamp to the log when it was processed. | `false`         | no
+`relabel_rules`          | `RelabelRules` | Relabeling rules to apply on log entries.                                   | "{}"            | no

+{{% admonition type="note" %}}
+GELF logs can be sent uncompressed or compressed with GZIP or ZLIB. A `job` label is added with the full name of the component `loki.source.gelf.LABEL`.
+{{% /admonition %}}

-> **NOTE**: GELF logs can be sent uncompressed or compressed with GZIP or ZLIB.
-> A `job` label is added with the full name of the component `loki.source.gelf.LABEL`.
-
-The `relabel_rules` argument can make use of the `rules` export from a
-[loki.relabel][] component to apply one or more relabling rules to log entries
-before they're forward to the list of receivers specified in `forward_to`.
+The `relabel_rules` argument can make use of the `rules` export from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers specified in `forward_to`.

Incoming messages have the following internal labels available:

-* `__gelf_message_level`: The GELF level as a string.
-* `__gelf_message_host`: The host sending the GELF message.
-* `__gelf_message_host`: The GELF level message version sent by the client.
* `__gelf_message_facility`: The GELF facility.
+* `__gelf_message_host`: The host sending the GELF message.
+* `__gelf_message_level`: The GELF level as a string.
+* `__gelf_message_version`: The GELF message version sent by the client.

-All labels starting with `__` are removed prior to forwarding log entries. To
-keep these labels, relabel them using a [loki.relabel][] component and pass its
-`rules` export to the `relabel_rules` argument.
+All labels starting with `__` are removed prior to forwarding log entries.
+To keep these labels, relabel them using a [loki.relabel][] component and pass its `rules` export to the `relabel_rules` argument.

[loki.relabel]: {{< relref "./loki.relabel.md" >}}

## Component health

-`loki.source.gelf` is only reported as unhealthy if given an invalid
-configuration.
+`loki.source.gelf` is only reported as unhealthy if given an invalid configuration.

## Debug Metrics

diff --git a/docs/sources/flow/reference/components/loki.source.heroku.md b/docs/sources/flow/reference/components/loki.source.heroku.md
index f98b00312062..37172a5738eb 100644
--- a/docs/sources/flow/reference/components/loki.source.heroku.md
+++ b/docs/sources/flow/reference/components/loki.source.heroku.md
@@ -11,20 +11,18 @@ title: loki.source.heroku

# loki.source.heroku

-`loki.source.heroku` listens for Heroku messages over TCP connections
-and forwards them to other `loki.*` components.
+`loki.source.heroku` listens for Heroku messages over TCP connections and forwards them to other `loki.*` components.

-The component starts a new heroku listener for the given `listener`
-block and fans out incoming entries to the list of receivers in `forward_to`.
+The component starts a new Heroku listener for the given `listener` block and fans out incoming entries to the list of receivers in `forward_to`.

-Before using `loki.source.heroku`, Heroku should be configured with the URL where the Agent will be listening. Follow the steps in [Heroku HTTPS Drain docs](https://devcenter.heroku.com/articles/log-drains#https-drains) for using the Heroku CLI with a command like the following:
+Before using `loki.source.heroku`, Heroku should be configured with the URL where the Agent will be listening.
+Follow the steps in [Heroku HTTPS Drain docs](https://devcenter.heroku.com/articles/log-drains#https-drains) for using the Heroku CLI with a command like the following:

```shell
heroku drains:add [http|https]://HOSTNAME:PORT/heroku/api/v1/drain -a HEROKU_APP_NAME
```

-Multiple `loki.source.heroku` components can be specified by giving them
-different labels.
+Multiple `loki.source.heroku` components can be specified by giving them different labels.

## Usage

@@ -42,60 +40,57 @@ loki.source.heroku "LABEL" {

`loki.source.heroku` supports the following arguments:

-Name | Type | Description | Default | Required
------------------------- | ---------------------- |------------------------------------------------------------------------------------| ------- | --------
-`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Heroku. | `false` | no
-`labels` | `map(string)` | The labels to associate with each received Heroku record. | `{}` | no
-`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
-`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
-`graceful_shutdown_timeout` | `duration` | Timeout for servers graceful shutdown. If configured, should be greater than zero. | "30s" | no
+Name                        | Type                 | Description                                                                              | Default | Required
+----------------------------|----------------------|------------------------------------------------------------------------------------------|---------|---------
+`forward_to`                | `list(LogsReceiver)` | List of receivers to send log entries to.                                                |         | yes
+`graceful_shutdown_timeout` | `duration`           | Timeout for the server's graceful shutdown. If configured, should be greater than zero.  | `"30s"` | no
+`labels`                    | `map(string)`        | The labels to associate with each received Heroku record.                                | `{}`    | no
+`relabel_rules`             | `RelabelRules`       | Relabeling rules to apply on log entries.                                                | `{}`    | no
+`use_incoming_timestamp`    | `bool`               | Whether or not to use the timestamp received from Heroku.                                | `false` | no

-The `relabel_rules` field can make use of the `rules` export value from a
-`loki.relabel` component to apply one or more relabeling rules to log entries
-before they're forwarded to the list of receivers in `forward_to`.
+The `relabel_rules` field can make use of the `rules` export value from a `loki.relabel` component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`.

## Blocks

The following blocks are supported inside the definition of `loki.source.heroku`:

- Hierarchy | Name | Description | Required
------------|----------|----------------------------------------------------|----------
- `http` | [http][] | Configures the HTTP server that receives requests. | no
- `grpc` | [grpc][] | Configures the gRPC server that receives requests. | no
+Hierarchy | Name     | Description                                        | Required
+----------|----------|----------------------------------------------------|---------
+`grpc`    | [grpc][] | Configures the gRPC server that receives requests. | no
+`http`    | [http][] | Configures the HTTP server that receives requests. | no

[http]: #http
[grpc]: #grpc

-### http
-
-{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}}
+### grpc
+
+{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}}

-### grpc
-
-{{< docs/shared lookup="flow/reference/components/loki-server-grpc.md" source="agent" version="" >}}
+### http
+
+{{< docs/shared lookup="flow/reference/components/loki-server-http.md" source="agent" version="" >}}

## Labels

The `labels` map is applied to every message that the component reads.
The following internal labels, all prefixed with `__`, are available but will be discarded if not relabeled:

-- `__heroku_drain_host`
- `__heroku_drain_app`
-- `__heroku_drain_proc`
+- `__heroku_drain_host`
- `__heroku_drain_log_id`
+- `__heroku_drain_proc`

-All url query params will be translated to `__heroku_drain_param_`
+All URL query parameters are translated to labels prefixed with `__heroku_drain_param_`.

If the `X-Scope-OrgID` header is set, it is translated to `__tenant_id__`.

## Exported fields

-`loki.source.heroku` does not export any fields.
+`loki.source.heroku` doesn't export any fields.

## Component health

-`loki.source.heroku` is only reported as unhealthy if given an invalid
-configuration.
+`loki.source.heroku` is only reported as unhealthy if given an invalid configuration.

## Debug information

@@ -109,7 +104,7 @@ configuration.

## Example

-This example listens for Heroku messages over TCP in the specified port and forwards them to a `loki.write` component using the Heroku timestamp.
+The following example listens for Heroku messages over TCP on the specified port and forwards them to a `loki.write` component using the Heroku timestamp.

```river
loki.source.heroku "local" {
diff --git a/docs/sources/flow/reference/components/loki.source.journal.md b/docs/sources/flow/reference/components/loki.source.journal.md
index 26a1922b7aeb..c984783aaa0c 100644
--- a/docs/sources/flow/reference/components/loki.source.journal.md
+++ b/docs/sources/flow/reference/components/loki.source.journal.md
@@ -11,11 +11,9 @@ title: loki.source.journal

# loki.source.journal

-`loki.source.journal` reads from the systemd journal and forwards them to other
-`loki.*` components.
+`loki.source.journal` reads entries from the systemd journal and forwards them to other `loki.*` components.

-Multiple `loki.source.journal` components can be specified by giving them
-different labels.
+Multiple `loki.source.journal` components can be specified by giving them different labels.
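+For example, here's a minimal sketch that runs two `loki.source.journal` components side by side; the second journal path and the Loki endpoint are illustrative:
+
+```river
+// Reads the default system journal locations.
+loki.source.journal "system" {
+  forward_to = [loki.write.local.receiver]
+}
+
+// Reads a second journal from an illustrative custom directory.
+loki.source.journal "custom" {
+  path       = "/var/log/journal-custom"
+  forward_to = [loki.write.local.receiver]
+}
+
+loki.write "local" {
+  endpoint {
+    url = "http://localhost:3100/loki/api/v1/push"
+  }
+}
+```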
## Usage

@@ -26,56 +24,48 @@ loki.source.journal "LABEL" {
```

## Arguments

-The component starts a new journal reader and fans out
-log entries to the list of receivers passed in `forward_to`.
+The component starts a new journal reader and fans out log entries to the list of receivers passed in `forward_to`.

`loki.source.journal` supports the following arguments:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`format_as_json` | `bool` | Whether to forward the original journal entry as JSON. | `false` | no
-`max_age` | `duration` | The oldest relative time from process start that will be read. | `"7h"` | no
-`path` | `string` | Path to a directory to read entries from. | `""` | no
-`matches` | `string` | Journal matches to filter. The `+` character is not supported, only logical AND matches will be added. | `""` | no
-`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
-`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
-`labels` | `map(string)` | The labels to apply to every log coming out of the journal. | `{}` | no
+Name             | Type                 | Description                                                                                             | Default | Required
+-----------------|----------------------|---------------------------------------------------------------------------------------------------------|---------|---------
+`forward_to`     | `list(LogsReceiver)` | List of receivers to send log entries to.                                                               |         | yes
+`format_as_json` | `bool`               | Whether to forward the original journal entry as JSON.                                                  | `false` | no
+`labels`         | `map(string)`        | The labels to apply to every log coming out of the journal.                                             | `{}`    | no
+`matches`        | `string`             | Journal matches to filter. The `+` character isn't supported; only logical AND matches will be added.   | `""`    | no
+`max_age`        | `duration`           | The oldest relative time from process start that will be read.                                          | `"7h"`  | no
+`path`           | `string`             | Path to a directory to read entries from.                                                               | `""`    | no
+`relabel_rules`  | `RelabelRules`       | Relabeling rules to apply on log entries.                                                               | `{}`    | no

-> **NOTE**: A `job` label is added with the full name of the component `loki.source.journal.LABEL`.
+{{% admonition type="note" %}}
+A `job` label is added with the full name of the component `loki.source.journal.LABEL`.
+{{% /admonition %}}

-When the `format_as_json` argument is true, log messages are passed through as
-JSON with all of the original fields from the journal entry. Otherwise, the log
-message is taken from the content of the `MESSAGE` field from the journal
-entry.
+When the `format_as_json` argument is true, log messages are passed through as JSON with all of the original fields from the journal entry.
+Otherwise, the log message is taken from the content of the `MESSAGE` field from the journal entry.

-When the `path` argument is empty, `/var/log/journal` and `/run/log/journal`
-will be used for discovering journal entries.
+When the `path` argument is empty, `/var/log/journal` and `/run/log/journal` will be used for discovering journal entries.

-The `relabel_rules` argument can make use of the `rules` export value from a
-[loki.relabel][] component to apply one or more relabeling rules to log entries
-before they're forwarded to the list of receivers in `forward_to`.
+The `relabel_rules` argument can make use of the `rules` export value from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`.

-All messages read from the journal include internal labels following the
-pattern of `__journal_FIELDNAME` and will be dropped before sending to the list
-of receivers specified in `forward_to`. To keep these labels, use the
-`relabel_rules` argument and relabel them to not be prefixed with `__`.
+All messages read from the journal include internal labels following the pattern of `__journal_FIELDNAME` and will be dropped before sending to the list of receivers specified in `forward_to`. +To keep these labels, use the `relabel_rules` argument and relabel them to not be prefixed with `__`. -> **NOTE**: many field names from journald start with an `_`, such as -> `_systemd_unit`. The final internal label name would be -> `__journal__systemd_unit`, with _two_ underscores between `__journal` and -> `systemd_unit`. +{{% admonition type="note" %}} +Many field names from journald start with an `_`, such as `_systemd_unit`. The final internal label name would be `__journal__systemd_unit`, with _two_ underscores between `__journal` and `systemd_unit`. +{{% /admonition %}} [loki.relabel]: {{< relref "./loki.relabel.md" >}} ## Component health -`loki.source.journal` is only reported as unhealthy if given an invalid -configuration. +`loki.source.journal` is only reported as unhealthy if given an invalid configuration. ## Debug Metrics -* `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages. * `agent_loki_source_journal_target_lines_total` (counter): Total number of successful journal lines read. +* `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages. ## Example diff --git a/docs/sources/flow/reference/components/loki.source.kafka.md b/docs/sources/flow/reference/components/loki.source.kafka.md index 4110177d7d09..2b51625029da 100644 --- a/docs/sources/flow/reference/components/loki.source.kafka.md +++ b/docs/sources/flow/reference/components/loki.source.kafka.md @@ -11,19 +11,14 @@ title: loki.source.kafka # loki.source.kafka -`loki.source.kafka` reads messages from Kafka using a consumer group -and forwards them to other `loki.*` components. 
+`loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. -The component starts a new Kafka consumer group for the given arguments -and fans out incoming entries to the list of receivers in `forward_to`. +The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`. -Before using `loki.source.kafka`, Kafka should have at least one producer -writing events to at least one topic. Follow the steps in the -[Kafka Quick Start](https://kafka.apache.org/documentation/#quickstart) -to get started with Kafka. +Before using `loki.source.kafka`, Kafka should have at least one producer writing events to at least one topic. +Follow the steps in the [Kafka Quick Start](https://kafka.apache.org/documentation/#quickstart) to get started with Kafka. -Multiple `loki.source.kafka` components can be specified by giving them -different labels. +Multiple `loki.source.kafka` components can be specified by giving them different labels. ## Usage @@ -39,38 +34,35 @@ loki.source.kafka "LABEL" { `loki.source.kafka` supports the following arguments: - Name | Type | Description | Default | Required ---------------------------|----------------------|----------------------------------------------------------|-----------------------|---------- - `brokers` | `list(string)` | The list of brokers to connect to Kafka. | | yes - `topics` | `list(string)` | The list of Kafka topics to consume. | | yes - `group_id` | `string` | The Kafka consumer group id. | `"loki.source.kafka"` | no - `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no - `version` | `string` | Kafka version to connect to. | `"2.2.1"` | no - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Kafka. | `false` | no - `labels` | `map(string)` | The labels to associate with each received Kafka event. 
| `{}` | no - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +Name | Type | Description | Default | Required +-------------------------|----------------------|----------------------------------------------------------|-----------------------|--------- +`brokers` | `list(string)` | The list of brokers to connect to Kafka. | | yes +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`topics` | `list(string)` | The list of Kafka topics to consume. | | yes +`assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no +`group_id` | `string` | The Kafka consumer group id. | `"loki.source.kafka"` | no +`labels` | `map(string)` | The labels to associate with each received Kafka event. | `{}` | no +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Kafka. | `false` | no +`version` | `string` | Kafka version to connect to. | `"2.2.1"` | no `assignor` values can be either `"range"`, `"roundrobin"`, or `"sticky"`. Labels from the `labels` argument are applied to every message that the component reads. -The `relabel_rules` field can make use of the `rules` export value from a -[loki.relabel][] component to apply one or more relabeling rules to log entries -before they're forwarded to the list of receivers in `forward_to`. +The `relabel_rules` field can make use of the `rules` export value from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. 
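+As a sketch of this wiring (the broker address, topic, and relabel rule are illustrative, and `loki.write.local.receiver` is assumed to be defined elsewhere), the `rules` export of a `loki.relabel` component can be passed to `relabel_rules`:
+
+```river
+loki.relabel "kafka" {
+  // forward_to is unused here; only the rules export is consumed.
+  forward_to = []
+
+  rule {
+    source_labels = ["__meta_kafka_topic"]
+    target_label  = "topic"
+  }
+}
+
+loki.source.kafka "local" {
+  brokers       = ["localhost:9092"]
+  topics        = ["quickstart-events"]
+  relabel_rules = loki.relabel.kafka.rules
+  forward_to    = [loki.write.local.receiver]
+}
+```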
In addition to custom labels, the following internal labels prefixed with `__` are available:

+- `__meta_kafka_group_id`
+- `__meta_kafka_member_id`
- `__meta_kafka_message_key`
- `__meta_kafka_message_offset`
-- `__meta_kafka_topic`
- `__meta_kafka_partition`
-- `__meta_kafka_member_id`
-- `__meta_kafka_group_id`
+- `__meta_kafka_topic`

-All labels starting with `__` are removed prior to forwarding log entries. To
-keep these labels, relabel them using a [loki.relabel][] component and pass its
-`rules` export to the `relabel_rules` argument.
+All labels starting with `__` are removed prior to forwarding log entries.
+To keep these labels, relabel them using a [loki.relabel][] component and pass its `rules` export to the `relabel_rules` argument.

[loki.relabel]: {{< relref "./loki.relabel.md" >}}

@@ -80,11 +72,11 @@ The following blocks are supported inside the definition of `loki.source.kafka`:

Hierarchy | Name | Description | Required
---------------------------------------------|------------------|-----------------------------------------------------------|----------
- authentication | [authentication] | Optional authentication configuration with Kafka brokers. | no
- authentication > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no
- authentication > sasl_config | [sasl_config] | Optional authentication configuration with Kafka brokers. | no
- authentication > sasl_config > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no
- authentication > sasl_config > oauth_config | [oauth_config] | Optional authentication configuration with Kafka brokers. | no
+authentication                              | [authentication] | Optional authentication configuration with Kafka brokers.  | no
+authentication > sasl_config                | [sasl_config]    | Optional SASL authentication configuration.                 | no
+authentication > sasl_config > oauth_config | [oauth_config]   | Optional OAuth configuration for SASL authentication.       | no
+authentication > sasl_config > tls_config   | [tls_config]     | Optional TLS configuration for SASL authentication.         | no
+authentication > tls_config                 | [tls_config]     | Optional TLS configuration for connecting to Kafka brokers. | no

[authentication]: #authentication-block

@@ -94,7 +86,7 @@ The following blocks are supported inside the definition of `loki.source.kafka`:

[oauth_config]: #oauth_config-block

-### authentication block
+### authentication

The `authentication` block defines the authentication method when communicating with the Kafka event brokers.

--------|----------|-------------------------|----------|----------
`type` | `string` | Type of authentication. | `"none"` | no

-`type` supports the values `"none"`, `"ssl"`, and `"sasl"`. If `"ssl"` is used,
-you must set the `tls_config` block. If `"sasl"` is used, you must set the `sasl_config` block.
+`type` supports the values `"none"`, `"ssl"`, and `"sasl"`.
+If `"ssl"` is used, you must set the `tls_config` block.
+If `"sasl"` is used, you must set the `sasl_config` block.

-### tls_config block
-
-{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}}
-
-### sasl_config block
-
-The `sasl_config` block defines the listen address and port where the listener
-expects Kafka messages to be sent to.
+### authentication > sasl_config
+
+The `sasl_config` block configures SASL authentication with the Kafka brokers.

Name | Type | Description | Default | Required
-------------|----------|--------------------------------------------------------------------|----------|-----------------------
`password` | `secret` | The password to use for SASL authentication. | `""` | no
`use_tls` | `bool` | If true, SASL authentication is executed over TLS. | `false` | no

-### oauth_config block
+### authentication > sasl_config > oauth_config

The `oauth_config` is required when the SASL mechanism is set to `OAUTHBEARER`.

@@ -130,23 +118,29 @@ The `oauth_config` is required when the SASL mechanism is set to `OAUTHBEARER`.

`token_provider` | `string` | The OAuth provider to be used. The only supported provider is `azure`. | `""` | yes
`scopes` | `list(string)` | The scopes to set in the access token. | `[]` | yes

+### authentication > sasl_config > tls_config
+
+{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}}
+
+### authentication > tls_config
+
+{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}}
+
## Exported fields

-`loki.source.kafka` does not export any fields.
+`loki.source.kafka` doesn't export any fields.

## Component health

-`loki.source.kafka` is only reported as unhealthy if given an invalid
-configuration.
+`loki.source.kafka` is only reported as unhealthy if given an invalid configuration.

## Debug information

-`loki.source.kafka` does not expose additional debug info.
+`loki.source.kafka` doesn't expose additional debug info.

## Example

-This example consumes Kafka events from the specified brokers and topics
-then forwards them to a `loki.write` component using the Kafka timestamp.
+This example consumes Kafka events from the specified brokers and topics, then forwards them to a `loki.write` component using the Kafka timestamp.
```river
loki.source.kafka "local" {
diff --git a/docs/sources/flow/reference/components/loki.source.kubernetes.md b/docs/sources/flow/reference/components/loki.source.kubernetes.md
index cde01d3172bc..bc74408ed8d2 100644
--- a/docs/sources/flow/reference/components/loki.source.kubernetes.md
+++ b/docs/sources/flow/reference/components/loki.source.kubernetes.md
@@ -13,23 +13,21 @@ title: loki.source.kubernetes

# loki.source.kubernetes

-{{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}}

-`loki.source.kubernetes` tails logs from Kubernetes containers using the
-Kubernetes API. It has the following benefits over `loki.source.file`:
+`loki.source.kubernetes` tails logs from Kubernetes containers using the Kubernetes API.
+It has the following benefits over `loki.source.file`:

* It works without a privileged container.
* It works without a root user.
* It works without needing access to the filesystem of the Kubernetes node.
-* It doesn't require a DaemonSet to collect logs, so one agent could collect
- logs for the whole cluster.
+* It doesn't require a DaemonSet to collect logs, so one agent could collect logs for the whole cluster.

-> **NOTE**: Because `loki.source.kubernetes` uses the Kubernetes API to tail
-> logs, it uses more network traffic and CPU consumption of Kubelets than
-> `loki.source.file`.
+{{% admonition type="note" %}}
+Because `loki.source.kubernetes` uses the Kubernetes API to tail logs, it generates more network traffic and Kubelet CPU consumption than `loki.source.file`.
+{{% /admonition %}}

-Multiple `loki.source.kubernetes` components can be specified by giving them
-different labels.
+Multiple `loki.source.kubernetes` components can be specified by giving them different labels.
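+For example, here's a minimal sketch that wires pod targets into the component; the discovery pipeline and the Loki endpoint are illustrative:
+
+```river
+// Discover pods through the Kubernetes API.
+discovery.kubernetes "pods" {
+  role = "pod"
+}
+
+// Tail the discovered pods and forward their logs.
+loki.source.kubernetes "pods" {
+  targets    = discovery.kubernetes.pods.targets
+  forward_to = [loki.write.local.receiver]
+}
+
+loki.write "local" {
+  endpoint {
+    url = "http://localhost:3100/loki/api/v1/push"
+  }
+}
+```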
## Usage

@@ -42,52 +40,43 @@ loki.source.kubernetes "LABEL" {

## Arguments

-The component starts a new reader for each of the given `targets` and fans out
-log entries to the list of receivers passed in `forward_to`.
+The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`.

`loki.source.kubernetes` supports the following arguments:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`targets` | `list(map(string))` | List of files to read from. | | yes
-`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
+Name         | Type                 | Description                               | Default | Required
+-------------|----------------------|-------------------------------------------|---------|---------
+`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. |         | yes
+`targets`    | `list(map(string))`  | List of targets to tail logs from.        |         | yes

Each target in `targets` must have the following labels:

-* `__meta_kubernetes_namespace` or `__pod_namespace__` to specify the namespace
- of the pod to tail.
-* `__meta_kubernetes_pod_name` or `__pod_name__` to specify the name of the pod
- to tail.
-* `__meta_kubernetes_pod_container_name` or `__pod_container_name__` to specify
- the container within the pod to tail.
-* `__meta_kubernetes_pod_uid` or `__pod_uid__` to specify the UID of the pod to
- tail.
+* `__meta_kubernetes_namespace` or `__pod_namespace__` to specify the namespace of the pod to tail.
+* `__meta_kubernetes_pod_container_name` or `__pod_container_name__` to specify the container within the pod to tail.
+* `__meta_kubernetes_pod_name` or `__pod_name__` to specify the name of the pod to tail.
+* `__meta_kubernetes_pod_uid` or `__pod_uid__` to specify the UID of the pod to tail.

-By default, all of these labels are present when the output
-`discovery.kubernetes` is used.
+By default, all of these labels are present when the output of `discovery.kubernetes` is used.

-A log tailer is started for each unique target in `targets`. Log tailers will
-reconnect with exponential backoff to Kubernetes if the log stream returns
-before the container has permanently terminated.
+A log tailer is started for each unique target in `targets`.
+Log tailers will reconnect with exponential backoff to Kubernetes if the log stream returns before the container has permanently terminated.

## Blocks

-The following blocks are supported inside the definition of
-`loki.source.kubernetes`:
+The following blocks are supported inside the definition of `loki.source.kubernetes`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-client | [client][] | Configures Kubernetes client used to tail logs. | no
-client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no
+Hierarchy                    | Block             | Description                                                               | Required
+-----------------------------|-------------------|---------------------------------------------------------------------------|---------
+client                       | [client][]        | Configures Kubernetes client used to tail logs.                           | no
+client > authorization       | [authorization][] | Configure generic authorization to the endpoint.                          | no
+client > basic_auth          | [basic_auth][]    | Configure basic_auth for authenticating to the endpoint.
| no +client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no -The `>` symbol indicates deeper levels of nesting. For example, `client > -basic_auth` refers to a `basic_auth` block defined -inside a `client` block. +The `>` symbol indicates deeper levels of nesting. +For example, `client > basic_auth` refers to a `basic_auth` block defined inside a `client` block. [client]: #client-block [basic_auth]: #basic_auth-block @@ -96,61 +85,59 @@ inside a `client` block. [tls_config]: #tls_config-block [clustering]: #clustering-beta -### client block +### client -The `client` block configures the Kubernetes client used to tail logs from -containers. If the `client` block isn't provided, the default in-cluster -configuration with the service account of the running Grafana Agent pod is -used. +The `client` block configures the Kubernetes client used to tail logs from containers. +If the `client` block isn't provided, the default in-cluster configuration with the service account of the running Grafana Agent pod is used. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. 
| | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|----------|--------------------------------------------------------------------|---------|--------- +`api_server` | `string` | URL of the Kubernetes API server. | | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no - At most one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +At most one of the following can be provided: +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument][client]. +- [`bearer_token` argument][client]. +- [`oauth2` block][oauth2]. 
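The mutually exclusive authentication options above can be combined with the other `client` arguments in River. The following is a minimal sketch, not the documented example: the API server URL is a hypothetical placeholder, and the token path assumes the default service-account mount:

```river
loki.source.kubernetes "example" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.local.receiver]

  client {
    // Hypothetical values; point these at your own cluster.
    api_server        = "https://k8s.example.com:6443"
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  }
}
```

Because `bearer_token_file` is set here, none of the `basic_auth`, `authorization`, or `oauth2` blocks may also be provided.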
-### basic_auth block +### client > authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### client > basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### client > oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### client > oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} + +### client > tls_config + +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ### clustering (beta) -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes +Name | Type | Description | Default | Required +----------|--------|-----------------------------------------------------|---------|--------- +`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes -When the agent is [using clustering][], and `enabled` is set to true, then this -`loki.source.kubernetes` component instance opts-in to participating in the -cluster to distribute the load of log collection between all cluster nodes. 
+When the agent is [using clustering][], and `enabled` is set to true, then this `loki.source.kubernetes` component instance opts-in to participating in the cluster to distribute the load of log collection between all cluster nodes. -If the agent is _not_ running in clustered mode, then the block is a no-op and -`loki.source.kubernetes` collects logs from every target it receives in its -arguments. +If the agent is _not_ running in clustered mode, then the block is a no-op and `loki.source.kubernetes` collects logs from every target it receives in its arguments. [using clustering]: {{< relref "../../concepts/clustering.md" >}} @@ -160,28 +147,24 @@ arguments. ## Component health -`loki.source.kubernetes` is only reported as unhealthy if given an invalid -configuration. +`loki.source.kubernetes` is only reported as unhealthy if given an invalid configuration. ## Debug information -`loki.source.kubernetes` exposes some target-level debug information per -target: +`loki.source.kubernetes` exposes some target-level debug information per target: * The labels associated with the target. * The full set of labels which were found during service discovery. -* The most recent time a log line was read and forwarded to the next components - in the pipeline. +* The most recent time a log line was read and forwarded to the next components in the pipeline. * The most recent error from tailing, if any. ## Debug metrics -`loki.source.kubernetes` does not expose any component-specific debug metrics. +`loki.source.kubernetes` doesn't expose any component-specific debug metrics. ## Example -This example collects logs from all Kubernetes pods and forwards them to a -`loki.write` component so they are written to Loki. +This example collects logs from all Kubernetes pods and forwards them to a `loki.write` component so they are written to Loki. 
```river discovery.kubernetes "pods" { diff --git a/docs/sources/flow/reference/components/loki.source.kubernetes_events.md b/docs/sources/flow/reference/components/loki.source.kubernetes_events.md index 9e7df1f037d9..afb0eaaacb32 100644 --- a/docs/sources/flow/reference/components/loki.source.kubernetes_events.md +++ b/docs/sources/flow/reference/components/loki.source.kubernetes_events.md @@ -11,11 +11,9 @@ title: loki.source.kubernetes_events # loki.source.kubernetes_events -`loki.source.kubernetes_events` tails events from the Kubernetes API and -converts them into log lines to forward to other `loki` components. +`loki.source.kubernetes_events` tails events from the Kubernetes API and converts them into log lines to forward to other `loki` components. -Multiple `loki.source.kubernetes_events` components can be specified by giving them -different labels. +Multiple `loki.source.kubernetes_events` components can be specified by giving them different labels. ## Usage @@ -27,64 +25,57 @@ loki.source.kubernetes_events "LABEL" { ## Arguments -The component starts a new reader for each of the given `targets` and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.kubernetes_events` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`job_name` | `string` | Value to use for `job` label for generated logs. | `"loki.source.kubernetes_events"` | no -`log_format` | `string` | Format of the log. | `"logfmt"` | no -`namespaces` | `list(string)` | Namespaces to watch for Events in. | `[]` | no -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes +Name | Type | Description | Default | Required +-------------|----------------------|--------------------------------------------------|-----------------------------------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`job_name` | `string` | Value to use for `job` label for generated logs. | `"loki.source.kubernetes_events"` | no +`log_format` | `string` | Format of the log. | `"logfmt"` | no +`namespaces` | `list(string)` | Namespaces to watch for Events in. | `[]` | no -By default, `loki.source.kubernetes_events` will watch for events in all -namespaces. A list of explicit namespaces to watch can be provided in the -`namespaces` argument. -By default, the generated log lines will be in the `logfmt` format. Use the -`log_format` argument to change it to `json`. These formats are also names of -LogQL parsers, which can be used for processing the logs. +By default, `loki.source.kubernetes_events` will watch for events in all namespaces. +A list of explicit namespaces to watch can be provided in the `namespaces` argument. -> **NOTE**: When watching all namespaces, Grafana Agent must have permissions -> to watch events at the cluster scope (such as using a ClusterRoleBinding). If -> an explicit list of namespaces is provided, Grafana Agent only needs -> permissions to watch events for those namespaces. +By default, the generated log lines will be in the `logfmt` format. Use the `log_format` argument to change it to `json`. +These formats are also names of LogQL parsers, which can be used for processing the logs. -Log lines generated by `loki.source.kubernetes_events` have the following -labels: +{{% admonition type="note" %}} +When watching all namespaces, Grafana Agent must have permissions to watch events at the cluster scope (such as using a ClusterRoleBinding). +If an explicit list of namespaces is provided, Grafana Agent only needs permissions to watch events for those namespaces. 
+{{% /admonition %}} + +Log lines generated by `loki.source.kubernetes_events` have the following labels: -* `namespace`: Namespace of the Kubernetes object involved in the event. -* `job`: Value specified by the `job_name` argument. * `instance`: Value matching the component ID. +* `job`: Value specified by the `job_name` argument. +* `namespace`: Namespace of the Kubernetes object involved in the event. -If `job_name` argument is the empty string, the component will fail to load. To -remove the job label, forward the output of `loki.source.kubernetes_events` to -[a `loki.relabel` component][loki.relabel]. +If the `job_name` argument is the empty string, the component will fail to load. +To remove the job label, forward the output of `loki.source.kubernetes_events` to [a `loki.relabel` component][loki.relabel]. -For compatibility with the `eventhandler` integration from static mode, -`job_name` can be set to `"integrations/kubernetes/eventhandler"`. +For compatibility with the `eventhandler` integration from static mode, `job_name` can be set to `"integrations/kubernetes/eventhandler"`. [loki.relabel]: {{< relref "./loki.relabel.md" >}} ## Blocks -The following blocks are supported inside the definition of -`loki.source.kubernetes_events`: +The following blocks are supported inside the definition of `loki.source.kubernetes_events`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to tail logs. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint.
| no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +-----------------------------|-------------------|----------------------------------------------------------|--------- +client | [client][] | Configures Kubernetes client used to tail logs. | no +client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no +client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -The `>` symbol indicates deeper levels of nesting. For example, `client > -basic_auth` refers to a `basic_auth` block defined -inside a `client` block. +The `>` symbol indicates deeper levels of nesting. +For example, `client > basic_auth` refers to a `basic_auth` block defined inside a `client` block. [client]: #client-block [basic_auth]: #basic_auth-block @@ -92,61 +83,61 @@ inside a `client` block. [oauth2]: #oauth2-block [tls_config]: #tls_config-block -### client block +### client -The `client` block configures the Kubernetes client used to tail logs from -containers. If the `client` block isn't provided, the default in-cluster -configuration with the service account of the running Grafana Agent pod is -used. +The `client` block configures the Kubernetes client used to tail logs from containers. +If the `client` block isn't provided, the default in-cluster configuration with the service account of the running Grafana Agent pod is used. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`api_server` | `string` | URL of the Kubernetes API server. 
| | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|----------|--------------------------------------------------------------------|---------|--------- +`api_server` | `string` | URL of the Kubernetes API server. | | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no At most one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument][client]. +- [`bearer_token` argument][client]. +- [`oauth2` block][oauth2]. 
+ +### client > authorization + +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### basic_auth block +### client > basic_auth -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### authorization block +### client > oauth2 -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### oauth2 block +### client > oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} -### tls_config block +### client > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields -`loki.source.kubernetes_events` does not export any fields. +`loki.source.kubernetes_events` doesn't export any fields. ## Component health -`loki.source.kubernetes_events` is only reported as unhealthy if given an invalid -configuration. +`loki.source.kubernetes_events` is only reported as unhealthy if given an invalid configuration. ## Debug information -`loki.source.kubernetes_events` exposes the most recently read timestamp for -events in each watched namespace. +`loki.source.kubernetes_events` exposes the most recently read timestamp for events in each watched namespace. ## Debug metrics @@ -154,8 +145,7 @@ events in each watched namespace. ## Example -This example collects watches events in the `kube-system` namespace and -forwards them to a `loki.write` component so they are written to Loki. 
+The following example watches for events in the `kube-system` namespace and forwards them to a `loki.write` component so they are written to Loki. ```river loki.source.kubernetes_events "example" { diff --git a/docs/sources/flow/reference/components/loki.source.podlogs.md b/docs/sources/flow/reference/components/loki.source.podlogs.md index 9fd5ad109dcd..e03ada4f1663 100644 --- a/docs/sources/flow/reference/components/loki.source.podlogs.md +++ b/docs/sources/flow/reference/components/loki.source.podlogs.md @@ -13,26 +13,22 @@ title: loki.source.podlogs # loki.source.podlogs -{{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}} -`loki.source.podlogs` discovers `PodLogs` resources on Kubernetes and, using -the Kubernetes API, tails logs from Kubernetes containers of Pods specified by -the discovered them. +`loki.source.podlogs` discovers `PodLogs` resources on Kubernetes and, using the Kubernetes API, tails logs from Kubernetes containers of Pods specified by the discovered resources. -`loki.source.podlogs` is similar to `loki.source.kubernetes`, but uses custom -resources rather than being fed targets from another Flow component. +`loki.source.podlogs` is similar to `loki.source.kubernetes`, but uses custom resources rather than being fed targets from another Flow component. -> **NOTE**: Unlike `loki.source.kubernetes`, it is not possible to distribute -> responsibility of collecting logs across multiple agents. To avoid collecting -> duplicate logs, only one agent should be running a `loki.source.podlogs` -> component. +{{% admonition type="note" %}} +Unlike `loki.source.kubernetes`, it's not possible to distribute responsibility of collecting logs across multiple agents. +To avoid collecting duplicate logs, only one agent should be running a `loki.source.podlogs` component.
+{{% /admonition %}} -> **NOTE**: Because `loki.source.podlogs` uses the Kubernetes API to tail logs, -> it uses more network traffic and CPU consumption of Kubelets than -> `loki.source.file`. +{{% admonition type="note" %}} +Because `loki.source.podlogs` uses the Kubernetes API to tail logs, it uses more network bandwidth and consumes more Kubelet CPU than `loki.source.file`. +{{% /admonition %}} -Multiple `loki.source.podlogs` components can be specified by giving them -different labels. +Multiple `loki.source.podlogs` components can be specified by giving them different labels. ## Usage @@ -44,32 +40,30 @@ loki.source.podlogs "LABEL" { ## Arguments -The component starts a new reader for each of the given `targets` and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.podlogs` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +Name | Type | Description | Default | Required +-------------|----------------------|-------------------------------------------|---------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`loki.source.podlogs` searches for `PodLogs` resources on Kubernetes. Each -`PodLogs` resource describes a set of pods to tail logs from. +`loki.source.podlogs` searches for `PodLogs` resources on Kubernetes. Each `PodLogs` resource describes a set of pods to tail logs from. ## PodLogs custom resource The `PodLogs` resource describes a set of Pods to collect logs from. -> **NOTE**: `loki.source.podlogs` looks for `PodLogs` of -> `monitoring.grafana.com/v1alpha2`, and is not compatible with `PodLogs` from -> the Grafana Agent Operator, which are version `v1alpha1`.
+{{% admonition type="note" %}} +`loki.source.podlogs` looks for `PodLogs` of `monitoring.grafana.com/v1alpha2`, and is not compatible with `PodLogs` from the Grafana Agent Operator, which are version `v1alpha1`. +{{% /admonition %}} -Field | Type | Description ------ | ---- | ----------- -`apiVersion` | string | `monitoring.grafana.com/v1alpha2` -`kind` | string | `PodLogs` -`metadata` | [ObjectMeta][] | Metadata for the PodLogs. -`spec` | [PodLogsSpec][] | Definition of what Pods to collect logs from. +Field | Type | Description +-------------|-----------------|---------------------------------------------- +`apiVersion` | string | `monitoring.grafana.com/v1alpha2` +`kind` | string | `PodLogs` +`metadata` | [ObjectMeta][] | Metadata for the PodLogs. +`spec` | [PodLogsSpec][] | Definition of what Pods to collect logs from. [ObjectMeta]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta [PodLogsSpec]: #podlogsspec @@ -78,50 +72,41 @@ Field | Type | Description `PodLogsSpec` describes a set of Pods to collect logs from. -Field | Type | Description ------ | ---- | ----------- -`selector` | [LabelSelector][] | Label selector of Pods to collect logs from. +Field | Type | Description +--------------------|-------------------|------------------------------------------------------------- `namespaceSelector` | [LabelSelector][] | Label selector of Namespaces that Pods can be discovered in. -`relabelings` | [RelabelConfig][] | Relabel rules to apply to discovered Pods. +`relabelings` | [RelabelConfig][] | Relabel rules to apply to discovered Pods. +`selector` | [LabelSelector][] | Label selector of Pods to collect logs from. -If `selector` is left as the default value, all Pods are discovered. If -`namespaceSelector` is left as the default value, all Namespaces are used for -Pod discovery. +If `selector` is left as the default value, all Pods are discovered. 
+If `namespaceSelector` is left as the default value, all Namespaces are used for Pod discovery. -The `relabelings` field can be used to modify labels from discovered Pods. The -following meta labels are available for relabeling: +The `relabelings` field can be used to modify labels from discovered Pods. +The following meta labels are available for relabeling: * `__meta_kubernetes_namespace`: The namespace of the Pod. -* `__meta_kubernetes_pod_name`: The name of the Pod. -* `__meta_kubernetes_pod_ip`: The pod IP of the Pod. -* `__meta_kubernetes_pod_label_`: Each label from the Pod. -* `__meta_kubernetes_pod_labelpresent_`: `true` for each label from - the Pod. -* `__meta_kubernetes_pod_annotation_`: Each annotation from the - Pod. -* `__meta_kubernetes_pod_annotationpresent_`: `true` for each - annotation from the Pod. -* `__meta_kubernetes_pod_container_init`: `true` if the container is an - `InitContainer`. -* `__meta_kubernetes_pod_container_name`: Name of the container. +* `__meta_kubernetes_pod_annotation_`: Each annotation from the Pod. +* `__meta_kubernetes_pod_annotationpresent_`: `true` for each annotation from the Pod. * `__meta_kubernetes_pod_container_image`: The image the container is using. -* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the Pod's ready - state. -* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or - `Unknown` in the lifecycle. -* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled - onto. -* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. -* `__meta_kubernetes_pod_uid`: The UID of the Pod. +* `__meta_kubernetes_pod_container_init`: `true` if the container is an `InitContainer`. +* `__meta_kubernetes_pod_container_name`: Name of the container. * `__meta_kubernetes_pod_controller_kind`: Object kind of the Pod's controller. * `__meta_kubernetes_pod_controller_name`: Name of the Pod's controller. 
+* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. +* `__meta_kubernetes_pod_ip`: The pod IP of the Pod. +* `__meta_kubernetes_pod_label_`: Each label from the Pod. +* `__meta_kubernetes_pod_labelpresent_`: `true` for each label from the Pod. +* `__meta_kubernetes_pod_name`: The name of the Pod. +* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto. +* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle. +* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the Pod's ready state. +* `__meta_kubernetes_pod_uid`: The UID of the Pod. -In addition to the meta labels, the following labels are exposed to tell -`loki.source.podlogs` which container to tail: +In addition to the meta labels, the following labels are exposed to tell `loki.source.podlogs` which container to tail: -* `__pod_namespace__`: The namespace of the Pod. -* `__pod_name__`: The name of the Pod. * `__pod_container_name__`: The container name within the Pod. +* `__pod_name__`: The name of the Pod. +* `__pod_namespace__`: The namespace of the Pod. * `__pod_uid__`: The UID of the Pod. [LabelSelector]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta @@ -129,26 +114,24 @@ In addition to the meta labels, the following labels are exposed to tell ## Blocks -The following blocks are supported inside the definition of -`loki.source.podlogs`: - -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to tail logs. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. 
| no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -selector | [selector][] | Label selector for which `PodLogs` to discover. | no -selector > match_expression | [match_expression][] | Label selector expression for which `PodLogs` to discover. | no -namespace_selector | [selector][] | Label selector for which namespaces to discover `PodLogs` in. | no +The following blocks are supported inside the definition of `loki.source.podlogs`: + +Hierarchy | Block | Description | Required +--------------------------------------|----------------------|--------------------------------------------------------------------------|--------- +client | [client][] | Configures Kubernetes client used to tail logs. | no +client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no +client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no +namespace_selector | [selector][] | Label selector for which namespaces to discover `PodLogs` in. | no namespace_selector > match_expression | [match_expression][] | Label selector expression for which namespaces to discover `PodLogs` in. | no -clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no +selector | [selector][] | Label selector for which `PodLogs` to discover. 
| no +selector > match_expression | [match_expression][] | Label selector expression for which `PodLogs` to discover. | no -The `>` symbol indicates deeper levels of nesting. For example, `client > -basic_auth` refers to a `basic_auth` block defined -inside a `client` block. +The `>` symbol indicates deeper levels of nesting. +For example, `client > basic_auth` refers to a `basic_auth` block defined inside a `client` block. [client]: #client-block [basic_auth]: #basic_auth-block @@ -159,72 +142,84 @@ inside a `client` block. [match_expression]: #match_expression-block [clustering]: #clustering-beta -### client block +### client -The `client` block configures the Kubernetes client used to tail logs from -containers. If the `client` block isn't provided, the default in-cluster -configuration with the service account of the running Grafana Agent pod is -used. +The `client` block configures the Kubernetes client used to tail logs from containers. +If the `client` block isn't provided, the default in-cluster configuration with the service account of the running Grafana Agent pod is used. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +Name | Type | Description | Default | Required +--------------------|----------|--------------------------------------------------------------------|---------|--------- +`api_server` | `string` | URL of the Kubernetes API server. 
| | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no - At most one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +At most one of the following can be provided: +- [`authorization` block][authorization]. +- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument][client]. +- [`bearer_token` argument][client]. +- [`oauth2` block][oauth2]. -### basic_auth block +### client > authorization -{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} -### authorization block +### client > basic_auth -{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}} -### oauth2 block +### client > oauth2 -{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}} -### tls_config block +### client > oauth2 > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} -### selector block +### client > tls_config -The `selector` block describes a Kubernetes label selector for `PodLogs` or -Namespace discovery. +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} + +### clustering (beta) + +Name | Type | Description | Default | Required +----------|--------|-----------------------------------------------------|---------|--------- +`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes + +When the agent is [using clustering][], and `enabled` is set to true, then this `loki.source.podlogs` component instance opts-in to participating in the cluster to distribute the load of log collection between all cluster nodes. + +If the agent is _not_ running in clustered mode, then the block is a no-op and `loki.source.podlogs` collects logs based on every PodLogs resource discovered. + +[using clustering]: {{< relref "../../concepts/clustering.md" >}} + +### selector + +The `selector` block describes a Kubernetes label selector for `PodLogs` or Namespace discovery. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no +Name | Type | Description | Default | Required +---------------|---------------|---------------------------------------------------|---------|--------- +`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no When the `match_labels` argument is empty, all resources will be matched. -### match_expression block +### selector > match_expression -The `match_expression` block describes a Kubernetes label match expression for -`PodLogs` or Namespace discovery. +The `match_expression` block describes a Kubernetes label match expression for `PodLogs` or Namespace discovery.
The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`key` | `string` | The label name to match against. | | yes -`operator` | `string` | The operator to use when matching. | | yes -`values`| `list(string)` | The values used when matching. | | no +Name | Type | Description | Default | Required +-----------|----------------|------------------------------------|---------|--------- +`key` | `string` | The label name to match against. | | yes +`operator` | `string` | The operator to use when matching. | | yes +`values` | `list(string)` | The values used when matching. | | no The `operator` argument must be one of the following strings: @@ -233,32 +228,15 @@ The `operator` argument must be one of the following strings: * `"Exists"` * `"DoesNotExist"` -Both `selector` and `namespace_selector` can make use of multiple -`match_expression` inner blocks which are treated as AND clauses. - -### clustering (beta) - -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes - -When the agent is [using clustering][], and `enabled` is set to true, then this -`loki.source.podlogs` component instance opts-in to participating in the -cluster to distribute the load of log collection between all cluster nodes. - -If the agent is _not_ running in clustered mode, then the block is a no-op and -`loki.source.podlogs` collects logs based on every PodLogs resource discovered. - -[using clustering]: {{< relref "../../concepts/clustering.md" >}} +Both `selector` and `namespace_selector` can make use of multiple `match_expression` inner blocks which are treated as AND clauses. ## Exported fields -`loki.source.podlogs` does not export any fields. +`loki.source.podlogs` doesn't export any fields. 
## Component health -`loki.source.podlogs` is only reported as unhealthy if given an invalid -configuration. +`loki.source.podlogs` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -266,18 +244,16 @@ configuration. * The labels associated with the target. * The full set of labels which were found during service discovery. -* The most recent time a log line was read and forwarded to the next components - in the pipeline. +* The most recent time a log line was read and forwarded to the next components in the pipeline. * The most recent error from tailing, if any. ## Debug metrics -`loki.source.podlogs` does not expose any component-specific debug metrics. +`loki.source.podlogs` doesn't expose any component-specific debug metrics. ## Example -This example discovers all `PodLogs` resources and forwards collected logs to a -`loki.write` component so they are written to Loki. +This example discovers all `PodLogs` resources and forwards collected logs to a `loki.write` component so they are written to Loki. ```river loki.source.podlogs "default" { diff --git a/docs/sources/flow/reference/components/loki.source.syslog.md b/docs/sources/flow/reference/components/loki.source.syslog.md index 3b91c152b8bc..3e6f17478717 100644 --- a/docs/sources/flow/reference/components/loki.source.syslog.md +++ b/docs/sources/flow/reference/components/loki.source.syslog.md @@ -11,15 +11,12 @@ title: loki.source.syslog # loki.source.syslog -`loki.source.syslog` listens for syslog messages over TCP or UDP connections -and forwards them to other `loki.*` components. The messages must be compliant -with the [RFC5424](https://www.rfc-editor.org/rfc/rfc5424) format. +`loki.source.syslog` listens for syslog messages over TCP or UDP connections and forwards them to other `loki.*` components. +The messages must be compliant with the [RFC5424](https://www.rfc-editor.org/rfc/rfc5424) format. 
-The component starts a new syslog listener for each of the given `config` -blocks and fans out incoming entries to the list of receivers in `forward_to`. +The component starts a new syslog listener for each of the given `config` blocks and fans out incoming entries to the list of receivers in `forward_to`. -Multiple `loki.source.syslog` components can be specified by giving them -different labels. +Multiple `loki.source.syslog` components can be specified by giving them different labels. ## Usage @@ -38,80 +35,67 @@ loki.source.syslog "LABEL" { `loki.source.syslog` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | ---------------------- | -------------------- | ------- | -------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no +Name | Type | Description | Default | Required +----------------|----------------------|-------------------------------------------|---------|--------- +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no -The `relabel_rules` field can make use of the `rules` export value from a -[loki.relabel][] component to apply one or more relabeling rules to log entries -before they're forwarded to the list of receivers in `forward_to`. +The `relabel_rules` field can make use of the `rules` export value from a [loki.relabel][] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. 
[loki.relabel]: {{< relref "./loki.relabel.md" >}}

## Blocks

-The following blocks are supported inside the definition of
-`loki.source.syslog`:
+The following blocks are supported inside the definition of `loki.source.syslog`:

-Hierarchy | Name | Description | Required
---------- | ---- | ----------- | --------
-listener | [listener][] | Configures a listener for IETF Syslog (RFC5424) messages. | no
+Hierarchy | Name | Description | Required
+----------------------|----------------|-----------------------------------------------------------------------------|---------
+listener | [listener][] | Configures a listener for IETF Syslog (RFC5424) messages. | no
listener > tls_config | [tls_config][] | Configures TLS settings for connecting to the endpoint for TCP connections. | no

-The `>` symbol indicates deeper levels of nesting. For example, `config > tls_config`
-refers to a `tls_config` block defined inside a `config` block.
+The `>` symbol indicates deeper levels of nesting. For example, `listener > tls_config` refers to a `tls_config` block defined inside a `listener` block.

[listener]: #listener-block
[tls_config]: #tls_config-block

-### listener block
+### listener

-The `listener` block defines the listen address and protocol where the listener
-expects syslog messages to be sent to, as well as its behavior when receiving
-messages.
+The `listener` block defines the listen address and protocol where the listener expects syslog messages to be sent, as well as its behavior when receiving messages.

-The following arguments can be used to configure a `listener`. Only the
-`address` field is required and any omitted fields take their default
-values.
+The following arguments can be used to configure a `listener`.
+Only the `address` field is required and any omitted fields take their default values.
-Name | Type | Description | Default | Required ------------------------- | ------------- | ----------- | ------- | -------- -`address` | `string` | The `` address to listen to for syslog messages. | | yes -`protocol` | `string` | The protocol to listen to for syslog messages. Must be either `tcp` or `udp`. | `tcp` | no -`idle_timeout` | `duration` | The idle timeout for tcp connections. | `"120s"` | no -`label_structured_data` | `bool` | Whether to translate syslog structured data to loki labels. | `false` | no -`labels` | `map(string)` | The labels to associate with each received syslog record. | `{}` | no -`use_incoming_timestamp` | `bool` | Whether to set the timestamp to the incoming syslog record timestamp. | `false` | no -`use_rfc5424_message` | `bool` | Whether to forward the full RFC5424-formatted syslog message. | `false` | no -`max_message_length` | `int` | The maximum limit to the length of syslog messages. | `8192` | no +Name | Type | Description | Default | Required +-------------------------|---------------|-------------------------------------------------------------------------------|----------|--------- +`address` | `string` | The `` address to listen to for syslog messages. | | yes +`idle_timeout` | `duration` | The idle timeout for TCP connections. | `"120s"` | no +`label_structured_data` | `bool` | Whether to translate syslog structured data to Loki labels. | `false` | no +`labels` | `map(string)` | The labels to associate with each received syslog record. | `{}` | no +`max_message_length` | `int` | The maximum limit to the length of syslog messages. | `8192` | no +`protocol` | `string` | The protocol to listen to for syslog messages. Must be either `tcp` or `udp`. | `tcp` | no +`use_incoming_timestamp` | `bool` | Whether to set the timestamp to the incoming syslog record timestamp. | `false` | no +`use_rfc5424_message` | `bool` | Whether to forward the full RFC5424-formatted syslog message. 
| `false` | no -By default, the component assigns the log entry timestamp as the time it -was processed. +By default, the component assigns the log entry timestamp as the time it was processed. The `labels` map is applied to every message that the component reads. -All header fields from the parsed RFC5424 messages are brought in as -internal labels, prefixed with `__syslog_`. +All header fields from the parsed RFC5424 messages are brought in as internal labels, prefixed with `__syslog_`. -If `label_structured_data` is set, structured data in the syslog header is also -translated to internal labels in the form of -`__syslog_message_sd__`. For example, a structured data entry of -`[example@99999 test="yes"]` becomes the label -`__syslog_message_sd_example_99999_test` with the value `"yes"`. +If `label_structured_data` is set, structured data in the syslog header is also translated to internal labels in the form of `__syslog_message_sd__`. +For example, a structured data entry of `[example@99999 test="yes"]` becomes the label `__syslog_message_sd_example_99999_test` with the value `"yes"`. -### tls_config block +### listener > tls_config -{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} ## Exported fields -`loki.source.syslog` does not export any fields. +`loki.source.syslog` doesn't export any fields. ## Component health -`loki.source.syslog` is only reported as unhealthy if given an invalid -configuration. +`loki.source.syslog` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -121,14 +105,14 @@ configuration. * The labels that the listener applies to incoming log entries. ## Debug metrics + * `loki_source_syslog_entries_total` (counter): Total number of successful entries sent to the syslog component. 
* `loki_source_syslog_parsing_errors_total` (counter): Total number of parsing errors while receiving syslog messages. * `loki_source_syslog_empty_messages_total` (counter): Total number of empty messages received from the syslog component. ## Example -This example listens for Syslog messages in valid RFC5424 format over TCP and -UDP in the specified ports and forwards them to a `loki.write` component. +The following example listens for Syslog messages in valid RFC5424 format over TCP and UDP in the specified ports and forwards them to a `loki.write` component. ```river loki.source.syslog "local" { @@ -152,4 +136,3 @@ loki.write "local" { } } ``` - diff --git a/docs/sources/flow/reference/components/loki.source.windowsevent.md b/docs/sources/flow/reference/components/loki.source.windowsevent.md index 4c8faf4059f2..abd1a1cf8c0b 100644 --- a/docs/sources/flow/reference/components/loki.source.windowsevent.md +++ b/docs/sources/flow/reference/components/loki.source.windowsevent.md @@ -11,11 +11,9 @@ title: loki.source.windowsevent # loki.source.windowsevent -`loki.source.windowsevent` reads events from Windows Event Logs and forwards them to other -`loki.*` components. +`loki.source.windowsevent` reads events from Windows Event Logs and forwards them to other `loki.*` components. -Multiple `loki.source.windowsevent` components can be specified by giving them -different labels. +Multiple `loki.source.windowsevent` components can be specified by giving them different labels. ## Usage @@ -27,41 +25,38 @@ loki.source.windowsevent "LABEL" { ``` ## Arguments -The component starts a new reader and fans out -log entries to the list of receivers passed in `forward_to`. +The component starts a new reader and fans out log entries to the list of receivers passed in `forward_to`. 
`loki.source.windowsevent` supports the following arguments: -Name | Type | Description | Default | Required ------------------------- |----------------------|--------------------------------------------------------------------------------|----------------------------| -------- -`locale` | `number` | Locale ID for event rendering. 0 default is Windows Locale. | `0` | no -`eventlog_name` | `string` | Event log to read from. | | See below. -`xpath_query` | `string` | Event log to read from. | `"*"` | See below. -`bookmark_path` | `string` | Keeps position in event log. | `"DATA_PATH/bookmark.xml"` | no -`poll_interval` | `duration` | How often to poll the event log. | `"3s"` | no -`exclude_event_data` | `bool` | Exclude event data. | `false` | no -`exclude_user_data` | `bool` | Exclude user data. | `false` | no -`exclude_event_message` | `bool` | Exclude the human-friendly event message. | `false` | no -`use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed. | `false` | no -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`labels` | `map(string)` | The labels to associate with incoming logs. | | no - - -> **NOTE**: `eventlog_name` is required if `xpath_query` does not specify the event log. -> You can define `xpath_query` in [short or xml form](https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events). -> When using the XML form you can specify `event_log` in the `xpath_query`. -> If using short form, you must define `eventlog_name`. - +Name | Type | Description | Default | Required +-------------------------|----------------------|-----------------------------------------------------------------------------|----------------------------|----------- +`locale` | `number` | Locale ID for event rendering. 0 default is Windows Locale. | `0` | no +`eventlog_name` | `string` | Event log to read from. | | See below. +`xpath_query` | `string` | Event log to read from. 
| `"*"` | See below. +`bookmark_path` | `string` | Keeps position in event log. | `"DATA_PATH/bookmark.xml"` | no +`poll_interval` | `duration` | How often to poll the event log. | `"3s"` | no +`exclude_event_data` | `bool` | Exclude event data. | `false` | no +`exclude_user_data` | `bool` | Exclude user data. | `false` | no +`exclude_event_message` | `bool` | Exclude the human-friendly event message. | `false` | no +`use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed. | `false` | no +`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +`labels` | `map(string)` | The labels to associate with incoming logs. | | no + +{{% admonition type="note" %}} +`eventlog_name` is required if `xpath_query` does not specify the event log. +You can define `xpath_query` in [short or xml form](https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events). +When using the XML form you can specify `event_log` in the `xpath_query`. +If using short form, you must define `eventlog_name`. +{{% /admonition %}} ## Component health -`loki.source.windowsevent` is only reported as unhealthy if given an invalid -configuration. +`loki.source.windowsevent` is only reported as unhealthy if given an invalid configuration. ## Example -This example collects log entries from the Event Log specified in `eventlog_name` and -forwards them to a `loki.write` component so they are written to Loki. +This example collects log entries from the Event Log specified in `eventlog_name` and forwards them to a `loki.write` component so they are written to Loki. 
```river loki.source.windowsevent "application" { diff --git a/docs/sources/flow/reference/components/loki.write.md b/docs/sources/flow/reference/components/loki.write.md index 4dd21097b720..6eac40f871b5 100644 --- a/docs/sources/flow/reference/components/loki.write.md +++ b/docs/sources/flow/reference/components/loki.write.md @@ -11,11 +11,9 @@ title: loki.write # loki.write -`loki.write` receives log entries from other loki components and sends them -over the network using Loki's `logproto` format. +`loki.write` receives log entries from other loki components and sends them over the network using Loki's `logproto` format. -Multiple `loki.write` components can be specified by giving them -different labels. +Multiple `loki.write` components can be specified by giving them different labels. ## Usage @@ -31,30 +29,28 @@ loki.write "LABEL" { `loki.write` supports the following arguments: -Name | Type | Description | Default | Required ------------------ | ------------- | ------------------------------------------------ | ------- | -------- -`max_streams` | `int` | Maximum number of active streams. | 0 (no limit) | no -`external_labels` | `map(string)` | Labels to add to logs sent over the network. | | no +Name | Type | Description | Default | Required +------------------|---------------|----------------------------------------------|--------------|--------- +`external_labels` | `map(string)` | Labels to add to logs sent over the network. | | no +`max_streams` | `int` | Maximum number of active streams. | 0 (no limit) | no ## Blocks -The following blocks are supported inside the definition of -`loki.write`: +The following blocks are supported inside the definition of `loki.write`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -endpoint | [endpoint][] | Location to send logs to. | no -wal | [wal][] | Write-ahead log configuration. 
| no -endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -| endpoint > queue_config | [queue_config][] | When WAL is enabled, configures the queue client. | no | +Hierarchy | Block | Description | Required +-------------------------------|-------------------|----------------------------------------------------------|--------- +endpoint | [endpoint][] | Location to send logs to. | no +endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no +endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no +endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +endpoint > queue_config | [queue_config][] | When WAL is enabled, configures the queue client. | no +endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +wal | [wal][] | Write-ahead log configuration. | no -The `>` symbol indicates deeper levels of nesting. For example, `endpoint > -basic_auth` refers to a `basic_auth` block defined inside an -`endpoint` block. +The `>` symbol indicates deeper levels of nesting. +For example, `endpoint > basic_auth` refers to a `basic_auth` block defined inside an `endpoint` block. 
[endpoint]: #endpoint-block [wal]: #wal-block @@ -64,108 +60,103 @@ basic_auth` refers to a `basic_auth` block defined inside an [tls_config]: #tls_config-block [queue_config]: #queue_config-block -### endpoint block +### endpoint -The `endpoint` block describes a single location to send logs to. Multiple -`endpoint` blocks can be provided to send logs to multiple locations. +The `endpoint` block describes a single location to send logs to. +Multiple `endpoint` blocks can be provided to send logs to multiple locations. The following arguments are supported: -Name | Type | Description | Default | Required ---------------------- | ------------- | ------------------------------------- | -------------- | -------- -`url` | `string` | Full URL to send logs to. | | yes -`name` | `string` | Optional name to identify this endpoint with. | | no -`headers` | `map(string)` | Extra headers to deliver with the request. | | no -`batch_wait` | `duration` | Maximum amount of time to wait before sending a batch. | `"1s"` | no -`batch_size` | `string` | Maximum batch size of logs to accumulate before sending. | `"1MiB"` | no -`remote_timeout` | `duration` | Timeout for requests made to the URL. | `"10s"` | no -`tenant_id` | `string` | The tenant ID used by default to push logs. | | no -`min_backoff_period` | `duration` | Initial backoff time between retries. | `"500ms"` | no -`max_backoff_period` | `duration` | Maximum backoff time between retries. | `"5m"` | no -`max_backoff_retries` | `int` | Maximum number of retries. | 10 | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. 
| `true` | no -`retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no +Name | Type | Description | Default | Required +----------------------|---------------|--------------------------------------------------------------|-----------|--------- +`url` | `string` | Full URL to send logs to. | | yes +`batch_size` | `string` | Maximum batch size of logs to accumulate before sending. | `"1MiB"` | no +`batch_wait` | `duration` | Maximum amount of time to wait before sending a batch. | `"1s"` | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`headers` | `map(string)` | Extra headers to deliver with the request. | | no +`max_backoff_period` | `duration` | Maximum backoff time between retries. | `"5m"` | no +`max_backoff_retries` | `int` | Maximum number of retries. | 10 | no +`min_backoff_period` | `duration` | Initial backoff time between retries. | `"500ms"` | no +`name` | `string` | Optional name to identify this endpoint with. | | no +`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no +`remote_timeout` | `duration` | Timeout for requests made to the URL. | `"10s"` | no +`retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no +`tenant_id` | `string` | The tenant ID used by default to push logs. | | no - At most one of the following can be provided: - - [`bearer_token` argument](#endpoint-block). - - [`bearer_token_file` argument](#endpoint-block). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +At most one of the following can be provided: +- [`authorization` block][authorization]. 
+- [`basic_auth` block][basic_auth]. +- [`bearer_token_file` argument](#endpoint-block). +- [`bearer_token` argument](#endpoint-block). +- [`oauth2` block][oauth2]. -If no `tenant_id` is provided, the component assumes that the Loki instance at -`endpoint` is running in single-tenant mode and no X-Scope-OrgID header is -sent. +If no `tenant_id` is provided, the component assumes that the Loki instance at `endpoint` is running in single-tenant mode and no X-Scope-OrgID header is sent. -When multiple `endpoint` blocks are provided, the `loki.write` component -creates a client for each. Received log entries are fanned-out to these clients -in succession. That means that if one client is bottlenecked, it may impact -the rest. +When multiple `endpoint` blocks are provided, the `loki.write` component creates a client for each. Received log entries are fanned-out to these clients in succession. +That means that if one client is bottlenecked, it may impact the rest. -Endpoints can be named for easier identification in debug metrics by using the -`name` argument. If the `name` argument isn't provided, a name is generated -based on a hash of the endpoint settings. +Endpoints can be named for easier identification in debug metrics by using the `name` argument. +If the `name` argument isn't provided, a name is generated based on a hash of the endpoint settings. -The `retry_on_http_429` argument specifies whether `HTTP 429` status code -responses should be treated as recoverable errors; other `HTTP 4xx` status code -responses are never considered recoverable errors. When `retry_on_http_429` is -enabled, the retry mechanism will be governed by the backoff configuration specified through `min_backoff_period`, `max_backoff_period ` and `max_backoff_retries` attributes. +The `retry_on_http_429` argument specifies whether `HTTP 429` status code responses should be treated as recoverable errors; other `HTTP 4xx` status code responses are never considered recoverable errors. 
+When `retry_on_http_429` is enabled, the retry mechanism is governed by the backoff configuration specified through the `min_backoff_period`, `max_backoff_period`, and `max_backoff_retries` attributes.

-### basic_auth block
+### endpoint > authorization

-{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}

-### authorization block
+### endpoint > basic_auth

-{{< docs/shared lookup="flow/reference/components/authorization-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/basic-auth-block.md" source="agent" version="" >}}

-### oauth2 block
+### endpoint > oauth2

-{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/oauth2-block.md" source="agent" version="" >}}

-### tls_config block
+### endpoint > oauth2 > tls_config

-{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}}
+{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}}

-### queue_config block (experimental)
+### endpoint > queue_config (experimental)

-The optional `queue_config` block configures, when WAL is enabled (see [Write-Ahead block](#wal-block-experimental)), how the
-underlying client queues batches of logs to be sent to Loki.
+The optional `queue_config` block configures how the underlying client queues batches of logs to be sent to Loki when WAL is enabled (see [Write-Ahead block](#wal-experimental)).
The following arguments are supported: -| Name | Type | Description | Default | Required | -| --------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | -| `capacity` | `string` | Controls the size of the underlying send queue buffer. This setting should be considered a worst-case scenario of memory consumption, in which all enqueued batches are full. | `10MiB` | no | +| Name | Type | Description | Default | Required | +|-----------------|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------| +| `capacity` | `string` | Controls the size of the underlying send queue buffer. This setting should be considered a worst-case scenario of memory consumption, in which all enqueued batches are full. | `10MiB` | no | | `drain_timeout` | `duration` | Configures the maximum time the client can take to drain the send queue upon shutdown. During that time, it will enqueue pending batches and drain the send queue sending each. | `"1m"` | no | -### wal block (experimental) +### endpoint > tls_config + +{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} + +### wal (experimental) + +The optional `wal` block configures the Write-Ahead Log (WAL) used in the Loki remote-write client. +To enable the WAL, you must include the `wal` block in your configuration. +When the WAL is enabled, the log entries sent to the `loki.write` component are first written to a WAL under the `dir` directory and then read into the remote-write client. +This process provides durability guarantees when an entry reaches this component. 
+The client knows when to read from the WAL using the following two mechanisms:

-The optional `wal` block configures the Write-Ahead Log (WAL) used in the Loki remote-write client. To enable the WAL,
-you must include the `wal` block in your configuration. When the WAL is enabled, the log entries sent to the `loki.write`
-component are first written to a WAL under the `dir` directory and then read into the remote-write client. This process
-provides durability guarantees when an entry reaches this component. The client knows when to read from the WAL using the
-following two mechanisms:
- The WAL-writer side of the `loki.write` component notifies the reader side that new data is available.
-- The WAL-reader side periodically checks if there is new data, increasing the wait time exponentially between
-`min_read_frequency` and `max_read_frequency`.
+- The WAL-reader side periodically checks if there is new data, increasing the wait time exponentially between `min_read_frequency` and `max_read_frequency`.

-The WAL is located inside a component-specific directory relative to the
-storage path Grafana Agent is configured to use. See the
-[`agent run` documentation][run] for how to change the storage path.
+The WAL is located inside a component-specific directory relative to the storage path Grafana Agent is configured to use.
+See the [`agent run` documentation][run] for how to change the storage path.

The following arguments are supported:

-Name | Type | Description | Default | Required
---------------------- |------------|--------------------------------------------------------------------------------------------------------------------|-----------| --------
-`enabled` | `bool` | Whether to enable the WAL. | false | no
-`max_segment_age` | `duration` | Maximum time a WAL segment should be allowed to live. Segments older than this setting will be eventually deleted. | `"1h"` | no
-`min_read_frequency` | `duration` | Minimum backoff time in the backup read mechanism. | `"250ms"` | no
-`max_read_frequency` | `duration` | Maximum backoff time in the backup read mechanism. | `"1s"` | no
+Name | Type | Description | Default | Required
+---------------------|------------|--------------------------------------------------------------------------------------------------------------------|-----------|---------
+`enabled` | `bool` | Whether to enable the WAL. | `false` | no
+`max_read_frequency` | `duration` | Maximum backoff time in the backup read mechanism. | `"1s"` | no
+`max_segment_age` | `duration` | Maximum time a WAL segment should be allowed to live. Segments older than this setting will be eventually deleted. | `"1h"` | no
+`min_read_frequency` | `duration` | Minimum backoff time in the backup read mechanism. | `"250ms"` | no

[run]: {{< relref "../cli/run.md" >}}

@@ -173,28 +164,26 @@ Name | Type | Description

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
+Name | Type | Description
+-----------|----------------|--------------------------------------------------------------
`receiver` | `LogsReceiver` | A value that other components can use to send log entries to.

## Component health

-`loki.write` is only reported as unhealthy if given an invalid
-configuration.
+`loki.write` is only reported as unhealthy if given an invalid configuration.

## Debug information

-`loki.write` does not expose any component-specific debug
-information.
+`loki.write` does not expose any component-specific debug information.

## Debug metrics

-* `loki_write_encoded_bytes_total` (counter): Number of bytes encoded and ready to send.
-* `loki_write_sent_bytes_total` (counter): Number of bytes sent.
+* `loki_write_batch_retries_total` (counter): Number of times batches have had to be retried.
* `loki_write_dropped_bytes_total` (counter): Number of bytes dropped because failed to be sent to the ingester after all retries.
-* `loki_write_sent_entries_total` (counter): Number of log entries sent to the ingester.
* `loki_write_dropped_entries_total` (counter): Number of log entries dropped because they failed to be sent to the ingester after all retries.
+* `loki_write_encoded_bytes_total` (counter): Number of bytes encoded and ready to send.
* `loki_write_request_duration_seconds` (histogram): Duration of sent requests.
-* `loki_write_batch_retries_total` (counter): Number of times batches have had to be retried.
+* `loki_write_sent_bytes_total` (counter): Number of bytes sent.
+* `loki_write_sent_entries_total` (counter): Number of log entries sent to the ingester.
* `loki_write_stream_lag_seconds` (gauge): Difference between current time and last batch timestamp for successful sends.

## Examples
diff --git a/docs/sources/shared/flow/reference/components/authorization-block.md b/docs/sources/shared/flow/reference/components/authorization-block.md
index 190cd11f8bb9..79ffc301039d 100644
--- a/docs/sources/shared/flow/reference/components/authorization-block.md
+++ b/docs/sources/shared/flow/reference/components/authorization-block.md
@@ -10,11 +10,10 @@ description: Shared content, authorization block
headless: true
---

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`type` | `string` | Authorization type, for example, "Bearer". | | no
-`credentials` | `secret` | Secret value. | | no
-`credentials_file` | `string` | File containing the secret value. | | no
+Name | Type | Description | Default | Required
+-------------------|----------|--------------------------------------------|---------|---------
+`credentials_file` | `string` | File containing the secret value. | | no
+`credentials` | `secret` | Secret value. | | no
+`type` | `string` | Authorization type, for example, "Bearer". | | no

-`credential` and `credentials_file` are mutually exclusive and only one can be
-provided inside of an `authorization` block.
+`credentials` and `credentials_file` are mutually exclusive and only one can be provided inside of an `authorization` block.
diff --git a/docs/sources/shared/flow/reference/components/azuread-block.md b/docs/sources/shared/flow/reference/components/azuread-block.md
index ebdf436d02fe..2ffb379fff08 100644
--- a/docs/sources/shared/flow/reference/components/azuread-block.md
+++ b/docs/sources/shared/flow/reference/components/azuread-block.md
@@ -10,8 +10,8 @@ description: Shared content, azuread block
headless: true
---

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
+Name | Type | Description | Default | Required
+--------|----------|------------------|-----------------|---------
`cloud` | `string` | The Azure Cloud. | `"AzurePublic"` | no

The supported values for `cloud` are:
diff --git a/docs/sources/shared/flow/reference/components/basic-auth-block.md b/docs/sources/shared/flow/reference/components/basic-auth-block.md
index 06c81f660e3c..19b956c7ca9a 100644
--- a/docs/sources/shared/flow/reference/components/basic-auth-block.md
+++ b/docs/sources/shared/flow/reference/components/basic-auth-block.md
@@ -10,11 +10,10 @@ description: Shared content, basic auth block
headless: true
---

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`username` | `string` | Basic auth username. | | no
-`password` | `secret` | Basic auth password. | | no
-`password_file` | `string` | File containing the basic auth password. | | no
+Name | Type | Description | Default | Required
+----------------|----------|------------------------------------------|---------|---------
+`password_file` | `string` | File containing the basic auth password. | | no
+`password` | `secret` | Basic auth password. | | no
+`username` | `string` | Basic auth username. | | no

-`password` and `password_file` are mutually exclusive and only one can be
-provided inside of a `basic_auth` block.
+`password` and `password_file` are mutually exclusive and only one can be provided inside of a `basic_auth` block.
diff --git a/docs/sources/shared/flow/reference/components/exporter-component-exports.md b/docs/sources/shared/flow/reference/components/exporter-component-exports.md
index beb717a13fae..4bf4ae82b44d 100644
--- a/docs/sources/shared/flow/reference/components/exporter-component-exports.md
+++ b/docs/sources/shared/flow/reference/components/exporter-component-exports.md
@@ -13,15 +13,12 @@ headless: true
The following fields are exported and can be referenced by other components.

Name | Type | Description
---------- | ------------------- | -----------
+----------|---------------------|----------------------------------------------------------
`targets` | `list(map(string))` | The targets that can be used to collect exporter metrics.

-For example, the `targets` can either be passed to a `discovery.relabel`
-component to rewrite the targets' label sets, or to a `prometheus.scrape`
-component that collects the exposed metrics.
+For example, the `targets` can either be passed to a `discovery.relabel` component to rewrite the targets' label sets, or to a `prometheus.scrape` component that collects the exposed metrics.

-The exported targets will use the configured [in-memory traffic][] address
-specified by the [run command][].
+The exported targets will use the configured [in-memory traffic][] address specified by the [run command][].
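+For instance (the exporter component and labels here are illustrative), an exporter's `targets` can be consumed by a scrape component:
+
+```river
+// Illustrative exporter; any prometheus.exporter.* component exports targets.
+prometheus.exporter.unix "example" { }
+
+prometheus.scrape "example" {
+  targets    = prometheus.exporter.unix.example.targets
+  forward_to = [prometheus.remote_write.default.receiver]
+}
+```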
[in-memory traffic]: {{< relref "../../../../flow/concepts/component_controller.md#in-memory-traffic" >}}
[run command]: {{< relref "../../../../flow/reference/cli/run.md" >}}
diff --git a/docs/sources/shared/flow/reference/components/extract-field-block.md b/docs/sources/shared/flow/reference/components/extract-field-block.md
index 5036097d155f..2f30a1b137dc 100644
--- a/docs/sources/shared/flow/reference/components/extract-field-block.md
+++ b/docs/sources/shared/flow/reference/components/extract-field-block.md
@@ -12,31 +12,28 @@ headless: true

The following attributes are supported:

-Name | Type | Description | Default | Required
----- |----------------|----------------------------------------------------------------------------------------------------------|---------| --------
-`tag_name` | `string` | The name of the resource attribute that will be added to logs, metrics, or spans. | `""` | no
-`key` | `string` | The annotation (or label) name. This must exactly match an annotation (or label) name. | `""` | no
-`key_regex` | `string` | A regular expression used to extract a key that matches the regex. | `""` | no
-`regex` | `string` | An optional field used to extract a sub-string from a complex field value. | `""` | no
-`from` | `string` | The source of the labels or annotations. Allowed values are `pod` and `namespace`. | `pod` | no
-
-When `tag_name` is not specified, a default tag name will be used with the format:
+Name | Type | Description | Default | Required
+------------|----------|------------------------------------------------------------------------------------|---------|---------
+`from` | `string` | The source of the labels or annotations. Allowed values are `pod` and `namespace`. | `pod` | no
+`key_regex` | `string` | A regular expression used to extract a key that matches the regular expression. | `""` | no
+`key` | `string` | The annotation or label name. This must exactly match an annotation or label name. | `""` | no
+`regex` | `string` | An optional field used to extract a sub-string from a complex field value. | `""` | no
+`tag_name` | `string` | The name of the resource attribute that will be added to logs, metrics, or spans. | `""` | no
+
+When `tag_name` isn't specified, a default tag name is used with the format:

* `k8s.pod.annotations.`
* `k8s.pod.labels.