diff --git a/docs/sources/_index.md b/docs/sources/_index.md
index 79f54ff17f32..a7cd98175188 100644
--- a/docs/sources/_index.md
+++ b/docs/sources/_index.md
@@ -1,15 +1,15 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/
-- /docs/grafana-cloud/monitor-infrastructure/agent/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/
-- /docs/grafana-cloud/send-data/agent/
+  - /docs/grafana-cloud/agent/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/
+  - /docs/grafana-cloud/send-data/agent/
 canonical: https://grafana.com/docs/agent/latest/
 title: Grafana Agent
 description: Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector
 weight: 350
 cascade:
-  AGENT_RELEASE: v0.43.4
+  AGENT_RELEASE: v0.41.1
   OTEL_VERSION: v0.96.0
 refs:
   variants:
@@ -57,9 +57,9 @@ For information on other variants of Grafana Agent, refer to [Introduction to Gr
 
 Grafana Agent can collect, transform, and send data to:
 
-* The [Prometheus][] ecosystem
-* The [OpenTelemetry][] ecosystem
-* The Grafana open source ecosystem ([Loki][], [Grafana][], [Tempo][], [Mimir][], [Pyroscope][])
+- The [Prometheus][] ecosystem
+- The [OpenTelemetry][] ecosystem
+- The Grafana open source ecosystem ([Loki][], [Grafana][], [Tempo][], [Mimir][], [Pyroscope][])
 
 [Terraform]: https://terraform.io
 [Prometheus]: https://prometheus.io
@@ -72,48 +72,48 @@ Grafana Agent can collect, transform, and send data to:
 
 ## Why use Grafana Agent?
 
-* **Vendor-neutral**: Fully compatible with the Prometheus, OpenTelemetry, and
+- **Vendor-neutral**: Fully compatible with the Prometheus, OpenTelemetry, and
   Grafana open source ecosystems.
-* **Every signal**: Collect telemetry data for metrics, logs, traces, and
+- **Every signal**: Collect telemetry data for metrics, logs, traces, and
   continuous profiles.
-* **Scalable**: Deploy on any number of machines to collect millions of active
+- **Scalable**: Deploy on any number of machines to collect millions of active
   series and terabytes of logs.
-* **Battle-tested**: Grafana Agent extends the existing battle-tested code from
+- **Battle-tested**: Grafana Agent extends the existing battle-tested code from
   the Prometheus and OpenTelemetry Collector projects.
-* **Powerful**: Write programmable pipelines with ease, and debug them using a
+- **Powerful**: Write programmable pipelines with ease, and debug them using a
   [built-in UI](ref:ui).
-* **Batteries included**: Integrate with systems like MySQL, Kubernetes, and
+- **Batteries included**: Integrate with systems like MySQL, Kubernetes, and
   Apache to get telemetry that's immediately useful.
 
 ## Getting started
 
-* Choose a [variant](ref:variants) of Grafana Agent to run.
-* Refer to the documentation for the variant to use:
-  * [Static mode](ref:static-mode)
-  * [Static mode Kubernetes operator](ref:static-mode-kubernetes-operator)
-  * [Flow mode](ref:flow-mode)
+- Choose a [variant](ref:variants) of Grafana Agent to run.
+- Refer to the documentation for the variant to use:
+  - [Static mode](ref:static-mode)
+  - [Static mode Kubernetes operator](ref:static-mode-kubernetes-operator)
+  - [Flow mode](ref:flow-mode)
 
 ## Supported platforms
 
-* Linux
+- Linux
 
-  * Minimum version: kernel 2.6.32 or later
-  * Architectures: AMD64, ARM64
+  - Minimum version: kernel 2.6.32 or later
+  - Architectures: AMD64, ARM64
 
-* Windows
+- Windows
 
-  * Minimum version: Windows Server 2016 or later, or Windows 10 or later.
-  * Architectures: AMD64
+  - Minimum version: Windows Server 2016 or later, or Windows 10 or later.
+  - Architectures: AMD64
 
-* macOS
+- macOS
 
-  * Minimum version: macOS 10.13 or later
-  * Architectures: AMD64 (Intel), ARM64 (Apple Silicon)
+  - Minimum version: macOS 10.13 or later
+  - Architectures: AMD64 (Intel), ARM64 (Apple Silicon)
 
-* FreeBSD
+- FreeBSD
 
-  * Minimum version: FreeBSD 10 or later
-  * Architectures: AMD64
+  - Minimum version: FreeBSD 10 or later
+  - Architectures: AMD64
 
 ## Release cadence
 
@@ -131,4 +131,3 @@ published outside of the release cadence may not include these dependency
 updates.
 
 Patch and security releases may be created at any time.
-
diff --git a/docs/sources/about.md b/docs/sources/about.md
index 51d7a1bea499..edc2288f5ae0 100644
--- a/docs/sources/about.md
+++ b/docs/sources/about.md
@@ -1,10 +1,10 @@
 ---
 aliases:
-- ./about-agent/
-- /docs/grafana-cloud/agent/about/
-- /docs/grafana-cloud/monitor-infrastructure/agent/about/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/about/
-- /docs/grafana-cloud/send-data/agent/about/
+  - ./about-agent/
+  - /docs/grafana-cloud/agent/about/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/about/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/about/
+  - /docs/grafana-cloud/send-data/agent/about/
 canonical: https://grafana.com/docs/agent/latest/about/
 description: Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector
 menuTitle: Introduction
@@ -78,7 +78,6 @@ Grafana Agent is available in three different variants:
 - [Static mode Kubernetes operator](ref:static-mode-kubernetes-operator): The Kubernetes operator for Static mode.
 - [Flow mode](ref:flow-mode): The new, component-based Grafana Agent.
 
-
 [Pyroscope]: https://grafana.com/docs/pyroscope/latest/configure-client/grafana-agent/go_pull
 [helm chart]: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-k8s-helmchart
 [sla]: https://grafana.com/legal/grafana-cloud-sla
@@ -86,11 +85,11 @@ Grafana Agent is available in three different variants:
 
 ## Stability
 
-| Project | Stability |
-| ------- | --------- |
-| Static mode | Stable |
-| Static mode Kubernetes operator | Beta |
-| Flow mode | Stable |
+| Project                         | Stability |
+| ------------------------------- | --------- |
+| Static mode                     | Stable    |
+| Static mode Kubernetes operator | Beta      |
+| Flow mode                       | Stable    |
 
 ## Choose which variant of Grafana Agent to run
 
@@ -103,26 +102,26 @@ Each variant of Grafana Agent provides a different level of functionality. The f
 
 #### Core telemetry
 
-|              | Grafana Agent Flow mode  | Grafana Agent Static mode | Grafana Agent Operator | OpenTelemetry Collector | Prometheus Agent mode |
-|--------------|--------------------------|---------------------------|------------------------|-------------------------|-----------------------|
+|              | Grafana Agent Flow mode                        | Grafana Agent Static mode | Grafana Agent Operator | OpenTelemetry Collector | Prometheus Agent mode |
+| ------------ | ---------------------------------------------- | ------------------------- | ---------------------- | ----------------------- | --------------------- |
 | **Metrics**  | [Prometheus](ref:prometheus), [OTel](ref:otel) | Prometheus                | Prometheus             | OTel                    | Prometheus            |
-| **Logs**     | [Loki](ref:loki), [OTel](ref:otel) | Loki | Loki | OTel | No |
-| **Traces**   | [OTel](ref:otel) | OTel | OTel | OTel | No |
-| **Profiles** | [Pyroscope][] | No | No | Planned | No |
+| **Logs**     | [Loki](ref:loki), [OTel](ref:otel)             | Loki                      | Loki                   | OTel                    | No                    |
+| **Traces**   | [OTel](ref:otel)                               | OTel                      | OTel                   | OTel                    | No                    |
+| **Profiles** | [Pyroscope][]                                  | No                        | No                     | Planned                 | No                    |
 
 #### **OSS features**
 
 |                          | Grafana Agent Flow mode | Grafana Agent Static mode | Grafana Agent Operator | OpenTelemetry Collector | Prometheus Agent mode |
-|--------------------------|-------------------------|---------------------------|------------------------|-------------------------|-----------------------|
+| ------------------------ | ----------------------- | ------------------------- | ---------------------- | ----------------------- | --------------------- |
 | **Kubernetes native**    | [Yes][helm chart]       | No                        | Yes                    | Yes                     | No                    |
-| **Clustering** | [Yes](ref:clustering) | No | No | No | No |
-| **Prometheus rules** | [Yes](ref:rules) | No | No | No | No |
-| **Native Vault support** | [Yes](ref:vault) | No | No | No | No |
+| **Clustering**           | [Yes](ref:clustering)   | No                        | No                     | No                      | No                    |
+| **Prometheus rules**     | [Yes](ref:rules)        | No                        | No                     | No                      | No                    |
+| **Native Vault support** | [Yes](ref:vault)        | No                        | No                     | No                      | No                    |
 
 #### Grafana Cloud solutions
 
 |                               | Grafana Agent Flow mode | Grafana Agent Static mode | Grafana Agent Operator | OpenTelemetry Collector | Prometheus Agent mode |
-|-------------------------------|-------------------------|---------------------------|------------------------|-------------------------|-----------------------|
+| ----------------------------- | ----------------------- | ------------------------- | ---------------------- | ----------------------- | --------------------- |
 | **Official vendor support**   | [Yes][sla]              | Yes                       | Yes                    | No                      | No                    |
 | **Cloud integrations**        | Some                    | Yes                       | Some                   | No                      | No                    |
 | **Kubernetes monitoring**     | [Yes][helm chart]       | Yes, custom               | Yes                    | No                      | Yes, custom           |
@@ -135,9 +134,9 @@ Static mode is the most mature variant of Grafana Agent.
 
 You should run Static mode when:
 
-* **Maturity**: You need to use the most mature version of Grafana Agent.
+- **Maturity**: You need to use the most mature version of Grafana Agent.
 
-* **Grafana Cloud integrations**: You need to use Grafana Agent with Grafana Cloud integrations.
+- **Grafana Cloud integrations**: You need to use Grafana Agent with Grafana Cloud integrations.
 ### Static mode Kubernetes operator
 
@@ -153,7 +152,7 @@ allowing static mode to support resources from Prometheus Operator, such as Serv
 
 You should run the Static mode Kubernetes operator when:
 
-* **Prometheus Operator compatibility**: You need to be able to consume
+- **Prometheus Operator compatibility**: You need to be able to consume
   ServiceMonitors, PodMonitors, and Probes from the Prometheus Operator project
   for collecting Prometheus metrics.
 
@@ -166,20 +165,19 @@ improved debugging, and ability to adapt to the needs of power users by adopting
 
 You should run Flow mode when:
 
-* You need functionality unique to Flow mode:
+- You need functionality unique to Flow mode:
 
-  * **Improved debugging**: You need to more easily debug configuration issues using a UI.
+  - **Improved debugging**: You need to more easily debug configuration issues using a UI.
 
-  * **Full OpenTelemetry support**: Support for collecting OpenTelemetry metrics, logs, and traces.
+  - **Full OpenTelemetry support**: Support for collecting OpenTelemetry metrics, logs, and traces.
 
-  * **PrometheusRule support**: Support for the PrometheusRule resource from the Prometheus Operator project for configuring Grafana Mimir.
+  - **PrometheusRule support**: Support for the PrometheusRule resource from the Prometheus Operator project for configuring Grafana Mimir.
 
-  * **Ecosystem transformation**: You need to be able to convert Prometheus and Loki pipelines to and from OpenTelmetry Collector pipelines.
+  - **Ecosystem transformation**: You need to be able to convert Prometheus and Loki pipelines to and from OpenTelemetry Collector pipelines.
 
-  * **Grafana Pyroscope support**: Support for collecting profiles for Grafana Pyroscope.
+  - **Grafana Pyroscope support**: Support for collecting profiles for Grafana Pyroscope.
 
 ### BoringCrypto
 
 [BoringCrypto](https://pkg.go.dev/crypto/internal/boring) is an **EXPERIMENTAL** feature for building Grafana Agent
 binaries and images with BoringCrypto enabled. Builds and Docker images for Linux arm64/amd64 are made available.
-
diff --git a/docs/sources/data-collection.md b/docs/sources/data-collection.md
index 910d92653be2..887133020f88 100644
--- a/docs/sources/data-collection.md
+++ b/docs/sources/data-collection.md
@@ -1,10 +1,10 @@
 ---
 aliases:
-- ./data-collection/
-- /docs/grafana-cloud/agent/data-collection/
-- /docs/grafana-cloud/monitor-infrastructure/agent/data-collection/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/data-collection/
-- /docs/grafana-cloud/send-data/agent/data-collection/
+  - ./data-collection/
+  - /docs/grafana-cloud/agent/data-collection/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/data-collection/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/data-collection/
+  - /docs/grafana-cloud/send-data/agent/data-collection/
 canonical: https://grafana.com/docs/agent/latest/data-collection/
 description: Grafana Agent data collection
 menuTitle: Data collection
@@ -40,20 +40,19 @@ Statistics help us better understand how Grafana Agent is used. This helps us pr
 
 The usage information includes the following details:
 
-* A randomly generated, anonymous unique ID (UUID).
-* Timestamp of when the UID was first generated.
-* Timestamp of when the report was created (by default, every four hours).
-* Version of running Grafana Agent.
-* Operating system Grafana Agent is running on.
-* System architecture Grafana Agent is running on.
-* List of enabled feature flags ([Static](ref:static) mode only).
-* List of enabled integrations ([Static](ref:static) mode only).
-* List of enabled [components](ref:components) ([Flow](ref:flow) mode only).
-* Method used to deploy Grafana Agent, for example Docker, Helm, RPM, or Operator.
+- A randomly generated, anonymous unique ID (UUID).
+- Timestamp of when the UID was first generated.
+- Timestamp of when the report was created (by default, every four hours).
+- Version of running Grafana Agent.
+- Operating system Grafana Agent is running on.
+- System architecture Grafana Agent is running on.
+- List of enabled feature flags ([Static](ref:static) mode only).
+- List of enabled integrations ([Static](ref:static) mode only).
+- List of enabled [components](ref:components) ([Flow](ref:flow) mode only).
+- Method used to deploy Grafana Agent, for example Docker, Helm, RPM, or Operator.
 
 This list may change over time. All newly reported data is documented in the CHANGELOG.
 
 ## Opt-out of data collection
 
 You can use the `-disable-reporting` [command line flag](ref:command-line-flag) to disable the reporting and opt-out of the data collection.
-
diff --git a/docs/sources/flow/_index.md b/docs/sources/flow/_index.md
index 5b8185e891c4..75dd6da3724d 100644
--- a/docs/sources/flow/_index.md
+++ b/docs/sources/flow/_index.md
@@ -1,11 +1,12 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/
-- /docs/grafana-cloud/send-data/agent/flow/
+  - /docs/grafana-cloud/agent/flow/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/
+  - /docs/grafana-cloud/send-data/agent/flow/
 canonical: https://grafana.com/docs/agent/latest/flow/
-description: Grafana Agent Flow is a component-based revision of Grafana Agent with
+description:
+  Grafana Agent Flow is a component-based revision of Grafana Agent with
   a focus on ease-of-use, debuggability, and adaptability
 title: Flow mode
 weight: 400
@@ -50,17 +51,17 @@ debuggability, and ability to adapt to the needs of power users.
 
 Components allow for reusability, composability, and focus on a single task.
 
-* **Reusability** allows for the output of components to be reused as the input for multiple other components.
-* **Composability** allows for components to be chained together to form a pipeline.
-* **Single task** means the scope of a component is limited to one narrow task and thus has fewer side effects.
+- **Reusability** allows for the output of components to be reused as the input for multiple other components.
+- **Composability** allows for components to be chained together to form a pipeline.
+- **Single task** means the scope of a component is limited to one narrow task and thus has fewer side effects.
 
 ## Features
 
-* Write declarative configurations with a Terraform-inspired configuration
+- Write declarative configurations with a Terraform-inspired configuration
   language.
-* Declare components to configure parts of a pipeline.
-* Use expressions to bind components together to build a programmable pipeline.
-* Includes a UI for debugging the state of a pipeline.
+- Declare components to configure parts of a pipeline.
+- Use expressions to bind components together to build a programmable pipeline.
+- Includes a UI for debugging the state of a pipeline.
 
 {{< param "PRODUCT_NAME" >}} is a [distribution][] of the OpenTelemetry Collector.
@@ -109,7 +110,6 @@ prometheus.remote_write "default" {
 }
 ```
 
-
 ## {{% param "PRODUCT_NAME" %}} configuration generator
 
 The {{< param "PRODUCT_NAME" >}} [configuration generator](https://grafana.github.io/agent-configurator/) helps you get a head start on creating flow code.
@@ -120,11 +120,10 @@ This feature is experimental, and it doesn't support all River components.
 
 ## Next steps
 
-* [Install](ref:install) {{< param "PRODUCT_NAME" >}}.
-* Learn about the core [Concepts](ref:concepts) of {{< param "PRODUCT_NAME" >}}.
-* Follow the [Tutorials](ref:tutorials) for hands-on learning of {{< param "PRODUCT_NAME" >}}.
-* Consult the [Tasks](ref:tasks) instructions to accomplish common objectives with {{< param "PRODUCT_NAME" >}}.
-* Check out the [Reference](ref:reference) documentation to find specific information you might be looking for.
+- [Install](ref:install) {{< param "PRODUCT_NAME" >}}.
+- Learn about the core [Concepts](ref:concepts) of {{< param "PRODUCT_NAME" >}}.
+- Follow the [Tutorials](ref:tutorials) for hands-on learning of {{< param "PRODUCT_NAME" >}}.
+- Consult the [Tasks](ref:tasks) instructions to accomplish common objectives with {{< param "PRODUCT_NAME" >}}.
+- Check out the [Reference](ref:reference) documentation to find specific information you might be looking for.
 
 [distribution]: https://opentelemetry.io/ecosystem/distributions/
-
diff --git a/docs/sources/flow/concepts/_index.md b/docs/sources/flow/concepts/_index.md
index 786af8e5467b..f8d9242d6ff0 100644
--- a/docs/sources/flow/concepts/_index.md
+++ b/docs/sources/flow/concepts/_index.md
@@ -1,10 +1,10 @@
 ---
 aliases:
-- ../concepts/
-- /docs/grafana-cloud/agent/flow/concepts/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/
+  - ../concepts/
+  - /docs/grafana-cloud/agent/flow/concepts/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/
 description: Learn about the Grafana Agent Flow concepts
 title: Concepts
diff --git a/docs/sources/flow/concepts/clustering.md b/docs/sources/flow/concepts/clustering.md
index 9982004173f6..be63e1208f5f 100644
--- a/docs/sources/flow/concepts/clustering.md
+++ b/docs/sources/flow/concepts/clustering.md
@@ -1,9 +1,9 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/concepts/clustering/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/clustering/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/clustering/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/clustering/
+  - /docs/grafana-cloud/agent/flow/concepts/clustering/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/clustering/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/clustering/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/clustering/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/clustering/
 description: Learn about Grafana Agent clustering concepts
 menuTitle: Clustering
@@ -98,4 +98,3 @@ Refer to component reference documentation to discover whether it supports clust
 
 You can use the {{< param "PRODUCT_NAME" >}} UI [clustering page](ref:clustering-page) to monitor your cluster status.
 Refer to [Debugging clustering issues](ref:debugging) for additional troubleshooting information.
-
diff --git a/docs/sources/flow/concepts/component_controller.md b/docs/sources/flow/concepts/component_controller.md
index 904451b567a4..92722b45e921 100644
--- a/docs/sources/flow/concepts/component_controller.md
+++ b/docs/sources/flow/concepts/component_controller.md
@@ -1,10 +1,10 @@
 ---
 aliases:
-- ../../concepts/component-controller/
-- /docs/grafana-cloud/agent/flow/concepts/component_controller/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/component_controller/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/component_controller/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/component_controller/
+  - ../../concepts/component-controller/
+  - /docs/grafana-cloud/agent/flow/concepts/component_controller/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/component_controller/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/component_controller/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/component_controller/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/component_controller/
 description: Learn about the component controller
 title: Component controller
@@ -33,10 +33,10 @@ The _component controller_ is the core part of {{< param "PRODUCT_NAME" >}} whic
 
 The component controller is responsible for:
 
-* Reading and validating the configuration file.
-* Managing the lifecycle of defined components.
-* Evaluating the arguments used to configure components.
-* Reporting the health of defined components.
+- Reading and validating the configuration file.
+- Managing the lifecycle of defined components.
+- Evaluating the arguments used to configure components.
+- Reporting the health of defined components.
 
 ## Component graph
 
@@ -130,4 +130,3 @@ removing components no longer defined in the configuration file and creating new
 All components managed by the controller are reevaluated after reloading.
 
 [DAG]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
-
diff --git a/docs/sources/flow/concepts/components.md b/docs/sources/flow/concepts/components.md
index 1f93d768113e..b0138ee2821c 100644
--- a/docs/sources/flow/concepts/components.md
+++ b/docs/sources/flow/concepts/components.md
@@ -1,10 +1,10 @@
 ---
 aliases:
-- ../../concepts/components/
-- /docs/grafana-cloud/agent/flow/concepts/components/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/components/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/components/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/components/
+  - ../../concepts/components/
+  - /docs/grafana-cloud/agent/flow/concepts/components/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/components/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/components/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/components/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/components/
 description: Learn about components
 title: Components
@@ -18,8 +18,8 @@ Each component handles a single task, such as retrieving secrets or collecting P
 
 Components are composed of the following:
 
-* Arguments: Settings that configure a component.
-* Exports: Named values that a component exposes to other components.
+- Arguments: Settings that configure a component.
+- Exports: Named values that a component exposes to other components.
 Each component has a name that describes what that component is responsible for.
 For example, the `local.file` component is responsible for retrieving the contents of files on disk.
diff --git a/docs/sources/flow/concepts/config-language/_index.md b/docs/sources/flow/concepts/config-language/_index.md
index 4b38a4d83966..e5c97ee73324 100644
--- a/docs/sources/flow/concepts/config-language/_index.md
+++ b/docs/sources/flow/concepts/config-language/_index.md
@@ -1,21 +1,21 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/concepts/config-language/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/
-- configuration-language/ # /docs/agent/latest/flow/concepts/configuration-language/
-# Previous page aliases for backwards compatibility:
-- /docs/grafana-cloud/agent/flow/config-language/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/
-- ../configuration-language/ # /docs/agent/latest/flow/configuration-language/
-- ../concepts/configuration_language/ # /docs/agent/latest/flow/concepts/configuration_language/
-- /docs/grafana-cloud/agent/flow/concepts/configuration_language/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/configuration_language/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/configuration_language/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/configuration_language/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/
+  - configuration-language/ # /docs/agent/latest/flow/concepts/configuration-language/
+  # Previous page aliases for backwards compatibility:
+  - /docs/grafana-cloud/agent/flow/config-language/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/
+  - ../configuration-language/ # /docs/agent/latest/flow/configuration-language/
+  - ../concepts/configuration_language/ # /docs/agent/latest/flow/concepts/configuration_language/
+  - /docs/grafana-cloud/agent/flow/concepts/configuration_language/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/configuration_language/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/configuration_language/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/configuration_language/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/
 description: Learn about the configuration language
 title: Configuration language
@@ -61,17 +61,17 @@ BLOCK_NAME {
 
 [River is designed][RFC] with the following requirements in mind:
 
-* _Fast_: The configuration language must be fast so the component controller can quickly evaluate changes.
-* _Simple_: The configuration language must be easy to read and write to minimize the learning curve.
-* _Debuggable_: The configuration language must give detailed information when there's a mistake in the configuration file.
+- _Fast_: The configuration language must be fast so the component controller can quickly evaluate changes.
+- _Simple_: The configuration language must be easy to read and write to minimize the learning curve.
+- _Debuggable_: The configuration language must give detailed information when there's a mistake in the configuration file.
 
 River is similar to HCL, the language Terraform and other Hashicorp projects use.
 It's a distinct language with custom syntax and features, such as first-class functions.
 
-* Blocks are a group of related settings and usually represent creating a component.
+- Blocks are a group of related settings and usually represent creating a component.
   Blocks have a name that consists of zero or more identifiers separated by `.`, an optional user label, and a body containing attributes and nested blocks.
-* Attributes appear within blocks and assign a value to a name.
-* Expressions represent a value, either literally or by referencing and combining other values.
+- Attributes appear within blocks and assign a value to a name.
+- Expressions represent a value, either literally or by referencing and combining other values.
   You use expressions to compute a value for an attribute.
 
 River is declarative, so ordering components, blocks, and attributes within a block isn't significant.
@@ -94,10 +94,10 @@ You use expressions to compute the value of an attribute.
 The simplest expressions are constant values like `"debug"`, `32`, or `[1, 2, 3, 4]`.
 River supports complex expressions, for example:
 
-* Referencing the exports of components: `local.file.password_file.content`
-* Mathematical operations: `1 + 2`, `3 * 4`, `(5 * 6) + (7 + 8)`
-* Equality checks: `local.file.file_a.content == local.file.file_b.content`
-* Calling functions from River's standard library: `env("HOME")` retrieves the value of the `HOME` environment variable.
+- Referencing the exports of components: `local.file.password_file.content`
+- Mathematical operations: `1 + 2`, `3 * 4`, `(5 * 6) + (7 + 8)`
+- Equality checks: `local.file.file_a.content == local.file.file_b.content`
+- Calling functions from River's standard library: `env("HOME")` retrieves the value of the `HOME` environment variable.
 
 You can use expressions for any attribute inside a component definition.
 
@@ -122,21 +122,20 @@ prometheus.remote_write "default" {
 
 The preceding example has two blocks:
 
-* `prometheus.remote_write "default"`: A labeled block which instantiates a `prometheus.remote_write` component.
+- `prometheus.remote_write "default"`: A labeled block which instantiates a `prometheus.remote_write` component.
   The label is the string `"default"`.
-* `endpoint`: An unlabeled block inside the component that configures an endpoint to send metrics to.
+- `endpoint`: An unlabeled block inside the component that configures an endpoint to send metrics to.
   This block sets the `url` attribute to specify the endpoint.
 
-
 ## Tooling
 
 You can use one or all of the following tools to help you write configuration files in River.
-* Experimental editor support for
-  * [vim](https://github.com/rfratto/vim-river)
-  * [VSCode](https://github.com/rfratto/vscode-river)
-  * [river-mode](https://github.com/jdbaldry/river-mode) for Emacs
-* Code formatting using the [`agent fmt` command](ref:fmt)
+- Experimental editor support for
+  - [vim](https://github.com/rfratto/vim-river)
+  - [VSCode](https://github.com/rfratto/vscode-river)
+  - [river-mode](https://github.com/jdbaldry/river-mode) for Emacs
+- Code formatting using the [`agent fmt` command](ref:fmt)
 
 You can also start developing your own tooling using the {{< param "PRODUCT_ROOT_NAME" >}} repository as a go package or use the [tree-sitter grammar][] with other programming languages.
 
@@ -146,4 +145,3 @@
 [VSCode]: https://github.com/rfratto/vscode-river
 [river-mode]: https://github.com/jdbaldry/river-mode
 [tree-sitter grammar]: https://github.com/grafana/tree-sitter-river
-
diff --git a/docs/sources/flow/concepts/config-language/components.md b/docs/sources/flow/concepts/config-language/components.md
index 0d9bc8541907..ecebde1bbf2d 100644
--- a/docs/sources/flow/concepts/config-language/components.md
+++ b/docs/sources/flow/concepts/config-language/components.md
@@ -1,16 +1,16 @@
 ---
 aliases:
-- ../configuration-language/components/ # /docs/agent/latest/flow/concepts/configuration-language/components/
-- /docs/grafana-cloud/agent/flow/concepts/config-language/components/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/components/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/components/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/components/
-# Previous page aliases for backwards compatibility:
-- ../../configuration-language/components/ # /docs/agent/latest/flow/configuration-language/components/
-- /docs/grafana-cloud/agent/flow/config-language/components/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/components/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/components/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/components/
+  - ../configuration-language/components/ # /docs/agent/latest/flow/concepts/configuration-language/components/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/components/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/components/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/components/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/components/
+  # Previous page aliases for backwards compatibility:
+  - ../../configuration-language/components/ # /docs/agent/latest/flow/configuration-language/components/
+  - /docs/grafana-cloud/agent/flow/config-language/components/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/components/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/components/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/components/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/components/
 description: Learn about the components configuration language
 title: Components configuration language
@@ -50,11 +50,11 @@ All components are identified by their name, describing what the component is re
 
 Most user interactions with components center around
 two basic concepts, _arguments_ and _exports_.
 
-* _Arguments_ are settings that modify the behavior of a component.
+- _Arguments_ are settings that modify the behavior of a component.
   They can be any number of attributes or nested unlabeled blocks, some required and some optional.
   Any optional arguments that aren't set take on their default values.
-* _Exports_ are zero or more output values that other components can refer to and can be of any River type.
+- _Exports_ are zero or more output values that other components can refer to and can be of any River type.
 
 The following block defines a `local.file` component labeled "targets".
 The `local.file.targets` component exposes the file `content` as a string in its exports.
@@ -109,4 +109,3 @@ The documentation of each [component](ref:components) provides more information
 
 In the previous example, the contents of the `local.file.targets.content` expression is evaluated to a concrete value.
 The value is type-checked and substituted into `prometheus.scrape.default`, where you can configure it.
-
diff --git a/docs/sources/flow/concepts/config-language/expressions/_index.md b/docs/sources/flow/concepts/config-language/expressions/_index.md
index 85d8660c468b..05e6a5805677 100644
--- a/docs/sources/flow/concepts/config-language/expressions/_index.md
+++ b/docs/sources/flow/concepts/config-language/expressions/_index.md
@@ -1,16 +1,16 @@
 ---
 aliases:
-- ../configuration-language/expressions/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/
-- /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/
-# Previous page aliases for backwards compatibility:
-- ../../configuration-language/expressions/ # /docs/agent/latest/flow/configuration-language/expressions/
-- /docs/grafana-cloud/agent/flow/config-language/expressions/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/
+  - ../configuration-language/expressions/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/
+  # Previous page aliases for backwards compatibility:
+  - ../../configuration-language/expressions/ # /docs/agent/latest/flow/configuration-language/expressions/
+  - /docs/grafana-cloud/agent/flow/config-language/expressions/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/expressions/
 description: Learn about expressions
 title: Expressions
@@ -43,4 +43,3 @@ Expressions may also do things like [refer to values](ref:refer-to-values) expor
 
 You use expressions when you configure any component.
 All component arguments have an underlying [type](ref:type).
 River checks the expression type before assigning the result to an attribute.
-
diff --git a/docs/sources/flow/concepts/config-language/expressions/function_calls.md b/docs/sources/flow/concepts/config-language/expressions/function_calls.md
index a5738b44e0a6..fe5a42bbaf0c 100644
--- a/docs/sources/flow/concepts/config-language/expressions/function_calls.md
+++ b/docs/sources/flow/concepts/config-language/expressions/function_calls.md
@@ -1,16 +1,16 @@
 ---
 aliases:
-- ../../configuration-language/expressions/function-calls/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/function-calls/
-- /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/function_calls/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/function_calls/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/function_calls/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/function_calls/
-# Previous page aliases for backwards compatibility:
-- ../../../configuration-language/expressions/function-calls/ # /docs/agent/latest/flow/configuration-language/expressions/function-calls/
-- /docs/grafana-cloud/agent/flow/config-language/expressions/function_calls/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/function_calls/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/function_calls/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/function_calls/
+  - ../../configuration-language/expressions/function-calls/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/function-calls/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/function_calls/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/function_calls/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/function_calls/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/function_calls/
+  # Previous page aliases for backwards compatibility:
+  - ../../../configuration-language/expressions/function-calls/ # /docs/agent/latest/flow/configuration-language/expressions/function-calls/
+  - /docs/grafana-cloud/agent/flow/config-language/expressions/function_calls/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/function_calls/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/function_calls/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/function_calls/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/expressions/function_calls/
 description: Learn about function calls
 title: Function calls
@@ -42,4 +42,3 @@ Some functions allow for more complex expressions, for example, concatenating ar
 env("HOME")
 json_decode(local.file.cfg.content)["namespace"]
 ```
-
diff --git a/docs/sources/flow/concepts/config-language/expressions/operators.md b/docs/sources/flow/concepts/config-language/expressions/operators.md
index 19bb003f74f3..651b4e417a5c 100644
--- a/docs/sources/flow/concepts/config-language/expressions/operators.md
+++ b/docs/sources/flow/concepts/config-language/expressions/operators.md
@@ -1,16 +1,16 @@
 ---
 aliases:
-- ../../configuration-language/expressions/operators/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/operators/
-- /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/operators/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/operators/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/operators/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/operators/
-# Previous page aliases for backwards compatibility:
-- ../../../configuration-language/expressions/operators/ # /docs/agent/latest/flow/configuration-language/expressions/operators/
-- /docs/grafana-cloud/agent/flow/config-language/expressions/operators/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/operators/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/operators/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/operators/
+  - ../../configuration-language/expressions/operators/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/operators/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/operators/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/operators/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/operators/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/operators/
+  # Previous page aliases for backwards compatibility:
+  - ../../../configuration-language/expressions/operators/ # /docs/agent/latest/flow/configuration-language/expressions/operators/
+  - /docs/grafana-cloud/agent/flow/config-language/expressions/operators/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/operators/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/operators/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/operators/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/expressions/operators/
 description: Learn about operators
 title: Operators
@@ -24,50 +24,50 @@ All operations follow the standard [PEMDAS][] order of mathematical operations.
 
 ## Arithmetic operators
 
-Operator | Description
----------|---------------------------------------------------
-`+`      | Adds two numbers.
-`-`      | Subtracts two numbers.
-`*`      | Multiplies two numbers.
-`/`      | Divides two numbers.
-`%`      | Computes the remainder after dividing two numbers.
-`^`      | Raises the number to the specified power.
+| Operator | Description                                        |
+| -------- | -------------------------------------------------- |
+| `+`      | Adds two numbers.                                  |
+| `-`      | Subtracts two numbers.                             |
+| `*`      | Multiplies two numbers.                            |
+| `/`      | Divides two numbers.                               |
+| `%`      | Computes the remainder after dividing two numbers. |
+| `^`      | Raises the number to the specified power.          |
 
 ## String operators
 
-Operator | Description
----------|-------------------------
-`+`      | Concatenate two strings.
+| Operator | Description              |
+| -------- | ------------------------ |
+| `+`      | Concatenate two strings. |
 
 ## Comparison operators
 
-Operator | Description
----------|---------------------------------------------------------------------
-`==`     | `true` when two values are equal.
-`!=`     | `true` when two values aren't equal.
-`<`      | `true` when the left value is less than the right value.
-`<=`     | `true` when the left value is less than or equal to the right value.
-`>`      | `true` when the left value is greater than the right value.
-`>=`     | `true` when the left value is greater or equal to the right value.
+| Operator | Description                                                           |
+| -------- | --------------------------------------------------------------------- |
+| `==`     | `true` when two values are equal.                                     |
+| `!=`     | `true` when two values aren't equal.                                  |
+| `<`      | `true` when the left value is less than the right value.              |
+| `<=`     | `true` when the left value is less than or equal to the right value.  |
+| `>`      | `true` when the left value is greater than the right value.           |
+| `>=`     | `true` when the left value is greater or equal to the right value.    |
 
 You can apply the equality operators `==` and `!=` to any operands.
-The two operands in ordering operators `<` `<=` `>` and `>=` must both be _orderable_ and of the same type. 
+The two operands in ordering operators `<` `<=` `>` and `>=` must both be _orderable_ and of the same type.
 The results of the comparisons are:
 
-* Boolean values are equal if they're either both true or both false.
-* Numerical (integer and floating-point) values are orderable in the usual way.
-* String values are orderable lexically byte-wise.
-* Objects are equal if all their fields are equal.
-* Array values are equal if their corresponding elements are equal.
+- Boolean values are equal if they're either both true or both false.
+- Numerical (integer and floating-point) values are orderable in the usual way.
+- String values are orderable lexically byte-wise.
+- Objects are equal if all their fields are equal.
+- Array values are equal if their corresponding elements are equal.
 
 ## Logical operators
 
-Operator | Description
----------|---------------------------------------------------------
-`&&`     | `true` when the both left _and_ right value are `true`.
-`\|\|`   | `true` when the either left _or_ right value are `true`.
-`!`      | Negates a boolean value.
+| Operator | Description                                              |
+| -------- | -------------------------------------------------------- |
+| `&&`     | `true` when both the left _and_ right values are `true`. |
+| `\|\|`   | `true` when either the left _or_ right value is `true`.  |
+| `!`      | Negates a boolean value.                                 |
 
 Logical operators apply to boolean values and yield a boolean result.
 
@@ -78,19 +78,19 @@ River uses `=` as its assignment operator.
 
 An assignment statement may only assign a single value.
 Each value must be _assignable_ to the attribute or object key.
 
-* You can assign `null` to any attribute.
-* You can assign numerical, string, boolean, array, function, capsule, and object types to attributes of the corresponding type.
-* You can assign numbers to string attributes with an implicit conversion.
-* You can assign strings to numerical attributes if they represent a number.
-* You can't assign blocks.
+- You can assign `null` to any attribute.
+- You can assign numerical, string, boolean, array, function, capsule, and object types to attributes of the corresponding type.
+- You can assign numbers to string attributes with an implicit conversion.
+- You can assign strings to numerical attributes if they represent a number.
+- You can't assign blocks.
 ## Brackets
 
-Brackets | Description
----------|------------------------------------
-`{ }`    | Defines blocks and objects.
-`( )`    | Groups and prioritizes expressions.
-`[ ]`    | Defines arrays.
+| Brackets | Description                         |
+| -------- | ----------------------------------- |
+| `{ }`    | Defines blocks and objects.         |
+| `( )`    | Groups and prioritizes expressions. |
+| `[ ]`    | Defines arrays.                     |
 
 The following example uses curly braces and square brackets to define an object and an array.
 
@@ -101,10 +101,10 @@ arr = [1, true, 7 * (1+1), 3]
 
 ## Access operators
 
-Operator | Description
----------|------------------------------------------------------------------------
-`[ ]`    | Access a member of an array or object.
-`.`      | Access a named member of an object or an exported field of a component.
+| Operator | Description                                                              |
+| -------- | ------------------------------------------------------------------------ |
+| `[ ]`    | Access a member of an array or object.                                   |
+| `.`      | Access a named member of an object or an exported field of a component.  |
 
 You can access arbitrarily nested values with River's access operators.
 You can use square brackets to access zero-indexed array indices and object fields by enclosing the field name in double quotes.
diff --git a/docs/sources/flow/concepts/config-language/expressions/referencing_exports.md b/docs/sources/flow/concepts/config-language/expressions/referencing_exports.md
index 1614583abc40..ec89ee38ed15 100644
--- a/docs/sources/flow/concepts/config-language/expressions/referencing_exports.md
+++ b/docs/sources/flow/concepts/config-language/expressions/referencing_exports.md
@@ -1,16 +1,16 @@
 ---
 aliases:
-- ../../configuration-language/expressions/referencing-exports/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/referencing-exports/
-- /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/referencing_exports/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/referencing_exports/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/referencing_exports/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/referencing_exports/
-# Previous page aliases for backwards compatibility:
-- ../../../configuration-language/expressions/referencing-exports/ # /docs/agent/latest/flow/configuration-language/expressions/referencing-exports/
-- /docs/grafana-cloud/agent/flow/config-language/expressions/referencing_exports/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/referencing_exports/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/referencing_exports/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/referencing_exports/
+  - ../../configuration-language/expressions/referencing-exports/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/referencing-exports/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/referencing_exports/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/referencing_exports/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/referencing_exports/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/referencing_exports/
+  # Previous page aliases for backwards compatibility:
+  - ../../../configuration-language/expressions/referencing-exports/ # /docs/agent/latest/flow/configuration-language/expressions/referencing-exports/
+  - /docs/grafana-cloud/agent/flow/config-language/expressions/referencing_exports/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/referencing_exports/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/referencing_exports/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/referencing_exports/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/expressions/referencing_exports/
 description: Learn about referencing component exports
 title: Referencing component exports
@@ -65,4 +65,3 @@ In the preceding example, you wired together a very simple pipeline by writing a
 
 After the value is resolved, it must match the [type](ref:type) of the attribute it is assigned to.
 While you can only configure attributes using the basic River types, the exports of components can take on special internal River types, such as Secrets or Capsules, which expose different functionality.
-
diff --git a/docs/sources/flow/concepts/config-language/expressions/types_and_values.md b/docs/sources/flow/concepts/config-language/expressions/types_and_values.md
index a1d46d2fd537..0b8d79b9c1ea 100644
--- a/docs/sources/flow/concepts/config-language/expressions/types_and_values.md
+++ b/docs/sources/flow/concepts/config-language/expressions/types_and_values.md
@@ -1,16 +1,16 @@
 ---
 aliases:
-- ../../configuration-language/expressions/types-and-values/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/types-and-values/
-- /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/types_and_values/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/types_and_values/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/types_and_values/
-- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/types_and_values/
-# Previous page aliases for backwards compatibility:
-- ../../../configuration-language/expressions/types-and-values/ # /docs/agent/latest/flow/configuration-language/expressions/types-and-values/
-- /docs/grafana-cloud/agent/flow/config-language/expressions/types_and_values/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/types_and_values/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/types_and_values/
-- /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/types_and_values/
+  - ../../configuration-language/expressions/types-and-values/ # /docs/agent/latest/flow/concepts/configuration-language/expressions/types-and-values/
+  - /docs/grafana-cloud/agent/flow/concepts/config-language/expressions/types_and_values/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/expressions/types_and_values/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/expressions/types_and_values/
+  - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/expressions/types_and_values/
+  # Previous page aliases for backwards compatibility:
+  - ../../../configuration-language/expressions/types-and-values/ # /docs/agent/latest/flow/configuration-language/expressions/types-and-values/
+  - /docs/grafana-cloud/agent/flow/config-language/expressions/types_and_values/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/expressions/types_and_values/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/expressions/types_and_values/
+  - /docs/grafana-cloud/send-data/agent/flow/config-language/expressions/types_and_values/
 canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/expressions/types_and_values/
 description: Learn about the River types and values
 title: Types and values
@@ -27,34 +27,34 @@ refs:
 
 River uses the following types for its values:
 
-* `number`: Any numeric value, like `3` or `3.14`.
-* `string`: A sequence of Unicode characters representing text, like `"Hello, world!"`.
-* `bool`: A boolean value, either `true` or `false`.
-* `array`: A sequence of values, like `[1, 2, 3]`. Elements within the list are indexed by whole numbers, starting with zero.
-* `object`: A group of values identified by named labels, like `{ name = "John" }`.
-* `function`: A value representing a routine that runs with arguments to compute another value, like `env("HOME")`.
+- `number`: Any numeric value, like `3` or `3.14`.
+- `string`: A sequence of Unicode characters representing text, like `"Hello, world!"`.
+- `bool`: A boolean value, either `true` or `false`.
+- `array`: A sequence of values, like `[1, 2, 3]`. Elements within the list are indexed by whole numbers, starting with zero.
+- `object`: A group of values identified by named labels, like `{ name = "John" }`.
+- `function`: A value representing a routine that runs with arguments to compute another value, like `env("HOME")`.
   Functions take zero or more arguments as input and always return a single value as output.
-* `null`: A type that has no value.
+- `null`: A type that has no value.
 
 ## Naming convention
 
 In addition to the preceding types, the [component reference][] documentation uses the following conventions for referring to types:
 
-* `any`: A value of any type.
-* `map(T)`: an `object` with the value type `T`.
+- `any`: A value of any type.
+- `map(T)`: an `object` with the value type `T`.
   For example, `map(string)` is an object where all the values are strings.
   The key type of an object is always a string or an identifier converted into a string.
-* `list(T)`: an `array` with the value type`T`.
+- `list(T)`: an `array` with the value type `T`.
   For example, `list(string)` is an array where all the values are strings.
-* `duration`: a `string` denoting a duration of time, such as `"1d"`, `"1h30m"`, `"10s"`.
+- `duration`: a `string` denoting a duration of time, such as `"1d"`, `"1h30m"`, `"10s"`.
   Valid units are:
-  * `d` for days.
-  * `h` for hours.
-  * `m` for minutes.
-  * `s` for seconds.
-  * `ms` for milliseconds.
-  * `ns` for nanoseconds.
+  - `d` for days.
+  - `h` for hours.
+  - `m` for minutes.
+  - `s` for seconds.
+  - `ms` for milliseconds.
+  - `ns` for nanoseconds.
 
   You can combine values of descending units to add their values together.
   For example, `"1h30m"` is the same as `"90m"`.
@@ -81,7 +81,7 @@ A `\` in a string starts an escape sequence to represent a special character.
 The following table shows the supported escape sequences.
| Sequence | Replacement | -|--------------|-----------------------------------------------------------------------------------------| +| ------------ | --------------------------------------------------------------------------------------- | | `\\` | The `\` character `U+005C` | | `\a` | The alert or bell character `U+0007` | | `\b` | The backspace character `U+0008` | @@ -99,7 +99,7 @@ The following table shows the supported escape sequences. ## Raw strings -Raw strings are represented by sequences of Unicode characters surrounded by backticks ``` `` ```. +Raw strings are represented by sequences of Unicode characters surrounded by backticks ` `` `. Raw strings don't support any escape sequences. ```river @@ -176,8 +176,8 @@ If the key isn't a valid identifier, you must wrap it in double quotes like a st {{< admonition type="note" >}} Don't confuse objects with blocks. -* An _object_ is a value assigned to an [Attribute][]. You **must** use commas between key-value pairs on separate lines. -* A [Block][] is a named structural element composed of multiple attributes. You **must not** use commas between attributes. +- An _object_ is a value assigned to an [Attribute][]. You **must** use commas between key-value pairs on separate lines. +- A [Block][] is a named structural element composed of multiple attributes. You **must not** use commas between attributes. [Attribute]: {{< relref "../syntax.md#Attributes" >}} [Block]: {{< relref "../syntax.md#Blocks" >}} @@ -223,4 +223,3 @@ prometheus.scrape "default" { forward_to = [prometheus.remote_write.default.receiver] } ``` - diff --git a/docs/sources/flow/concepts/config-language/files.md b/docs/sources/flow/concepts/config-language/files.md index bd5565635fe7..c01cae5252db 100644 --- a/docs/sources/flow/concepts/config-language/files.md +++ b/docs/sources/flow/concepts/config-language/files.md @@ -1,16 +1,16 @@ --- aliases: -- ../configuration-language/files/ # /docs/agent/latest/flow/concepts/configuration-language/files/ -- /docs/grafana-cloud/agent/flow/concepts/config-language/files/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/files/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/files/ -- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/files/ -# Previous page aliases for backwards compatibility: -- ../../configuration-language/files/ # /docs/agent/latest/flow/configuration-language/files/ -- /docs/grafana-cloud/agent/flow/config-language/files/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/files/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/files/ -- /docs/grafana-cloud/send-data/agent/flow/config-language/files/ + - ../configuration-language/files/ # /docs/agent/latest/flow/concepts/configuration-language/files/ + - /docs/grafana-cloud/agent/flow/concepts/config-language/files/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/files/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/files/ + - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/files/ + # Previous page aliases for backwards compatibility: + - ../../configuration-language/files/ # /docs/agent/latest/flow/configuration-language/files/ + - /docs/grafana-cloud/agent/flow/config-language/files/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/files/ + - 
/docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/files/ + - /docs/grafana-cloud/send-data/agent/flow/config-language/files/ canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/files/ description: Learn about River files title: Files diff --git a/docs/sources/flow/concepts/config-language/syntax.md b/docs/sources/flow/concepts/config-language/syntax.md index 9bee7086c40a..222deb8ec422 100644 --- a/docs/sources/flow/concepts/config-language/syntax.md +++ b/docs/sources/flow/concepts/config-language/syntax.md @@ -1,16 +1,16 @@ --- aliases: -- ../configuration-language/syntax/ # /docs/agent/latest/flow/concepts/configuration-language/syntax/ -- /docs/grafana-cloud/agent/flow/concepts/config-language/syntax/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/syntax/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/syntax/ -- /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/syntax/ -# Previous page aliases for backwards compatibility: -- ../../configuration-language/syntax/ # /docs/agent/latest/flow/configuration-language/syntax/ -- /docs/grafana-cloud/agent/flow/config-language/syntax/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/syntax/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/syntax/ -- /docs/grafana-cloud/send-data/agent/flow/config-language/syntax/ + - ../configuration-language/syntax/ # /docs/agent/latest/flow/concepts/configuration-language/syntax/ + - /docs/grafana-cloud/agent/flow/concepts/config-language/syntax/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/config-language/syntax/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/config-language/syntax/ + - /docs/grafana-cloud/send-data/agent/flow/concepts/config-language/syntax/ + # Previous page aliases for backwards compatibility: + - ../../configuration-language/syntax/ # /docs/agent/latest/flow/configuration-language/syntax/ + - /docs/grafana-cloud/agent/flow/config-language/syntax/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/config-language/syntax/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/config-language/syntax/ + - /docs/grafana-cloud/send-data/agent/flow/config-language/syntax/ canonical: https://grafana.com/docs/agent/latest/flow/concepts/config-language/syntax/ description: Learn about the River syntax title: Syntax @@ -127,4 +127,3 @@ River ignores other newlines and you can enter as many newlines as you want.
[identifier]: #identifiers - diff --git a/docs/sources/flow/concepts/custom_components.md b/docs/sources/flow/concepts/custom_components.md index 102e6fdcf10f..635b351ea898 100644 --- a/docs/sources/flow/concepts/custom_components.md +++ b/docs/sources/flow/concepts/custom_components.md @@ -1,10 +1,10 @@ --- aliases: -- ../../concepts/custom-components/ -- /docs/grafana-cloud/agent/flow/concepts/custom-components/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/custom-components/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/custom-components/ -- /docs/grafana-cloud/send-data/agent/flow/concepts/custom-components/ + - ../../concepts/custom-components/ + - /docs/grafana-cloud/agent/flow/concepts/custom-components/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/custom-components/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/custom-components/ + - /docs/grafana-cloud/send-data/agent/flow/concepts/custom-components/ canonical: https://grafana.com/docs/agent/latest/flow/concepts/custom_components/ description: Learn about custom components title: Custom components @@ -17,19 +17,19 @@ _Custom components_ are a way to create new components from a pipeline of built- A custom component is composed of: -* _Arguments_: Settings that configure the custom component. -* _Exports_: Values that a custom component exposes to its consumers. -* _Components_: Built-in and custom components that are run as part of the custom component. +- _Arguments_: Settings that configure the custom component. +- _Exports_: Values that a custom component exposes to its consumers. +- _Components_: Built-in and custom components that are run as part of the custom component. ## Creating custom components -You can create a new custom component using [the `declare` configuration block][declare]. +You can create a new custom component using [the `declare` configuration block][declare]. The label of the block determines the name of the custom component. The following custom configuration blocks can be used inside a `declare` block: -* [argument][]: Create a new named argument, whose current value can be referenced using the expression `argument.NAME.value`. Argument values are determined by the user of a custom component. -* [export][]: Expose a new named value to custom component users. +- [argument][]: Create a new named argument, whose current value can be referenced using the expression `argument.NAME.value`. Argument values are determined by the user of a custom component. +- [export][]: Expose a new named value to custom component users. Custom components are useful for reusing a common pipeline multiple times. To learn how to share custom components across multiple files, refer to [Modules][].
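To make the `declare`, `argument`, and `export` blocks concrete, here is a minimal sketch of a custom component. The component name `logs_pipeline`, its `default` label, and the Loki push URL are illustrative assumptions, not values taken from this documentation:

```river
// Sketch: declare a reusable logs pipeline as a custom component.
declare "logs_pipeline" {
  // Users of the custom component supply this value.
  argument "write_url" { }

  // Consumers reference this export as logs_pipeline.LABEL.receiver.
  export "receiver" {
    value = loki.write.default.receiver
  }

  // A built-in component run as part of the custom component.
  loki.write "default" {
    endpoint {
      url = argument.write_url.value
    }
  }
}

// Instantiate the custom component like any other component.
logs_pipeline "default" {
  write_url = "http://localhost:3100/loki/api/v1/push"
}
```

A consumer could then forward log entries to `logs_pipeline.default.receiver` and reuse the same pipeline wherever it's needed.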
diff --git a/docs/sources/flow/concepts/modules.md b/docs/sources/flow/concepts/modules.md index e947bfc99e40..3ab3040ef2d6 100644 --- a/docs/sources/flow/concepts/modules.md +++ b/docs/sources/flow/concepts/modules.md @@ -1,10 +1,10 @@ --- aliases: -- ../../concepts/modules/ -- /docs/grafana-cloud/agent/flow/concepts/modules/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/modules/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/modules/ -- /docs/grafana-cloud/send-data/agent/flow/concepts/modules/ + - ../../concepts/modules/ + - /docs/grafana-cloud/agent/flow/concepts/modules/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/concepts/modules/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/concepts/modules/ + - /docs/grafana-cloud/send-data/agent/flow/concepts/modules/ canonical: https://grafana.com/docs/agent/latest/flow/concepts/modules/ description: Learn about modules title: Modules @@ -26,10 +26,10 @@ Modules can be [imported](#importing-modules) to enable the reuse of [custom com A module can be _imported_, allowing the custom components defined by that module to be used by other modules, called the _importing module_. Modules can be imported from multiple locations using one of the `import` configuration blocks: -* [import.file]: Imports a module from a file or a directory on disk. -* [import.git]: Imports a module from a file located in a Git repository. -* [import.http]: Imports a module from the response of an HTTP request. -* [import.string]: Imports a module from a string. +- [import.file]: Imports a module from a file or a directory on disk. +- [import.git]: Imports a module from a file located in a Git repository. +- [import.http]: Imports a module from the response of an HTTP request. +- [import.string]: Imports a module from a string. [import.file]: {{< relref "../reference/config-blocks/import.file.md" >}} [import.git]: {{< relref "../reference/config-blocks/import.git.md" >}} @@ -112,21 +112,21 @@ loki.write "default" { ``` {{< collapse title="Classic modules" >}} + # Classic modules (deprecated) {{< admonition type="caution" >}} Modules were redesigned in v0.40 to simplify concepts. This section outlines the design of the original modules prior to v0.40. Classic modules are scheduled to be removed in the release after v0.40. {{< /admonition >}} - You use _Modules_ to create {{< param "PRODUCT_NAME" >}} configurations that you can load as a component. Modules are a great way to parameterize a configuration to create reusable pipelines. Modules are {{< param "PRODUCT_NAME" >}} configurations which have: -* _Arguments_: Settings that configure a module. -* _Exports_: Named values that a module exposes to the consumer of the module. -* _Components_: {{< param "PRODUCT_NAME" >}} components to run when the module is running. +- _Arguments_: Settings that configure a module. +- _Exports_: Named values that a module exposes to the consumer of the module. +- _Components_: {{< param "PRODUCT_NAME" >}} components to run when the module is running. You use a [Module loader][] to load Modules into {{< param "PRODUCT_NAME" >}}. @@ -138,10 +138,10 @@ A _Module loader_ is a {{< param "PRODUCT_NAME" >}} component that retrieves a m Module loader components are responsible for the following functions: -* Retrieving the module source. -* Creating a [Component controller][] for the module. -* Passing arguments to the loaded module. -* Exposing exports from the loaded module. +- Retrieving the module source. 
+- Creating a [Component controller][] for the module. +- Passing arguments to the loaded module. +- Exposing exports from the loaded module. Module loaders are typically called `module.LOADER_NAME`. @@ -155,9 +155,9 @@ Refer to [Components][] for more information about the module loader components. Modules are flexible, and you can retrieve their configuration anywhere, such as: -* The local filesystem. -* An S3 bucket. -* An HTTP endpoint. +- The local filesystem. +- An S3 bucket. +- An HTTP endpoint. Each module loader component supports different ways of retrieving `module.sources`. The most generic module loader component, `module.string`, can load modules from the export of another {{< param "PRODUCT_NAME" >}} component. @@ -244,4 +244,5 @@ loki.write "default" { [export block]: https://grafana.com/docs/agent//flow/reference/config-blocks/export [Component controller]: https://grafana.com/docs/agent//flow/concepts/component_controller [Components]: https://grafana.com/docs/agent//flow/reference/components + {{< /collapse >}} diff --git a/docs/sources/flow/get-started/_index.md b/docs/sources/flow/get-started/_index.md index 444b64f5afc5..94b515daa0d3 100644 --- a/docs/sources/flow/get-started/_index.md +++ b/docs/sources/flow/get-started/_index.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/ -# Previous docs aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/ -- /docs/grafana-cloud/send-data/agent/flow/setup/ -- ./setup/ # /docs/agent/latest/flow/setup/ + - /docs/grafana-cloud/agent/flow/get-started/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/ + # Previous docs aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/ + - /docs/grafana-cloud/send-data/agent/flow/setup/ + - ./setup/ # /docs/agent/latest/flow/setup/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/ description: Learn how to install and use Grafana Agent Flow menuTitle: Get started diff --git a/docs/sources/flow/get-started/deploy-agent.md b/docs/sources/flow/get-started/deploy-agent.md index 0a76e62c42df..d611da0c6cbb 100644 --- a/docs/sources/flow/get-started/deploy-agent.md +++ b/docs/sources/flow/get-started/deploy-agent.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/deploy-agent/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/deploy-agent/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/deploy-agent/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/deploy-agent/ -# Previous docs aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/deploy-agent/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/deploy-agent/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/deploy-agent/ -- 
/docs/grafana-cloud/send-data/agent/flow/setup/deploy-agent/ -- ../setup/deploy-agent/ # /docs/agent/latest/flow/setup/deploy-agent/ + - /docs/grafana-cloud/agent/flow/get-started/deploy-agent/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/deploy-agent/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/deploy-agent/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/deploy-agent/ + # Previous docs aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/deploy-agent/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/deploy-agent/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/deploy-agent/ + - /docs/grafana-cloud/send-data/agent/flow/setup/deploy-agent/ + - ../setup/deploy-agent/ # /docs/agent/latest/flow/setup/deploy-agent/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/deploy-agent/ description: Learn about possible deployment topologies for Grafana Agent Flow menuTitle: Deploy @@ -21,7 +21,7 @@ weight: 900 ## Processing different types of telemetry in different {{< param "PRODUCT_ROOT_NAME" >}} instances -If the load on {{< param "PRODUCT_ROOT_NAME" >}} is small, it is recommended to process all necessary telemetry signals in the same {{< param "PRODUCT_ROOT_NAME" >}} process. +If the load on {{< param "PRODUCT_ROOT_NAME" >}} is small, it is recommended to process all necessary telemetry signals in the same {{< param "PRODUCT_ROOT_NAME" >}} process. For example, a single {{< param "PRODUCT_ROOT_NAME" >}} can process all of the incoming metrics, logs, traces, and profiles. However, if the load on the {{< param "PRODUCT_ROOT_NAME" >}}s is big, it may be beneficial to process different telemetry signals in different deployments of {{< param "PRODUCT_ROOT_NAME" >}}s. @@ -30,8 +30,8 @@ This provides better stability due to the isolation between processes. For example, an overloaded {{< param "PRODUCT_ROOT_NAME" >}} processing traces won't impact an {{< param "PRODUCT_ROOT_NAME" >}} processing metrics. Different types of signal collection require different methods for scaling: -* "Pull" components such as `prometheus.scrape` and `pyroscope.scrape` are scaled using hashmod sharing or clustering. -* "Push" components such as `otelcol.receiver.otlp` are scaled by placing a load balancer in front of them. +- "Pull" components such as `prometheus.scrape` and `pyroscope.scrape` are scaled using hashmod sharding or clustering. +- "Push" components such as `otelcol.receiver.otlp` are scaled by placing a load balancer in front of them. ### Traces @@ -43,36 +43,38 @@ This similarity is because most {{< param "PRODUCT_NAME" >}} components used for #### When to scale To decide whether scaling is necessary, check metrics such as: -* `receiver_refused_spans_ratio_total` from receivers such as `otelcol.receiver.otlp`. -* `processor_refused_spans_ratio_total` from processors such as `otelcol.processor.batch`. -* `exporter_send_failed_spans_ratio_total` from exporters such as `otelcol.exporter.otlp` and `otelcol.exporter.loadbalancing`. + +- `receiver_refused_spans_ratio_total` from receivers such as `otelcol.receiver.otlp`. +- `processor_refused_spans_ratio_total` from processors such as `otelcol.processor.batch`. +- `exporter_send_failed_spans_ratio_total` from exporters such as `otelcol.exporter.otlp` and `otelcol.exporter.loadbalancing`.
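As a sketch of the "pull" case above, the following configuration enables clustering on `prometheus.scrape` so that several clustered {{< param "PRODUCT_ROOT_NAME" >}} instances divide the scrape targets among themselves. The Kubernetes discovery and the remote write URL are assumptions for illustration, and the instances must also be started with clustering enabled (for example, with the `run` command's `--cluster.enabled=true` flag):

```river
// Sketch: distribute scrape load across clustered instances.
discovery.kubernetes "pods" {
  role = "pod"
}

prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]

  // With clustering enabled, each instance scrapes only its share of targets.
  clustering {
    enabled = true
  }
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://mimir:9009/api/v1/push" // Illustrative endpoint.
  }
}
```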
#### Stateful and stateless components -In the context of tracing, a "stateful component" is a component +In the context of tracing, a "stateful component" is a component that needs to aggregate certain spans to work correctly. A "stateless {{< param "PRODUCT_ROOT_NAME" >}}" is a {{< param "PRODUCT_ROOT_NAME" >}} which does not contain stateful components. -Scaling stateful {{< param "PRODUCT_ROOT_NAME" >}}s is more difficult, because spans must be forwarded to a +Scaling stateful {{< param "PRODUCT_ROOT_NAME" >}}s is more difficult, because spans must be forwarded to a specific {{< param "PRODUCT_ROOT_NAME" >}} according to a span property such as trace ID or a `service.name` attribute. You can forward spans with `otelcol.exporter.loadbalancing`. Examples of stateful components: -* `otelcol.processor.tail_sampling` -* `otelcol.connector.spanmetrics` -* `otelcol.connector.servicegraph` +- `otelcol.processor.tail_sampling` +- `otelcol.connector.spanmetrics` +- `otelcol.connector.servicegraph` -A "stateless component" does not need to aggregate specific spans to work correctly - +A "stateless component" does not need to aggregate specific spans to work correctly; it can operate correctly even when it only has some of the spans of a trace. A stateless {{< param "PRODUCT_ROOT_NAME" >}} can be scaled without using `otelcol.exporter.loadbalancing`. For example, you could use an off-the-shelf load balancer to do round-robin load balancing. Examples of stateless components: -* `otelcol.processor.probabilistic_sampler` -* `otelcol.processor.transform` -* `otelcol.processor.attributes` -* `otelcol.processor.span` + +- `otelcol.processor.probabilistic_sampler` +- `otelcol.processor.transform` +- `otelcol.processor.attributes` +- `otelcol.processor.span` diff --git a/docs/sources/flow/get-started/install/_index.md b/docs/sources/flow/get-started/install/_index.md index bb80d4aa078e..a44f4e89e26d 100644 --- a/docs/sources/flow/get-started/install/_index.md +++ b/docs/sources/flow/get-started/install/_index.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/ -# Previous docs aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/install/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/ -- /docs/sources/flow/install/ -- ../setup/install/ # /docs/agent/latest/flow/setup/install/ + - /docs/grafana-cloud/agent/flow/get-started/install/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/ + # Previous docs aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/install/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/ + - /docs/sources/flow/install/ + - ../setup/install/ # /docs/agent/latest/flow/setup/install/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/
description: Learn how to install Grafana Agent Flow menuTitle: Install @@ -45,4 +45,3 @@ Installing {{< param "PRODUCT_NAME" >}} on other operating systems is possible, By default, {{< param "PRODUCT_NAME" >}} sends anonymous usage information to Grafana Labs. Refer to [data collection](ref:data-collection) for more information about what data is collected and how you can opt-out. - diff --git a/docs/sources/flow/get-started/install/ansible.md b/docs/sources/flow/get-started/install/ansible.md index da424df6aa35..ed126ba7db0a 100644 --- a/docs/sources/flow/get-started/install/ansible.md +++ b/docs/sources/flow/get-started/install/ansible.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/ansible/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/ansible/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/ansible/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/ansible/ + - /docs/grafana-cloud/agent/flow/get-started/install/ansible/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/ansible/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/ansible/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/ansible/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/ansible/ description: Learn how to install Grafana Agent Flow with Ansible menuTitle: Ansible @@ -32,23 +32,23 @@ To add {{% param "PRODUCT_NAME" %}} to a host: 1. Create a file named `grafana-agent.yml` and add the following: - ```yaml - - name: Install Grafana Agent Flow - hosts: all - become: true - tasks: - - name: Install Grafana Agent Flow - ansible.builtin.include_role: - name: grafana.grafana.grafana_agent - vars: - grafana_agent_mode: flow - # Destination file name - grafana_agent_config_filename: config.river - # Local file to copy - grafana_agent_provisioned_config_file: "" - grafana_agent_flags_extra: - server.http.listen-addr: '0.0.0.0:12345' - ``` + ```yaml + - name: Install Grafana Agent Flow + hosts: all + become: true + tasks: + - name: Install Grafana Agent Flow + ansible.builtin.include_role: + name: grafana.grafana.grafana_agent + vars: + grafana_agent_mode: flow + # Destination file name + grafana_agent_config_filename: config.river + # Local file to copy + grafana_agent_provisioned_config_file: "" + grafana_agent_flags_extra: + server.http.listen-addr: "0.0.0.0:12345" + ``` Replace the following: @@ -85,4 +85,3 @@ Main PID: 3176 (agent-linux-amd) ## Next steps - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) - diff --git a/docs/sources/flow/get-started/install/binary.md b/docs/sources/flow/get-started/install/binary.md index 8b1dd7e67fc4..1dd30db85606 100644 --- a/docs/sources/flow/get-started/install/binary.md +++ b/docs/sources/flow/get-started/install/binary.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/binary/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/binary/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/binary/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/binary/ -# Previous docs aliases for backwards compatibility: -- ../../install/binary/ # /docs/agent/latest/flow/install/binary/ -- /docs/grafana-cloud/agent/flow/setup/install/binary/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/binary/ -- 
/docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/binary/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/binary/ -- ../../setup/install/binary/ # /docs/agent/latest/flow/setup/install/binary/ + - /docs/grafana-cloud/agent/flow/get-started/install/binary/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/binary/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/binary/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/binary/ + # Previous docs aliases for backwards compatibility: + - ../../install/binary/ # /docs/agent/latest/flow/install/binary/ + - /docs/grafana-cloud/agent/flow/setup/install/binary/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/binary/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/binary/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/binary/ + - ../../setup/install/binary/ # /docs/agent/latest/flow/setup/install/binary/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/binary/ description: Learn how to install Grafana Agent Flow as a standalone binary menuTitle: Standalone @@ -28,10 +28,10 @@ refs: {{< param "PRODUCT_NAME" >}} is distributed as a standalone binary for the following operating systems and architectures: -* Linux: AMD64, ARM64 -* Windows: AMD64 -* macOS: AMD64 (Intel), ARM64 (Apple Silicon) -* FreeBSD: AMD64 +- Linux: AMD64, ARM64 +- Windows: AMD64 +- macOS: AMD64 (Intel), ARM64 (Apple Silicon) +- FreeBSD: AMD64 ## Download {{% param "PRODUCT_ROOT_NAME" %}} @@ -58,4 +58,3 @@ To download {{< param "PRODUCT_NAME" >}} as a standalone binary, perform the fol ## Next steps - [Run {{< param "PRODUCT_NAME" >}}](ref:run) - diff --git a/docs/sources/flow/get-started/install/chef.md b/docs/sources/flow/get-started/install/chef.md index 46e885df1c7b..ccad37924c9b 100644 --- a/docs/sources/flow/get-started/install/chef.md +++ b/docs/sources/flow/get-started/install/chef.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/chef/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/chef/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/chef/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/chef/ + - /docs/grafana-cloud/agent/flow/get-started/install/chef/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/chef/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/chef/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/chef/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/chef/ description: Learn how to install Grafana Agent Flow with Chef @@ -38,59 +38,59 @@ To add {{< param "PRODUCT_NAME" >}} to a host: 1. 
Add the following resources to your [Chef][] recipe to add the Grafana package repositories to your system: - ```ruby - if platform_family?('debian', 'rhel', 'amazon', 'fedora') - if platform_family?('debian') - remote_file '/etc/apt/keyrings/grafana.gpg' do - source 'https://apt.grafana.com/gpg.key' - mode '0644' - action :create - end - - file '/etc/apt/sources.list.d/grafana.list' do - content "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com/ stable main" - mode '0644' - notifies :update, 'apt_update[update apt cache]', :immediately - end - - apt_update 'update apt cache' do - action :nothing - end - elsif platform_family?('rhel', 'amazon', 'fedora') - yum_repository 'grafana' do - description 'grafana' - baseurl 'https://rpm.grafana.com/oss/rpm' - gpgcheck true - gpgkey 'https://rpm.grafana.com/gpg.key' - enabled true - action :create - notifies :run, 'execute[add-rhel-key]', :immediately - end - - execute 'add-rhel-key' do - command "rpm --import https://rpm.grafana.com/gpg.key" - action :nothing - end - end - else - fail "The #{node['platform_family']} platform is not supported." - end - ``` + ```ruby + if platform_family?('debian', 'rhel', 'amazon', 'fedora') + if platform_family?('debian') + remote_file '/etc/apt/keyrings/grafana.gpg' do + source 'https://apt.grafana.com/gpg.key' + mode '0644' + action :create + end + + file '/etc/apt/sources.list.d/grafana.list' do + content "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com/ stable main" + mode '0644' + notifies :update, 'apt_update[update apt cache]', :immediately + end + + apt_update 'update apt cache' do + action :nothing + end + elsif platform_family?('rhel', 'amazon', 'fedora') + yum_repository 'grafana' do + description 'grafana' + baseurl 'https://rpm.grafana.com/oss/rpm' + gpgcheck true + gpgkey 'https://rpm.grafana.com/gpg.key' + enabled true + action :create + notifies :run, 'execute[add-rhel-key]', :immediately + end + + execute 'add-rhel-key' do + command "rpm --import https://rpm.grafana.com/gpg.key" + action :nothing + end + end + else + fail "The #{node['platform_family']} platform is not supported." + end + ``` 1. Add the following resources to install and enable the `grafana-agent-flow` service: - ```ruby - package 'grafana-agent-flow' do - action :install - flush_cache [ :before ] if platform_family?('amazon', 'rhel', 'fedora') - notifies :restart, 'service[grafana-agent-flow]', :delayed - end + ```ruby + package 'grafana-agent-flow' do + action :install + flush_cache [ :before ] if platform_family?('amazon', 'rhel', 'fedora') + notifies :restart, 'service[grafana-agent-flow]', :delayed + end - service 'grafana-agent-flow' do - service_name 'grafana-agent-flow' - action [:enable, :start] - end - ``` + service 'grafana-agent-flow' do + service_name 'grafana-agent-flow' + action [:enable, :start] + end + ``` ## Configuration @@ -103,4 +103,3 @@ The default configuration file location is `/etc/grafana-agent-flow.river`. 
You - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) [Chef]: https://www.chef.io/products/chef-infrastructure-management/ - diff --git a/docs/sources/flow/get-started/install/docker.md b/docs/sources/flow/get-started/install/docker.md index 85f38ca3f34d..2747a0052005 100644 --- a/docs/sources/flow/get-started/install/docker.md +++ b/docs/sources/flow/get-started/install/docker.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/docker/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/docker/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/docker/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/docker/ -# Previous docs aliases for backwards compatibility: -- ../../install/docker/ # /docs/agent/latest/flow/install/docker/ -- /docs/grafana-cloud/agent/flow/setup/install/docker/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/docker/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/docker/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/docker/ -- ../../setup/install/docker/ # /docs/agent/latest/flow/setup/install/docker/ + - /docs/grafana-cloud/agent/flow/get-started/install/docker/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/docker/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/docker/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/docker/ + # Previous docs aliases for backwards compatibility: + - ../../install/docker/ # /docs/agent/latest/flow/install/docker/ + - /docs/grafana-cloud/agent/flow/setup/install/docker/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/docker/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/docker/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/docker/ + - ../../setup/install/docker/ # /docs/agent/latest/flow/setup/install/docker/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/docker/ description: Learn how to install Grafana Agent Flow on Docker menuTitle: Docker @@ -33,13 +33,13 @@ refs: {{< param "PRODUCT_NAME" >}} is available as a Docker container image on the following platforms: -* [Linux containers][] for AMD64 and ARM64. -* [Windows containers][] for AMD64. +- [Linux containers][] for AMD64 and ARM64. +- [Windows containers][] for AMD64. ## Before you begin -* Install [Docker][] on your computer. -* Create and save a {{< param "PRODUCT_NAME" >}} River configuration file on your computer, for example: +- Install [Docker][] on your computer. 
+- Create and save a {{< param "PRODUCT_NAME" >}} River configuration file on your computer, for example: ```river logging { @@ -105,4 +105,3 @@ To verify that {{< param "PRODUCT_NAME" >}} is running successfully, navigate to [Linux containers]: #run-a-linux-docker-container [Windows containers]: #run-a-windows-docker-container [Docker]: https://docker.io - diff --git a/docs/sources/flow/get-started/install/kubernetes.md b/docs/sources/flow/get-started/install/kubernetes.md index d97140ffcb23..ecf39caf37be 100644 --- a/docs/sources/flow/get-started/install/kubernetes.md +++ b/docs/sources/flow/get-started/install/kubernetes.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/kubernetes/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/kubernetes/ -# Previous docs aliases for backwards compatibility: -- ../../install/kubernetes/ # /docs/agent/latest/flow/install/kubernetes/ -- /docs/grafana-cloud/agent/flow/setup/install/kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/kubernetes/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/kubernetes/ -- ../../setup/install/kubernetes/ # /docs/agent/latest/flow/setup/install/kubernetes/ + - /docs/grafana-cloud/agent/flow/get-started/install/kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/kubernetes/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/kubernetes/ + # Previous docs aliases for backwards compatibility: + - ../../install/kubernetes/ # /docs/agent/latest/flow/install/kubernetes/ + - /docs/grafana-cloud/agent/flow/setup/install/kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/kubernetes/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/kubernetes/ + - ../../setup/install/kubernetes/ # /docs/agent/latest/flow/setup/install/kubernetes/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/kubernetes/ description: Learn how to deploy Grafana Agent Flow on Kubernetes menuTitle: Kubernetes @@ -35,9 +35,9 @@ You can deploy {{< param "PRODUCT_ROOT_NAME" >}} either in static mode or flow m ## Before you begin -* Install [Helm][] on your computer. -* Configure a Kubernetes cluster that you can use for {{< param "PRODUCT_NAME" >}}. -* Configure your local Kubernetes context to point to the cluster. +- Install [Helm][] on your computer. +- Configure a Kubernetes cluster that you can use for {{< param "PRODUCT_NAME" >}}. +- Configure your local Kubernetes context to point to the cluster. ## Deploy @@ -54,6 +54,7 @@ To deploy {{< param "PRODUCT_ROOT_NAME" >}} on Kubernetes using Helm, run the fo ```shell helm repo update ``` + 1. Create a namespace for {{< param "PRODUCT_NAME" >}}: ```shell @@ -97,6 +98,4 @@ see the [Configure {{< param "PRODUCT_NAME" >}} on Kubernetes](ref:configure) gu - Refer to the [{{< param "PRODUCT_NAME" >}} Helm chart documentation on Artifact Hub][Artifact Hub] for more information about the Helm chart.
[Artifact Hub]: https://artifacthub.io/packages/helm/grafana/grafana-agent - [Helm]: https://helm.sh - diff --git a/docs/sources/flow/get-started/install/linux.md b/docs/sources/flow/get-started/install/linux.md index 88e8690b00c5..69509c1f18ad 100644 --- a/docs/sources/flow/get-started/install/linux.md +++ b/docs/sources/flow/get-started/install/linux.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/linux/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/linux/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/linux/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/linux/ -# Previous docs aliases for backwards compatibility: -- ../../install/linux/ # /docs/agent/latest/flow/install/linux/ -- /docs/grafana-cloud/agent/flow/setup/install/linux/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/linux/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/linux/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/linux/ -- ../../setup/install/linux/ # /docs/agent/latest/flow/setup/install/linux/ + - /docs/grafana-cloud/agent/flow/get-started/install/linux/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/linux/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/linux/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/linux/ + # Previous docs aliases for backwards compatibility: + - ../../install/linux/ # /docs/agent/latest/flow/install/linux/ + - /docs/grafana-cloud/agent/flow/setup/install/linux/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/linux/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/linux/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/linux/ + - ../../setup/install/linux/ # /docs/agent/latest/flow/setup/install/linux/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/linux/ description: Learn how to install Grafana Agent Flow on Linux menuTitle: Linux @@ -40,6 +40,7 @@ To install {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in 1. Import the GPG key and add the Grafana package repository. {{< code >}} + ```debian-ubuntu sudo mkdir -p /etc/apt/keyrings/ wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null @@ -50,7 +51,7 @@ To install {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in wget -q -O gpg.key https://rpm.grafana.com/gpg.key sudo rpm --import gpg.key echo -e '[grafana]\nname=grafana\nbaseurl=https://rpm.grafana.com\nrepo_gpgcheck=1\nenabled=1\ngpgcheck=1\ngpgkey=https://rpm.grafana.com/gpg.key\nsslverify=1 -sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.repo + sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.repo ``` ```suse-opensuse @@ -58,11 +59,13 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana. sudo rpm --import gpg.key sudo zypper addrepo https://rpm.grafana.com grafana ``` + {{< /code >}} 1. Update the repositories. {{< code >}} + ```debian-ubuntu sudo apt-get update ``` @@ -74,11 +77,13 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana. ```suse-opensuse sudo zypper update ``` + {{< /code >}} 1. Install {{< param "PRODUCT_NAME" >}}. 
{{< code >}} + ```debian-ubuntu sudo apt-get install grafana-agent-flow ``` @@ -90,6 +95,7 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana. ```suse-opensuse sudo zypper install grafana-agent-flow ``` + {{< /code >}} ## Uninstall @@ -105,6 +111,7 @@ To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands i 1. Uninstall {{< param "PRODUCT_NAME" >}}. {{< code >}} + ```debian-ubuntu sudo apt-get remove grafana-agent-flow ``` @@ -116,11 +123,13 @@ To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands i ```suse-opensuse sudo zypper remove grafana-agent-flow ``` + {{< /code >}} 1. Optional: Remove the Grafana repository. {{< code >}} + ```debian-ubuntu sudo rm -i /etc/apt/sources.list.d/grafana.list ``` @@ -132,10 +141,10 @@ To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands i ```suse-opensuse sudo zypper removerepo grafana ``` + {{< /code >}} ## Next steps - [Run {{< param "PRODUCT_NAME" >}}](ref:run) - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) - diff --git a/docs/sources/flow/get-started/install/macos.md b/docs/sources/flow/get-started/install/macos.md index 1631055497a4..159132f75bed 100644 --- a/docs/sources/flow/get-started/install/macos.md +++ b/docs/sources/flow/get-started/install/macos.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/macos/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/macos/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/macos/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/macos/ -# Previous docs aliases for backwards compatibility: -- ../../install/macos/ # /docs/agent/latest/flow/install/macos/ -- /docs/grafana-cloud/agent/flow/setup/install/macos/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/macos/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/macos/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/macos/ -- ../../setup/install/macos/ # /docs/agent/latest/flow/setup/install/macos/ + - /docs/grafana-cloud/agent/flow/get-started/install/macos/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/macos/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/macos/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/macos/ + # Previous docs aliases for backwards compatibility: + - ../../install/macos/ # /docs/agent/latest/flow/install/macos/ + - /docs/grafana-cloud/agent/flow/setup/install/macos/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/macos/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/macos/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/macos/ + - ../../setup/install/macos/ # /docs/agent/latest/flow/setup/install/macos/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/macos/ description: Learn how to install Grafana Agent Flow on macOS menuTitle: macOS @@ -39,7 +39,7 @@ The default prefix for Homebrew on Intel is `/usr/local`. The default prefix for ## Before you begin -* Install [Homebrew][] on your computer. +- Install [Homebrew][] on your computer.
## Install @@ -87,4 +87,3 @@ brew uninstall grafana-agent-flow - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) [Homebrew]: https://brew.sh - diff --git a/docs/sources/flow/get-started/install/puppet.md b/docs/sources/flow/get-started/install/puppet.md index 7c144615e18d..e722228c9eee 100644 --- a/docs/sources/flow/get-started/install/puppet.md +++ b/docs/sources/flow/get-started/install/puppet.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/puppet/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/puppet/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/puppet/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/puppet/ + - /docs/grafana-cloud/agent/flow/get-started/install/puppet/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/puppet/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/puppet/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/puppet/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/puppet/ description: Learn how to install Grafana Agent Flow with Puppet @@ -36,80 +36,79 @@ To add {{< param "PRODUCT_NAME" >}} to a host: 1. Ensure that the following module dependencies are declared and installed: - ```json - { - "name": "puppetlabs/apt", - "version_requirement": ">= 4.1.0 <= 7.0.0" - }, - { - "name": "puppetlabs/yumrepo_core", - "version_requirement": "<= 2.0.0" - } - ``` + ```json + { + "name": "puppetlabs/apt", + "version_requirement": ">= 4.1.0 <= 7.0.0" + }, + { + "name": "puppetlabs/yumrepo_core", + "version_requirement": "<= 2.0.0" + } + ``` 1. Create a new [Puppet][] manifest with the following class to add the Grafana package repositories, install the `grafana-agent-flow` package, and run the service: - ```ruby - class grafana_agent::grafana_agent_flow () { - case $::os['family'] { - 'debian': { - apt::source { 'grafana': - location => 'https://apt.grafana.com/', - release => '', - repos => 'stable main', - key => { - id => 'B53AE77BADB630A683046005963FA27710458545', - source => 'https://apt.grafana.com/gpg.key', - }, - } -> package { 'grafana-agent-flow': - require => Exec['apt_update'], - } -> service { 'grafana-agent-flow': - ensure => running, - name => 'grafana-agent-flow', - enable => true, - subscribe => Package['grafana-agent-flow'], - } - } - 'redhat': { - yumrepo { 'grafana': - ensure => 'present', - name => 'grafana', - descr => 'grafana', - baseurl => 'https://packages.grafana.com/oss/rpm', - gpgkey => 'https://packages.grafana.com/gpg.key', - enabled => '1', - gpgcheck => '1', - target => '/etc/yum.repo.d/grafana.repo', - } -> package { 'grafana-agent-flow': - } -> service { 'grafana-agent-flow': - ensure => running, - name => 'grafana-agent-flow', - enable => true, - subscribe => Package['grafana-agent-flow'], - } - } - default: { - fail("Unsupported OS family: (${$::os['family']})") - } - } - } - ``` + ```ruby + class grafana_agent::grafana_agent_flow () { + case $::os['family'] { + 'debian': { + apt::source { 'grafana': + location => 'https://apt.grafana.com/', + release => '', + repos => 'stable main', + key => { + id => 'B53AE77BADB630A683046005963FA27710458545', + source => 'https://apt.grafana.com/gpg.key', + }, + } -> package { 'grafana-agent-flow': + require => Exec['apt_update'], + } -> service { 'grafana-agent-flow': + ensure => running, + name => 'grafana-agent-flow', + enable => true, + subscribe => 
Package['grafana-agent-flow'], + } + } + 'redhat': { + yumrepo { 'grafana': + ensure => 'present', + name => 'grafana', + descr => 'grafana', + baseurl => 'https://packages.grafana.com/oss/rpm', + gpgkey => 'https://packages.grafana.com/gpg.key', + enabled => '1', + gpgcheck => '1', + target => '/etc/yum.repos.d/grafana.repo', + } -> package { 'grafana-agent-flow': + } -> service { 'grafana-agent-flow': + ensure => running, + name => 'grafana-agent-flow', + enable => true, + subscribe => Package['grafana-agent-flow'], + } + } + default: { + fail("Unsupported OS family: (${$::os['family']})") + } + } + } + ``` 1. To use this class in a module, add the following line to the module's `init.pp` file: - ```ruby - include grafana_agent::grafana_agent_flow - ``` + ```ruby + include grafana_agent::grafana_agent_flow + ``` ## Configuration The `grafana-agent-flow` package installs a default configuration file that doesn't send telemetry anywhere. -The default configuration file location is `/etc/grafana-agent-flow.river`. You can replace this file with your own configuration, or create a new configuration file for the service to use. +The default configuration file location is `/etc/grafana-agent-flow.river`. You can replace this file with your own configuration, or create a new configuration file for the service to use. ## Next steps - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) [Puppet]: https://www.puppet.com/ - diff --git a/docs/sources/flow/get-started/install/windows.md b/docs/sources/flow/get-started/install/windows.md index a0c01294aab3..75f4049aa622 100644 --- a/docs/sources/flow/get-started/install/windows.md +++ b/docs/sources/flow/get-started/install/windows.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/install/windows/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/windows/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/windows/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/install/windows/ -# Previous docs aliases for backwards compatibility: -- ../../install/windows/ # /docs/agent/latest/flow/install/windows/ -- /docs/grafana-cloud/agent/flow/setup/install/windows/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/windows/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/windows/ -- /docs/grafana-cloud/send-data/agent/flow/setup/install/windows/ -- ../../setup/install/windows/ # /docs/agent/latest/flow/setup/install/windows/ + - /docs/grafana-cloud/agent/flow/get-started/install/windows/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/install/windows/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/install/windows/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/install/windows/ + # Previous docs aliases for backwards compatibility: + - ../../install/windows/ # /docs/agent/latest/flow/install/windows/ + - /docs/grafana-cloud/agent/flow/setup/install/windows/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/install/windows/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/install/windows/ + - /docs/grafana-cloud/send-data/agent/flow/setup/install/windows/ + - ../../setup/install/windows/ # /docs/agent/latest/flow/setup/install/windows/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/install/windows/ description: Learn how to install Grafana Agent Flow on Windows menuTitle:
Windows @@ -78,17 +78,17 @@ To do a silent install of {{< param "PRODUCT_NAME" >}} on Windows, perform the f ### Silent install options -* `/CONFIG=` Path to the configuration file. Default: `$INSTDIR\config.river` -* `/DISABLEREPORTING=` Disable [data collection](ref:data-collection). Default: `no` -* `/DISABLEPROFILING=` Disable profiling endpoint. Default: `no` -* `/ENVIRONMENT="KEY=VALUE\0KEY2=VALUE2"` Define environment variables for Windows Service. Default: `` +- `/CONFIG=` Path to the configuration file. Default: `$INSTDIR\config.river` +- `/DISABLEREPORTING=` Disable [data collection](ref:data-collection). Default: `no` +- `/DISABLEPROFILING=` Disable profiling endpoint. Default: `no` +- `/ENVIRONMENT="KEY=VALUE\0KEY2=VALUE2"` Define environment variables for Windows Service. Default: `` ## Service Configuration {{< param "PRODUCT_NAME" >}} uses the Windows Registry `HKLM\Software\Grafana\Grafana Agent Flow` for service configuration. -* `Arguments` (Type `REG_MULTI_SZ`) Each value represents a binary argument for grafana-agent-flow binary. -* `Environment` (Type `REG_MULTI_SZ`) Each value represents a environment value `KEY=VALUE` for grafana-agent-flow binary. +- `Arguments` (Type `REG_MULTI_SZ`) Each value represents a binary argument for the grafana-agent-flow binary. +- `Environment` (Type `REG_MULTI_SZ`) Each value represents an environment value `KEY=VALUE` for the grafana-agent-flow binary. ## Uninstall @@ -104,4 +104,3 @@ This includes any configuration files in the installation directory. - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) [latest]: https://github.com/grafana/agent/releases/latest - diff --git a/docs/sources/flow/get-started/run/_index.md b/docs/sources/flow/get-started/run/_index.md index 6b38643a6fcc..ce1f0bcea2ac 100644 --- a/docs/sources/flow/get-started/run/_index.md +++ b/docs/sources/flow/get-started/run/_index.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/get-started/run/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/run/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/run/ -- /docs/grafana-cloud/send-data/agent/flow/get-started/run/ -- /docs/sources/flow/run/ -# Previous pages aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/start-agent/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/start-agent/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/start-agent/ -- /docs/grafana-cloud/send-data/agent/flow/setup/start-agent/ -- ../setup/start-agent/ # /docs/agent/latest/flow/setup/start-agent/ + - /docs/grafana-cloud/agent/flow/get-started/run/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/run/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/run/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/run/ + - /docs/sources/flow/run/ + # Previous pages aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/start-agent/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/start-agent/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/start-agent/ + - /docs/grafana-cloud/send-data/agent/flow/setup/start-agent/ + - ../setup/start-agent/ # /docs/agent/latest/flow/setup/start-agent/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/run/ description: Learn how to run Grafana Agent Flow menuTitle: Run @@ -30,4 +30,3 @@ Use the following pages to learn how to start, restart, and
stop {{< param "PROD For installation instructions, refer to [Install {{< param "PRODUCT_NAME" >}}](ref:install). {{< section >}} - diff --git a/docs/sources/flow/get-started/run/binary.md b/docs/sources/flow/get-started/run/binary.md index 1f398c645b35..f8c20fa44c10 100644 --- a/docs/sources/flow/get-started/run/binary.md +++ b/docs/sources/flow/get-started/run/binary.md @@ -1,9 +1,9 @@ --- aliases: - - /docs/grafana-cloud/agent/flow/get-started/run/binary/ - - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/run/binary/ - - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/run/binary/ - - /docs/grafana-cloud/send-data/agent/flow/get-started/run/binary/ + - /docs/grafana-cloud/agent/flow/get-started/run/binary/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/get-started/run/binary/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/get-started/run/binary/ + - /docs/grafana-cloud/send-data/agent/flow/get-started/run/binary/ canonical: https://grafana.com/docs/agent/latest/flow/get-started/run/binary/ description: Learn how to run Grafana Agent Flow as a standalone binary menuTitle: Standalone @@ -36,8 +36,8 @@ AGENT_MODE=flow run Replace the following: -* _``_: The path to the {{< param "PRODUCT_NAME" >}} binary file. -* _``_: The path to the {{< param "PRODUCT_NAME" >}} configuration file. +- _``_: The path to the {{< param "PRODUCT_NAME" >}} binary file. +- _``_: The path to the {{< param "PRODUCT_NAME" >}} configuration file. ## Start {{% param "PRODUCT_NAME" %}} on Windows @@ -50,8 +50,8 @@ set AGENT_MODE=flow Replace the following: -* _``_: The path to the {{< param "PRODUCT_NAME" >}} binary file. -* _``_: The path to the {{< param "PRODUCT_NAME" >}} configuration file. +- _``_: The path to the {{< param "PRODUCT_NAME" >}} binary file. +- _``_: The path to the {{< param "PRODUCT_NAME" >}} configuration file. ## Set up {{% param "PRODUCT_NAME" %}} as a Linux systemd service @@ -93,8 +93,8 @@ These steps assume you have a default systemd and {{< param "PRODUCT_NAME" >}} c Replace the following: - * _``_: The path to the {{< param "PRODUCT_NAME" >}} binary file. - * _``_: The path to a working directory, for example `/var/lib/grafana-agent-flow`. + - _``_: The path to the {{< param "PRODUCT_NAME" >}} binary file. + - _``_: The path to a working directory, for example `/var/lib/grafana-agent-flow`. 1. Create an environment file in `/etc/default/` called `grafana-agent-flow` with the following contents: @@ -119,7 +119,7 @@ These steps assume you have a default systemd and {{< param "PRODUCT_NAME" >}} c Replace the following: - * _``_: The path to the {{< param "PRODUCT_NAME" >}} configuration file. + - _``_: The path to the {{< param "PRODUCT_NAME" >}} configuration file. 1. To reload the service files, run the following command in a terminal window: @@ -128,4 +128,3 @@ These steps assume you have a default systemd and {{< param "PRODUCT_NAME" >}} c ``` 1. Use the [Linux](ref:startlinux) systemd commands to manage your standalone Linux installation of {{< param "PRODUCT_NAME" >}}. 
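If you want to confirm the systemd service starts before wiring up a real pipeline, a minimal River configuration such as the following sketch is enough for the service to run; it only sets up logging and collects nothing (the contents are an assumption for illustration, not the packaged default):

```river
// Sketch: a bare-bones configuration that collects and sends nothing.
logging {
  level  = "info"
  format = "logfmt"
}
```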
- diff --git a/docs/sources/flow/get-started/run/linux.md b/docs/sources/flow/get-started/run/linux.md index 38369e75ef47..c5c9103fc0a5 100644 --- a/docs/sources/flow/get-started/run/linux.md +++ b/docs/sources/flow/get-started/run/linux.md @@ -77,4 +77,3 @@ sudo journalctl -u grafana-agent-flow ## Next steps - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) - diff --git a/docs/sources/flow/get-started/run/macos.md b/docs/sources/flow/get-started/run/macos.md index 85bcc456ae47..128daaf9d1e0 100644 --- a/docs/sources/flow/get-started/run/macos.md +++ b/docs/sources/flow/get-started/run/macos.md @@ -74,4 +74,3 @@ refer to your current copy of the {{< param "PRODUCT_NAME" >}} formula to locate ## Next steps - [Configure {{< param "PRODUCT_NAME" >}}](ref:configuremacos) - diff --git a/docs/sources/flow/get-started/run/windows.md b/docs/sources/flow/get-started/run/windows.md index d6f6a355dcf0..fcbce1abc343 100644 --- a/docs/sources/flow/get-started/run/windows.md +++ b/docs/sources/flow/get-started/run/windows.md @@ -30,9 +30,9 @@ To verify that {{< param "PRODUCT_NAME" >}} is running as a Windows Service: 1. Open the Windows Services manager (services.msc): - 1. Right click on the Start Menu and select **Run**. + 1. Right click on the Start Menu and select **Run**. - 1. Type: `services.msc` and click **OK**. + 1. Type: `services.msc` and click **OK**. 1. Scroll down to find the **{{< param "PRODUCT_NAME" >}}** service and verify that the **Status** is **Running**. @@ -45,9 +45,9 @@ To view the logs, perform the following steps: 1. Open the Event Viewer: - 1. Right click on the Start Menu and select **Run**. + 1. Right click on the Start Menu and select **Run**. - 1. Type `eventvwr` and click **OK**. + 1. Type `eventvwr` and click **OK**. 1. In the Event Viewer, click on **Windows Logs > Application**. 
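If you prefer a terminal over `services.msc`, the same check can be scripted — a sketch, assuming the service is registered under the name `Grafana Agent Flow` (verify the name on your system):

```powershell
# Query the Windows service state; Status should report Running.
Get-Service -Name "Grafana Agent Flow"
```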
@@ -56,4 +56,3 @@ To view the logs, perform the following steps: ## Next steps - [Configure {{< param "PRODUCT_NAME" >}}](ref:configure) - diff --git a/docs/sources/flow/reference/_index.md b/docs/sources/flow/reference/_index.md index 5c4e88aac9cc..0c7040fa8e23 100644 --- a/docs/sources/flow/reference/_index.md +++ b/docs/sources/flow/reference/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/ -- /docs/grafana-cloud/send-data/agent/flow/reference/ + - /docs/grafana-cloud/agent/flow/reference/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/ + - /docs/grafana-cloud/send-data/agent/flow/reference/ canonical: https://grafana.com/docs/agent/latest/flow/reference/ description: The reference-level documentation for Grafana Agent menuTitle: Reference diff --git a/docs/sources/flow/reference/cli/_index.md b/docs/sources/flow/reference/cli/_index.md index 43fa4be774fd..1dd7e6687b23 100644 --- a/docs/sources/flow/reference/cli/_index.md +++ b/docs/sources/flow/reference/cli/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/cli/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/ -- /docs/grafana-cloud/send-data/agent/flow/reference/cli/ + - /docs/grafana-cloud/agent/flow/reference/cli/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/ + - /docs/grafana-cloud/send-data/agent/flow/reference/cli/ canonical: https://grafana.com/docs/agent/latest/flow/reference/cli/ description: Learn about the Grafana Agent command line interface menuTitle: Command-line interface @@ -21,12 +21,12 @@ starts {{< param "PRODUCT_NAME" >}}. Available commands: -* [`convert`][convert]: Convert a {{< param "PRODUCT_ROOT_NAME" >}} configuration file. -* [`fmt`][fmt]: Format a {{< param "PRODUCT_NAME" >}} configuration file. -* [`run`][run]: Start {{< param "PRODUCT_NAME" >}}, given a configuration file. -* [`tools`][tools]: Read the WAL and provide statistical information. -* `completion`: Generate shell completion for the `grafana-agent-flow` CLI. -* `help`: Print help for supported commands. +- [`convert`][convert]: Convert a {{< param "PRODUCT_ROOT_NAME" >}} configuration file. +- [`fmt`][fmt]: Format a {{< param "PRODUCT_NAME" >}} configuration file. +- [`run`][run]: Start {{< param "PRODUCT_NAME" >}}, given a configuration file. +- [`tools`][tools]: Read the WAL and provide statistical information. +- `completion`: Generate shell completion for the `grafana-agent-flow` CLI. +- `help`: Print help for supported commands.
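Every command is reachable through either invocation style used throughout this reference — for example, the following two lines are equivalent:

```shell
# Print top-level help for the supported subcommands.
grafana-agent-flow help

# The same, using the grafana-agent binary switched into flow mode.
AGENT_MODE=flow grafana-agent help
```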
[run]: {{< relref "./run.md" >}} [fmt]: {{< relref "./fmt.md" >}} diff --git a/docs/sources/flow/reference/cli/convert.md b/docs/sources/flow/reference/cli/convert.md index 73761440eec4..d2c2c7a38051 100644 --- a/docs/sources/flow/reference/cli/convert.md +++ b/docs/sources/flow/reference/cli/convert.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/cli/convert/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/convert/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/convert/ -- /docs/grafana-cloud/send-data/agent/flow/reference/cli/convert/ + - /docs/grafana-cloud/agent/flow/reference/cli/convert/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/convert/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/convert/ + - /docs/grafana-cloud/send-data/agent/flow/reference/cli/convert/ canonical: https://grafana.com/docs/agent/latest/flow/reference/cli/convert/ description: Learn about the convert command labels: @@ -25,13 +25,13 @@ This command has no backward compatibility guarantees and may change or be remov Usage: -* `AGENT_MODE=flow grafana-agent convert [ ...] ` -* `grafana-agent-flow convert [ ...] ` +- `AGENT_MODE=flow grafana-agent convert [ ...] ` +- `grafana-agent-flow convert [ ...] ` - Replace the following: + Replace the following: - * _``_: One or more flags that define the input and output of the command. - * _``_: The {{< param "PRODUCT_ROOT_NAME" >}} configuration file. + - _``_: One or more flags that define the input and output of the command. + - _``_: The {{< param "PRODUCT_ROOT_NAME" >}} configuration file. If the `FILE_NAME` argument isn't provided or if the `FILE_NAME` argument is equal to `-`, `convert` converts the contents of standard input. Otherwise, @@ -44,15 +44,15 @@ configuration or can't be converted to {{< param "PRODUCT_NAME" >}} River format The following flags are supported: -* `--output`, `-o`: The filepath and filename where the output is written. +- `--output`, `-o`: The filepath and filename where the output is written. -* `--report`, `-r`: The filepath and filename where the report is written. +- `--report`, `-r`: The filepath and filename where the report is written. -* `--source-format`, `-f`: Required. The format of the source file. Supported formats: [otelcol], [prometheus], [promtail], [static]. +- `--source-format`, `-f`: Required. The format of the source file. Supported formats: [otelcol], [prometheus], [promtail], [static]. -* `--bypass-errors`, `-b`: Enable bypassing errors when converting. +- `--bypass-errors`, `-b`: Enable bypassing errors when converting. -* `--extra-args`, `e`: Extra arguments from the original format used by the converter. +- `--extra-args`, `-e`: Extra arguments from the original format used by the converter. [otelcol]: #opentelemetry-collector [prometheus]: #prometheus @@ -63,9 +63,10 @@ The following flags are supported: ### Defaults {{< param "PRODUCT_NAME" >}} defaults are managed as follows: -* If a provided source configuration value matches a {{< param "PRODUCT_NAME" >}} default value, the property is left off the output. -* If a non-provided source configuration value default matches a {{< param "PRODUCT_NAME" >}} default value, the property is left off the output. -* If a non-provided source configuration value default doesn't match a {{< param "PRODUCT_NAME" >}} default value, the default value is included in the output.
+ +- If a provided source configuration value matches a {{< param "PRODUCT_NAME" >}} default value, the property is left off the output. +- If a non-provided source configuration value default matches a {{< param "PRODUCT_NAME" >}} default value, the property is left off the output. +- If a non-provided source configuration value default doesn't match a {{< param "PRODUCT_NAME" >}} default value, the default value is included in the output. ### Errors @@ -95,7 +96,7 @@ This includes Prometheus features such as [relabel_config](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#relabel_config), [metric_relabel_configs](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#metric_relabel_configs), [remote_write](https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#remote_write), -and many supported *_sd_configs. Unsupported features in a source configuration result +and many supported \*\_sd_configs. Unsupported features in a source configuration result in [errors]. Refer to [Migrate from Prometheus to {{< param "PRODUCT_NAME" >}}][migrate-prometheus] for a detailed migration guide. @@ -136,4 +137,4 @@ Refer to [Migrate from Grafana Agent Static to {{< param "PRODUCT_NAME" >}}][mig [migrate-promtail]: ../../../tasks/migrate/from-promtail/ [migrate-static]: ../../../tasks/migrate/from-static/ [Grafana Agent Static]: ../../../../static/ -[integrations-next]: ../../../../static/configuration/integrations/integrations-next/ \ No newline at end of file +[integrations-next]: ../../../../static/configuration/integrations/integrations-next/ diff --git a/docs/sources/flow/reference/cli/fmt.md b/docs/sources/flow/reference/cli/fmt.md index 7a266921d365..c5a8cf440a54 100644 --- a/docs/sources/flow/reference/cli/fmt.md +++ b/docs/sources/flow/reference/cli/fmt.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/cli/fmt/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/fmt/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/fmt/ -- /docs/grafana-cloud/send-data/agent/flow/reference/cli/fmt/ + - /docs/grafana-cloud/agent/flow/reference/cli/fmt/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/fmt/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/fmt/ + - /docs/grafana-cloud/send-data/agent/flow/reference/cli/fmt/ canonical: https://grafana.com/docs/agent/latest/flow/reference/cli/fmt/ description: Learn about the fmt command menuTitle: fmt @@ -19,13 +19,13 @@ The `fmt` command formats a given {{< param "PRODUCT_NAME" >}} configuration fil Usage: -* `AGENT_MODE=flow grafana-agent fmt [FLAG ...] FILE_NAME` -* `grafana-agent-flow fmt [FLAG ...] FILE_NAME` +- `AGENT_MODE=flow grafana-agent fmt [FLAG ...] FILE_NAME` +- `grafana-agent-flow fmt [FLAG ...] FILE_NAME` - Replace the following: + Replace the following: - * `FLAG`: One or more flags that define the input and output of the command. - * `FILE_NAME`: The {{< param "PRODUCT_NAME" >}} configuration file. + - `FLAG`: One or more flags that define the input and output of the command. + - `FILE_NAME`: The {{< param "PRODUCT_NAME" >}} configuration file. If the `FILE_NAME` argument is not provided or if the `FILE_NAME` argument is equal to `-`, `fmt` formats the contents of standard input. Otherwise, @@ -41,5 +41,5 @@ properly. 
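A quick sketch of `fmt` in practice (the file name is illustrative):

```shell
# Print the formatted configuration to standard output.
grafana-agent-flow fmt config.river

# Format the contents of standard input instead of a file.
cat config.river | grafana-agent-flow fmt -
```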
The following flags are supported: -* `--write`, `-w`: Write the formatted file back to disk when not reading from +- `--write`, `-w`: Write the formatted file back to disk when not reading from standard input. diff --git a/docs/sources/flow/reference/cli/run.md b/docs/sources/flow/reference/cli/run.md index 71c17837eb46..fbb6507f983a 100644 --- a/docs/sources/flow/reference/cli/run.md +++ b/docs/sources/flow/reference/cli/run.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/cli/run/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/run/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/run/ -- /docs/grafana-cloud/send-data/agent/flow/reference/cli/run/ + - /docs/grafana-cloud/agent/flow/reference/cli/run/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/run/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/run/ + - /docs/grafana-cloud/send-data/agent/flow/reference/cli/run/ canonical: https://grafana.com/docs/agent/latest/flow/reference/cli/run/ description: Learn about the run command menuTitle: run @@ -19,13 +19,13 @@ The `run` command runs {{< param "PRODUCT_NAME" >}} in the foreground until an i Usage: -* `AGENT_MODE=flow grafana-agent run [FLAG ...] PATH_NAME` -* `grafana-agent-flow run [FLAG ...] PATH_NAME` +- `AGENT_MODE=flow grafana-agent run [FLAG ...] PATH_NAME` +- `grafana-agent-flow run [FLAG ...] PATH_NAME` - Replace the following: + Replace the following: - * `FLAG`: One or more flags that define the input and output of the command. - * `PATH_NAME`: Required. The {{< param "PRODUCT_NAME" >}} configuration file/directory path. + - `FLAG`: One or more flags that define the input and output of the command. + - `PATH_NAME`: Required. The {{< param "PRODUCT_NAME" >}} configuration file/directory path. If the `PATH_NAME` argument is not provided, or if the configuration path can't be loaded or contains errors during the initial load, the `run` command will immediately exit and show an error message. @@ -45,25 +45,25 @@ running components. The following flags are supported: -* `--server.http.enable-pprof`: Enable /debug/pprof profiling endpoints. (default `true`) -* `--server.http.memory-addr`: Address to listen for [in-memory HTTP traffic][] on +- `--server.http.enable-pprof`: Enable /debug/pprof profiling endpoints. (default `true`) +- `--server.http.memory-addr`: Address to listen for [in-memory HTTP traffic][] on (default `agent.internal:12345`). -* `--server.http.listen-addr`: Address to listen for HTTP traffic on (default `127.0.0.1:12345`). -* `--server.http.ui-path-prefix`: Base path where the UI is exposed (default `/`). -* `--storage.path`: Base directory where components can store data (default `data-agent/`). -* `--disable-reporting`: Disable [data collection][] (default `false`). -* `--cluster.enabled`: Start {{< param "PRODUCT_NAME" >}} in clustered mode (default `false`). -* `--cluster.node-name`: The name to use for this node (defaults to the environment's hostname). -* `--cluster.join-addresses`: Comma-separated list of addresses to join the cluster at (default `""`). Mutually exclusive with `--cluster.discover-peers`. -* `--cluster.discover-peers`: List of key-value tuples for discovering peers (default `""`). Mutually exclusive with `--cluster.join-addresses`. -* `--cluster.rejoin-interval`: How often to rejoin the list of peers (default `"60s"`). 
-* `--cluster.advertise-address`: Address to advertise to other cluster nodes (default `""`). -* `--cluster.advertise-interfaces`: List of interfaces used to infer an address to advertise. Set to `all` to use all available network interfaces on the system. (default `"eth0,en0"`). -* `--cluster.max-join-peers`: Number of peers to join from the discovered set (default `5`). -* `--cluster.name`: Name to prevent nodes without this identifier from joining the cluster (default `""`). -* `--config.format`: The format of the source file. Supported formats: `flow`, `otelcol`, `prometheus`, `promtail`, `static` (default `"flow"`). -* `--config.bypass-conversion-errors`: Enable bypassing errors when converting (default `false`). -* `--config.extra-args`: Extra arguments from the original format used by the converter. +- `--server.http.listen-addr`: Address to listen for HTTP traffic on (default `127.0.0.1:12345`). +- `--server.http.ui-path-prefix`: Base path where the UI is exposed (default `/`). +- `--storage.path`: Base directory where components can store data (default `data-agent/`). +- `--disable-reporting`: Disable [data collection][] (default `false`). +- `--cluster.enabled`: Start {{< param "PRODUCT_NAME" >}} in clustered mode (default `false`). +- `--cluster.node-name`: The name to use for this node (defaults to the environment's hostname). +- `--cluster.join-addresses`: Comma-separated list of addresses to join the cluster at (default `""`). Mutually exclusive with `--cluster.discover-peers`. +- `--cluster.discover-peers`: List of key-value tuples for discovering peers (default `""`). Mutually exclusive with `--cluster.join-addresses`. +- `--cluster.rejoin-interval`: How often to rejoin the list of peers (default `"60s"`). +- `--cluster.advertise-address`: Address to advertise to other cluster nodes (default `""`). +- `--cluster.advertise-interfaces`: List of interfaces used to infer an address to advertise. Set to `all` to use all available network interfaces on the system. (default `"eth0,en0"`). +- `--cluster.max-join-peers`: Number of peers to join from the discovered set (default `5`). +- `--cluster.name`: Name to prevent nodes without this identifier from joining the cluster (default `""`). +- `--config.format`: The format of the source file. Supported formats: `flow`, `otelcol`, `prometheus`, `promtail`, `static` (default `"flow"`). +- `--config.bypass-conversion-errors`: Enable bypassing errors when converting (default `false`). +- `--config.extra-args`: Extra arguments from the original format used by the converter. [in-memory HTTP traffic]: {{< relref "../../concepts/component_controller.md#in-memory-traffic" >}} [data collection]: {{< relref "../../../data-collection" >}} @@ -73,8 +73,8 @@ The following flags are supported: The configuration file can be reloaded from disk by either: -* Sending an HTTP POST request to the `/-/reload` endpoint. -* Sending a `SIGHUP` signal to the {{< param "PRODUCT_NAME" >}} process. +- Sending an HTTP POST request to the `/-/reload` endpoint. +- Sending a `SIGHUP` signal to the {{< param "PRODUCT_NAME" >}} process. When this happens, the [component controller][] synchronizes the set of running components with the latest set of components specified in the configuration file. @@ -130,7 +130,7 @@ itself. The `--cluster.rejoin-interval` flag defines how often each node should rediscover peers based on the contents of the `--cluster.join-addresses` and -`--cluster.discover-peers` flags and try to rejoin them. 
This operation +`--cluster.discover-peers` flags and try to rejoin them. This operation is useful for addressing split-brain issues if the initial bootstrap is unsuccessful and for making clustering easier to manage in dynamic environments. To disable this behavior, set the `--cluster.rejoin-interval` @@ -159,11 +159,11 @@ Attempting to join a cluster with a wrong `--cluster.name` will result in a "fai Clustered {{< param "PRODUCT_ROOT_NAME" >}}s are in one of three states: -* **Viewer**: {{< param "PRODUCT_NAME" >}} has a read-only view of the cluster and isn't participating in workload distribution. +- **Viewer**: {{< param "PRODUCT_NAME" >}} has a read-only view of the cluster and isn't participating in workload distribution. -* **Participant**: {{< param "PRODUCT_NAME" >}} is participating in workload distribution for components that have clustering enabled. +- **Participant**: {{< param "PRODUCT_NAME" >}} is participating in workload distribution for components that have clustering enabled. -* **Terminating**: {{< param "PRODUCT_NAME" >}} is shutting down and will no longer assign new work to itself. +- **Terminating**: {{< param "PRODUCT_NAME" >}} is shutting down and will no longer assign new work to itself. Each {{< param "PRODUCT_ROOT_NAME" >}} initially joins the cluster in the viewer state and then transitions to the participant state after the process startup completes. Each {{< param "PRODUCT_ROOT_NAME" >}} then @@ -190,5 +190,5 @@ Include `--config.extra-args` to pass additional command line flags from the ori Refer to [grafana-agent-flow convert][] for more details on how `extra-args` work. [grafana-agent-flow convert]: {{< relref "./convert.md" >}} -[clustering]: {{< relref "../../concepts/clustering.md" >}} +[clustering]: {{< relref "../../concepts/clustering.md" >}} [go-discover]: https://github.com/hashicorp/go-discover diff --git a/docs/sources/flow/reference/cli/tools.md b/docs/sources/flow/reference/cli/tools.md index b9fb73a761bd..153d20c256f7 100644 --- a/docs/sources/flow/reference/cli/tools.md +++ b/docs/sources/flow/reference/cli/tools.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/cli/tools/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/tools/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/tools/ -- /docs/grafana-cloud/send-data/agent/flow/reference/cli/tools/ + - /docs/grafana-cloud/agent/flow/reference/cli/tools/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/cli/tools/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/cli/tools/ + - /docs/grafana-cloud/send-data/agent/flow/reference/cli/tools/ canonical: https://grafana.com/docs/agent/latest/flow/reference/cli/tools/ description: Learn about the tools command menuTitle: tools @@ -26,31 +26,31 @@ guarantees and may change or be removed between releases. Usage: -* `AGENT_MODE=flow grafana-agent tools prometheus.remote_write sample-stats [FLAG ...] WAL_DIRECTORY` -* `grafana-agent-flow tools prometheus.remote_write sample-stats [FLAG ...] WAL_DIRECTORY` +- `AGENT_MODE=flow grafana-agent tools prometheus.remote_write sample-stats [FLAG ...] WAL_DIRECTORY` +- `grafana-agent-flow tools prometheus.remote_write sample-stats [FLAG ...] WAL_DIRECTORY` The `sample-stats` command reads the Write-Ahead Log (WAL) specified by `WAL_DIRECTORY` and collects information on metric samples within it. 
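For example, a sketch of two `sample-stats` runs (the WAL directory path is illustrative and depends on your `--storage.path` and component label):

```shell
# Report per-metric sample statistics for every metric in the WAL.
grafana-agent-flow tools prometheus.remote_write sample-stats data-agent/wal

# Narrow the report with the --selector flag described below.
grafana-agent-flow tools prometheus.remote_write sample-stats \
  --selector '{job="node_exporter"}' data-agent/wal
```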
For each metric discovered, `sample-stats` emits: -* The timestamp of the oldest sample received for that metric. -* The timestamp of the newest sample received for that metric. -* The total number of samples discovered for that metric. +- The timestamp of the oldest sample received for that metric. +- The timestamp of the newest sample received for that metric. +- The total number of samples discovered for that metric. By default, `sample-stats` will return information for every metric in the WAL. You can pass the `--selector` flag to filter the reported metrics to a smaller set. The following flag is supported: -* `--selector`: A PromQL label selector to filter data by. (default `{}`) +- `--selector`: A PromQL label selector to filter data by. (default `{}`) ### prometheus.remote_write target-stats Usage: -* `AGENT_MODE=flow grafana-agent tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY` -* `grafana-agent-flow tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY` +- `AGENT_MODE=flow grafana-agent tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY` +- `grafana-agent-flow tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY` The `target-stats` command reads the Write-Ahead Log (WAL) specified by `WAL_DIRECTORY` and collects metric cardinality information for a specific @@ -62,32 +62,32 @@ metric name. The following flags are supported: -* `--job`: The `job` label of the target. -* `--instance`: The `instance` label of the target. +- `--job`: The `job` label of the target. +- `--instance`: The `instance` label of the target. The `--job` and `--instance` labels are required. ### prometheus.remote_write wal-stats -Usage: +Usage: -* `AGENT_MODE=flow grafana-agent tools prometheus.remote_write wal-stats WAL_DIRECTORY` -* `grafana-agent-flow tools prometheus.remote_write wal-stats WAL_DIRECTORY` +- `AGENT_MODE=flow grafana-agent tools prometheus.remote_write wal-stats WAL_DIRECTORY` +- `grafana-agent-flow tools prometheus.remote_write wal-stats WAL_DIRECTORY` The `wal-stats` command reads the Write-Ahead Log (WAL) specified by `WAL_DIRECTORY` and collects general information about it. The following information is reported: -* The timestamp of the oldest sample in the WAL. -* The timestamp of the newest sample in the WAL. -* The total number of unique series defined in the WAL. -* The total number of samples in the WAL. -* The number of hash collisions detected, if any. -* The total number of invalid records in the WAL, if any. -* The most recent WAL checkpoint segment number. -* The oldest segment number in the WAL. -* The newest segment number in the WAL. +- The timestamp of the oldest sample in the WAL. +- The timestamp of the newest sample in the WAL. +- The total number of unique series defined in the WAL. +- The total number of samples in the WAL. +- The number of hash collisions detected, if any. +- The total number of invalid records in the WAL, if any. +- The most recent WAL checkpoint segment number. +- The oldest segment number in the WAL. +- The newest segment number in the WAL. Additionally, `wal-stats` reports per-target information, where a target is defined as a unique combination of the `job` and `instance` label values. 
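The other two subcommands follow the same shape — a sketch with illustrative label values and WAL path:

```shell
# Cardinality information for one target, identified by its job and instance labels.
grafana-agent-flow tools prometheus.remote_write target-stats \
  --job node_exporter --instance demo:9100 data-agent/wal

# General information about the WAL: series, samples, and segment numbers.
grafana-agent-flow tools prometheus.remote_write wal-stats data-agent/wal
```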
For diff --git a/docs/sources/flow/reference/compatibility/_index.md b/docs/sources/flow/reference/compatibility/_index.md index 97d113cdb10f..097aec8daa41 100644 --- a/docs/sources/flow/reference/compatibility/_index.md +++ b/docs/sources/flow/reference/compatibility/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/compatible-components/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/compatible-components/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/compatible-components/ -- /docs/grafana-cloud/send-data/agent/flow/reference/compatible-components/ + - /docs/grafana-cloud/agent/flow/reference/compatible-components/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/compatible-components/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/compatible-components/ + - /docs/grafana-cloud/send-data/agent/flow/reference/compatible-components/ canonical: https://grafana.com/docs/agent/latest/flow/reference/compatibility/ description: Learn about which components are compatible with each other in Grafana Agent Flow title: Compatible components @@ -22,9 +22,10 @@ The value of an attribute may matter as well as its type. Refer to each component's documentation for more details on what values are acceptable. For example: -* A Prometheus component may always expect an `"__address__"` label inside a list of targets. -* A `string` argument may only accept certain values like "traceID" or "spanID". -{{< /admonition >}} + +- A Prometheus component may always expect an `"__address__"` label inside a list of targets. +- A `string` argument may only accept certain values like "traceID" or "spanID". + {{< /admonition >}} ## Targets @@ -38,6 +39,7 @@ It's recommended to always check component references for details when working w [string]: ../../concepts/config-language/expressions/types_and_values/#strings + ### Targets Exporters The following components, grouped by namespace, _export_ Targets. @@ -45,6 +47,7 @@ The following components, grouped by namespace, _export_ Targets. {{< collapse title="discovery" >}} + - [discovery.azure](../components/discovery.azure) - [discovery.consul](../components/discovery.consul) - [discovery.consulagent](../components/discovery.consulagent) @@ -76,13 +79,15 @@ The following components, grouped by namespace, _export_ Targets. - [discovery.serverset](../components/discovery.serverset) - [discovery.triton](../components/discovery.triton) - [discovery.uyuni](../components/discovery.uyuni) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="local" >}} + - [local.file_match](../components/local.file_match) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="prometheus" >}} + - [prometheus.exporter.apache](../components/prometheus.exporter.apache) - [prometheus.exporter.azure](../components/prometheus.exporter.azure) - [prometheus.exporter.blackbox](../components/prometheus.exporter.blackbox) @@ -110,49 +115,55 @@ The following components, grouped by namespace, _export_ Targets. - [prometheus.exporter.unix](../components/prometheus.exporter.unix) - [prometheus.exporter.vsphere](../components/prometheus.exporter.vsphere) - [prometheus.exporter.windows](../components/prometheus.exporter.windows) -{{< /collapse >}} + {{< /collapse >}} - + ### Targets Consumers + The following components, grouped by namespace, _consume_ Targets. 
{{< collapse title="discovery" >}} + - [discovery.process](../components/discovery.process) - [discovery.relabel](../components/discovery.relabel) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="local" >}} + - [local.file_match](../components/local.file_match) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="loki" >}} + - [loki.source.docker](../components/loki.source.docker) - [loki.source.file](../components/loki.source.file) - [loki.source.kubernetes](../components/loki.source.kubernetes) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="otelcol" >}} + - [otelcol.processor.discovery](../components/otelcol.processor.discovery) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="prometheus" >}} + - [prometheus.scrape](../components/prometheus.scrape) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="pyroscope" >}} + - [pyroscope.ebpf](../components/pyroscope.ebpf) - [pyroscope.java](../components/pyroscope.java) - [pyroscope.scrape](../components/pyroscope.scrape) -{{< /collapse >}} + {{< /collapse >}} - ## Prometheus `MetricsReceiver` The Prometheus metrics are sent between components using `MetricsReceiver`s. @@ -163,6 +174,7 @@ Use the following components to build your Prometheus metrics pipeline: [capsules]: ../../concepts/config-language/expressions/types_and_values/#capsules + ### Prometheus `MetricsReceiver` Exporters The following components, grouped by namespace, _export_ Prometheus `MetricsReceiver`. @@ -170,17 +182,20 @@ The following components, grouped by namespace, _export_ Prometheus `MetricsRece {{< collapse title="otelcol" >}} + - [otelcol.receiver.prometheus](../components/otelcol.receiver.prometheus) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="prometheus" >}} + - [prometheus.relabel](../components/prometheus.relabel) - [prometheus.remote_write](../components/prometheus.remote_write) -{{< /collapse >}} + {{< /collapse >}} + ### Prometheus `MetricsReceiver` Consumers The following components, grouped by namespace, _consume_ Prometheus `MetricsReceiver`. @@ -188,17 +203,19 @@ The following components, grouped by namespace, _consume_ Prometheus `MetricsRec {{< collapse title="otelcol" >}} + - [otelcol.exporter.prometheus](../components/otelcol.exporter.prometheus) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="prometheus" >}} + - [prometheus.operator.podmonitors](../components/prometheus.operator.podmonitors) - [prometheus.operator.probes](../components/prometheus.operator.probes) - [prometheus.operator.servicemonitors](../components/prometheus.operator.servicemonitors) - [prometheus.receive_http](../components/prometheus.receive_http) - [prometheus.relabel](../components/prometheus.relabel) - [prometheus.scrape](../components/prometheus.scrape) -{{< /collapse >}} + {{< /collapse >}} @@ -209,6 +226,7 @@ Components that consume `LogsReceiver` as an argument typically send logs to it. Use the following components to build your Loki logs pipeline: + ### Loki `LogsReceiver` Exporters The following components, grouped by namespace, _export_ Loki `LogsReceiver`. @@ -216,19 +234,22 @@ The following components, grouped by namespace, _export_ Loki `LogsReceiver`. 
{{< collapse title="loki" >}} + - [loki.echo](../components/loki.echo) - [loki.process](../components/loki.process) - [loki.relabel](../components/loki.relabel) - [loki.write](../components/loki.write) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="otelcol" >}} + - [otelcol.receiver.loki](../components/otelcol.receiver.loki) -{{< /collapse >}} + {{< /collapse >}} + ### Loki `LogsReceiver` Consumers The following components, grouped by namespace, _consume_ Loki `LogsReceiver`. @@ -236,10 +257,12 @@ The following components, grouped by namespace, _consume_ Loki `LogsReceiver`. {{< collapse title="faro" >}} + - [faro.receiver](../components/faro.receiver) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="loki" >}} + - [loki.process](../components/loki.process) - [loki.relabel](../components/loki.relabel) - [loki.source.api](../components/loki.source.api) @@ -258,11 +281,12 @@ The following components, grouped by namespace, _consume_ Loki `LogsReceiver`. - [loki.source.podlogs](../components/loki.source.podlogs) - [loki.source.syslog](../components/loki.source.syslog) - [loki.source.windowsevent](../components/loki.source.windowsevent) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="otelcol" >}} + - [otelcol.exporter.loki](../components/otelcol.exporter.loki) -{{< /collapse >}} + {{< /collapse >}} @@ -276,6 +300,7 @@ Refer to the component reference pages for more details on what is supported. Use the following components to build your OpenTelemetry pipeline: + ### OpenTelemetry `otelcol.Consumer` Exporters The following components, grouped by namespace, _export_ OpenTelemetry `otelcol.Consumer`. @@ -283,11 +308,11 @@ The following components, grouped by namespace, _export_ OpenTelemetry `otelcol. {{< collapse title="otelcol" >}} + - [otelcol.connector.host_info](../components/otelcol.connector.host_info) - [otelcol.connector.servicegraph](../components/otelcol.connector.servicegraph) - [otelcol.connector.spanlogs](../components/otelcol.connector.spanlogs) - [otelcol.connector.spanmetrics](../components/otelcol.connector.spanmetrics) -- [otelcol.exporter.debug](../components/otelcol.exporter.debug) - [otelcol.exporter.loadbalancing](../components/otelcol.exporter.loadbalancing) - [otelcol.exporter.logging](../components/otelcol.exporter.logging) - [otelcol.exporter.loki](../components/otelcol.exporter.loki) @@ -305,11 +330,12 @@ The following components, grouped by namespace, _export_ OpenTelemetry `otelcol. - [otelcol.processor.span](../components/otelcol.processor.span) - [otelcol.processor.tail_sampling](../components/otelcol.processor.tail_sampling) - [otelcol.processor.transform](../components/otelcol.processor.transform) -{{< /collapse >}} + {{< /collapse >}} + ### OpenTelemetry `otelcol.Consumer` Consumers The following components, grouped by namespace, _consume_ OpenTelemetry `otelcol.Consumer`. 
@@ -317,10 +343,12 @@ The following components, grouped by namespace, _consume_ OpenTelemetry `otelcol {{< collapse title="faro" >}} + - [faro.receiver](../components/faro.receiver) -{{< /collapse >}} + {{< /collapse >}} {{< collapse title="otelcol" >}} + - [otelcol.connector.host_info](../components/otelcol.connector.host_info) - [otelcol.connector.servicegraph](../components/otelcol.connector.servicegraph) - [otelcol.connector.spanlogs](../components/otelcol.connector.spanlogs) @@ -344,7 +372,7 @@ The following components, grouped by namespace, _consume_ OpenTelemetry `otelcol - [otelcol.receiver.prometheus](../components/otelcol.receiver.prometheus) - [otelcol.receiver.vcenter](../components/otelcol.receiver.vcenter) - [otelcol.receiver.zipkin](../components/otelcol.receiver.zipkin) -{{< /collapse >}} + {{< /collapse >}} @@ -356,6 +384,7 @@ Components that can consume Pyroscope profiles can be passed the `ProfilesReceiv Use the following components to build your Pyroscope profiles pipeline: + ### Pyroscope `ProfilesReceiver` Exporters The following components, grouped by namespace, _export_ Pyroscope `ProfilesReceiver`. @@ -363,12 +392,14 @@ The following components, grouped by namespace, _export_ Pyroscope `ProfilesRece {{< collapse title="pyroscope" >}} + - [pyroscope.write](../components/pyroscope.write) -{{< /collapse >}} + {{< /collapse >}} + ### Pyroscope `ProfilesReceiver` Consumers The following components, grouped by namespace, _consume_ Pyroscope `ProfilesReceiver`. @@ -376,9 +407,10 @@ The following components, grouped by namespace, _consume_ Pyroscope `ProfilesRec {{< collapse title="pyroscope" >}} + - [pyroscope.ebpf](../components/pyroscope.ebpf) - [pyroscope.java](../components/pyroscope.java) - [pyroscope.scrape](../components/pyroscope.scrape) -{{< /collapse >}} + {{< /collapse >}} diff --git a/docs/sources/flow/reference/components/_index.md b/docs/sources/flow/reference/components/_index.md index 3eafecb3c1af..b35748a24b81 100644 --- a/docs/sources/flow/reference/components/_index.md +++ b/docs/sources/flow/reference/components/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/ + - /docs/grafana-cloud/agent/flow/reference/components/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/ description: Learn about the components in Grafana Agent Flow title: Components reference diff --git a/docs/sources/flow/reference/components/discovery.azure.md b/docs/sources/flow/reference/components/discovery.azure.md index 9970dc4fde98..8f6a3684d5a7 100644 --- a/docs/sources/flow/reference/components/discovery.azure.md +++ b/docs/sources/flow/reference/components/discovery.azure.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.azure/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.azure/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.azure/ -- 
/docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.azure/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.azure/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.azure/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.azure/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.azure/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.azure/ description: Learn about discovery.azure title: discovery.azure @@ -26,30 +26,31 @@ discovery.azure "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ---------- | ---------------------------------------------------------------------- | -------------------- | -------- -`environment` | `string` | Azure environment. | `"AzurePublicCloud"` | no -`port` | `number` | Port to be appended to the `__address__` label for each target. | `80` | no -`subscription_id` | `string` | Azure subscription ID. | | no -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `5m` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | -------------------- | -------- | +| `environment` | `string` | Azure environment. | `"AzurePublicCloud"` | no | +| `port` | `number` | Port to be appended to the `__address__` label for each target. | `80` | no | +| `subscription_id` | `string` | Azure subscription ID. | | no | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `5m` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} ## Blocks + The following blocks are supported inside the definition of `discovery.azure`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -oauth | [oauth][] | OAuth configuration for Azure API. | no -managed_identity | [managed_identity][] | Managed Identity configuration for Azure API. 
| no -tls_config | [tls_config][] | TLS configuration for requests to the Azure API. | no +| Hierarchy | Block | Description | Required | +| ---------------- | -------------------- | ------------------------------------------------ | -------- | +| oauth | [oauth][] | OAuth configuration for Azure API. | no | +| managed_identity | [managed_identity][] | Managed Identity configuration for Azure API. | no | +| tls_config | [tls_config][] | TLS configuration for requests to the Azure API. | no | Exactly one of the `oauth` or `managed_identity` blocks must be specified. @@ -58,20 +59,22 @@ Exactly one of the `oauth` or `managed_identity` blocks must be specified. [tls_config]: #tls_config-block ### oauth block + The `oauth` block configures OAuth authentication for the Azure API. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`client_id` | `string` | OAuth client ID. | | yes -`client_secret` | `string` | OAuth client secret. | | yes -`tenant_id` | `string` | OAuth tenant ID. | | yes +| Name | Type | Description | Default | Required | +| --------------- | -------- | -------------------- | ------- | -------- | +| `client_id` | `string` | OAuth client ID. | | yes | +| `client_secret` | `string` | OAuth client secret. | | yes | +| `tenant_id` | `string` | OAuth tenant ID. | | yes | ### managed_identity block + The `managed_identity` block configures Managed Identity authentication for the Azure API. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`client_id` | `string` | Managed Identity client ID. | | yes +| Name | Type | Description | Default | Required | +| ----------- | -------- | --------------------------- | ------- | -------- | +| `client_id` | `string` | Managed Identity client ID. | | yes | ### tls_config block @@ -81,25 +84,25 @@ Name | Type | Description | Default | Required The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Azure API. +| Name | Type | Description | +| --------- | ------------------- | ------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Azure API. | Each target includes the following labels: -* `__meta_azure_subscription_id`: The Azure subscription ID. -* `__meta_azure_tenant_id`: The Azure tenant ID. -* `__meta_azure_machine_id`: The UUID of the Azure VM. -* `__meta_azure_machine_resource_group`: The name of the resource group the VM is in. -* `__meta_azure_machine_name`: The name of the VM. -* `__meta_azure_machine_computer_name`: The host OS name of the VM. -* `__meta_azure_machine_os_type`: The OS the VM is running (either `Linux` or `Windows`). -* `__meta_azure_machine_location`: The region the VM is in. -* `__meta_azure_machine_private_ip`: The private IP address of the VM. -* `__meta_azure_machine_public_ip`: The public IP address of the VM. -* `__meta_azure_machine_tag_*`: A tag on the VM. There will be one label per tag. -* `__meta_azure_machine_scale_set`: The name of the scale set the VM is in. -* `__meta_azure_machine_size`: The size of the VM. +- `__meta_azure_subscription_id`: The Azure subscription ID. +- `__meta_azure_tenant_id`: The Azure tenant ID. +- `__meta_azure_machine_id`: The UUID of the Azure VM. +- `__meta_azure_machine_resource_group`: The name of the resource group the VM is in. 
+- `__meta_azure_machine_name`: The name of the VM. +- `__meta_azure_machine_computer_name`: The host OS name of the VM. +- `__meta_azure_machine_os_type`: The OS the VM is running (either `Linux` or `Windows`). +- `__meta_azure_machine_location`: The region the VM is in. +- `__meta_azure_machine_private_ip`: The private IP address of the VM. +- `__meta_azure_machine_public_ip`: The public IP address of the VM. +- `__meta_azure_machine_tag_*`: A tag on the VM. There will be one label per tag. +- `__meta_azure_machine_scale_set`: The name of the scale set the VM is in. +- `__meta_azure_machine_size`: The size of the VM. Each discovered VM maps to a single target. The `__address__` label is set to the `private_ip:port` (`[private_ip]:port` if the private IP is an IPv6 address) of the VM. @@ -146,14 +149,16 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `AZURE_SUBSCRIPTION_ID`: Your Azure subscription ID. - - `AZURE_CLIENT_ID`: Your Azure client ID. - - `AZURE_CLIENT_SECRET`: Your Azure client secret. - - `AZURE_TENANT_ID`: Your Azure tenant ID. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `AZURE_SUBSCRIPTION_ID`: Your Azure subscription ID. +- `AZURE_CLIENT_ID`: Your Azure client ID. +- `AZURE_CLIENT_SECRET`: Your Azure client secret. +- `AZURE_TENANT_ID`: Your Azure tenant ID. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.consul.md b/docs/sources/flow/reference/components/discovery.consul.md index cf96dba94bda..367df69f6482 100644 --- a/docs/sources/flow/reference/components/discovery.consul.md +++ b/docs/sources/flow/reference/components/discovery.consul.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.consul/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.consul/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.consul/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.consul/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.consul/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.consul/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.consul/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.consul/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.consul/ description: Learn about discovery.consul title: discovery.consul @@ -27,37 +27,38 @@ discovery.consul "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`server` | `string` | Host and port of the Consul API. | `localhost:8500` | no -`token` | `secret` | Secret token used to access the Consul API. | | no -`datacenter` | `string` | Datacenter to query. If not provided, the default is used. 
| | no -`namespace` | `string` | Namespace to use (only supported in Consul Enterprise). | | no -`partition` | `string` | Admin partition to use (only supported in Consul Enterprise). | | no -`tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no -`scheme` | `string` | The scheme to use when talking to Consul. | `http` | no -`username` | `string` | The username to use (deprecated in favor of the basic_auth configuration). | | no -`password` | `secret` | The password to use (deprecated in favor of the basic_auth configuration). | | no -`allow_stale` | `bool` | Allow stale Consul results (see [official documentation][consistency documentation]). Will reduce load on Consul. | `true` | no -`services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no -`tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no -`node_meta` | `map(string)` | Node metadata key/value pairs to filter nodes for a given service. | | no -`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------- | ---------------- | -------- | +| `server` | `string` | Host and port of the Consul API. | `localhost:8500` | no | +| `token` | `secret` | Secret token used to access the Consul API. | | no | +| `datacenter` | `string` | Datacenter to query. If not provided, the default is used. | | no | +| `namespace` | `string` | Namespace to use (only supported in Consul Enterprise). | | no | +| `partition` | `string` | Admin partition to use (only supported in Consul Enterprise). | | no | +| `tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no | +| `scheme` | `string` | The scheme to use when talking to Consul. | `http` | no | +| `username` | `string` | The username to use (deprecated in favor of the basic_auth configuration). | | no | +| `password` | `secret` | The password to use (deprecated in favor of the basic_auth configuration). | | no | +| `allow_stale` | `bool` | Allow stale Consul results (see [official documentation][consistency documentation]). Will reduce load on Consul. 
| `true` | no | +| `services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no | +| `tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no | +| `node_meta` | `map(string)` | Node metadata key/value pairs to filter nodes for a given service. | | no | +| `refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -69,13 +70,13 @@ Name | Type | Description The following blocks are supported inside the definition of `discovery.consul`: -Hierarchy | Block | Description | Required ---------------------|-------------------|----------------------------------------------------------|--------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -106,25 +107,25 @@ an `oauth2` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Consul catalog API. 
+| Name | Type | Description | +| --------- | ------------------- | ---------------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Consul catalog API. | Each target includes the following labels: -* `__meta_consul_address`: the address of the target. -* `__meta_consul_dc`: the datacenter name for the target. -* `__meta_consul_health`: the health status of the service. -* `__meta_consul_partition`: the admin partition name where the service is registered. -* `__meta_consul_metadata_`: each node metadata key value of the target. -* `__meta_consul_node`: the node name defined for the target. -* `__meta_consul_service_address`: the service address of the target. -* `__meta_consul_service_id`: the service ID of the target. -* `__meta_consul_service_metadata_`: each service metadata key value of the target. -* `__meta_consul_service_port`: the service port of the target. -* `__meta_consul_service`: the name of the service the target belongs to. -* `__meta_consul_tagged_address_`: each node tagged address key value of the target. -* `__meta_consul_tags`: the list of tags of the target joined by the tag separator. +- `__meta_consul_address`: the address of the target. +- `__meta_consul_dc`: the datacenter name for the target. +- `__meta_consul_health`: the health status of the service. +- `__meta_consul_partition`: the admin partition name where the service is registered. +- `__meta_consul_metadata_`: each node metadata key value of the target. +- `__meta_consul_node`: the node name defined for the target. +- `__meta_consul_service_address`: the service address of the target. +- `__meta_consul_service_id`: the service ID of the target. +- `__meta_consul_service_metadata_`: each service metadata key value of the target. +- `__meta_consul_service_port`: the service port of the target. +- `__meta_consul_service`: the name of the service the target belongs to. +- `__meta_consul_tagged_address_`: each node tagged address key value of the target. +- `__meta_consul_tags`: the list of tags of the target joined by the tag separator. ## Component health @@ -169,10 +170,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
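A hedged variation on the example above that filters discovery with the `services` and `tags` arguments (the service names and tag are illustrative, and the sketch reuses the `prometheus.remote_write.demo` block from the preceding example):

```river
// Discover only the "api" and "db" services that carry the "prod" tag.
discovery.consul "filtered" {
  server   = "localhost:8500"
  services = ["api", "db"]
  tags     = ["prod"]
}

// Scrape the filtered targets and forward them to the existing remote_write.
prometheus.scrape "filtered" {
  targets    = discovery.consul.filtered.targets
  forward_to = [prometheus.remote_write.demo.receiver]
}
```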
diff --git a/docs/sources/flow/reference/components/discovery.consulagent.md b/docs/sources/flow/reference/components/discovery.consulagent.md index 340d1f6b5df3..4db2caaa5f05 100644 --- a/docs/sources/flow/reference/components/discovery.consulagent.md +++ b/docs/sources/flow/reference/components/discovery.consulagent.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.consulagent/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.consulagent/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.consulagent/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.consulagent/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.consulagent/ description: Learn about discovery.consulagent title: discovery.consulagent @@ -96,6 +96,7 @@ values. ## Example + This example discovers targets from a Consul Agent for the specified list of services: ```river diff --git a/docs/sources/flow/reference/components/discovery.digitalocean.md b/docs/sources/flow/reference/components/discovery.digitalocean.md index faaa8e1ea81a..7d7069f66838 100644 --- a/docs/sources/flow/reference/components/discovery.digitalocean.md +++ b/docs/sources/flow/reference/components/discovery.digitalocean.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.digitalocean/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.digitalocean/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.digitalocean/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.digitalocean/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.digitalocean/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.digitalocean/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.digitalocean/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.digitalocean/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.digitalocean/ description: Learn about discovery.digitalocean title: discovery.digitalocean @@ -29,18 +29,18 @@ discovery.digitalocean "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`port` | `number` | Port to be appended to the `__address__` label for each Droplet. | `80` | no -`refresh_interval` | `duration` | Frequency to refresh list of Droplets. | `"1m"` | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. 
| `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `port` | `number` | Port to be appended to the `__address__` label for each Droplet. | `80` | no | +| `refresh_interval` | `duration` | Frequency to refresh list of Droplets. | `"1m"` | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | The DigitalOcean API uses bearer tokens for authentication, see more about it in the [DigitalOcean API documentation](https://docs.digitalocean.com/reference/api/api-reference/#section/Authentication). @@ -51,32 +51,32 @@ Exactly one of the [`bearer_token`](#arguments) and [`bearer_token_file`](#argum {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} ## Blocks -The `discovery.digitalocean` component does not support any blocks, and is configured fully through arguments. +The `discovery.digitalocean` component does not support any blocks, and is configured fully through arguments. ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the DigitalOcean API. +| Name | Type | Description | +| --------- | ------------------- | -------------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the DigitalOcean API. | Each target includes the following labels: -* `__meta_digitalocean_droplet_id`: ID of the Droplet. -* `__meta_digitalocean_droplet_name`: Name of the Droplet. -* `__meta_digitalocean_image`: The image slug (unique text identifier of the image) used to create the Droplet. -* `__meta_digitalocean_image_name`: Name of the image used to create the Droplet. -* `__meta_digitalocean_private_ipv4`: The private IPv4 address of the Droplet. -* `__meta_digitalocean_public_ipv4`: The public IPv4 address of the Droplet. -* `__meta_digitalocean_public_ipv6`: The public IPv6 address of the Droplet. -* `__meta_digitalocean_region`: The region the Droplet is running in. -* `__meta_digitalocean_size`: The size of the Droplet. -* `__meta_digitalocean_status`: The current status of the Droplet. -* `__meta_digitalocean_features`: Optional properties configured for the Droplet, such as IPV6 networking, private networking, or backups. -* `__meta_digitalocean_tags`: The tags assigned to the Droplet. -* `__meta_digitalocean_vpc`: The ID of the VPC where the Droplet is located. 
+- `__meta_digitalocean_droplet_id`: ID of the Droplet.
+- `__meta_digitalocean_droplet_name`: Name of the Droplet.
+- `__meta_digitalocean_image`: The image slug (unique text identifier of the image) used to create the Droplet.
+- `__meta_digitalocean_image_name`: Name of the image used to create the Droplet.
+- `__meta_digitalocean_private_ipv4`: The private IPv4 address of the Droplet.
+- `__meta_digitalocean_public_ipv4`: The public IPv4 address of the Droplet.
+- `__meta_digitalocean_public_ipv6`: The public IPv6 address of the Droplet.
+- `__meta_digitalocean_region`: The region the Droplet is running in.
+- `__meta_digitalocean_size`: The size of the Droplet.
+- `__meta_digitalocean_status`: The current status of the Droplet.
+- `__meta_digitalocean_features`: Optional properties configured for the Droplet, such as IPV6 networking, private networking, or backups.
+- `__meta_digitalocean_tags`: The tags assigned to the Droplet.
+- `__meta_digitalocean_vpc`: The ID of the VPC where the Droplet is located.

Each discovered Droplet maps to one target.

@@ -97,6 +97,7 @@ values.

## Example

This example produces targets with `__address__` labels such as `192.0.2.1:8080`:
+
```river
discovery.digitalocean "example" {
  port = 8080
@@ -120,10 +121,12 @@ prometheus.remote_write "demo" {
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+
+- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- `USERNAME`: The username to use for authentication to the remote_write API.
+- `PASSWORD`: The password to use for authentication to the remote_write API.

diff --git a/docs/sources/flow/reference/components/discovery.dns.md b/docs/sources/flow/reference/components/discovery.dns.md
index a54890c240f1..1fb90dfa27fe 100644
--- a/docs/sources/flow/reference/components/discovery.dns.md
+++ b/docs/sources/flow/reference/components/discovery.dns.md
@@ -1,10 +1,10 @@
---
aliases:
-- /docs/agent/latest/flow/reference/components/discovery.dns/
-- /docs/grafana-cloud/agent/flow/reference/components/discovery.dns/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.dns/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.dns/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.dns/
+  - /docs/agent/latest/flow/reference/components/discovery.dns/
+  - /docs/grafana-cloud/agent/flow/reference/components/discovery.dns/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.dns/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.dns/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.dns/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.dns/
description: Learn about discovery.dns
title: discovery.dns
@@ -26,28 +26,27 @@ discovery.dns "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
-`names` | `list(string)` | DNS names to look up. | | yes
-`port` | `number` | Port to use for collecting metrics. Not used for SRV records.
| `0` | no
-`refresh_interval` | `duration` | How often to query DNS for updates. | `"30s"` | no
-`type` | `string` | Type of DNS record to query. Must be one of SRV, A, AAAA, or MX. | `"SRV"` | no
+| Name | Type | Description | Default | Required |
+| ------------------ | -------------- | ---------------------------------------------------------------- | ------- | -------- |
+| `names` | `list(string)` | DNS names to look up. | | yes |
+| `port` | `number` | Port to use for collecting metrics. Not used for SRV records. | `0` | no |
+| `refresh_interval` | `duration` | How often to query DNS for updates. | `"30s"` | no |
+| `type` | `string` | Type of DNS record to query. Must be one of SRV, A, AAAA, or MX. | `"SRV"` | no |

## Exported fields

The following field is exported and can be referenced by other components:

-Name | Type | Description
---- | ---- | -----------
-`targets` | `list(map(string))` | The set of targets discovered from the docker API.
+| Name | Type | Description |
+| --------- | ------------------- | ----------------------------------------------- |
+| `targets` | `list(map(string))` | The set of targets discovered from DNS queries. |

Each target includes the following labels:

-* `__meta_dns_name`: Name of the record that produced the discovered target.
-* `__meta_dns_srv_record_target`: Target field of the SRV record.
-* `__meta_dns_srv_record_port`: Port field of the SRV record.
-* `__meta_dns_mx_record_target`: Target field of the MX record.
-
+- `__meta_dns_name`: Name of the record that produced the discovered target.
+- `__meta_dns_srv_record_target`: Target field of the SRV record.
+- `__meta_dns_srv_record_port`: Port field of the SRV record.
+- `__meta_dns_mx_record_target`: Target field of the MX record.

## Component health

@@ -90,10 +89,12 @@ prometheus.remote_write "demo" {
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+
+- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- `USERNAME`: The username to use for authentication to the remote_write API.
+- `PASSWORD`: The password to use for authentication to the remote_write API.
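+
+Because A, AAAA, and MX records carry no port information, non-SRV lookups need an explicit `port`. A minimal sketch, assuming a hypothetical internal DNS name:
+
+```river
+discovery.dns "a_records" {
+  names = ["app.example.internal"] // hypothetical DNS name
+  type  = "A"                      // A records have no port field
+  port  = 8080                     // appended to each resolved address
+}
+```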
diff --git a/docs/sources/flow/reference/components/discovery.docker.md b/docs/sources/flow/reference/components/discovery.docker.md index d9b5a0271343..edb51bb105d6 100644 --- a/docs/sources/flow/reference/components/discovery.docker.md +++ b/docs/sources/flow/reference/components/discovery.docker.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.docker/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.docker/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.docker/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.docker/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.docker/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.docker/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.docker/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.docker/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.docker/ description: Learn about discovery.docker title: discovery.docker @@ -27,27 +27,28 @@ discovery.docker "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`host` | `string` | Address of the Docker Daemon to connect to. | | yes -`port` | `number` | Port to use for collecting metrics when containers don't have any port mappings. | `80` | no -`host_networking_host` | `string` | Host to use if the container is in host networking mode. | `"localhost"` | no -`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"1m"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------------- | -------- | +| `host` | `string` | Address of the Docker Daemon to connect to. | | yes | +| `port` | `number` | Port to use for collecting metrics when containers don't have any port mappings. | `80` | no | +| `host_networking_host` | `string` | Host to use if the container is in host networking mode. | `"localhost"` | no | +| `refresh_interval` | `duration` | Frequency to refresh list of containers. 
| `"1m"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. [arguments]: #arguments @@ -58,14 +59,14 @@ Name | Type | Description The following blocks are supported inside the definition of `discovery.docker`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -filter | [filter][] | Filters discoverable resources. | no -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| filter | [filter][] | Filters discoverable resources. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -83,10 +84,10 @@ The `filter` block configures a filter to pass to the Docker Engine to limit the amount of containers returned. The `filter` block can be specified multiple times to provide more than one filter. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | Filter name to use. | | yes -`values` | `list(string)` | Values to pass to the filter. | | yes +| Name | Type | Description | Default | Required | +| -------- | -------------- | ----------------------------- | ------- | -------- | +| `name` | `string` | Filter name to use. | | yes | +| `values` | `list(string)` | Values to pass to the filter. | | yes | Refer to [List containers][List containers] from the Docker Engine API documentation for the list of supported filters and their meaning. 
@@ -113,30 +114,30 @@ documentation for the list of supported filters and their meaning. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the docker API. +| Name | Type | Description | +| --------- | ------------------- | -------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the docker API. | Each target includes the following labels: -* `__meta_docker_container_id`: ID of the container. -* `__meta_docker_container_name`: Name of the container. -* `__meta_docker_container_network_mode`: Network mode of the container. -* `__meta_docker_container_label_`: Each label from the container. -* `__meta_docker_network_id`: ID of the Docker network the container is in. -* `__meta_docker_network_name`: Name of the Docker network the container is in. -* `__meta_docker_network_ingress`: Set to `true` if the Docker network is an +- `__meta_docker_container_id`: ID of the container. +- `__meta_docker_container_name`: Name of the container. +- `__meta_docker_container_network_mode`: Network mode of the container. +- `__meta_docker_container_label_`: Each label from the container. +- `__meta_docker_network_id`: ID of the Docker network the container is in. +- `__meta_docker_network_name`: Name of the Docker network the container is in. +- `__meta_docker_network_ingress`: Set to `true` if the Docker network is an ingress network. -* `__meta_docker_network_internal`: Set to `true` if the Docker network is an +- `__meta_docker_network_internal`: Set to `true` if the Docker network is an internal network. -* `__meta_docker_network_label_`: Each label from the network the +- `__meta_docker_network_label_`: Each label from the network the container is in. -* `__meta_docker_network_scope`: The scope of the network the container is in. -* `__meta_docker_network_ip`: The IP of the container in the network. -* `__meta_docker_port_private`: The private port on the container. -* `__meta_docker_port_public`: The publicly exposed port from the container, +- `__meta_docker_network_scope`: The scope of the network the container is in. +- `__meta_docker_network_ip`: The IP of the container in the network. +- `__meta_docker_port_private`: The private port on the container. +- `__meta_docker_port_public`: The publicly exposed port from the container, if a port mapping exists. -* `__meta_docker_port_public_ip`: The public IP of the container, if a port +- `__meta_docker_port_public_ip`: The public IP of the container, if a port mapping exists. Each discovered container maps to one target per unique combination of networks @@ -184,10 +185,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
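+
+The `filter` block can narrow discovery before any relabeling runs. A sketch using the `status` filter from the Docker Engine "List containers" API to keep only running containers:
+
+```river
+discovery.docker "running_only" {
+  host = "unix:///var/run/docker.sock"
+
+  filter {
+    name   = "status"
+    values = ["running"]
+  }
+}
+```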
### Windows hosts @@ -214,10 +217,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. > **NOTE**: This example requires the "Expose daemon on tcp://localhost:2375 > without TLS" setting to be enabled in the Docker Engine settings. diff --git a/docs/sources/flow/reference/components/discovery.dockerswarm.md b/docs/sources/flow/reference/components/discovery.dockerswarm.md index d02a044f5cf7..f6bee0e91647 100644 --- a/docs/sources/flow/reference/components/discovery.dockerswarm.md +++ b/docs/sources/flow/reference/components/discovery.dockerswarm.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.dockerswarm/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.dockerswarm/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.dockerswarm/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.dockerswarm/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.dockerswarm/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.dockerswarm/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.dockerswarm/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.dockerswarm/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.dockerswarm/ description: Learn about discovery.dockerswarm title: discovery.dockerswarm @@ -26,27 +26,28 @@ discovery.dockerswarm "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`host` | `string` | Address of the Docker daemon. | | yes -`role` | `string` | Role of the targets to retrieve. Must be `services`, `tasks`, or `nodes`. | | yes -`port` | `number` | The port to scrape metrics from, when `role` is nodes, and for discovered tasks and services that don't have published ports. | `80` | no -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"60s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. 
| | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `host` | `string` | Address of the Docker daemon. | | yes | +| `role` | `string` | Role of the targets to retrieve. Must be `services`, `tasks`, or `nodes`. | | yes | +| `port` | `number` | The port to scrape metrics from, when `role` is nodes, and for discovered tasks and services that don't have published ports. | `80` | no | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"60s"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. 
{{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} diff --git a/docs/sources/flow/reference/components/discovery.ec2.md b/docs/sources/flow/reference/components/discovery.ec2.md index 6345018f1119..0fa804eb24b3 100644 --- a/docs/sources/flow/reference/components/discovery.ec2.md +++ b/docs/sources/flow/reference/components/discovery.ec2.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.ec2/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.ec2/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.ec2/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.ec2/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.ec2/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.ec2/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.ec2/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.ec2/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.ec2/ description: Learn about discovery.ec2 title: discovery.ec2 @@ -26,47 +26,48 @@ discovery.ec2 "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`endpoint` | `string` | Custom endpoint to be used. | | no -`region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no -`access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no -`secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no -`profile` | `string` | Named AWS profile used to connect to the API. | | no -`role_arn` | `string` | AWS Role Amazon Resource Name (ARN), an alternative to using AWS API keys. | | no -`refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no -`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. 
- - {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `endpoint` | `string` | Custom endpoint to be used. | | no | +| `region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no | +| `access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no | +| `secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no | +| `profile` | `string` | Named AWS profile used to connect to the API. | | no | +| `role_arn` | `string` | AWS Role Amazon Resource Name (ARN), an alternative to using AWS API keys. | | no | +| `refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no | +| `port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. + +{{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} ## Blocks The following blocks are supported inside the definition of `discovery.ec2`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -filter | [filter][] | Filters discoverable resources. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| filter | [filter][] | Filters discoverable resources. 
| no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | [filter]: #filter-block [authorization]: #authorization-block @@ -82,10 +83,10 @@ tls_config | [tls_config][] | Configure TLS settings for connecting to the endpo Filters can be used optionally to filter the instance list by other criteria. Available filter criteria can be found in the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html). -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | Filter name to use. | | yes -`values` | `list(string)` | Values to pass to the filter. | | yes +| Name | Type | Description | Default | Required | +| -------- | -------------- | ----------------------------- | ------- | -------- | +| `name` | `string` | Filter name to use. | | yes | +| `values` | `list(string)` | Values to pass to the filter. | | yes | Refer to the [Filter API AWS EC2 documentation][filter api] for the list of supported filters and their descriptions. @@ -103,32 +104,32 @@ Refer to the [Filter API AWS EC2 documentation][filter api] for the list of supp The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of discovered EC2 targets. +| Name | Type | Description | +| --------- | ------------------- | ---------------------------------- | +| `targets` | `list(map(string))` | The set of discovered EC2 targets. | Each target includes the following labels: -* `__meta_ec2_ami`: The EC2 Amazon Machine Image. -* `__meta_ec2_architecture`: The architecture of the instance. -* `__meta_ec2_availability_zone`: The availability zone in which the instance is running. -* `__meta_ec2_availability_zone_id`: The availability zone ID in which the instance is running (requires `ec2:DescribeAvailabilityZones`). -* `__meta_ec2_instance_id`: The EC2 instance ID. -* `__meta_ec2_instance_lifecycle`: The lifecycle of the EC2 instance, set only for 'spot' or 'scheduled' instances, absent otherwise. -* `__meta_ec2_instance_state`: The state of the EC2 instance. -* `__meta_ec2_instance_type`: The type of the EC2 instance. -* `__meta_ec2_ipv6_addresses`: Comma-separated list of IPv6 addresses assigned to the instance's network interfaces, if present. -* `__meta_ec2_owner_id`: The ID of the AWS account that owns the EC2 instance. -* `__meta_ec2_platform`: The Operating System platform, set to 'windows' on Windows servers, absent otherwise. -* `__meta_ec2_primary_subnet_id`: The subnet ID of the primary network interface, if available. -* `__meta_ec2_private_dns_name`: The private DNS name of the instance, if available. -* `__meta_ec2_private_ip`: The private IP address of the instance, if present. -* `__meta_ec2_public_dns_name`: The public DNS name of the instance, if available. -* `__meta_ec2_public_ip`: The public IP address of the instance, if available. -* `__meta_ec2_region`: The region of the instance. -* `__meta_ec2_subnet_id`: Comma-separated list of subnets IDs in which the instance is running, if available. -* `__meta_ec2_tag_`: Each tag value of the instance. -* `__meta_ec2_vpc_id`: The ID of the VPC in which the instance is running, if available. 
+- `__meta_ec2_ami`: The EC2 Amazon Machine Image.
+- `__meta_ec2_architecture`: The architecture of the instance.
+- `__meta_ec2_availability_zone`: The availability zone in which the instance is running.
+- `__meta_ec2_availability_zone_id`: The availability zone ID in which the instance is running (requires `ec2:DescribeAvailabilityZones`).
+- `__meta_ec2_instance_id`: The EC2 instance ID.
+- `__meta_ec2_instance_lifecycle`: The lifecycle of the EC2 instance, set only for 'spot' or 'scheduled' instances, absent otherwise.
+- `__meta_ec2_instance_state`: The state of the EC2 instance.
+- `__meta_ec2_instance_type`: The type of the EC2 instance.
+- `__meta_ec2_ipv6_addresses`: Comma-separated list of IPv6 addresses assigned to the instance's network interfaces, if present.
+- `__meta_ec2_owner_id`: The ID of the AWS account that owns the EC2 instance.
+- `__meta_ec2_platform`: The Operating System platform, set to 'windows' on Windows servers, absent otherwise.
+- `__meta_ec2_primary_subnet_id`: The subnet ID of the primary network interface, if available.
+- `__meta_ec2_private_dns_name`: The private DNS name of the instance, if available.
+- `__meta_ec2_private_ip`: The private IP address of the instance, if present.
+- `__meta_ec2_public_dns_name`: The public DNS name of the instance, if available.
+- `__meta_ec2_public_ip`: The public IP address of the instance, if available.
+- `__meta_ec2_region`: The region of the instance.
+- `__meta_ec2_subnet_id`: Comma-separated list of subnet IDs in which the instance is running, if available.
+- `__meta_ec2_tag_`: Each tag value of the instance.
+- `__meta_ec2_vpc_id`: The ID of the VPC in which the instance is running, if available.

## Component health

@@ -167,10 +168,12 @@ prometheus.remote_write "demo" {
  }
}
```
+
Replace the following:
- - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
- - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API.
+
+- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- `USERNAME`: The username to use for authentication to the remote_write API.
+- `PASSWORD`: The password to use for authentication to the remote_write API.
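+
+The `filter` block pushes filtering down to the EC2 API itself, which keeps the discovered target set small. A sketch using the `tag:<key>` criterion from the `DescribeInstances` API (the region, tag key, and value are hypothetical):
+
+```river
+discovery.ec2 "tagged" {
+  region = "us-east-1"
+
+  filter {
+    name   = "tag:environment" // hypothetical tag key
+    values = ["production"]    // hypothetical tag value
+  }
+}
+```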
diff --git a/docs/sources/flow/reference/components/discovery.eureka.md b/docs/sources/flow/reference/components/discovery.eureka.md index 1cb3dd50da98..98f8f3789c0b 100644 --- a/docs/sources/flow/reference/components/discovery.eureka.md +++ b/docs/sources/flow/reference/components/discovery.eureka.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.eureka/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.eureka/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.eureka/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.eureka/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.eureka/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.eureka/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.eureka/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.eureka/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.eureka/ description: Learn about discovery.eureka title: discovery.eureka @@ -27,41 +27,43 @@ discovery.eureka "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`server` | `string` | Eureka server URL. | | yes -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `30s` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `server` | `string` | Eureka server URL. | | yes | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `30s` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. 
| | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. [arguments]: #arguments {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} ## Blocks + The following blocks are supported inside the definition of `discovery.eureka`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -92,30 +94,30 @@ an `oauth2` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Eureka API. +| Name | Type | Description | +| --------- | ------------------- | -------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Eureka API. 
| Each target includes the following labels: -* `__meta_eureka_app_name` -* `__meta_eureka_app_instance_hostname` -* `__meta_eureka_app_instance_homepage_url` -* `__meta_eureka_app_instance_statuspage_url` -* `__meta_eureka_app_instance_healthcheck_url` -* `__meta_eureka_app_instance_ip_addr` -* `__meta_eureka_app_instance_vip_address` -* `__meta_eureka_app_instance_secure_vip_address` -* `__meta_eureka_app_instance_status` -* `__meta_eureka_app_instance_port` -* `__meta_eureka_app_instance_port_enabled` -* `__meta_eureka_app_instance_secure_port` -* `__meta_eureka_app_instance_secure_port_enabled` -* `__meta_eureka_app_instance_datacenterinfo_name` -* `__meta_eureka_app_instance_datacenterinfo_metadata_` -* `__meta_eureka_app_instance_country_id` -* `__meta_eureka_app_instance_id` -* `__meta_eureka_app_instance_metadata_` +- `__meta_eureka_app_name` +- `__meta_eureka_app_instance_hostname` +- `__meta_eureka_app_instance_homepage_url` +- `__meta_eureka_app_instance_statuspage_url` +- `__meta_eureka_app_instance_healthcheck_url` +- `__meta_eureka_app_instance_ip_addr` +- `__meta_eureka_app_instance_vip_address` +- `__meta_eureka_app_instance_secure_vip_address` +- `__meta_eureka_app_instance_status` +- `__meta_eureka_app_instance_port` +- `__meta_eureka_app_instance_port_enabled` +- `__meta_eureka_app_instance_secure_port` +- `__meta_eureka_app_instance_secure_port_enabled` +- `__meta_eureka_app_instance_datacenterinfo_name` +- `__meta_eureka_app_instance_datacenterinfo_metadata_` +- `__meta_eureka_app_instance_country_id` +- `__meta_eureka_app_instance_id` +- `__meta_eureka_app_instance_metadata_` ## Component health @@ -154,10 +156,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
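+
+The `__meta_eureka_app_instance_status` label can be used to drop instances that Eureka doesn't report as healthy. A sketch, assuming a hypothetical Eureka server URL and pairing the component with `discovery.relabel`:
+
+```river
+discovery.eureka "example" {
+  server = "https://eureka.example.com/eureka/v2" // hypothetical server URL
+}
+
+discovery.relabel "up_only" {
+  targets = discovery.eureka.example.targets
+
+  // Keep only instances whose status is UP.
+  rule {
+    source_labels = ["__meta_eureka_app_instance_status"]
+    regex         = "UP"
+    action        = "keep"
+  }
+}
+```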
diff --git a/docs/sources/flow/reference/components/discovery.file.md b/docs/sources/flow/reference/components/discovery.file.md
index 67335bf5e1b7..9b169e08feea 100644
--- a/docs/sources/flow/reference/components/discovery.file.md
+++ b/docs/sources/flow/reference/components/discovery.file.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/discovery.file/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.file/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.file/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.file/
+  - /docs/grafana-cloud/agent/flow/reference/components/discovery.file/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.file/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.file/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.file/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.file/
description: Learn about discovery.file
title: discovery.file
@@ -35,24 +35,24 @@ discovery.file "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
------------------- | ------------------- | ------------------------------------------ |---------| --------
-`files` | `list(string)` | Files to read and discover targets from. | | yes
-`refresh_interval` | `duration` | How often to sync targets. | "5m" | no
+| Name | Type | Description | Default | Required |
+| ------------------ | -------------- | ---------------------------------------- | ------- | -------- |
+| `files` | `list(string)` | Files to read and discover targets from. | | yes |
+| `refresh_interval` | `duration` | How often to sync targets. | "5m" | no |

-The last path segment of each element in `files` may contain a single * that matches any character sequence, e.g. `my/path/tg_*.json`.
+The last path segment of each element in `files` may contain a single `*` that matches any character sequence, e.g. `my/path/tg_*.json`.

## Exported fields

The following fields are exported and can be referenced by other components:

-Name | Type | Description
---------- | ------------------- | -----------
-`targets` | `list(map(string))` | The set of targets discovered from the filesystem.
+| Name | Type | Description |
+| --------- | ------------------- | --------------------------------------------------- |
+| `targets` | `list(map(string))` | The set of targets discovered from the filesystem. |

Each target includes the following labels:

-* `__meta_filepath`: The absolute path to the file the target was discovered from.
+- `__meta_filepath`: The absolute path to the file the target was discovered from.

## Component health

@@ -71,16 +71,17 @@ values.

## Examples

### Example target files
+
```json
[
  {
-    "targets": [ "127.0.0.1:9091", "127.0.0.1:9092" ],
+    "targets": ["127.0.0.1:9091", "127.0.0.1:9092"],
    "labels": {
      "environment": "dev"
    }
  },
  {
-    "targets": [ "127.0.0.1:9093" ],
+    "targets": ["127.0.0.1:9093"],
    "labels": {
      "environment": "prod"
    }
@@ -90,12 +91,12 @@
```yaml - targets: - - 127.0.0.1:9999 - - 127.0.0.1:10101 + - 127.0.0.1:9999 + - 127.0.0.1:10101 labels: job: worker - targets: - - 127.0.0.1:9090 + - 127.0.0.1:9090 labels: job: prometheus ``` @@ -128,9 +129,10 @@ prometheus.remote_write "demo" { ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. ### File discovery with retained file path label @@ -170,9 +172,10 @@ prometheus.remote_write "demo" { ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.gce.md b/docs/sources/flow/reference/components/discovery.gce.md index 182a19dfacc5..4f3cffe5f13f 100644 --- a/docs/sources/flow/reference/components/discovery.gce.md +++ b/docs/sources/flow/reference/components/discovery.gce.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.gce/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.gce/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.gce/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.gce/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.gce/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.gce/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.gce/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.gce/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.gce/ description: Learn about discovery.gce title: discovery.gce @@ -21,7 +21,6 @@ Credentials are discovered by the Google Cloud SDK default client by looking in If the Agent is running within GCE, the service account associated with the instance it is running on should have at least read-only permissions to the compute resources. If running outside of GCE make sure to create an appropriate service account and place the credential file in one of the expected locations. - ## Usage ```river @@ -35,14 +34,14 @@ discovery.gce "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`project` | `string` | The GCP Project.| | yes -`zone` | `string` | The zone of the scrape targets. | | yes -`filter` | `string` | Filter can be used optionally to filter the instance list by other criteria. 
| | no -`refresh_interval` | `duration` | Refresh interval to re-read the instance list. | `"60s"`| no -`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | `80`| no -`tag_separator` | `string` | The tag separator is used to separate the tags on concatenation. | `","`| no +| Name | Type | Description | Default | Required | +| ------------------ | ---------- | ----------------------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `project` | `string` | The GCP Project. | | yes | +| `zone` | `string` | The zone of the scrape targets. | | yes | +| `filter` | `string` | Filter can be used optionally to filter the instance list by other criteria. | | no | +| `refresh_interval` | `duration` | Refresh interval to re-read the instance list. | `"60s"` | no | +| `port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | `80` | no | +| `tag_separator` | `string` | The tag separator is used to separate the tags on concatenation. | `","` | no | For more information on the syntax of the `filter` argument, refer to Google's `filter` documentation for [Method: instances.list](https://cloud.google.com/compute/docs/reference/latest/instances/list). @@ -50,26 +49,25 @@ For more information on the syntax of the `filter` argument, refer to Google's ` The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of discovered GCE targets. +| Name | Type | Description | +| --------- | ------------------- | ---------------------------------- | +| `targets` | `list(map(string))` | The set of discovered GCE targets. 
| Each target includes the following labels: -* `__meta_gce_instance_id`: the numeric id of the instance -* `__meta_gce_instance_name`: the name of the instance -* `__meta_gce_label_LABEL_NAME`: each GCE label of the instance -* `__meta_gce_machine_type`: full or partial URL of the machine type of the instance -* `__meta_gce_metadata_NAME`: each metadata item of the instance -* `__meta_gce_network`: the network URL of the instance -* `__meta_gce_private_ip`: the private IP address of the instance -* `__meta_gce_interface_ipv4_NAME`: IPv4 address of each named interface -* `__meta_gce_project`: the GCP project in which the instance is running -* `__meta_gce_public_ip`: the public IP address of the instance, if present -* `__meta_gce_subnetwork`: the subnetwork URL of the instance -* `__meta_gce_tags`: comma separated list of instance tags -* `__meta_gce_zone`: the GCE zone URL in which the instance is running - +- `__meta_gce_instance_id`: the numeric id of the instance +- `__meta_gce_instance_name`: the name of the instance +- `__meta_gce_label_LABEL_NAME`: each GCE label of the instance +- `__meta_gce_machine_type`: full or partial URL of the machine type of the instance +- `__meta_gce_metadata_NAME`: each metadata item of the instance +- `__meta_gce_network`: the network URL of the instance +- `__meta_gce_private_ip`: the private IP address of the instance +- `__meta_gce_interface_ipv4_NAME`: IPv4 address of each named interface +- `__meta_gce_project`: the GCP project in which the instance is running +- `__meta_gce_public_ip`: the public IP address of the instance, if present +- `__meta_gce_subnetwork`: the subnetwork URL of the instance +- `__meta_gce_tags`: comma separated list of instance tags +- `__meta_gce_zone`: the GCE zone URL in which the instance is running ## Component health @@ -109,11 +107,13 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. - + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
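+
+The `filter` argument narrows discovery before any relabeling takes place. The following sketch reuses the `prometheus.remote_write.demo` receiver from the example above; the project ID and the instance label used in the filter expression are hypothetical, and the expression must follow the `instances.list` filter syntax linked under [Arguments](#arguments):
+
+```river
+discovery.gce "production" {
+  project = "my-gcp-project" // Hypothetical GCP project ID.
+  zone    = "us-east1-b"
+
+  // Only discover instances whose GCE label "env" is set to "prod".
+  filter = "labels.env = prod"
+}
+
+prometheus.scrape "production" {
+  targets    = discovery.gce.production.targets
+  forward_to = [prometheus.remote_write.demo.receiver]
+}
+```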
+ ## Compatible components diff --git a/docs/sources/flow/reference/components/discovery.hetzner.md b/docs/sources/flow/reference/components/discovery.hetzner.md index a18984696d8a..7b6c08f66284 100644 --- a/docs/sources/flow/reference/components/discovery.hetzner.md +++ b/docs/sources/flow/reference/components/discovery.hetzner.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.hetzner/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.hetzner/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.hetzner/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.hetzner/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.hetzner/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.hetzner/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.hetzner/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.hetzner/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.hetzner/ description: Learn about discovery.hetzner title: discovery.hetzner @@ -29,28 +29,29 @@ discovery.hetzner "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`role` | `string` | Hetzner role of entities that should be discovered. | | yes -`port` | `int` | The port to scrape metrics from. | `80` | no -`refresh_interval` | `duration` | The time after which the servers are refreshed. | `"60s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `role` | `string` | Hetzner role of entities that should be discovered. | | yes | +| `port` | `int` | The port to scrape metrics from. | `80` | no | +| `refresh_interval` | `duration` | The time after which the servers are refreshed. | `"60s"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. 
| | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | `role` must be one of `robot` or `hcloud`. - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. [arguments]: #arguments @@ -61,13 +62,13 @@ Name | Type | Description The following blocks are supported inside the definition of `discovery.hetzner`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -98,45 +99,44 @@ an `oauth2` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Hetzner catalog API. +| Name | Type | Description | +| --------- | ------------------- | ----------------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Hetzner catalog API. 
| Each target includes the following labels: -* `__meta_hetzner_server_id`: the ID of the server -* `__meta_hetzner_server_name`: the name of the server -* `__meta_hetzner_server_status`: the status of the server -* `__meta_hetzner_public_ipv4`: the public ipv4 address of the server -* `__meta_hetzner_public_ipv6_network`: the public ipv6 network (/64) of the server -* `__meta_hetzner_datacenter`: the datacenter of the server +- `__meta_hetzner_server_id`: the ID of the server +- `__meta_hetzner_server_name`: the name of the server +- `__meta_hetzner_server_status`: the status of the server +- `__meta_hetzner_public_ipv4`: the public ipv4 address of the server +- `__meta_hetzner_public_ipv6_network`: the public ipv6 network (/64) of the server +- `__meta_hetzner_datacenter`: the datacenter of the server ### `hcloud` The labels below are only available for targets with `role` set to `hcloud`: -* `__meta_hetzner_hcloud_image_name`: the image name of the server -* `__meta_hetzner_hcloud_image_description`: the description of the server image -* `__meta_hetzner_hcloud_image_os_flavor`: the OS flavor of the server image -* `__meta_hetzner_hcloud_image_os_version`: the OS version of the server image -* `__meta_hetzner_hcloud_datacenter_location`: the location of the server -* `__meta_hetzner_hcloud_datacenter_location_network_zone`: the network zone of the server -* `__meta_hetzner_hcloud_server_type`: the type of the server -* `__meta_hetzner_hcloud_cpu_cores`: the CPU cores count of the server -* `__meta_hetzner_hcloud_cpu_type`: the CPU type of the server (shared or dedicated) -* `__meta_hetzner_hcloud_memory_size_gb`: the amount of memory of the server (in GB) -* `__meta_hetzner_hcloud_disk_size_gb`: the disk size of the server (in GB) -* `__meta_hetzner_hcloud_private_ipv4_`: the private ipv4 address of the server within a given network -* `__meta_hetzner_hcloud_label_`: each label of the server -* `__meta_hetzner_hcloud_labelpresent_`: `true` for each label of the server +- `__meta_hetzner_hcloud_image_name`: the image name of the server +- `__meta_hetzner_hcloud_image_description`: the description of the server image +- `__meta_hetzner_hcloud_image_os_flavor`: the OS flavor of the server image +- `__meta_hetzner_hcloud_image_os_version`: the OS version of the server image +- `__meta_hetzner_hcloud_datacenter_location`: the location of the server +- `__meta_hetzner_hcloud_datacenter_location_network_zone`: the network zone of the server +- `__meta_hetzner_hcloud_server_type`: the type of the server +- `__meta_hetzner_hcloud_cpu_cores`: the CPU cores count of the server +- `__meta_hetzner_hcloud_cpu_type`: the CPU type of the server (shared or dedicated) +- `__meta_hetzner_hcloud_memory_size_gb`: the amount of memory of the server (in GB) +- `__meta_hetzner_hcloud_disk_size_gb`: the disk size of the server (in GB) +- `__meta_hetzner_hcloud_private_ipv4_`: the private ipv4 address of the server within a given network +- `__meta_hetzner_hcloud_label_`: each label of the server +- `__meta_hetzner_hcloud_labelpresent_`: `true` for each label of the server ### `robot` The labels below are only available for targets with `role` set to `robot`: -* `__meta_hetzner_robot_product`: the product of the server -* `__meta_hetzner_robot_cancelled`: the server cancellation status - +- `__meta_hetzner_robot_product`: the product of the server +- `__meta_hetzner_robot_cancelled`: the server cancellation status ## Component health @@ -177,11 +177,13 @@ prometheus.remote_write "demo" { } } ``` + Replace the 
following: - - `HETZNER_ROLE`: The role of the entities that should be discovered. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `HETZNER_ROLE`: The role of the entities that should be discovered. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.http.md b/docs/sources/flow/reference/components/discovery.http.md index 1ad2734eafc5..6ef36d8a3908 100644 --- a/docs/sources/flow/reference/components/discovery.http.md +++ b/docs/sources/flow/reference/components/discovery.http.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.http/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.http/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.http/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.http/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.http/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.http/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.http/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.http/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.http/ description: Learn about discovery.http title: discovery.http @@ -37,20 +37,17 @@ As an example, the following will provide a target with a custom `metricsPath`, ```json [ - { - "labels" : { - "__metrics_path__" : "/api/prometheus", - "__scheme__" : "https", - "__scrape_interval__" : "60s", - "__scrape_timeout__" : "10s", - "service" : "custom-api-service" - }, - "targets" : [ - "custom-api:443" - ] - }, + { + "labels": { + "__metrics_path__": "/api/prometheus", + "__scheme__": "https", + "__scrape_interval__": "60s", + "__scrape_timeout__": "10s", + "service": "custom-api-service" + }, + "targets": ["custom-api:443"] + } ] - ``` It is also possible to append query parameters to the metrics path with the `__param_` syntax. @@ -59,21 +56,18 @@ For example, the following will call a metrics path of `/health?target_data=prom ```json [ - { - "labels" : { - "__metrics_path__" : "/health", - "__scheme__" : "https", - "__scrape_interval__" : "60s", - "__scrape_timeout__" : "10s", - "__param_target_data": "prometheus", - "service" : "custom-api-service" - }, - "targets" : [ - "custom-api:443" - ] - }, + { + "labels": { + "__metrics_path__": "/health", + "__scheme__": "https", + "__scrape_interval__": "60s", + "__scrape_timeout__": "10s", + "__param_target_data": "prometheus", + "service": "custom-api-service" + }, + "targets": ["custom-api:443"] + } ] - ``` For more information on the potential labels you can use, see the [prometheus.scrape technical details][prometheus.scrape] section, or the [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) documentation. 
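+
+As a minimal sketch of the consuming side, the pipeline below polls a hypothetical endpoint that serves a JSON target list like the ones above and scrapes whatever it returns. Labels such as `__metrics_path__` and `__param_<name>` set by the endpoint are honored when the targets are scraped:
+
+```river
+discovery.http "custom_api" {
+  // Hypothetical URL returning a JSON target list like the examples above.
+  url              = "https://discovery.example.com/targets"
+  refresh_interval = "60s"
+}
+
+prometheus.scrape "custom_api" {
+  targets    = discovery.http.custom_api.targets
+  forward_to = [prometheus.remote_write.demo.receiver]
+}
+
+prometheus.remote_write "demo" {
+  endpoint {
+    url = PROMETHEUS_REMOTE_WRITE_URL
+  }
+}
+```
+
+Replace `PROMETHEUS_REMOTE_WRITE_URL` with the URL of the Prometheus remote_write-compatible server to send metrics to.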
@@ -90,25 +84,26 @@ discovery.http "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`url` | `string` | URL to scrape. | | yes -`refresh_interval` | `duration` | How often to refresh targets. | `"60s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `url` | `string` | URL to scrape. | | yes | +| `refresh_interval` | `duration` | How often to refresh targets. | `"60s"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. [arguments]: #arguments @@ -119,13 +114,13 @@ Name | Type | Description The following blocks are supported inside the definition of `discovery.http`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. 
| no
+| Hierarchy | Block | Description | Required |
+| ------------------- | ----------------- | -------------------------------------------------------- | -------- |
+| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no |
+| authorization | [authorization][] | Configure generic authorization to the endpoint. | no |
+| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no |
+| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |
+| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |

The `>` symbol indicates deeper levels of nesting. For example,
`oauth2 > tls_config` refers to a `tls_config` block defined inside
@@ -156,13 +151,13 @@ an `oauth2` block.

The following fields are exported and can be referenced by other components:

-Name | Type | Description
---- | ---- | -----------
-`targets` | `list(map(string))` | The set of targets discovered from the filesystem.
+| Name | Type | Description |
+| --------- | ------------------- | -------------------------------------------------- |
+| `targets` | `list(map(string))` | The set of targets discovered from the URL. |

Each target includes the following labels:

-* `__meta_url`: URL the target was obtained from.
+- `__meta_url`: URL the target was obtained from.

## Component health

@@ -176,7 +171,7 @@ values.

## Debug metrics

-* `prometheus_sd_http_failures_total` (counter): Total number of refresh failures.
+- `prometheus_sd_http_failures_total` (counter): Total number of refresh failures.

## Examples

diff --git a/docs/sources/flow/reference/components/discovery.ionos.md b/docs/sources/flow/reference/components/discovery.ionos.md
index 9bdaa6bc4d1f..050f251f1d5a 100644
--- a/docs/sources/flow/reference/components/discovery.ionos.md
+++ b/docs/sources/flow/reference/components/discovery.ionos.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/discovery.ionos/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.ionos/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.ionos/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.ionos/
+  - /docs/grafana-cloud/agent/flow/reference/components/discovery.ionos/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.ionos/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.ionos/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.ionos/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.ionos/
description: Learn about discovery.ionos
title: discovery.ionos
@@ -27,26 +27,27 @@ discovery.ionos "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
------------------------- | ------------------- | ------------------------------------------------------------- | ------- | --------
-`datacenter_id` | `string` | The unique ID of the data center. | | yes
-`refresh_interval` | `duration` | The time after which the servers are refreshed. | `60s` | no
-`port` | `int` | The port to scrape metrics from. | 80 | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. 
| | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `datacenter_id` | `string` | The unique ID of the data center. | | yes | +| `refresh_interval` | `duration` | The time after which the servers are refreshed. | `60s` | no | +| `port` | `int` | The port to scrape metrics from. | 80 | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. 
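+
+For example, a minimal sketch of a `discovery.ionos` pipeline that authenticates with a bearer token file could look like the following; the datacenter ID placeholder and the token path are assumptions:
+
+```river
+discovery.ionos "example" {
+  datacenter_id = DATACENTER_ID
+
+  // Only one of the authentication options above may be set; a bearer
+  // token file is used here.
+  bearer_token_file = "/run/secrets/ionos_token"
+}
+
+prometheus.scrape "example" {
+  targets    = discovery.ionos.example.targets
+  forward_to = [prometheus.remote_write.demo.receiver]
+}
+
+prometheus.remote_write "demo" {
+  endpoint {
+    url = PROMETHEUS_REMOTE_WRITE_URL
+  }
+}
+```
+
+Replace `DATACENTER_ID` with the unique ID of the data center and `PROMETHEUS_REMOTE_WRITE_URL` with the URL of the Prometheus remote_write-compatible server to send metrics to.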
[arguments]: #arguments diff --git a/docs/sources/flow/reference/components/discovery.kubelet.md b/docs/sources/flow/reference/components/discovery.kubelet.md index f9fef4a85693..09b0cab5a4b6 100644 --- a/docs/sources/flow/reference/components/discovery.kubelet.md +++ b/docs/sources/flow/reference/components/discovery.kubelet.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.kubelet/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.kubelet/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.kubelet/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.kubelet/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.kubelet/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.kubelet/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.kubelet/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.kubelet/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.kubelet/ description: Learn about discovery.kubelet labels: @@ -25,27 +25,27 @@ discovery.kubelet "LABEL" { ## Requirements -* The Kubelet must be reachable from the `grafana-agent` pod network. -* Follow the [Kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization) +- The Kubelet must be reachable from the `grafana-agent` pod network. +- Follow the [Kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization) documentation to configure authentication to the Kubelet API. ## Arguments The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`url` | `string` | URL of the Kubelet server. | "https://localhost:10250" | no -`refresh_interval` | `duration` | How often the Kubelet should be polled for scrape targets | `5s` | no -`namespaces` | `list(string)` | A list of namespaces to extract target pods from | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------------------------- | -------- | +| `url` | `string` | URL of the Kubelet server. 
| "https://localhost:10250" | no | +| `refresh_interval` | `duration` | How often the Kubelet should be polled for scrape targets | `5s` | no | +| `namespaces` | `list(string)` | A list of namespaces to extract target pods from | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | The `namespaces` list limits the namespaces to discover resources in. If omitted, all namespaces are searched. @@ -54,14 +54,15 @@ omitted, all namespaces are searched. You can have additional paths in the `url`. For example, if `url` is `https://kubernetes.default.svc.cluster.local:443/api/v1/nodes/cluster-node-1/proxy`, then `discovery.kubelet` sends a request on `https://kubernetes.default.svc.cluster.local:443/api/v1/nodes/cluster-node-1/proxy/pods` - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +At most, one of the following can be provided: - [arguments]: #arguments +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. + +[arguments]: #arguments {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -70,13 +71,13 @@ For example, if `url` is `https://kubernetes.default.svc.cluster.local:443/api/v The following blocks are supported inside the definition of `discovery.kubelet`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. 
| no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -107,44 +108,44 @@ an `oauth2` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Kubelet API. +| Name | Type | Description | +| --------- | ------------------- | --------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Kubelet API. | Each target includes the following labels: -* `__address__`: The target address to scrape derived from the pod IP and container port. -* `__meta_kubernetes_namespace`: The namespace of the pod object. -* `__meta_kubernetes_pod_name`: The name of the pod object. -* `__meta_kubernetes_pod_ip`: The pod IP of the pod object. -* `__meta_kubernetes_pod_label_`: Each label from the pod object. -* `__meta_kubernetes_pod_labelpresent_`: `true` for each label from +- `__address__`: The target address to scrape derived from the pod IP and container port. +- `__meta_kubernetes_namespace`: The namespace of the pod object. +- `__meta_kubernetes_pod_name`: The name of the pod object. +- `__meta_kubernetes_pod_ip`: The pod IP of the pod object. +- `__meta_kubernetes_pod_label_`: Each label from the pod object. +- `__meta_kubernetes_pod_labelpresent_`: `true` for each label from the pod object. -* `__meta_kubernetes_pod_annotation_`: Each annotation from the +- `__meta_kubernetes_pod_annotation_`: Each annotation from the pod object. -* `__meta_kubernetes_pod_annotationpresent_`: `true` for each +- `__meta_kubernetes_pod_annotationpresent_`: `true` for each annotation from the pod object. -* `__meta_kubernetes_pod_container_init`: `true` if the container is an +- `__meta_kubernetes_pod_container_init`: `true` if the container is an `InitContainer`. -* `__meta_kubernetes_pod_container_name`: Name of the container the target +- `__meta_kubernetes_pod_container_name`: Name of the container the target address points to. -* `__meta_kubernetes_pod_container_id`: ID of the container the target address +- `__meta_kubernetes_pod_container_id`: ID of the container the target address points to. The ID is in the form `://`. -* `__meta_kubernetes_pod_container_image`: The image the container is using. -* `__meta_kubernetes_pod_container_port_name`: Name of the container port. -* `__meta_kubernetes_pod_container_port_number`: Number of the container port. -* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container +- `__meta_kubernetes_pod_container_image`: The image the container is using. +- `__meta_kubernetes_pod_container_port_name`: Name of the container port. +- `__meta_kubernetes_pod_container_port_number`: Number of the container port. +- `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port. -* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready +- `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state. -* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or +- `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle. -* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled +- `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto. 
-* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. -* `__meta_kubernetes_pod_uid`: The UID of the pod object. -* `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller. -* `__meta_kubernetes_pod_controller_name`: Name of the pod controller. +- `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. +- `__meta_kubernetes_pod_uid`: The UID of the pod object. +- `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller. +- `__meta_kubernetes_pod_controller_name`: Name of the pod controller. > **Note**: The Kubelet API used by this component is an internal API and therefore the > data in the response returned from the API cannot be guaranteed between different versions @@ -191,10 +192,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. ### Limit searched namespaces @@ -222,10 +225,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
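+
+### Discover pods through the API server proxy
+
+Building on the proxy path behavior described under [Arguments](#arguments), the sketch below discovers pods through the Kubernetes API server's node proxy endpoint instead of connecting to the Kubelet directly. The node name `cluster-node-1` comes from the earlier example, the service account token path is an assumption, and the `prometheus.remote_write.demo` receiver from the examples above is reused:
+
+```river
+discovery.kubelet "via_proxy" {
+  // discovery.kubelet appends "/pods" to this URL when polling for targets.
+  url               = "https://kubernetes.default.svc.cluster.local:443/api/v1/nodes/cluster-node-1/proxy"
+  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
+}
+
+prometheus.scrape "via_proxy" {
+  targets    = discovery.kubelet.via_proxy.targets
+  forward_to = [prometheus.remote_write.demo.receiver]
+}
+```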
diff --git a/docs/sources/flow/reference/components/discovery.kubernetes.md b/docs/sources/flow/reference/components/discovery.kubernetes.md index 95d1d69a97f5..e3807c9f38ed 100644 --- a/docs/sources/flow/reference/components/discovery.kubernetes.md +++ b/docs/sources/flow/reference/components/discovery.kubernetes.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.kubernetes/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.kubernetes/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.kubernetes/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.kubernetes/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.kubernetes/ description: Learn about discovery.kubernetes title: discovery.kubernetes @@ -31,28 +31,29 @@ discovery.kubernetes "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`api_server` | `string` | URL of Kubernetes API server. | | no -`role` | `string` | Type of Kubernetes resource to query. | | yes -`kubeconfig_file` | `string` | Path of kubeconfig file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. - - [arguments]: #arguments +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `api_server` | `string` | URL of Kubernetes API server. | | no | +| `role` | `string` | Type of Kubernetes resource to query. | | yes | +| `kubeconfig_file` | `string` | Path of kubeconfig file to use for connecting to Kubernetes. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. 
| `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. + +[arguments]: #arguments {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -70,14 +71,14 @@ order of `NodeInternalIP`, `NodeExternalIP`, `NodeLegacyHostIP`, and The following labels are included for discovered nodes: -* `__meta_kubernetes_node_name`: The name of the node object. -* `__meta_kubernetes_node_provider_id`: The cloud provider's name for the node object. -* `__meta_kubernetes_node_label_`: Each label from the node object. -* `__meta_kubernetes_node_labelpresent_`: Set to `true` for each label from the node object. -* `__meta_kubernetes_node_annotation_`: Each annotation from the node object. -* `__meta_kubernetes_node_annotationpresent_`: Set to `true` +- `__meta_kubernetes_node_name`: The name of the node object. +- `__meta_kubernetes_node_provider_id`: The cloud provider's name for the node object. +- `__meta_kubernetes_node_label_`: Each label from the node object. +- `__meta_kubernetes_node_labelpresent_`: Set to `true` for each label from the node object. +- `__meta_kubernetes_node_annotation_`: Each annotation from the node object. +- `__meta_kubernetes_node_annotationpresent_`: Set to `true` for each annotation from the node object. -* `__meta_kubernetes_node_address_`: The first address for each +- `__meta_kubernetes_node_address_`: The first address for each node address type, if it exists. In addition, the `instance` label for the node will be set to the node name as @@ -91,27 +92,27 @@ be set to the Kubernetes DNS name of the service and respective service port. The following labels are included for discovered services: -* `__meta_kubernetes_namespace`: The namespace of the service object. -* `__meta_kubernetes_service_annotation_`: Each annotation from +- `__meta_kubernetes_namespace`: The namespace of the service object. +- `__meta_kubernetes_service_annotation_`: Each annotation from the service object. -* `__meta_kubernetes_service_annotationpresent_`: `true` for +- `__meta_kubernetes_service_annotationpresent_`: `true` for each annotation of the service object. -* `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the +- `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the service. This does not apply to services of type `ExternalName`. -* `__meta_kubernetes_service_external_name`: The DNS name of the service. +- `__meta_kubernetes_service_external_name`: The DNS name of the service. This only applies to services of type `ExternalName`. -* `__meta_kubernetes_service_label_`: Each label from the service +- `__meta_kubernetes_service_label_`: Each label from the service object. 
-* `__meta_kubernetes_service_labelpresent_`: `true` for each label +- `__meta_kubernetes_service_labelpresent_`: `true` for each label of the service object. -* `__meta_kubernetes_service_name`: The name of the service object. -* `__meta_kubernetes_service_port_name`: Name of the service port for the +- `__meta_kubernetes_service_name`: The name of the service object. +- `__meta_kubernetes_service_port_name`: Name of the service port for the target. -* `__meta_kubernetes_service_port_number`: Number of the service port for the +- `__meta_kubernetes_service_port_number`: Number of the service port for the target. -* `__meta_kubernetes_service_port_protocol`: Protocol of the service port for +- `__meta_kubernetes_service_port_protocol`: Protocol of the service port for the target. -* `__meta_kubernetes_service_type`: The type of the service. +- `__meta_kubernetes_service_type`: The type of the service. ### pod role @@ -125,37 +126,37 @@ collected from them. The following labels are included for discovered pods: -* `__meta_kubernetes_namespace`: The namespace of the pod object. -* `__meta_kubernetes_pod_name`: The name of the pod object. -* `__meta_kubernetes_pod_ip`: The pod IP of the pod object. -* `__meta_kubernetes_pod_label_`: Each label from the pod object. -* `__meta_kubernetes_pod_labelpresent_`: `true` for each label from +- `__meta_kubernetes_namespace`: The namespace of the pod object. +- `__meta_kubernetes_pod_name`: The name of the pod object. +- `__meta_kubernetes_pod_ip`: The pod IP of the pod object. +- `__meta_kubernetes_pod_label_`: Each label from the pod object. +- `__meta_kubernetes_pod_labelpresent_`: `true` for each label from the pod object. -* `__meta_kubernetes_pod_annotation_`: Each annotation from the +- `__meta_kubernetes_pod_annotation_`: Each annotation from the pod object. -* `__meta_kubernetes_pod_annotationpresent_`: `true` for each +- `__meta_kubernetes_pod_annotationpresent_`: `true` for each annotation from the pod object. -* `__meta_kubernetes_pod_container_init`: `true` if the container is an +- `__meta_kubernetes_pod_container_init`: `true` if the container is an `InitContainer`. -* `__meta_kubernetes_pod_container_name`: Name of the container the target +- `__meta_kubernetes_pod_container_name`: Name of the container the target address points to. -* `__meta_kubernetes_pod_container_id`: ID of the container the target address +- `__meta_kubernetes_pod_container_id`: ID of the container the target address points to. The ID is in the form `://`. -* `__meta_kubernetes_pod_container_image`: The image the container is using. -* `__meta_kubernetes_pod_container_port_name`: Name of the container port. -* `__meta_kubernetes_pod_container_port_number`: Number of the container port. -* `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container +- `__meta_kubernetes_pod_container_image`: The image the container is using. +- `__meta_kubernetes_pod_container_port_name`: Name of the container port. +- `__meta_kubernetes_pod_container_port_number`: Number of the container port. +- `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port. -* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready +- `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state. -* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or +- `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle. 
-* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled +- `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto. -* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. -* `__meta_kubernetes_pod_uid`: The UID of the pod object. -* `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller. -* `__meta_kubernetes_pod_controller_name`: Name of the pod controller. +- `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. +- `__meta_kubernetes_pod_uid`: The UID of the pod object. +- `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller. +- `__meta_kubernetes_pod_controller_name`: Name of the pod controller. ### endpoints role @@ -166,28 +167,28 @@ they are not bound to an endpoint port. The following labels are included for discovered endpoints: -* `__meta_kubernetes_namespace:` The namespace of the endpoints object. -* `__meta_kubernetes_endpoints_name:` The names of the endpoints object. -* `__meta_kubernetes_endpoints_label_`: Each label from the +- `__meta_kubernetes_namespace:` The namespace of the endpoints object. +- `__meta_kubernetes_endpoints_name:` The names of the endpoints object. +- `__meta_kubernetes_endpoints_label_`: Each label from the endpoints object. -* `__meta_kubernetes_endpoints_labelpresent_`: `true` for each label +- `__meta_kubernetes_endpoints_labelpresent_`: `true` for each label from the endpoints object. -* The following labels are attached for all targets discovered directly from +- The following labels are attached for all targets discovered directly from the endpoints list: - * `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint. - * `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the + - `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint. + - `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the endpoint. - * `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the + - `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the endpoint's ready state. - * `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port. - * `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port. - * `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint + - `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port. + - `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port. + - `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint address target. - * `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint + - `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint address target. -* If the endpoints belong to a service, all labels of the `service` role +- If the endpoints belong to a service, all labels of the `service` role discovery are attached. -* For all targets backed by a pod, all labels of the `pod` role discovery are +- For all targets backed by a pod, all labels of the `pod` role discovery are attached. ### endpointslice role @@ -200,30 +201,30 @@ port. The following labels are included for discovered endpoint slices: -* `__meta_kubernetes_namespace`: The namespace of the endpoints object. -* `__meta_kubernetes_endpointslice_name`: The name of endpoint slice object. -* The following labels are attached for all targets discovered directly from +- `__meta_kubernetes_namespace`: The namespace of the endpoints object. 
+- `__meta_kubernetes_endpointslice_name`: The name of endpoint slice object. +- The following labels are attached for all targets discovered directly from the endpoint slice list: - * `__meta_kubernetes_endpointslice_address_target_kind`: Kind of the + - `__meta_kubernetes_endpointslice_address_target_kind`: Kind of the referenced object. - * `__meta_kubernetes_endpointslice_address_target_name`: Name of referenced + - `__meta_kubernetes_endpointslice_address_target_name`: Name of referenced object. - * `__meta_kubernetes_endpointslice_address_type`: The IP protocol family of + - `__meta_kubernetes_endpointslice_address_type`: The IP protocol family of the address of the target. - * `__meta_kubernetes_endpointslice_endpoint_conditions_ready`: Set to `true` + - `__meta_kubernetes_endpointslice_endpoint_conditions_ready`: Set to `true` or `false` for the referenced endpoint's ready state. - * `__meta_kubernetes_endpointslice_endpoint_topology_kubernetes_io_hostname`: + - `__meta_kubernetes_endpointslice_endpoint_topology_kubernetes_io_hostname`: Name of the node hosting the referenced endpoint. - * `__meta_kubernetes_endpointslice_endpoint_topology_present_kubernetes_io_hostname`: + - `__meta_kubernetes_endpointslice_endpoint_topology_present_kubernetes_io_hostname`: `true` if the referenced object has a `kubernetes.io/hostname` annotation. - * `__meta_kubernetes_endpointslice_port`: Port of the referenced endpoint. - * `__meta_kubernetes_endpointslice_port_name`: Named port of the referenced + - `__meta_kubernetes_endpointslice_port`: Port of the referenced endpoint. + - `__meta_kubernetes_endpointslice_port_name`: Named port of the referenced endpoint. - * `__meta_kubernetes_endpointslice_port_protocol`: Protocol of the referenced + - `__meta_kubernetes_endpointslice_port_protocol`: Protocol of the referenced endpoint. -* If the endpoints belong to a service, all labels of the `service` role +- If the endpoints belong to a service, all labels of the `service` role discovery are attached. -* For all targets backed by a pod, all labels of the `pod` role discovery are +- For all targets backed by a pod, all labels of the `pod` role discovery are attached. ### ingress role @@ -234,37 +235,37 @@ to the host specified in the Kubernetes `Ingress`'s `spec` block. The following labels are included for discovered ingress objects: -* `__meta_kubernetes_namespace`: The namespace of the ingress object. -* `__meta_kubernetes_ingress_name`: The name of the ingress object. -* `__meta_kubernetes_ingress_label_`: Each label from the ingress +- `__meta_kubernetes_namespace`: The namespace of the ingress object. +- `__meta_kubernetes_ingress_name`: The name of the ingress object. +- `__meta_kubernetes_ingress_label_`: Each label from the ingress object. -* `__meta_kubernetes_ingress_labelpresent_`: `true` for each label +- `__meta_kubernetes_ingress_labelpresent_`: `true` for each label from the ingress object. -* `__meta_kubernetes_ingress_annotation_`: Each annotation from +- `__meta_kubernetes_ingress_annotation_`: Each annotation from the ingress object. -* `__meta_kubernetes_ingress_annotationpresent_`: `true` for each +- `__meta_kubernetes_ingress_annotationpresent_`: `true` for each annotation from the ingress object. -* `__meta_kubernetes_ingress_class_name`: Class name from ingress spec, if +- `__meta_kubernetes_ingress_class_name`: Class name from ingress spec, if present. 
-* `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS +- `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS config is set. Defaults to `http`. -* `__meta_kubernetes_ingress_path`: Path from ingress spec. Defaults to /. +- `__meta_kubernetes_ingress_path`: Path from ingress spec. Defaults to /. ## Blocks The following blocks are supported inside the definition of `discovery.kubernetes`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -namespaces | [namespaces][] | Information about which Kubernetes namespaces to search. | no -selectors | [selectors][] | Information about which Kubernetes namespaces to search. | no -attach_metadata | [attach_metadata][] | Optional metadata to attach to discovered targets. | no -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ------------------- | -------------------------------------------------------- | -------- | +| namespaces | [namespaces][] | Information about which Kubernetes namespaces to search. | no | +| selectors | [selectors][] | Information about which Kubernetes namespaces to search. | no | +| attach_metadata | [attach_metadata][] | Optional metadata to attach to discovered targets. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -283,21 +284,21 @@ an `oauth2` block. The `namespaces` block limits the namespaces to discover resources in. If omitted, all namespaces are searched. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`own_namespace` | `bool` | Include the namespace {{< param "PRODUCT_NAME" >}} is running in. | | no -`names` | `list(string)` | List of namespaces to search. | | no +| Name | Type | Description | Default | Required | +| --------------- | -------------- | ----------------------------------------------------------------- | ------- | -------- | +| `own_namespace` | `bool` | Include the namespace {{< param "PRODUCT_NAME" >}} is running in. | | no | +| `names` | `list(string)` | List of namespaces to search. | | no | ### selectors block The `selectors` block contains optional label and field selectors to limit the discovery process to a subset of resources. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`role` | `string` | Role of the selector. | | yes -`label`| `string` | Label selector string. | | no -`field` | `string` | Field selector string. 
| | no +| Name | Type | Description | Default | Required | +| ------- | -------- | ---------------------- | ------- | -------- | +| `role` | `string` | Role of the selector. | | yes | +| `label` | `string` | Label selector string. | | no | +| `field` | `string` | Field selector string. | | no | See Kubernetes' documentation for [Field selectors][] and [Labels and selectors][] to learn more about the possible filters that can be used. @@ -316,15 +317,17 @@ Other roles only support selectors matching the role itself (e.g. node role can [Field selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ [Labels and selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ + [discovery.relabel]: {{< relref "./discovery.relabel.md" >}} ### attach_metadata block + The `attach_metadata` block allows to attach node metadata to discovered targets. Valid for roles: pod, endpoints, endpointslice. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`node` | `bool` | Attach node metadata. | | no +| Name | Type | Description | Default | Required | +| ------ | ------ | --------------------- | ------- | -------- | +| `node` | `bool` | Attach node metadata. | | no | ### basic_auth block @@ -346,9 +349,9 @@ Name | Type | Description | Default | Required The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Kubernetes API. +| Name | Type | Description | +| --------- | ------------------- | ------------------------------------------------------ | +| `targets` | `list(map(string))` | The set of targets discovered from the Kubernetes API. | ## Component health @@ -391,10 +394,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. ### Kubeconfig authentication @@ -422,10 +427,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. ### Limit searched namespaces and filter by labels value @@ -461,10 +468,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. 
+ +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. ### Limit to only pods on the same node @@ -503,9 +512,10 @@ prometheus.remote_write "demo" { ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.kuma.md b/docs/sources/flow/reference/components/discovery.kuma.md index e4eb17e69b04..79242c6eb33c 100644 --- a/docs/sources/flow/reference/components/discovery.kuma.md +++ b/docs/sources/flow/reference/components/discovery.kuma.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.kuma/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.kuma/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.kuma/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.kuma/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.kuma/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.kuma/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.kuma/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.kuma/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.kuma/ description: Learn about discovery.kuma title: discovery.kuma @@ -27,39 +27,40 @@ discovery.kuma "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | -------------------------------------------------------------- | ------- | -------- -`server` | `string` | Address of the Kuma Control Plane's MADS xDS server. | | yes -`refresh_interval` | `duration` | The time to wait between polling update requests. | `"30s"` | no -`fetch_timeout` | `duration` | The time after which the monitoring assignments are refreshed. | `"2m"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). 
- - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `server` | `string` | Address of the Kuma Control Plane's MADS xDS server. | | yes | +| `refresh_interval` | `duration` | The time to wait between polling update requests. | `"30s"` | no | +| `fetch_timeout` | `duration` | The time after which the monitoring assignments are refreshed. | `"2m"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} The following blocks are supported inside the definition of `discovery.kuma`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -86,21 +87,21 @@ an `oauth2` block. 
{{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} - ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Kuma API. +| Name | Type | Description | +| --------- | ------------------- | ------------------------------------------------ | +| `targets` | `list(map(string))` | The set of targets discovered from the Kuma API. | The following meta labels are available on targets and can be used by the discovery.relabel component: -* `__meta_kuma_mesh`: the name of the proxy's Mesh -* `__meta_kuma_dataplane`: the name of the proxy -* `__meta_kuma_service`: the name of the proxy's associated Service -* `__meta_kuma_label_`: each tag of the proxy + +- `__meta_kuma_mesh`: the name of the proxy's Mesh +- `__meta_kuma_dataplane`: the name of the proxy +- `__meta_kuma_service`: the name of the proxy's associated Service +- `__meta_kuma_label_`: each tag of the proxy ## Component health @@ -136,11 +137,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.lightsail.md b/docs/sources/flow/reference/components/discovery.lightsail.md index 81688b35a59d..752715837a6e 100644 --- a/docs/sources/flow/reference/components/discovery.lightsail.md +++ b/docs/sources/flow/reference/components/discovery.lightsail.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.lightsail/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.lightsail/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.lightsail/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.lightsail/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.lightsail/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.lightsail/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.lightsail/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.lightsail/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.lightsail/ description: Learn about discovery.lightsail title: discovery.lightsail @@ -24,33 +24,34 @@ discovery.lightsail "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | Custom endpoint to be used.| | no -`region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no -`access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no -`secret_key` | `string` | The AWS API key secret. 
If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no -`profile` | `string` | Named AWS profile used to connect to the API. | | no -`role_arn` | `string` | AWS Role ARN, an alternative to using AWS API keys. | | no -`refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no -`port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `endpoint` | `string` | Custom endpoint to be used. | | no | +| `region` | `string` | The AWS region. If blank, the region from the instance metadata is used. | | no | +| `access_key` | `string` | The AWS API key ID. If blank, the environment variable `AWS_ACCESS_KEY_ID` is used. | | no | +| `secret_key` | `string` | The AWS API key secret. If blank, the environment variable `AWS_SECRET_ACCESS_KEY` is used. | | no | +| `profile` | `string` | Named AWS profile used to connect to the API. | | no | +| `role_arn` | `string` | AWS Role ARN, an alternative to using AWS API keys. | | no | +| `refresh_interval` | `string` | Refresh interval to re-read the instance list. | 60s | no | +| `port` | `int` | The port to scrape metrics from. If using the public IP address, this must instead be specified in the relabeling rule. | 80 | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. - [arguments]: #arguments +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. 
+- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. + +[arguments]: #arguments {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -59,13 +60,13 @@ At most, one of the following can be provided: The following blocks are supported inside the definition of `discovery.lightsail`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -96,23 +97,23 @@ an `oauth2` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of discovered Lightsail targets. +| Name | Type | Description | +| --------- | ------------------- | ---------------------------------------- | +| `targets` | `list(map(string))` | The set of discovered Lightsail targets. | Each target includes the following labels: -* `__meta_lightsail_availability_zone`: The availability zone in which the instance is running. -* `__meta_lightsail_blueprint_id`: The Lightsail blueprint ID. -* `__meta_lightsail_bundle_id`: The Lightsail bundle ID. -* `__meta_lightsail_instance_name`: The name of the Lightsail instance. -* `__meta_lightsail_instance_state`: The state of the Lightsail instance. -* `__meta_lightsail_instance_support_code`: The support code of the Lightsail instance. -* `__meta_lightsail_ipv6_addresses`: Comma-separated list of IPv6 addresses assigned to the instance's network interfaces, if present. -* `__meta_lightsail_private_ip`: The private IP address of the instance. -* `__meta_lightsail_public_ip`: The public IP address of the instance, if available. -* `__meta_lightsail_region`: The region of the instance. -* `__meta_lightsail_tag_`: Each tag value of the instance. +- `__meta_lightsail_availability_zone`: The availability zone in which the instance is running. +- `__meta_lightsail_blueprint_id`: The Lightsail blueprint ID. +- `__meta_lightsail_bundle_id`: The Lightsail bundle ID. +- `__meta_lightsail_instance_name`: The name of the Lightsail instance. +- `__meta_lightsail_instance_state`: The state of the Lightsail instance. +- `__meta_lightsail_instance_support_code`: The support code of the Lightsail instance. 
+- `__meta_lightsail_ipv6_addresses`: Comma-separated list of IPv6 addresses assigned to the instance's network interfaces, if present. +- `__meta_lightsail_private_ip`: The private IP address of the instance. +- `__meta_lightsail_public_ip`: The public IP address of the instance, if available. +- `__meta_lightsail_region`: The region of the instance. +- `__meta_lightsail_tag_`: Each tag value of the instance. ## Component health @@ -151,10 +152,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.linode.md b/docs/sources/flow/reference/components/discovery.linode.md index 9b0bffc5535b..6e891da97629 100644 --- a/docs/sources/flow/reference/components/discovery.linode.md +++ b/docs/sources/flow/reference/components/discovery.linode.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.linode/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.linode/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.linode/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.linode/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.linode/ description: Learn about discovery.linode title: discovery.linode @@ -28,26 +28,27 @@ The linode APIv4 Token must be created with the scopes: `linodes:read_only`, `ip The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`refresh_interval` | `duration` | The time to wait between polling update requests. | `"60s"` | no -`port` | `int` | Port that metrics are scraped from. | `80` | no -`tag_separator` | `string` | The string by which Linode Instance tags are joined into the tag label. | `,` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. 
+| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `refresh_interval` | `duration` | The time to wait between polling update requests. | `"60s"` | no | +| `port` | `int` | Port that metrics are scraped from. | `80` | no | +| `tag_separator` | `string` | The string by which Linode Instance tags are joined into the tag label. | `,` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -56,13 +57,13 @@ Name | Type | Description The following blocks are supported inside the definition of `discovery.linode`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -89,36 +90,35 @@ an `oauth2` block. {{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} - ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Linode API. 
+| Name      | Type                | Description                                         |
+| --------- | ------------------- | --------------------------------------------------- |
+| `targets` | `list(map(string))` | The set of targets discovered from the Linode API.  |
 
 The following meta labels are available on targets and can be used by the
 discovery.relabel component:
-* `__meta_linode_instance_id`: the id of the Linode instance
-* `__meta_linode_instance_label`: the label of the Linode instance
-* `__meta_linode_image`: the slug of the Linode instance's image
-* `__meta_linode_private_ipv4`: the private IPv4 of the Linode instance
-* `__meta_linode_public_ipv4`: the public IPv4 of the Linode instance
-* `__meta_linode_public_ipv6`: the public IPv6 of the Linode instance
-* `__meta_linode_region`: the region of the Linode instance
-* `__meta_linode_type`: the type of the Linode instance
-* `__meta_linode_status`: the status of the Linode instance
-* `__meta_linode_tags`: a list of tags of the Linode instance joined by the tag separator
-* `__meta_linode_group`: the display group a Linode instance is a member of
-* `__meta_linode_hypervisor`: the virtualization software powering the Linode instance
-* `__meta_linode_backups`: the backup service status of the Linode instance
-* `__meta_linode_specs_disk_bytes`: the amount of storage space the Linode instance has access to
-* `__meta_linode_specs_memory_bytes`: the amount of RAM the Linode instance has access to
-* `__meta_linode_specs_vcpus`: the number of VCPUS this Linode has access to
-* `__meta_linode_specs_transfer_bytes`: the amount of network transfer the Linode instance is allotted each month
-* `__meta_linode_extra_ips`: a list of all extra IPv4 addresses assigned to the Linode instance joined by the tag separator
+- `__meta_linode_instance_id`: the id of the Linode instance
+- `__meta_linode_instance_label`: the label of the Linode instance
+- `__meta_linode_image`: the slug of the Linode instance's image
+- `__meta_linode_private_ipv4`: the private IPv4 of the Linode instance
+- `__meta_linode_public_ipv4`: the public IPv4 of the Linode instance
+- `__meta_linode_public_ipv6`: the public IPv6 of the Linode instance
+- `__meta_linode_region`: the region of the Linode instance
+- `__meta_linode_type`: the type of the Linode instance
+- `__meta_linode_status`: the status of the Linode instance
+- `__meta_linode_tags`: a list of tags of the Linode instance joined by the tag separator
+- `__meta_linode_group`: the display group a Linode instance is a member of
+- `__meta_linode_hypervisor`: the virtualization software powering the Linode instance
+- `__meta_linode_backups`: the backup service status of the Linode instance
+- `__meta_linode_specs_disk_bytes`: the amount of storage space the Linode instance has access to
+- `__meta_linode_specs_memory_bytes`: the amount of RAM the Linode instance has access to
+- `__meta_linode_specs_vcpus`: the number of vCPUs this Linode has access to
+- `__meta_linode_specs_transfer_bytes`: the amount of network transfer the Linode instance is allotted each month
+- `__meta_linode_extra_ips`: a list of all extra IPv4 addresses assigned to the Linode instance joined by the tag separator
 
 ## Component health
 
@@ -155,10 +155,12 @@ prometheus.remote_write "demo" {
     }
   }
 }
 ```
+
 Replace the following:
-  - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-  - `USERNAME`: The username to use for authentication to the remote_write API.
- - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. ### Using private IP address: diff --git a/docs/sources/flow/reference/components/discovery.marathon.md b/docs/sources/flow/reference/components/discovery.marathon.md index 69e8630b0495..82bb893bbad5 100644 --- a/docs/sources/flow/reference/components/discovery.marathon.md +++ b/docs/sources/flow/reference/components/discovery.marathon.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.marathon/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.marathon/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.marathon/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.marathon/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.marathon/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.marathon/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.marathon/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.marathon/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.marathon/ description: Learn about discovery.marathon title: discovery.marathon @@ -25,22 +25,23 @@ discovery.marathon "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`servers` | `list(string)` | List of Marathon servers. | | yes -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"30s"` | no -`auth_token` | `secret` | Auth token to authenticate with. | | no -`auth_token_file` | `string` | File containing an auth token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `servers` | `list(string)` | List of Marathon servers. | | yes | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `"30s"` | no | +| `auth_token` | `secret` | Auth token to authenticate with. 
| | no | +| `auth_token_file` | `string` | File containing an auth token to authenticate with. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + - [`auth_token` argument](#arguments). - [`auth_token_file` argument](#arguments). - [`bearer_token_file` argument](#arguments). diff --git a/docs/sources/flow/reference/components/discovery.nerve.md b/docs/sources/flow/reference/components/discovery.nerve.md index 04812c356b4b..d80cac19f02a 100644 --- a/docs/sources/flow/reference/components/discovery.nerve.md +++ b/docs/sources/flow/reference/components/discovery.nerve.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.nerve/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.nerve/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.nerve/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.nerve/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.nerve/ description: Learn about discovery.nerve title: discovery.nerve @@ -26,12 +26,11 @@ discovery.nerve "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------- | -------------- | ------------------------------------ | ------------- | -------- -`servers` | `list(string)` | The Zookeeper servers. | | yes -`paths` | `list(string)` | The paths to look for targets at. | | yes -`timeout` | `duration` | The timeout to use. | `"10s"` | no - +| Name | Type | Description | Default | Required | +| --------- | -------------- | --------------------------------- | ------- | -------- | +| `servers` | `list(string)` | The Zookeeper servers. | | yes | +| `paths` | `list(string)` | The paths to look for targets at. | | yes | +| `timeout` | `duration` | The timeout to use. | `"10s"` | no | Each element in the `path` list can either point to a single service, or to the root of a tree of services. @@ -45,16 +44,17 @@ fully through arguments. The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from Nerve's API. +| Name | Type | Description | +| --------- | ------------------- | ----------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from Nerve's API. 
| The following meta labels are available on targets and can be used by the discovery.relabel component -* `__meta_nerve_path`: the full path to the endpoint node in Zookeeper -* `__meta_nerve_endpoint_host`: the host of the endpoint -* `__meta_nerve_endpoint_port`: the port of the endpoint -* `__meta_nerve_endpoint_name`: the name of the endpoint + +- `__meta_nerve_path`: the full path to the endpoint node in Zookeeper +- `__meta_nerve_endpoint_host`: the host of the endpoint +- `__meta_nerve_endpoint_port`: the port of the endpoint +- `__meta_nerve_endpoint_name`: the name of the endpoint ## Component health @@ -94,9 +94,10 @@ prometheus.remote_write "demo" { ``` Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.nomad.md b/docs/sources/flow/reference/components/discovery.nomad.md index 372306a4e275..dd3d576bb25d 100644 --- a/docs/sources/flow/reference/components/discovery.nomad.md +++ b/docs/sources/flow/reference/components/discovery.nomad.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.nomad/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.nomad/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.nomad/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.nomad/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.nomad/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.nomad/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.nomad/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.nomad/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.nomad/ description: Learn about discovery.nomad title: discovery.nomad @@ -24,29 +24,30 @@ discovery.nomad "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ----------------------- | -------- -`server` | `string` | Address of nomad server. | `http://localhost:4646` | no -`namespace` | `string` | Nomad namespace to use. | `default` | no -`region` | `string` | Nomad region to use. | `global` | no -`allow_stale` | `bool` | Allow reading from non-leader nomad instances. | `true` | no -`tag_separator` | `string` | Seperator to join nomad tags into Prometheus labels. | `,` | no -`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. 
| `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
-
- At most, one of the following can be provided:
- - [`bearer_token` argument](#arguments).
- - [`bearer_token_file` argument](#arguments).
- - [`basic_auth` block][basic_auth].
- - [`authorization` block][authorization].
- - [`oauth2` block][oauth2].
+| Name                     | Type                | Description                                                                                        | Default                 | Required |
+| ------------------------ | ------------------- | -------------------------------------------------------------------------------------------------- | ----------------------- | -------- |
+| `server`                 | `string`            | Address of nomad server.                                                                           | `http://localhost:4646` | no       |
+| `namespace`              | `string`            | Nomad namespace to use.                                                                            | `default`               | no       |
+| `region`                 | `string`            | Nomad region to use.                                                                               | `global`                | no       |
+| `allow_stale`            | `bool`              | Allow reading from non-leader nomad instances.                                                     | `true`                  | no       |
+| `tag_separator`          | `string`            | Separator to join nomad tags into Prometheus labels.                                               | `,`                     | no       |
+| `refresh_interval`       | `duration`          | Frequency to refresh the list of services.                                                         | `"30s"`                 | no       |
+| `bearer_token_file`      | `string`            | File containing a bearer token to authenticate with.                                               |                         | no       |
+| `bearer_token`           | `secret`            | Bearer token to authenticate with.                                                                 |                         | no       |
+| `enable_http2`           | `bool`              | Whether HTTP2 is supported for requests.                                                           | `true`                  | no       |
+| `follow_redirects`       | `bool`              | Whether redirects returned by the server should be followed.                                       | `true`                  | no       |
+| `proxy_url`              | `string`            | HTTP proxy to send requests through.                                                               |                         | no       |
+| `no_proxy`               | `string`            | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying.   |                         | no       |
+| `proxy_from_environment` | `bool`              | Use the proxy URL indicated by environment variables.                                              | `false`                 | no       |
+| `proxy_connect_header`   | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests.                                      |                         | no       |
+
+At most, one of the following can be provided:
+
+- [`bearer_token` argument](#arguments).
+- [`bearer_token_file` argument](#arguments).
+- [`basic_auth` block][basic_auth].
+- [`authorization` block][authorization].
+- [`oauth2` block][oauth2].
 
 [arguments]: #arguments
 
@@ -57,13 +58,13 @@ Name | Type | Description
 
 The following blocks are supported inside the definition of
 `discovery.nomad`:
 
-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+| Hierarchy           | Block             | Description                                               | Required |
+| ------------------- | ----------------- | --------------------------------------------------------- | -------- |
+| basic_auth          | [basic_auth][]    | Configure basic_auth for authenticating to the endpoint.  | no       |
+| authorization       | [authorization][] | Configure generic authorization to the endpoint.
| no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -94,21 +95,21 @@ an `oauth2` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the nomad server. +| Name | Type | Description | +| --------- | ------------------- | ---------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the nomad server. | Each target includes the following labels: -* `__meta_nomad_address`: the service address of the target. -* `__meta_nomad_dc`: the datacenter name for the target. -* `__meta_nomad_namespace`: the namespace of the target. -* `__meta_nomad_node_id`: the node name defined for the target. -* `__meta_nomad_service`: the name of the service the target belongs to. -* `__meta_nomad_service_address`: the service address of the target. -* `__meta_nomad_service_id`: the service ID of the target. -* `__meta_nomad_service_port`: the service port of the target. -* `__meta_nomad_tags`: the list of tags of the target joined by the tag separator. +- `__meta_nomad_address`: the service address of the target. +- `__meta_nomad_dc`: the datacenter name for the target. +- `__meta_nomad_namespace`: the namespace of the target. +- `__meta_nomad_node_id`: the node name defined for the target. +- `__meta_nomad_service`: the name of the service the target belongs to. +- `__meta_nomad_service_address`: the service address of the target. +- `__meta_nomad_service_id`: the service ID of the target. +- `__meta_nomad_service_port`: the service port of the target. +- `__meta_nomad_tags`: the list of tags of the target joined by the tag separator. ## Component health @@ -148,10 +149,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
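+
+### Keep only tagged services
+
+The set of `targets` can be filtered before scraping by feeding it through `discovery.relabel`, as shown in the following sketch.
+The example is hypothetical: it assumes your Nomad services carry a `metrics` tag and that the default `,` tag separator is in use.
+
+```river
+// The Nomad server address defaults to http://localhost:4646.
+discovery.nomad "example" {
+}
+
+// Keep only targets whose joined tag list contains a "metrics" tag
+// ("metrics" is an assumed tag name for this illustration).
+discovery.relabel "metrics_only" {
+  targets = discovery.nomad.example.targets
+
+  rule {
+    source_labels = ["__meta_nomad_tags"]
+    regex         = "(.*,)?metrics(,.*)?"
+    action        = "keep"
+  }
+}
+```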
diff --git a/docs/sources/flow/reference/components/discovery.openstack.md b/docs/sources/flow/reference/components/discovery.openstack.md index 6d269086027d..33cd09855bfc 100644 --- a/docs/sources/flow/reference/components/discovery.openstack.md +++ b/docs/sources/flow/reference/components/discovery.openstack.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.openstack/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.openstack/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.openstack/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.openstack/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.openstack/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.openstack/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.openstack/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.openstack/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.openstack/ description: Learn about discovery.openstack title: discovery.openstack @@ -28,25 +28,25 @@ discovery.openstack "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required -------------------- | ---------- | ---------------------------------------------------------------------- | -------------------- | -------- -`role` | `string` | Role of the discovered targets. | | yes -`region` | `string` | OpenStack region. | | yes -`identity_endpoint` | `string` | Specifies the HTTP endpoint that is required to work with te Identity API of the appropriate version | | no -`username` | `string` | OpenStack username for the Identity V2 and V3 APIs. | | no -`userid` | `string` | OpenStack userid for the Identity V2 and V3 APIs. | | no -`password` | `secret` | Password for the Identity V2 and V3 APIs. | | no -`domain_name` | `string` | OpenStack domain name for the Identity V2 and V3 APIs. | | no -`domain_id` | `string` | OpenStack domain ID for the Identity V2 and V3 APIs. | | no -`project_name` | `string` | OpenStack project name for the Identity V2 and V3 APIs. | | no -`project_id` | `string` | OpenStack project ID for the Identity V2 and V3 APIs. | | no -`application_credential_name` | `string` | OpenStack application credential name for the Identity V2 and V3 APIs. | | no -`application_credential_id` | `string` | OpenStack application credential ID for the Identity V2 and V3 APIs. | | no -`application_credential_secret` | `secret` | OpenStack application credential secret for the Identity V2 and V3 APIs. | | no -`all_tenants` | `bool` | Whether the service discovery should list all instances for all projects. | `false` | no -`refresh_interval` | `duration`| Refresh interval to re-read the instance list. | `60s` | no -`port` | `int` | The port to scrape metrics from. | `80` | no -`availability` | `string` | The availability of the endpoint to connect to. | `public` | no +| Name | Type | Description | Default | Required | +| ------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------- | -------- | -------- | +| `role` | `string` | Role of the discovered targets. | | yes | +| `region` | `string` | OpenStack region. 
| | yes |
+| `identity_endpoint`             | `string`   | Specifies the HTTP endpoint that is required to work with the Identity API of the appropriate version. |          | no       |
+| `username`                      | `string`   | OpenStack username for the Identity V2 and V3 APIs.                                                    |          | no       |
+| `userid`                        | `string`   | OpenStack userid for the Identity V2 and V3 APIs.                                                      |          | no       |
+| `password`                      | `secret`   | Password for the Identity V2 and V3 APIs.                                                              |          | no       |
+| `domain_name`                   | `string`   | OpenStack domain name for the Identity V2 and V3 APIs.                                                 |          | no       |
+| `domain_id`                     | `string`   | OpenStack domain ID for the Identity V2 and V3 APIs.                                                   |          | no       |
+| `project_name`                  | `string`   | OpenStack project name for the Identity V2 and V3 APIs.                                                |          | no       |
+| `project_id`                    | `string`   | OpenStack project ID for the Identity V2 and V3 APIs.                                                  |          | no       |
+| `application_credential_name`   | `string`   | OpenStack application credential name for the Identity V2 and V3 APIs.                                 |          | no       |
+| `application_credential_id`     | `string`   | OpenStack application credential ID for the Identity V2 and V3 APIs.                                   |          | no       |
+| `application_credential_secret` | `secret`   | OpenStack application credential secret for the Identity V2 and V3 APIs.                               |          | no       |
+| `all_tenants`                   | `bool`     | Whether the service discovery should list all instances for all projects.                              | `false`  | no       |
+| `refresh_interval`              | `duration` | Refresh interval to re-read the instance list.                                                         | `60s`    | no       |
+| `port`                          | `int`      | The port to scrape metrics from.                                                                       | `80`     | no       |
+| `availability`                  | `string`   | The availability of the endpoint to connect to.                                                        | `public` | no       |
 
 `role` must be one of `hypervisor` or `instance`.
 
@@ -63,11 +63,12 @@ Name | Type | Description
 
 `availability` must be one of `public`, `admin`, or `internal`.
 
 ## Blocks
+
 The following blocks are supported inside the definition of
 `discovery.openstack`:
 
-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-tls_config | [tls_config][] | TLS configuration for requests to the OpenStack API. | no
+| Hierarchy  | Block          | Description                                           | Required |
+| ---------- | -------------- | ----------------------------------------------------- | -------- |
+| tls_config | [tls_config][] | TLS configuration for requests to the OpenStack API.  | no       |
 
 [tls_config]: #tls_config-block
 
@@ -79,21 +80,21 @@ tls_config | [tls_config][] | TLS configuration for requests to the OpenStack AP
 
 The following fields are exported and can be referenced by other components:
 
-Name | Type | Description
---------- | ------------------- | -----------
-`targets` | `list(map(string))` | The set of targets discovered from the OpenStack API.
+| Name      | Type                | Description                                            |
+| --------- | ------------------- | ------------------------------------------------------ |
+| `targets` | `list(map(string))` | The set of targets discovered from the OpenStack API.  |
 
 #### `hypervisor`
 
 The `hypervisor` role discovers one target per Nova hypervisor node. The
 target address defaults to the `host_ip` attribute of the hypervisor.
 
-* `__meta_openstack_hypervisor_host_ip`: the hypervisor node's IP address.
-* `__meta_openstack_hypervisor_hostname`: the hypervisor node's name.
-* `__meta_openstack_hypervisor_id`: the hypervisor node's ID.
-* `__meta_openstack_hypervisor_state`: the hypervisor node's state.
-* `__meta_openstack_hypervisor_status`: the hypervisor node's status.
-* `__meta_openstack_hypervisor_type`: the hypervisor node's type.
+- `__meta_openstack_hypervisor_host_ip`: the hypervisor node's IP address.
+- `__meta_openstack_hypervisor_hostname`: the hypervisor node's name.
+- `__meta_openstack_hypervisor_id`: the hypervisor node's ID. +- `__meta_openstack_hypervisor_state`: the hypervisor node's state. +- `__meta_openstack_hypervisor_status`: the hypervisor node's status. +- `__meta_openstack_hypervisor_type`: the hypervisor node's type. #### `instance` @@ -101,17 +102,17 @@ The `instance` role discovers one target per network interface of Nova instance. The target address defaults to the private IP address of the network interface. -* `__meta_openstack_address_pool`: the pool of the private IP. -* `__meta_openstack_instance_flavor`: the flavor of the OpenStack instance. -* `__meta_openstack_instance_id`: the OpenStack instance ID. -* `__meta_openstack_instance_image`: the ID of the image the OpenStack instance is using. -* `__meta_openstack_instance_name`: the OpenStack instance name. -* `__meta_openstack_instance_status`: the status of the OpenStack instance. -* `__meta_openstack_private_ip`: the private IP of the OpenStack instance. -* `__meta_openstack_project_id`: the project (tenant) owning this instance. -* `__meta_openstack_public_ip`: the public IP of the OpenStack instance. -* `__meta_openstack_tag_`: each tag value of the instance. -* `__meta_openstack_user_id`: the user account owning the tenant. +- `__meta_openstack_address_pool`: the pool of the private IP. +- `__meta_openstack_instance_flavor`: the flavor of the OpenStack instance. +- `__meta_openstack_instance_id`: the OpenStack instance ID. +- `__meta_openstack_instance_image`: the ID of the image the OpenStack instance is using. +- `__meta_openstack_instance_name`: the OpenStack instance name. +- `__meta_openstack_instance_status`: the status of the OpenStack instance. +- `__meta_openstack_private_ip`: the private IP of the OpenStack instance. +- `__meta_openstack_project_id`: the project (tenant) owning this instance. +- `__meta_openstack_public_ip`: the public IP of the OpenStack instance. +- `__meta_openstack_tag_`: each tag value of the instance. +- `__meta_openstack_user_id`: the user account owning the tenant. ## Component health @@ -151,12 +152,14 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `OPENSTACK_ROLE`: Your OpenStack role. - - `OPENSTACK_REGION`: Your OpenStack region. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `OPENSTACK_ROLE`: Your OpenStack role. +- `OPENSTACK_REGION`: Your OpenStack region. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
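+
+### Scrape instances on their public IP address
+
+Targets default to the private IP address of each network interface.
+The sketch below is a hypothetical variation that rewrites `__address__` from `__meta_openstack_public_ip`; the region name and the `8080` scrape port are placeholder assumptions.
+
+```river
+// "RegionOne" is a placeholder region name.
+discovery.openstack "example" {
+  role   = "instance"
+  region = "RegionOne"
+}
+
+// Rewrite the scrape address to the instance's public IP on an assumed
+// port 8080. Targets without a public IP keep their default address,
+// because the rule only applies when the regex matches.
+discovery.relabel "public_ip" {
+  targets = discovery.openstack.example.targets
+
+  rule {
+    source_labels = ["__meta_openstack_public_ip"]
+    regex         = "(.+)"
+    replacement   = "$1:8080"
+    target_label  = "__address__"
+  }
+}
+```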
diff --git a/docs/sources/flow/reference/components/discovery.ovhcloud.md b/docs/sources/flow/reference/components/discovery.ovhcloud.md index 2733256ee1ef..43847f82456f 100644 --- a/docs/sources/flow/reference/components/discovery.ovhcloud.md +++ b/docs/sources/flow/reference/components/discovery.ovhcloud.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.ovhcloud/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.ovhcloud/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.ovhcloud/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.ovhcloud/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.ovhcloud/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.ovhcloud/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.ovhcloud/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.ovhcloud/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.ovhcloud/ description: Learn about discovery.ovhcloud title: discovery.ovhcloud @@ -11,10 +11,10 @@ title: discovery.ovhcloud # discovery.ovhcloud -`discovery.ovhcloud` discovers scrape targets from OVHcloud's [dedicated servers][] and [VPS][] using their [API][]. -{{< param "PRODUCT_ROOT_NAME" >}} will periodically check the REST endpoint and create a target for every discovered server. -The public IPv4 address will be used by default - if there's none, the IPv6 address will be used. -This may be changed via relabeling with `discovery.relabel`. +`discovery.ovhcloud` discovers scrape targets from OVHcloud's [dedicated servers][] and [VPS][] using their [API][]. +{{< param "PRODUCT_ROOT_NAME" >}} will periodically check the REST endpoint and create a target for every discovered server. +The public IPv4 address will be used by default - if there's none, the IPv6 address will be used. +This may be changed via relabeling with `discovery.relabel`. For OVHcloud's [public cloud][] instances you can use `discovery.openstack`. [API]: https://api.ovh.com/ @@ -37,14 +37,14 @@ discovery.ovhcloud "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------- | -------------- | -------------------------------------------------------------- | ------------- | -------- -application_key | `string` | [API][] application key. | | yes -application_secret | `secret` | [API][] application secret. | | yes -consumer_key | `secret` | [API][] consumer key. | | yes -endpoint | `string` | [API][] endpoint. | "ovh-eu" | no -refresh_interval | `duration` | Refresh interval to re-read the resources list. | "60s" | no -service | `string` | Service of the targets to retrieve. | | yes +| Name | Type | Description | Default | Required | +| ------------------ | ---------- | ----------------------------------------------- | -------- | -------- | +| application_key | `string` | [API][] application key. | | yes | +| application_secret | `secret` | [API][] application secret. | | yes | +| consumer_key | `secret` | [API][] consumer key. | | yes | +| endpoint | `string` | [API][] endpoint. | "ovh-eu" | no | +| refresh_interval | `duration` | Refresh interval to re-read the resources list. | "60s" | no | +| service | `string` | Service of the targets to retrieve. 
| | yes | `endpoint` must be one of the [supported API endpoints][supported-apis]. @@ -56,46 +56,48 @@ service | `string` | Service of the targets to retrieve. The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the OVHcloud API. +| Name | Type | Description | +| --------- | ------------------- | ---------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the OVHcloud API. | Multiple meta labels are available on `targets` and can be used by the `discovery.relabel` component. [VPS][] meta labels: -* `__meta_ovhcloud_vps_cluster`: the cluster of the server. -* `__meta_ovhcloud_vps_datacenter`: the datacenter of the server. -* `__meta_ovhcloud_vps_disk`: the disk of the server. -* `__meta_ovhcloud_vps_display_name`: the display name of the server. -* `__meta_ovhcloud_vps_ipv4`: the IPv4 of the server. -* `__meta_ovhcloud_vps_ipv6`: the IPv6 of the server. -* `__meta_ovhcloud_vps_keymap`: the KVM keyboard layout of the server. -* `__meta_ovhcloud_vps_maximum_additional_ip`: the maximum additional IPs of the server. -* `__meta_ovhcloud_vps_memory_limit`: the memory limit of the server. -* `__meta_ovhcloud_vps_memory`: the memory of the server. -* `__meta_ovhcloud_vps_monitoring_ip_blocks`: the monitoring IP blocks of the server. -* `__meta_ovhcloud_vps_name`: the name of the server. -* `__meta_ovhcloud_vps_netboot_mode`: the netboot mode of the server. -* `__meta_ovhcloud_vps_offer_type`: the offer type of the server. -* `__meta_ovhcloud_vps_offer`: the offer of the server. -* `__meta_ovhcloud_vps_state`: the state of the server. -* `__meta_ovhcloud_vps_vcore`: the number of virtual cores of the server. -* `__meta_ovhcloud_vps_version`: the version of the server. -* `__meta_ovhcloud_vps_zone`: the zone of the server. + +- `__meta_ovhcloud_vps_cluster`: the cluster of the server. +- `__meta_ovhcloud_vps_datacenter`: the datacenter of the server. +- `__meta_ovhcloud_vps_disk`: the disk of the server. +- `__meta_ovhcloud_vps_display_name`: the display name of the server. +- `__meta_ovhcloud_vps_ipv4`: the IPv4 of the server. +- `__meta_ovhcloud_vps_ipv6`: the IPv6 of the server. +- `__meta_ovhcloud_vps_keymap`: the KVM keyboard layout of the server. +- `__meta_ovhcloud_vps_maximum_additional_ip`: the maximum additional IPs of the server. +- `__meta_ovhcloud_vps_memory_limit`: the memory limit of the server. +- `__meta_ovhcloud_vps_memory`: the memory of the server. +- `__meta_ovhcloud_vps_monitoring_ip_blocks`: the monitoring IP blocks of the server. +- `__meta_ovhcloud_vps_name`: the name of the server. +- `__meta_ovhcloud_vps_netboot_mode`: the netboot mode of the server. +- `__meta_ovhcloud_vps_offer_type`: the offer type of the server. +- `__meta_ovhcloud_vps_offer`: the offer of the server. +- `__meta_ovhcloud_vps_state`: the state of the server. +- `__meta_ovhcloud_vps_vcore`: the number of virtual cores of the server. +- `__meta_ovhcloud_vps_version`: the version of the server. +- `__meta_ovhcloud_vps_zone`: the zone of the server. [Dedicated servers][] meta labels: -* `__meta_ovhcloud_dedicated_server_commercial_range`: the commercial range of the server. -* `__meta_ovhcloud_dedicated_server_datacenter`: the datacenter of the server. -* `__meta_ovhcloud_dedicated_server_ipv4`: the IPv4 of the server. 
-* `__meta_ovhcloud_dedicated_server_ipv6`: the IPv6 of the server. -* `__meta_ovhcloud_dedicated_server_link_speed`: the link speed of the server. -* `__meta_ovhcloud_dedicated_server_name`: the name of the server. -* `__meta_ovhcloud_dedicated_server_os`: the operating system of the server. -* `__meta_ovhcloud_dedicated_server_rack`: the rack of the server. -* `__meta_ovhcloud_dedicated_server_reverse`: the reverse DNS name of the server. -* `__meta_ovhcloud_dedicated_server_server_id`: the ID of the server. -* `__meta_ovhcloud_dedicated_server_state`: the state of the server. -* `__meta_ovhcloud_dedicated_server_support_level`: the support level of the server. + +- `__meta_ovhcloud_dedicated_server_commercial_range`: the commercial range of the server. +- `__meta_ovhcloud_dedicated_server_datacenter`: the datacenter of the server. +- `__meta_ovhcloud_dedicated_server_ipv4`: the IPv4 of the server. +- `__meta_ovhcloud_dedicated_server_ipv6`: the IPv6 of the server. +- `__meta_ovhcloud_dedicated_server_link_speed`: the link speed of the server. +- `__meta_ovhcloud_dedicated_server_name`: the name of the server. +- `__meta_ovhcloud_dedicated_server_os`: the operating system of the server. +- `__meta_ovhcloud_dedicated_server_rack`: the rack of the server. +- `__meta_ovhcloud_dedicated_server_reverse`: the reverse DNS name of the server. +- `__meta_ovhcloud_dedicated_server_server_id`: the ID of the server. +- `__meta_ovhcloud_dedicated_server_state`: the state of the server. +- `__meta_ovhcloud_dedicated_server_support_level`: the support level of the server. ## Component health @@ -138,14 +140,14 @@ prometheus.remote_write "demo" { ``` Replace the following: - - `APPLICATION_KEY`: The OVHcloud [API][] application key. - - `APPLICATION_SECRET`: The OVHcloud [API][] application secret. - - `CONSUMER_KEY`: The OVHcloud [API][] consumer key. - - `SERVICE`: The OVHcloud service of the targets to retrieve. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. +- `APPLICATION_KEY`: The OVHcloud [API][] application key. +- `APPLICATION_SECRET`: The OVHcloud [API][] application secret. +- `CONSUMER_KEY`: The OVHcloud [API][] consumer key. +- `SERVICE`: The OVHcloud service of the targets to retrieve. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
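+
+As a hedged sketch of how these meta labels can be used (the component labels `example` and `add_datacenter` are placeholders), a `discovery.relabel` component can copy a meta label into a plain target label before scraping:
+
+```river
+discovery.relabel "add_datacenter" {
+  // Assumes a discovery.ovhcloud component labeled "example" that discovers dedicated servers.
+  targets = discovery.ovhcloud.example.targets
+
+  rule {
+    // The default relabel action, "replace", copies the source label value.
+    source_labels = ["__meta_ovhcloud_dedicated_server_datacenter"]
+    target_label  = "datacenter"
+  }
+}
+```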
diff --git a/docs/sources/flow/reference/components/discovery.process.md b/docs/sources/flow/reference/components/discovery.process.md index 6749abe65a51..bb5dcfecf54e 100644 --- a/docs/sources/flow/reference/components/discovery.process.md +++ b/docs/sources/flow/reference/components/discovery.process.md @@ -31,10 +31,10 @@ discovery.process "LABEL" { The following arguments are supported: -| Name | Type | Description | Default | Required | -|--------------------|---------------------|-----------------------------------------------------------------------------------------|---------|----------| +| Name | Type | Description | Default | Required | +| ------------------ | ------------------- | ---------------------------------------------------------------------------------------- | ------- | -------- | | `join` | `list(map(string))` | Join external targets to discovered processes targets based on `__container_id__` label. | | no | -| `refresh_interval` | `duration` | How often to sync targets. | "60s" | no | +| `refresh_interval` | `duration` | How often to sync targets. | "60s" | no | ### Targets joining @@ -97,8 +97,8 @@ The resulting targets are: The following blocks are supported inside the definition of `discovery.process`: -| Hierarchy | Block | Description | Required | -|-----------------|---------------------|-----------------------------------------------|----------| +| Hierarchy | Block | Description | Required | +| --------------- | ------------------- | ---------------------------------------------- | -------- | | discover_config | [discover_config][] | Configures which process metadata to discover. | no | [discover_config]: #discover_config-block @@ -109,13 +109,13 @@ The `discover_config` block describes which process metadata to discover. The following arguments are supported: -| Name | Type | Description | Default | Required | -|----------------|--------|-----------------------------------------------------------------|---------|----------| -| `exe` | `bool` | A flag to enable discovering `__meta_process_exe` label. | true | no | +| Name | Type | Description | Default | Required | +| -------------- | ------ | ---------------------------------------------------------------- | ------- | -------- | +| `exe` | `bool` | A flag to enable discovering `__meta_process_exe` label. | true | no | | `cwd` | `bool` | A flag to enable discovering `__meta_process_cwd` label. | true | no | | `commandline` | `bool` | A flag to enable discovering `__meta_process_commandline` label. | true | no | | `uid` | `bool` | A flag to enable discovering `__meta_process_uid`: label. | true | no | -| `username` | `bool` | A flag to enable discovering `__meta_process_username`: label. | true | no | +| `username` | `bool` | A flag to enable discovering `__meta_process_username`: label. | true | no | | `container_id` | `bool` | A flag to enable discovering `__container_id__` label. | true | no | ## Exported fields @@ -123,18 +123,18 @@ The following arguments are supported: The following fields are exported and can be referenced by other components: | Name | Type | Description | -|-----------|---------------------|--------------------------------------------------------| +| --------- | ------------------- | ------------------------------------------------------ | | `targets` | `list(map(string))` | The set of processes discovered on the local Linux OS. | Each target includes the following labels: -* `__process_pid__`: The process PID. -* `__meta_process_exe`: The process executable path. 
Taken from `/proc//exe`.
-* `__meta_process_cwd`: The process current working directory. Taken from `/proc//cwd`.
-* `__meta_process_commandline`: The process command line. Taken from `/proc//cmdline`.
-* `__meta_process_uid`: The process UID. Taken from `/proc//status`.
-* `__meta_process_username`: The process username. Taken from `__meta_process_uid` and `os/user/LookupID`.
-* `__container_id__`: The container ID. Taken from `/proc//cgroup`. If the process is not running in a container,
+- `__process_pid__`: The process PID.
+- `__meta_process_exe`: The process executable path. Taken from `/proc/<pid>/exe`.
+- `__meta_process_cwd`: The process current working directory. Taken from `/proc/<pid>/cwd`.
+- `__meta_process_commandline`: The process command line. Taken from `/proc/<pid>/cmdline`.
+- `__meta_process_uid`: The process UID. Taken from `/proc/<pid>/status`.
+- `__meta_process_username`: The process username. Taken from `__meta_process_uid` and `os/user/LookupID`.
+- `__container_id__`: The container ID. Taken from `/proc/<pid>/cgroup`. If the process is not running in a container,
  this label is not set.

## Component health

@@ -195,6 +195,7 @@ discovery.process "all" {
}
```

+
## Compatible components

@@ -212,4 +213,4 @@ Connecting some components may not be sensible or components may require further
Refer to the linked documentation for more details.
{{< /admonition >}}

-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/discovery.puppetdb.md b/docs/sources/flow/reference/components/discovery.puppetdb.md
index 01e0ac926971..ec6a7b94a053 100644
--- a/docs/sources/flow/reference/components/discovery.puppetdb.md
+++ b/docs/sources/flow/reference/components/discovery.puppetdb.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/discovery.puppetdb/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.puppetdb/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.puppetdb/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.puppetdb/
+ - /docs/grafana-cloud/agent/flow/reference/components/discovery.puppetdb/
+ - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.puppetdb/
+ - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.puppetdb/
+ - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.puppetdb/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.puppetdb/
description: Learn about discovery.puppetdb
title: discovery.puppetdb
@@ -31,28 +31,29 @@ discovery.puppetdb "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
------------------------- | ------------------- | ------------------------------------------------------------- | ------- | --------
-`url` | `string` | The URL of the PuppetDB root query endpoint. | | yes
-`query` | `string` | Puppet Query Language (PQL) query. Only resources are supported. | | yes
-`include_parameters` | `bool` | Whether to include the parameters as meta labels. Due to the differences between parameter types and Prometheus labels, some parameters might not be rendered. The format of the parameters might also change in future releases. Make sure that you don't have secrets exposed as parameters if you enable this. | `false` | no
-`port` | `int` | The port to scrape metrics from. 
| `80` | no -`refresh_interval` | `duration` | Frequency to refresh targets. | `"30s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `url` | `string` | The URL of the PuppetDB root query endpoint. | | yes | +| `query` | `string` | Puppet Query Language (PQL) query. Only resources are supported. | | yes | +| `include_parameters` | `bool` | Whether to include the parameters as meta labels. Due to the differences between parameter types and Prometheus labels, some parameters might not be rendered. The format of the parameters might also change in future releases. Make sure that you don't have secrets exposed as parameters if you enable this. | `false` | no | +| `port` | `int` | The port to scrape metrics from. | `80` | no | +| `refresh_interval` | `duration` | Frequency to refresh targets. | `"30s"` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. 
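+
+For example, a minimal sketch that authenticates with the `basic_auth` block (the URL, query, and credentials below are placeholders):
+
+```river
+discovery.puppetdb "example" {
+  url   = "https://puppetdb.example.com"
+  query = "resources { type = \"Package\" and title = \"httpd\" }"
+
+  basic_auth {
+    username = "USERNAME"
+    password = "PASSWORD"
+  }
+}
+```
+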
[arguments]: #arguments

@@ -63,13 +64,13 @@ Name | Type | Description
The following blocks are supported inside the definition of `discovery.puppetdb`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+| Hierarchy | Block | Description | Required |
+| ------------------- | ----------------- | -------------------------------------------------------- | -------- |
+| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no |
+| authorization | [authorization][] | Configure generic authorization to the endpoint. | no |
+| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no |
+| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |
+| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |

The `>` symbol indicates deeper levels of nesting. For example,
`oauth2 > tls_config` refers to a `tls_config` block defined inside
an `oauth2` block.

@@ -100,22 +101,22 @@ an `oauth2` block.

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`targets` | `list(map(string))` | The set of targets discovered from puppetdb.
+| Name | Type | Description |
+| --------- | ------------------- | -------------------------------------------- |
+| `targets` | `list(map(string))` | The set of targets discovered from puppetdb. |

Each target includes the following labels:

-* `__meta_puppetdb_query`: the Puppet Query Language (PQL) query.
-* `__meta_puppetdb_certname`: the name of the node associated with the resourcet.
-* `__meta_puppetdb_resource`: a SHA-1 hash of the resource’s type, title, and parameters, for identification.
-* `__meta_puppetdb_type`: the resource type.
-* `__meta_puppetdb_title`: the resource title.
-* `__meta_puppetdb_exported`: whether the resource is exported ("true" or "false").
-* `__meta_puppetdb_tags`: comma separated list of resource tags.
-* `__meta_puppetdb_file`: the manifest file in which the resource was declared.
-* `__meta_puppetdb_environment`: the environment of the node associated with the resource.
-* `__meta_puppetdb_parameter_`: the parameters of the resource.
+- `__meta_puppetdb_query`: the Puppet Query Language (PQL) query.
+- `__meta_puppetdb_certname`: the name of the node associated with the resource.
+- `__meta_puppetdb_resource`: a SHA-1 hash of the resource’s type, title, and parameters, for identification.
+- `__meta_puppetdb_type`: the resource type.
+- `__meta_puppetdb_title`: the resource title.
+- `__meta_puppetdb_exported`: whether the resource is exported ("true" or "false").
+- `__meta_puppetdb_tags`: comma-separated list of resource tags.
+- `__meta_puppetdb_file`: the manifest file in which the resource was declared.
+- `__meta_puppetdb_environment`: the environment of the node associated with the resource.
+- `__meta_puppetdb_parameter_<parametername>`: the parameters of the resource.
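+
+As a short sketch (component labels are placeholders), one of these meta labels can drive filtering with `discovery.relabel`, for example keeping only resources from the `production` environment:
+
+```river
+discovery.relabel "production_only" {
+  // Assumes a discovery.puppetdb component labeled "example".
+  targets = discovery.puppetdb.example.targets
+
+  rule {
+    source_labels = ["__meta_puppetdb_environment"]
+    regex         = "production"
+    action        = "keep"
+  }
+}
+```
+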
## Component health @@ -158,10 +159,12 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.relabel.md b/docs/sources/flow/reference/components/discovery.relabel.md index cd928ffb5a0a..b153ed6139ef 100644 --- a/docs/sources/flow/reference/components/discovery.relabel.md +++ b/docs/sources/flow/reference/components/discovery.relabel.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.relabel/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.relabel/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.relabel/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.relabel/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.relabel/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.relabel/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.relabel/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.relabel/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.relabel/ description: Learn about discovery.relabel title: discovery.relabel @@ -55,18 +55,18 @@ discovery.relabel "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`targets` | `list(map(string))` | Targets to relabel | | yes +| Name | Type | Description | Default | Required | +| --------- | ------------------- | ------------------ | ------- | -------- | +| `targets` | `list(map(string))` | Targets to relabel | | yes | ## Blocks The following blocks are supported inside the definition of `discovery.relabel`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -rule | [rule][] | Relabeling rules to apply to targets. | no +| Hierarchy | Block | Description | Required | +| --------- | -------- | ------------------------------------- | -------- | +| rule | [rule][] | Relabeling rules to apply to targets. | no | [rule]: #rule-block @@ -78,10 +78,10 @@ rule | [rule][] | Relabeling rules to apply to targets. | no The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`output` | `list(map(string))` | The set of targets after applying relabeling. -`rules` | `RelabelRules` | The currently configured relabeling rules. +| Name | Type | Description | +| -------- | ------------------- | --------------------------------------------- | +| `output` | `list(map(string))` | The set of targets after applying relabeling. | +| `rules` | `RelabelRules` | The currently configured relabeling rules. 
| ## Component health @@ -122,7 +122,6 @@ discovery.relabel "keep_backend_only" { } ``` - ## Compatible components diff --git a/docs/sources/flow/reference/components/discovery.scaleway.md b/docs/sources/flow/reference/components/discovery.scaleway.md index 44c181011885..21c0e18891f1 100644 --- a/docs/sources/flow/reference/components/discovery.scaleway.md +++ b/docs/sources/flow/reference/components/discovery.scaleway.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.scaleway/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.scaleway/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.scaleway/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.scaleway/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.scaleway/ description: Learn about discovery.scaleway title: discovery.scaleway @@ -30,31 +30,31 @@ discovery.scaleway "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`project_id` | `string` | Scaleway project ID of targets. | | yes -`role` | `string` | Role of targets to retrieve. | | yes -`api_url` | `string` | Scaleway API URL. | `"https://api.scaleway.com"` | no -`zone` | `string` | Availability zone of targets. | `"fr-par-1"` | no -`access_key` | `string` | Access key for the Scaleway API. | | yes -`secret_key` | `secret` | Secret key for the Scaleway API. | | conditional -`secret_key_file` | `string` | Path to file containing secret key for the Scaleway API. | | conditional -`name_filter` | `string` | Name filter to apply against the listing request. | | no -`tags_filter` | `list(string)` | List of tags to search for. | | no -`refresh_interval` | `duration` | Frequency to rediscover targets. | `"60s"` | no -`port` | `number` | Default port on servers to associate with generated targets. | `80` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ---------------------------- | ----------- | +| `project_id` | `string` | Scaleway project ID of targets. | | yes | +| `role` | `string` | Role of targets to retrieve. | | yes | +| `api_url` | `string` | Scaleway API URL. | `"https://api.scaleway.com"` | no | +| `zone` | `string` | Availability zone of targets. | `"fr-par-1"` | no | +| `access_key` | `string` | Access key for the Scaleway API. | | yes | +| `secret_key` | `secret` | Secret key for the Scaleway API. | | conditional | +| `secret_key_file` | `string` | Path to file containing secret key for the Scaleway API. | | conditional | +| `name_filter` | `string` | Name filter to apply against the listing request. 
| | no |
+| `tags_filter` | `list(string)` | List of tags to search for. | | no |
+| `refresh_interval` | `duration` | Frequency to rediscover targets. | `"60s"` | no |
+| `port` | `number` | Default port on servers to associate with generated targets. | `80` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no |
+| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |

The `role` argument determines what type of Scaleway machines to discover. It
must be set to one of the following:

-* `"baremetal"`: Discover [baremetal][] Scaleway machines.
-* `"instance"`: Discover virtual Scaleway [instances][instance].
+- `"baremetal"`: Discover [baremetal][] Scaleway machines.
+- `"instance"`: Discover virtual Scaleway [instances][instance].

The `name_filter` and `tags_filter` arguments can be used to filter the set of
discovered servers. `name_filter` returns machines matching a specific name,
while `tags_filter` returns machines that contain _all_ the tags listed in the
@@ -68,9 +68,9 @@

The following blocks are supported inside the definition of `discovery.scaleway`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+| Hierarchy | Block | Description | Required |
+| ---------- | -------------- | ------------------------------------------------------ | -------- |
+| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |

The `>` symbol indicates deeper levels of nesting. For example,
`oauth2 > tls_config` refers to a `tls_config` block defined inside
an `oauth2` block.

@@ -86,48 +86,48 @@ an `oauth2` block.

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`targets` | `list(map(string))` | The set of targets discovered from the Consul catalog API.
+| Name | Type | Description |
+| --------- | ------------------- | ---------------------------------------------------------- |
+| `targets` | `list(map(string))` | The set of targets discovered from the Scaleway API. |

When `role` is `baremetal`, discovered targets include the following labels:

-* `__meta_scaleway_baremetal_id`: ID of the server.
-* `__meta_scaleway_baremetal_public_ipv4`: Public IPv4 address of the server.
-* `__meta_scaleway_baremetal_public_ipv6`: Public IPv6 address of the server.
-* `__meta_scaleway_baremetal_name`: Name of the server.
-* `__meta_scaleway_baremetal_os_name`: Operating system name of the server.
-* `__meta_scaleway_baremetal_os_version`: Operation system version of the server.
-* `__meta_scaleway_baremetal_project_id`: Project ID the server belongs to.
-* `__meta_scaleway_baremetal_status`: Current status of the server.
-* `__meta_scaleway_baremetal_tags`: The list of tags associated with the server concatenated with a `,`.
-* `__meta_scaleway_baremetal_type`: Commercial type of the server.
-* `__meta_scaleway_baremetal_zone`: Availability zone of the server.
+- `__meta_scaleway_baremetal_id`: ID of the server.
+- `__meta_scaleway_baremetal_public_ipv4`: Public IPv4 address of the server.
+- `__meta_scaleway_baremetal_public_ipv6`: Public IPv6 address of the server.
+- `__meta_scaleway_baremetal_name`: Name of the server.
+- `__meta_scaleway_baremetal_os_name`: Operating system name of the server.
+- `__meta_scaleway_baremetal_os_version`: Operating system version of the server.
+- `__meta_scaleway_baremetal_project_id`: Project ID the server belongs to.
+- `__meta_scaleway_baremetal_status`: Current status of the server.
+- `__meta_scaleway_baremetal_tags`: The list of tags associated with the server concatenated with a `,`.
+- `__meta_scaleway_baremetal_type`: Commercial type of the server.
+- `__meta_scaleway_baremetal_zone`: Availability zone of the server.

When `role` is `instance`, discovered targets include the following labels:

-* `__meta_scaleway_instance_boot_type`: Boot type of the server.
-* `__meta_scaleway_instance_hostname`: Hostname of the server.
-* `__meta_scaleway_instance_id`: ID of the server.
-* `__meta_scaleway_instance_image_arch`: Architecture of the image the server is running.
-* `__meta_scaleway_instance_image_id`: ID of the image the server is running.
-* `__meta_scaleway_instance_image_name`: Name of the image the server is running.
-* `__meta_scaleway_instance_location_cluster_id`: ID of the cluster for the server's location.
-* `__meta_scaleway_instance_location_hypervisor_id`: Hypervisor ID for the server's location.
-* `__meta_scaleway_instance_location_node_id`: Node ID for the server's location.
-* `__meta_scaleway_instance_name`: Name of the server.
-* `__meta_scaleway_instance_organization_id`: Organization ID that the server belongs to.
-* `__meta_scaleway_instance_private_ipv4`: Private IPv4 address of the server.
-* `__meta_scaleway_instance_project_id`: Project ID the server belongs to.
-* `__meta_scaleway_instance_public_ipv4`: Public IPv4 address of the server.
-* `__meta_scaleway_instance_public_ipv6`: Public IPv6 address of the server.
-* `__meta_scaleway_instance_region`: Region of the server.
-* `__meta_scaleway_instance_security_group_id`: ID of the security group the server is assigned to.
-* `__meta_scaleway_instance_security_group_name`: Name of the security group the server is assigned to.
-* `__meta_scaleway_instance_status`: Current status of the server.
-* `__meta_scaleway_instance_tags`: The list of tags associated with the server concatenated with a `,`.
-* `__meta_scaleway_instance_type`: Commercial type of the server.
-* `__meta_scaleway_instance_zone`: Availability zone of the server.
+- `__meta_scaleway_instance_boot_type`: Boot type of the server.
+- `__meta_scaleway_instance_hostname`: Hostname of the server.
+- `__meta_scaleway_instance_id`: ID of the server.
+- `__meta_scaleway_instance_image_arch`: Architecture of the image the server is running.
+- `__meta_scaleway_instance_image_id`: ID of the image the server is running.
+- `__meta_scaleway_instance_image_name`: Name of the image the server is running.
+- `__meta_scaleway_instance_location_cluster_id`: ID of the cluster for the server's location.
+- `__meta_scaleway_instance_location_hypervisor_id`: Hypervisor ID for the server's location.
+- `__meta_scaleway_instance_location_node_id`: Node ID for the server's location.
+- `__meta_scaleway_instance_name`: Name of the server.
+- `__meta_scaleway_instance_organization_id`: Organization ID that the server belongs to.
+- `__meta_scaleway_instance_private_ipv4`: Private IPv4 address of the server. +- `__meta_scaleway_instance_project_id`: Project ID the server belongs to. +- `__meta_scaleway_instance_public_ipv4`: Public IPv4 address of the server. +- `__meta_scaleway_instance_public_ipv6`: Public IPv6 address of the server. +- `__meta_scaleway_instance_region`: Region of the server. +- `__meta_scaleway_instance_security_group_id`: ID of the security group the server is assigned to. +- `__meta_scaleway_instance_security_group_name`: Name of the security group the server is assigned to. +- `__meta_scaleway_instance_status`: Current status of the server. +- `__meta_scaleway_instance_tags`: The list of tags associated with the server concatenated with a `,`. +- `__meta_scaleway_instance_type`: Commercial type of the server. +- `__meta_scaleway_instance_zone`: Availability zone of the server. ## Component health @@ -172,13 +172,13 @@ prometheus.remote_write "demo" { Replace the following: -* `SCALEWAY_PROJECT_ID`: The project ID of your Scaleway machines. -* `SCALEWAY_PROJECT_ROLE`: Set to `baremetal` to discover [baremetal][] machines or `instance` to discover [virtual instances][instance]. -* `SCALEWAY_ACCESS_KEY`: Your Scaleway API access key. -* `SCALEWAY_SECRET_KEY`: Your Scaleway API secret key. -* `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -* `USERNAME`: The username to use for authentication to the remote_write API. -* `PASSWORD`: The password to use for authentication to the remote_write API. +- `SCALEWAY_PROJECT_ID`: The project ID of your Scaleway machines. +- `SCALEWAY_PROJECT_ROLE`: Set to `baremetal` to discover [baremetal][] machines or `instance` to discover [virtual instances][instance]. +- `SCALEWAY_ACCESS_KEY`: Your Scaleway API access key. +- `SCALEWAY_SECRET_KEY`: Your Scaleway API secret key. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.serverset.md b/docs/sources/flow/reference/components/discovery.serverset.md index bf45a1d79a19..8d97e6e6d306 100644 --- a/docs/sources/flow/reference/components/discovery.serverset.md +++ b/docs/sources/flow/reference/components/discovery.serverset.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.serverset/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.serverset/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.serverset/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.serverset/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.serverset/ description: Learn about discovery.serverset title: discovery.serverset @@ -32,27 +32,28 @@ Serverset data stored in Zookeeper must be in JSON format. The Thrift format is The following arguments are supported: | Name | Type | Description | Default | Required | -|-----------|----------------|--------------------------------------------------|---------|----------| -| `servers` | `list(string)` | The Zookeeper servers to connect to. 
| | yes |
+| --------- | -------------- | ------------------------------------------------ | ------- | -------- |
+| `servers` | `list(string)` | The Zookeeper servers to connect to. | | yes |
| `paths` | `list(string)` | The Zookeeper paths to discover Serversets from. | | yes |
-| `timeout` | `duration` | The Zookeeper session timeout | `10s` | no |
+| `timeout` | `duration` | The Zookeeper session timeout. | `10s` | no |

## Exported fields

The following fields are exported and can be referenced by other components:

-Name | Type | Description
---------- | ------------------- | -----------
-`targets` | `list(map(string))` | The set of targets discovered.
+| Name | Type | Description |
+| --------- | ------------------- | ------------------------------ |
+| `targets` | `list(map(string))` | The set of targets discovered. |

The following metadata labels are available on targets during relabeling:

-* `__meta_serverset_path`: the full path to the serverset member node in Zookeeper
-* `__meta_serverset_endpoint_host`: the host of the default endpoint
-* `__meta_serverset_endpoint_port`: the port of the default endpoint
-* `__meta_serverset_endpoint_host_`: the host of the given endpoint
-* `__meta_serverset_endpoint_port_`: the port of the given endpoint
-* `__meta_serverset_shard`: the shard number of the member
-* `__meta_serverset_status`: the status of the member
+
+- `__meta_serverset_path`: the full path to the serverset member node in Zookeeper
+- `__meta_serverset_endpoint_host`: the host of the default endpoint
+- `__meta_serverset_endpoint_port`: the port of the default endpoint
+- `__meta_serverset_endpoint_host_<endpoint>`: the host of the given endpoint
+- `__meta_serverset_endpoint_port_<endpoint>`: the port of the given endpoint
+- `__meta_serverset_shard`: the shard number of the member
+- `__meta_serverset_status`: the status of the member

## Component health

diff --git a/docs/sources/flow/reference/components/discovery.triton.md b/docs/sources/flow/reference/components/discovery.triton.md
index d9e3ac6a2323..8cc21dfb7487 100644
--- a/docs/sources/flow/reference/components/discovery.triton.md
+++ b/docs/sources/flow/reference/components/discovery.triton.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/discovery.triton/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.triton/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.triton/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.triton/
+ - /docs/grafana-cloud/agent/flow/reference/components/discovery.triton/
+ - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.triton/
+ - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.triton/
+ - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.triton/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.triton/
description: Learn about discovery.triton
title: discovery.triton
@@ -29,31 +29,33 @@ discovery.triton "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
------------------- | -------------- | --------------------------------------------------- | ------------- | --------
-`account` | `string` | The account to use for discovering new targets. | | yes
-`role` | `string` | The type of targets to discover. 
| `"container"` | no -`dns_suffix` | `string` | The DNS suffix that is applied to the target. | | yes -`endpoint` | `string` | The Triton discovery endpoint. | | yes -`groups` | `list(string)` | A list of groups to retrieve targets from. | | no -`port` | `int` | The port to use for discovery and metrics scraping. | `9163` | no -`refresh_interval` | `duration` | The refresh interval for the list of targets. | `60s` | no -`version` | `int` | The Triton discovery API version. | `1` | no +| Name | Type | Description | Default | Required | +| ------------------ | -------------- | --------------------------------------------------- | ------------- | -------- | +| `account` | `string` | The account to use for discovering new targets. | | yes | +| `role` | `string` | The type of targets to discover. | `"container"` | no | +| `dns_suffix` | `string` | The DNS suffix that is applied to the target. | | yes | +| `endpoint` | `string` | The Triton discovery endpoint. | | yes | +| `groups` | `list(string)` | A list of groups to retrieve targets from. | | no | +| `port` | `int` | The port to use for discovery and metrics scraping. | `9163` | no | +| `refresh_interval` | `duration` | The refresh interval for the list of targets. | `60s` | no | +| `version` | `int` | The Triton discovery API version. | `1` | no | `role` can be set to: -* `"container"` to discover virtual machines (SmartOS zones, lx/KVM/bhyve branded zones) running on Triton -* `"cn"` to discover compute nodes (servers/global zones) making up the Triton infrastructure + +- `"container"` to discover virtual machines (SmartOS zones, lx/KVM/bhyve branded zones) running on Triton +- `"cn"` to discover compute nodes (servers/global zones) making up the Triton infrastructure `groups` is only supported when `role` is set to `"container"`. If omitted all containers owned by the requesting account are scraped. ## Blocks + The following blocks are supported inside the definition of `discovery.triton`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -tls_config | [tls_config][] | TLS configuration for requests to the Triton API. | no +| Hierarchy | Block | Description | Required | +| ---------- | -------------- | ------------------------------------------------- | -------- | +| tls_config | [tls_config][] | TLS configuration for requests to the Triton API. | no | [tls_config]: #tls_config-block @@ -65,23 +67,23 @@ tls_config | [tls_config][] | TLS configuration for requests to the Triton API. The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Triton API. +| Name | Type | Description | +| --------- | ------------------- | -------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Triton API. | When `role` is set to `"container"`, each target includes the following labels: -* `__meta_triton_groups`: The list of groups belonging to the target joined by a comma separator. -* `__meta_triton_machine_alias`: The alias of the target container. -* `__meta_triton_machine_brand`: The brand of the target container. -* `__meta_triton_machine_id`: The UUID of the target container. -* `__meta_triton_machine_image`: The target container's image type. -* `__meta_triton_server_id`: The server UUID the target container is running on. 
+- `__meta_triton_groups`: The list of groups belonging to the target joined by a comma separator. +- `__meta_triton_machine_alias`: The alias of the target container. +- `__meta_triton_machine_brand`: The brand of the target container. +- `__meta_triton_machine_id`: The UUID of the target container. +- `__meta_triton_machine_image`: The target container's image type. +- `__meta_triton_server_id`: The server UUID the target container is running on. When `role` is set to `"cn"` each target includes the following labels: -* `__meta_triton_machine_alias`: The hostname of the target (requires triton-cmon 1.7.0 or newer). -* `__meta_triton_machine_id`: The UUID of the target. +- `__meta_triton_machine_alias`: The hostname of the target (requires triton-cmon 1.7.0 or newer). +- `__meta_triton_machine_id`: The UUID of the target. ## Component health @@ -122,13 +124,15 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `TRITON_ACCOUNT`: Your Triton account. - - `TRITON_DNS_SUFFIX`: Your Triton DNS suffix. - - `TRITON_ENDPOINT`: Your Triton endpoint. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `TRITON_ACCOUNT`: Your Triton account. +- `TRITON_DNS_SUFFIX`: Your Triton DNS suffix. +- `TRITON_ENDPOINT`: Your Triton endpoint. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. diff --git a/docs/sources/flow/reference/components/discovery.uyuni.md b/docs/sources/flow/reference/components/discovery.uyuni.md index ab2a968bb543..01fd9966f9db 100644 --- a/docs/sources/flow/reference/components/discovery.uyuni.md +++ b/docs/sources/flow/reference/components/discovery.uyuni.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/discovery.uyuni/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.uyuni/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.uyuni/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.uyuni/ + - /docs/grafana-cloud/agent/flow/reference/components/discovery.uyuni/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.uyuni/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.uyuni/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.uyuni/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/discovery.uyuni/ description: Learn about discovery.uyuni title: discovery.uyuni @@ -29,30 +29,31 @@ discovery.uyuni "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------------- | ----------------------- | -------- -`server` | `string` | The primary Uyuni Server. | | yes -`username` | `string` | The username to use for authentication to the Uyuni API. | | yes -`password` | `Secret` | The password to use for authentication to the Uyuni API. 
| | yes -`entitlement` | `string` | The entitlement to filter on when listing targets. | `"monitoring_entitled"` | no -`separator` | `string` | The separator to use when building the `__meta_uyuni_groups` label. | `","` | no -`refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `1m` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ----------------------- | -------- | +| `server` | `string` | The primary Uyuni Server. | | yes | +| `username` | `string` | The username to use for authentication to the Uyuni API. | | yes | +| `password` | `Secret` | The password to use for authentication to the Uyuni API. | | yes | +| `entitlement` | `string` | The entitlement to filter on when listing targets. | `"monitoring_entitled"` | no | +| `separator` | `string` | The separator to use when building the `__meta_uyuni_groups` label. | `","` | no | +| `refresh_interval` | `duration` | Interval at which to refresh the list of targets. | `1m` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} ## Blocks + The following blocks are supported inside the definition of `discovery.uyuni`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -tls_config | [tls_config][] | TLS configuration for requests to the Uyuni API. | no +| Hierarchy | Block | Description | Required | +| ---------- | -------------- | ------------------------------------------------ | -------- | +| tls_config | [tls_config][] | TLS configuration for requests to the Uyuni API. | no | [tls_config]: #tls_config-block @@ -64,21 +65,21 @@ tls_config | [tls_config][] | TLS configuration for requests to the Uyuni API. | The following fields are exported and can be referenced by other components: -Name | Type | Description ---------- | ------------------- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the Uyuni API. 
+| Name | Type | Description | +| --------- | ------------------- | ------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the Uyuni API. | Each target includes the following labels: -* `__meta_uyuni_minion_hostname`: The hostname of the Uyuni Minion. -* `__meta_uyuni_primary_fqdn`: The FQDN of the Uyuni primary. -* `__meta_uyuni_system_id`: The system ID of the Uyuni Minion. -* `__meta_uyuni_groups`: The groups the Uyuni Minion belongs to. -* `__meta_uyuni_endpoint_name`: The name of the endpoint. -* `__meta_uyuni_exporter`: The name of the exporter. -* `__meta_uyuni_proxy_module`: The name of the Uyuni module. -* `__meta_uyuni_metrics_path`: The path to the metrics endpoint. -* `__meta_uyuni_scheme`: `https` if TLS is enabled on the endpoint, `http` otherwise. +- `__meta_uyuni_minion_hostname`: The hostname of the Uyuni Minion. +- `__meta_uyuni_primary_fqdn`: The FQDN of the Uyuni primary. +- `__meta_uyuni_system_id`: The system ID of the Uyuni Minion. +- `__meta_uyuni_groups`: The groups the Uyuni Minion belongs to. +- `__meta_uyuni_endpoint_name`: The name of the endpoint. +- `__meta_uyuni_exporter`: The name of the exporter. +- `__meta_uyuni_proxy_module`: The name of the Uyuni module. +- `__meta_uyuni_metrics_path`: The path to the metrics endpoint. +- `__meta_uyuni_scheme`: `https` if TLS is enabled on the endpoint, `http` otherwise. These labels are largely derived from a [listEndpoints](https://www.uyuni-project.org/uyuni-docs-api/uyuni/api/system.monitoring.html) API call to the Uyuni Server. @@ -122,12 +123,14 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `UYUNI_USERNAME`: The username to use for authentication to the Uyuni server. - - `UYUNI_PASSWORD`: The password to use for authentication to the Uyuni server. - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `UYUNI_USERNAME`: The username to use for authentication to the Uyuni server. +- `UYUNI_PASSWORD`: The password to use for authentication to the Uyuni server. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. 
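+
+If the Uyuni API is served with a certificate from a private certificate authority, a `tls_config` block can be added. A minimal sketch (the server address, credentials, and CA file path are placeholders):
+
+```river
+discovery.uyuni "example" {
+  server   = "https://uyuni.example.com"
+  username = "UYUNI_USERNAME"
+  password = "UYUNI_PASSWORD"
+
+  tls_config {
+    ca_file = "/etc/agent/uyuni-ca.pem"
+  }
+}
+```
+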
## Compatible components diff --git a/docs/sources/flow/reference/components/faro.receiver.md b/docs/sources/flow/reference/components/faro.receiver.md index 7644f4035309..80d5c05c40c2 100644 --- a/docs/sources/flow/reference/components/faro.receiver.md +++ b/docs/sources/flow/reference/components/faro.receiver.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/faro.receiver/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/faro.receiver/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/faro.receiver/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/faro.receiver/ + - /docs/grafana-cloud/agent/flow/reference/components/faro.receiver/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/faro.receiver/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/faro.receiver/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/faro.receiver/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/faro.receiver/ description: Learn about the faro.receiver title: faro.receiver @@ -31,21 +31,21 @@ faro.receiver "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`extra_log_labels` | `map(string)` | Extra labels to attach to emitted log lines. | `{}` | no +| Name | Type | Description | Default | Required | +| ------------------ | ------------- | -------------------------------------------- | ------- | -------- | +| `extra_log_labels` | `map(string)` | Extra labels to attach to emitted log lines. | `{}` | no | ## Blocks The following blocks are supported inside the definition of `faro.receiver`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -server | [server][] | Configures the HTTP server. | no -server > rate_limiting | [rate_limiting][] | Configures rate limiting for the HTTP server. | no -sourcemaps | [sourcemaps][] | Configures sourcemap retrieval. | no -sourcemaps > location | [location][] | Configures on-disk location for sourcemap retrieval. | no -output | [output][] | Configures where to send collected telemetry data. | yes +| Hierarchy | Block | Description | Required | +| ---------------------- | ----------------- | ---------------------------------------------------- | -------- | +| server | [server][] | Configures the HTTP server. | no | +| server > rate_limiting | [rate_limiting][] | Configures rate limiting for the HTTP server. | no | +| sourcemaps | [sourcemaps][] | Configures sourcemap retrieval. | no | +| sourcemaps > location | [location][] | Configures on-disk location for sourcemap retrieval. | no | +| output | [output][] | Configures where to send collected telemetry data. | yes | [server]: #server-block [rate_limiting]: #rate_limiting-block @@ -59,14 +59,14 @@ The `server` block configures the HTTP server managed by the `faro.receiver` component. Clients using the [Grafana Faro Web SDK][faro-sdk] forward telemetry data to this HTTP server for processing. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`listen_address` | `string` | Address to listen for HTTP traffic on. | `127.0.0.1` | no -`listen_port` | `number` | Port to listen for HTTP traffic on. | `12347` | no -`cors_allowed_origins` | `list(string)` | Origins for which cross-origin requests are permitted. 
| `[]` | no -`api_key` | `secret` | Optional API key to validate client requests with. | `""` | no -`max_allowed_payload_size` | `string` | Maximum size (in bytes) for client requests. | `"5MiB"` | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | `false` | no +| Name | Type | Description | Default | Required | +| -------------------------- | -------------- | --------------------------------------------------------------- | ----------- | -------- | +| `listen_address` | `string` | Address to listen for HTTP traffic on. | `127.0.0.1` | no | +| `listen_port` | `number` | Port to listen for HTTP traffic on. | `12347` | no | +| `cors_allowed_origins` | `list(string)` | Origins for which cross-origin requests are permitted. | `[]` | no | +| `api_key` | `secret` | Optional API key to validate client requests with. | `""` | no | +| `max_allowed_payload_size` | `string` | Maximum size (in bytes) for client requests. | `"5MiB"` | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | `false` | no | By default, telemetry data is only accepted from applications on the same local network as the browser. To accept telemetry data from a wider set of clients, @@ -89,11 +89,11 @@ ignored. The `rate_limiting` block configures rate limiting for client requests. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Whether to enable rate limiting. | `true` | no -`rate` | `number` | Rate of allowed requests per second. | `50` | no -`burst_size` | `number` | Allowed burst size of requests. | `100` | no +| Name | Type | Description | Default | Required | +| ------------ | -------- | ------------------------------------ | ------- | -------- | +| `enabled` | `bool` | Whether to enable rate limiting. | `true` | no | +| `rate` | `number` | Rate of allowed requests per second. | `50` | no | +| `burst_size` | `number` | Allowed burst size of requests. | `100` | no | Rate limiting functions as a [token bucket algorithm][token-bucket], where a bucket has a maximum capacity for up to `burst_size` requests and refills at a @@ -115,11 +115,11 @@ The `sourcemaps` block configures how to retrieve sourcemaps. Sourcemaps are then used to transform file and line information from minified code into the file and line information from the original source code. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`download` | `bool` | Whether to download sourcemaps. | `true` | no -`download_from_origins` | `list(string)` | Which origins to download sourcemaps from. | `["*"]` | no -`download_timeout` | `duration` | Timeout when downloading sourcemaps. | `"1s"` | no +| Name | Type | Description | Default | Required | +| ----------------------- | -------------- | ------------------------------------------ | ------- | -------- | +| `download` | `bool` | Whether to download sourcemaps. | `true` | no | +| `download_from_origins` | `list(string)` | Which origins to download sourcemaps from. | `["*"]` | no | +| `download_timeout` | `duration` | Timeout when downloading sourcemaps. | `"1s"` | no | When exceptions are sent to the `faro.receiver` component, it can download sourcemaps from the web application. You can disable this behavior by setting @@ -144,10 +144,10 @@ The `location` block declares a location where sourcemaps are stored on the filesystem. 
The `location` block can be specified multiple times to declare multiple locations where sourcemaps are stored. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`path` | `string` | The path on disk where sourcemaps are stored. | | yes -`minified_path_prefix` | `string` | The prefix of the minified path sent from browsers. | | yes +| Name | Type | Description | Default | Required | +| ---------------------- | -------- | --------------------------------------------------- | ------- | -------- | +| `path` | `string` | The path on disk where sourcemaps are stored. | | yes | +| `minified_path_prefix` | `string` | The prefix of the minified path sent from browsers. | | yes | The `minified_path_prefix` argument determines the prefix of paths to Javascript files, such as `http://example.com/`. The `path` argument then @@ -177,10 +177,10 @@ will be replaced with the release value provided by the [Faro Web App SDK][faro- The `output` block specifies where to forward collected logs and traces. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`logs` | `list(LogsReceiver)` | A list of `loki` components to forward logs to. | `[]` | no -`traces` | `list(otelcol.Consumer)` | A list of `otelcol` components to forward traces to. | `[]` | no +| Name | Type | Description | Default | Required | +| -------- | ------------------------ | ---------------------------------------------------- | ------- | -------- | +| `logs` | `list(LogsReceiver)` | A list of `loki` components to forward logs to. | `[]` | no | +| `traces` | `list(otelcol.Consumer)` | A list of `otelcol` components to forward traces to. | `[]` | no | ## Exported fields @@ -199,18 +199,18 @@ start. `faro.receiver` exposes the following metrics for monitoring the component: -* `faro_receiver_logs_total` (counter): Total number of ingested logs. -* `faro_receiver_measurements_total` (counter): Total number of ingested measurements. -* `faro_receiver_exceptions_total` (counter): Total number of ingested exceptions. -* `faro_receiver_events_total` (counter): Total number of ingested events. -* `faro_receiver_exporter_errors_total` (counter): Total number of errors produced by an internal exporter. -* `faro_receiver_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. -* `faro_receiver_request_message_bytes` (histogram): Size (in bytes) of HTTP requests received from clients. -* `faro_receiver_response_message_bytes` (histogram): Size (in bytes) of HTTP responses sent to clients. -* `faro_receiver_inflight_requests` (gauge): Current number of inflight requests. -* `faro_receiver_sourcemap_cache_size` (counter): Number of items in sourcemap cache per origin. -* `faro_receiver_sourcemap_downloads_total` (counter): Total number of sourcemap downloads performed per origin and status. -* `faro_receiver_sourcemap_file_reads_total` (counter): Total number of sourcemap retrievals using the filesystem per origin and status. +- `faro_receiver_logs_total` (counter): Total number of ingested logs. +- `faro_receiver_measurements_total` (counter): Total number of ingested measurements. +- `faro_receiver_exceptions_total` (counter): Total number of ingested exceptions. +- `faro_receiver_events_total` (counter): Total number of ingested events. +- `faro_receiver_exporter_errors_total` (counter): Total number of errors produced by an internal exporter. 
+- `faro_receiver_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests.
+- `faro_receiver_request_message_bytes` (histogram): Size (in bytes) of HTTP requests received from clients.
+- `faro_receiver_response_message_bytes` (histogram): Size (in bytes) of HTTP responses sent to clients.
+- `faro_receiver_inflight_requests` (gauge): Current number of inflight requests.
+- `faro_receiver_sourcemap_cache_size` (counter): Number of items in sourcemap cache per origin.
+- `faro_receiver_sourcemap_downloads_total` (counter): Total number of sourcemap downloads performed per origin and status.
+- `faro_receiver_sourcemap_file_reads_total` (counter): Total number of sourcemap retrievals using the filesystem per origin and status.

 ## Example

@@ -248,22 +248,22 @@ otelcol.exporter.otlp "traces" {

 Replace the following:

-* `NETWORK_ADDRESS`: IP address of the network interface to listen to traffic
+- `NETWORK_ADDRESS`: IP address of the network interface to listen to traffic
   on. This IP address must be reachable by browsers using the web application
   to instrument.

-* `PATH_TO_SOURCEMAPS`: Path on disk where sourcemaps are located.
+- `PATH_TO_SOURCEMAPS`: Path on disk where sourcemaps are located.

-* `WEB_APP_PREFIX`: Prefix of the web application being instrumented.
+- `WEB_APP_PREFIX`: Prefix of the web application being instrumented.

-* `LOKI_ADDRESS`: Address of the Loki server to send logs to.
+- `LOKI_ADDRESS`: Address of the Loki server to send logs to.

-  * If authentication is required to send logs to the Loki server, refer to the
+  - If authentication is required to send logs to the Loki server, refer to the
    documentation of [loki.write][] for more information.

-* `OTLP_ADDRESS`: The address of the OTLP-compatible server to send traces to.
+- `OTLP_ADDRESS`: The address of the OTLP-compatible server to send traces to.

-  * If authentication is required to send logs to the Loki server, refer to the
+  - If authentication is required to send traces to the OTLP server, refer to the
    documentation of [otelcol.exporter.otlp][] for more information.

 [loki.write]: {{< relref "./loki.write.md" >}}
@@ -278,7 +278,6 @@ Replace the following:

- Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters)
- Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters)

-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
diff --git a/docs/sources/flow/reference/components/local.file.md b/docs/sources/flow/reference/components/local.file.md index 5e935a0bbbf5..3a6ad7180b64 100644 --- a/docs/sources/flow/reference/components/local.file.md +++ b/docs/sources/flow/reference/components/local.file.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/local.file/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/local.file/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/local.file/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/local.file/ + - /docs/grafana-cloud/agent/flow/reference/components/local.file/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/local.file/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/local.file/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/local.file/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/local.file/ description: Learn about local.file title: local.file @@ -32,12 +32,12 @@ local.file "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`filename` | `string` | Path of the file on disk to watch | | yes -`detector` | `string` | Which file change detector to use (fsnotify, poll) | `"fsnotify"` | no -`poll_frequency` | `duration` | How often to poll for file changes | `"1m"` | no -`is_secret` | `bool` | Marks the file as containing a [secret][] | `false` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | -------------------------------------------------- | ------------ | -------- | +| `filename` | `string` | Path of the file on disk to watch | | yes | +| `detector` | `string` | Which file change detector to use (fsnotify, poll) | `"fsnotify"` | no | +| `poll_frequency` | `duration` | How often to poll for file changes | `"1m"` | no | +| `is_secret` | `bool` | Marks the file as containing a [secret][] | `false` | no | [secret]: {{< relref "../../concepts/config-language/expressions/types_and_values.md#secrets" >}} @@ -47,9 +47,9 @@ Name | Type | Description | Default | Required The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`content` | `string` or `secret` | The contents of the file from the most recent read +| Name | Type | Description | +| --------- | -------------------- | -------------------------------------------------- | +| `content` | `string` or `secret` | The contents of the file from the most recent read | The `content` field will have the `secret` type only if the `is_secret` argument was true. @@ -71,7 +71,7 @@ component. ## Debug metrics -* `agent_local_file_timestamp_last_accessed_unix_seconds` (gauge): The +- `agent_local_file_timestamp_last_accessed_unix_seconds` (gauge): The timestamp, in Unix seconds, that the file was last successfully accessed. 
## Example diff --git a/docs/sources/flow/reference/components/local.file_match.md b/docs/sources/flow/reference/components/local.file_match.md index 1413a1f8a226..8cb45fba7da3 100644 --- a/docs/sources/flow/reference/components/local.file_match.md +++ b/docs/sources/flow/reference/components/local.file_match.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/local.file_match/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/local.file_match/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/local.file_match/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/local.file_match/ + - /docs/grafana-cloud/agent/flow/reference/components/local.file_match/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/local.file_match/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/local.file_match/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/local.file_match/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/local.file_match/ description: Learn about local.file_match title: local.file_match @@ -27,28 +27,28 @@ local.file_match "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ---------------- | ------------------- | ------------------------------------------------------------------------------------------ |---------| -------- -`path_targets` | `list(map(string))` | Targets to expand; looks for glob patterns on the `__path__` and `__path_exclude__` keys. | | yes -`sync_period` | `duration` | How often to sync filesystem and targets. | `"10s"` | no +| Name | Type | Description | Default | Required | +| -------------- | ------------------- | ----------------------------------------------------------------------------------------- | ------- | -------- | +| `path_targets` | `list(map(string))` | Targets to expand; looks for glob patterns on the `__path__` and `__path_exclude__` keys. | | yes | +| `sync_period` | `duration` | How often to sync filesystem and targets. | `"10s"` | no | `path_targets` uses [doublestar][] style paths. -* `/tmp/**/*.log` will match all subfolders of `tmp` and include any files that end in `*.log`. -* `/tmp/apache/*.log` will match only files in `/tmp/apache/` that end in `*.log`. -* `/tmp/**` will match all subfolders of `tmp`, `tmp` itself, and all files. +- `/tmp/**/*.log` will match all subfolders of `tmp` and include any files that end in `*.log`. +- `/tmp/apache/*.log` will match only files in `/tmp/apache/` that end in `*.log`. +- `/tmp/**` will match all subfolders of `tmp`, `tmp` itself, and all files. ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`targets` | `list(map(string))` | The set of targets discovered from the filesystem. +| Name | Type | Description | +| --------- | ------------------- | -------------------------------------------------- | +| `targets` | `list(map(string))` | The set of targets discovered from the filesystem. | Each target includes the following labels: -* `__path__`: Absolute path to the file. +- `__path__`: Absolute path to the file. ## Component health @@ -68,7 +68,7 @@ values. ### Send `/tmp/logs/*.log` files to Loki -This example discovers all files and folders under `/tmp/logs`. 
The absolute paths are +This example discovers all files and folders under `/tmp/logs`. The absolute paths are used by `loki.source.file.files` targets. ```river @@ -91,10 +91,12 @@ loki.write "endpoint" { } } ``` + Replace the following: - - `LOKI_URL`: The URL of the Loki server to send logs to. - - `USERNAME`: The username to use for authentication to the Loki API. - - `PASSWORD`: The password to use for authentication to the Loki API. + +- `LOKI_URL`: The URL of the Loki server to send logs to. +- `USERNAME`: The username to use for authentication to the Loki API. +- `PASSWORD`: The password to use for authentication to the Loki API. ### Send Kubernetes pod logs to Loki @@ -141,10 +143,12 @@ loki.write "endpoint" { } } ``` + Replace the following: - - `LOKI_URL`: The URL of the Loki server to send logs to. - - `USERNAME`: The username to use for authentication to the Loki API. - - `PASSWORD`: The password to use for authentication to the Loki API. + +- `LOKI_URL`: The URL of the Loki server to send logs to. +- `USERNAME`: The username to use for authentication to the Loki API. +- `PASSWORD`: The password to use for authentication to the Loki API. diff --git a/docs/sources/flow/reference/components/loki.echo.md b/docs/sources/flow/reference/components/loki.echo.md index eb16448a8670..432ebde2175d 100644 --- a/docs/sources/flow/reference/components/loki.echo.md +++ b/docs/sources/flow/reference/components/loki.echo.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.echo/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.echo/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.echo/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.echo/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.echo/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.echo/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.echo/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.echo/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.echo/ description: Learn about loki.echo labels: @@ -35,9 +35,9 @@ loki.echo "LABEL" {} The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`receiver` | `LogsReceiver` | A value that other components can use to send log entries to. +| Name | Type | Description | +| ---------- | -------------- | ------------------------------------------------------------- | +| `receiver` | `LogsReceiver` | A value that other components can use to send log entries to. 
| ## Component health diff --git a/docs/sources/flow/reference/components/loki.process.md b/docs/sources/flow/reference/components/loki.process.md index ac8307a0e96a..d7bfbc94a123 100644 --- a/docs/sources/flow/reference/components/loki.process.md +++ b/docs/sources/flow/reference/components/loki.process.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.process/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.process/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.process/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.process/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.process/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.process/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.process/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.process/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.process/ description: Learn about loki.process title: loki.process @@ -52,7 +52,7 @@ loki.process "LABEL" { The following blocks are supported inside the definition of `loki.process`: | Hierarchy | Block | Description | Required | -|---------------------------|-------------------------------|----------------------------------------------------------------|----------| +| ------------------------- | ----------------------------- | -------------------------------------------------------------- | -------- | | stage.cri | [stage.cri][] | Configures a pre-defined CRI-format pipeline. | no | | stage.decolorize | [stage.decolorize][] | Strips ANSI color codes from log lines. | no | | stage.docker | [stage.docker][] | Configures a pre-defined Docker log format pipeline. | no | @@ -111,7 +111,6 @@ file. [stage.tenant]: #stagetenant-block [stage.timestamp]: #stagetimestamp-block - ### stage.cri block The `stage.cri` inner block enables a predefined pipeline which reads log lines using @@ -119,13 +118,13 @@ the CRI logging format. The following arguments are supported: -| Name | Type | Description | Default | Required | -| -------------------------------- | ---------- | -------------------------------------------------------------------- | -------------- | -------- | -| `max_partial_lines` | `number` | Maximum number of partial lines to hold in memory. | `100` | no | -| `max_partial_line_size` | `number` | Maximum number of characters which a partial line can have. | `0` | no | -| `max_partial_line_size_truncate` | `bool` | Truncate partial lines that are longer than `max_partial_line_size`. | `false` | no | +| Name | Type | Description | Default | Required | +| -------------------------------- | -------- | -------------------------------------------------------------------- | ------- | -------- | +| `max_partial_lines` | `number` | Maximum number of partial lines to hold in memory. | `100` | no | +| `max_partial_line_size` | `number` | Maximum number of characters which a partial line can have. | `0` | no | +| `max_partial_line_size_truncate` | `bool` | Truncate partial lines that are longer than `max_partial_line_size`. | `false` | no | -`max_partial_line_size` is only taken into account if +`max_partial_line_size` is only taken into account if `max_partial_line_size_truncate` is set to `true`. 
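+As an illustrative sketch (the sizes are arbitrary values, not recommendations), truncation of long partial lines can be enabled like this:
+
+```river
+stage.cri {
+  max_partial_line_size          = 1024
+  max_partial_line_size_truncate = true
+}
+```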
```river @@ -135,13 +134,14 @@ stage.cri {} CRI specifies log lines as single space-delimited values with the following components: -* `time`: The timestamp string of the log -* `stream`: Either `stdout` or `stderr` -* `flags`: CRI flags including `F` or `P` -* `log`: The contents of the log line +- `time`: The timestamp string of the log +- `stream`: Either `stdout` or `stderr` +- `flags`: CRI flags including `F` or `P` +- `log`: The contents of the log line Given the following log line, the subsequent key-value pairs are created in the shared map of extracted data: + ``` "2019-04-30T02:12:41.8443515Z stdout F message" @@ -155,14 +155,14 @@ timestamp: 2019-04-30T02:12:41.8443515 The `stage.decolorize` strips ANSI color codes from the log lines, thus making it easier to parse logs further. -The `stage.decolorize` block does not support any arguments or inner blocks, so +The `stage.decolorize` block does not support any arguments or inner blocks, so it is always empty. ```river stage.decolorize {} ``` -`stage.decolorize` turns each line having a color code into a non-colored one, +`stage.decolorize` turns each line having a color code into a non-colored one, for example: ``` @@ -189,9 +189,9 @@ stage.docker {} Docker log entries are formatted as JSON with the following keys: -* `log`: The content of log line -* `stream`: Either `stdout` or `stderr` -* `time`: The timestamp string of the log line +- `log`: The content of log line +- `stream`: Either `stdout` or `stderr` +- `time`: The timestamp string of the log line Given the following log line, the subsequent key-value pairs are created in the shared map of extracted data: @@ -214,7 +214,7 @@ To drop entries with an OR clause, specify multiple `drop` blocks in sequence. The following arguments are supported: | Name | Type | Description | Default | Required | -|-----------------------|------------|------------------------------------------------------------------------------------------------------------------------|----------------|----------| +| --------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------- | -------------- | -------- | | `source` | `string` | Name or comma-separated list of names from extracted data to match. If empty or not defined, it uses the log message. | `""` | no | | `separator` | `string` | When `source` is a comma-separated list of names, this separator is placed between concatenated extracted data values. | `";"` | no | | `expression` | `string` | A valid RE2 regular expression. | `""` | no | @@ -224,21 +224,23 @@ The following arguments are supported: | `drop_counter_reason` | `string` | A custom reason to report for dropped lines. | `"drop_stage"` | no | The `expression` field must be a RE2 regex string. -* If `source` is empty or not provided, the regex attempts to match the log -line itself. -* If `source` is a single name, the regex attempts to match the corresponding -value from the extracted map. -* If `source` is a comma-separated list of names, the corresponding values from -the extracted map are concatenated using `separator` and the regex attempts to -match the concatenated string. + +- If `source` is empty or not provided, the regex attempts to match the log + line itself. +- If `source` is a single name, the regex attempts to match the corresponding + value from the extracted map. 
+- If `source` is a comma-separated list of names, the corresponding values from + the extracted map are concatenated using `separator` and the regex attempts to + match the concatenated string. The `value` field can only work with values from the extracted map, and must be specified together with `source`. -* If `source` is a single name, the entries are dropped when there is an exact -match between the corresponding value from the extracted map and the `value`. -* If `source` is a comma-separated list of names, the entries are dropped when -the `value` matches the `source` values from extracted data, concatenated using -the `separator`. + +- If `source` is a single name, the entries are dropped when there is an exact + match between the corresponding value from the extracted map and the `value`. +- If `source` is a comma-separated list of names, the entries are dropped when + the `value` matches the `source` values from extracted data, concatenated using + the `separator`. Whenever an entry is dropped, the metric `loki_process_dropped_lines_total` is incremented. By default, the reason label is `"drop_stage"`, but you can @@ -283,7 +285,7 @@ in the Windows Event Log. The following arguments are supported: | Name | Type | Description | Default | Required | -|-----------------------|----------|--------------------------------------------------------|-----------|----------| +| --------------------- | -------- | ------------------------------------------------------ | --------- | -------- | | `source` | `string` | Name of the field in the extracted data to parse. | `message` | no | | `overwrite_existing` | `bool` | Whether to overwrite existing extracted data fields. | `false` | no | | `drop_invalid_labels` | `bool` | Whether to drop fields that are not valid label names. | `false` | no | @@ -292,7 +294,7 @@ When `overwrite_existing` is set to `true`, the stage overwrites existing extrac fields with the same name. If set to `false`, the `_extracted` suffix will be appended to an already existing field name. -When `drop_invalid_labels` is set to `true`, the stage drops fields that are +When `drop_invalid_labels` is set to `true`, the stage drops fields that are not valid label names. If set to `false`, the stage will automatically convert them into valid labels replacing invalid characters with underscores. @@ -300,8 +302,8 @@ them into valid labels replacing invalid characters with underscores. ```river stage.json { - expressions = { - message = "", + expressions = { + message = "", Overwritten = "", } } @@ -313,6 +315,7 @@ stage.eventlogmessage { ``` Given the following log line: + ``` {"event_id": 1, "Overwritten": "old", "message": "Message type:\r\nOverwritten: new\r\nImage: C:\\Users\\User\\agent.exe"} ``` @@ -373,6 +376,7 @@ loki.process "username" { In this example, the first stage uses the log line as the source and populates these values in the shared map. An empty expression means using the same value as the key (so `extra="extra"`). + ``` output: log message\n extra: {"user": "agent"} @@ -380,6 +384,7 @@ extra: {"user": "agent"} The second stage uses the value in `extra` as the input and appends the following key-value pair to the set of extracted data. + ``` username: agent ``` @@ -388,7 +393,6 @@ username: agent Due to a limitation of the upstream jmespath library, you must wrap any string that contains a hyphen `-` in quotes so that it's not considered a numerical expression. 
-
If you don't use quotes to wrap a string that contains a hyphen, you will get
errors like: `Unexpected token at the end of the expression: tNumber`

@@ -396,7 +400,7 @@ You can use one of two options to circumvent this issue:

 1. An escaped double quote. For example: `http_user_agent = "\"request_User-Agent\""`
 1. A backtick quote. For example: ``http_user_agent = `"request_User-Agent"` ``
-{{< /admonition >}}
+ {{< /admonition >}}

### stage.label_drop block

@@ -426,7 +430,6 @@ The following arguments are supported:
| -------- | -------------- | ------------------------------------------- | ------- | -------- |
| `values` | `list(string)` | Configures a `label_keep` processing stage. | `{}`    | no       |

-
```river
stage.label_keep {
  values = [ "kubernetes_pod_name", "kubernetes_pod_container_name" ]
}
```
@@ -465,7 +468,7 @@ data from the extracted values map and add them to log entries as structured met
The following arguments are supported:

| Name     | Type          | Description                                                                   | Default | Required |
-| -------- | ------------- |-----------------------------------------------------------------------------| ------- | -------- |
+| -------- | ------------- | ----------------------------------------------------------------------------- | ------- | -------- |
| `values` | `map(string)` | Specifies the list of labels to add from extracted values map to log entry.  | `{}`    | no       |

In a structured_metadata stage, the map's keys define the label to set and the values are
@@ -515,6 +518,7 @@ The following example rate-limits entries from each unique `namespace` value
independently. Any entries without the `namespace` label are not rate-limited.
The stage keeps track of up to `max_distinct_labels` unique values, defaulting
to 10,000.
+
```river
stage.limit {
  rate = 10
@@ -537,7 +541,6 @@ The following arguments are supported:
| `mapping` | `map(string)` | Key-value pairs of logfmt fields to extract. |         | yes      |
| `source`  | `string`      | Source of the data to parse as logfmt.       | `""`    | no       |

-
The `source` field defines the source of data to parse as logfmt. When `source`
is missing or empty, the stage parses the log line itself, but it can also be
used to parse a previously extracted value.
@@ -582,12 +585,11 @@ Many Payment Card Industry environments require these numbers to be redacted.

The following arguments are supported:

-| Name          | Type          | Description                                    | Default          | Required |
-| ------------- | ------------- | ---------------------------------------------- | ---------------- | -------- |
-| `replacement` | `string`      | String to substitute the matched patterns with | `"**REDACTED**"` | no       |
-| `source`      | `string`      | Source of the data to parse.                   | `""`             | no       |
-| `minLength`   | `int`         | Minimum length of digits to consider           | `13`             | no       |
-
+| Name          | Type     | Description                                     | Default          | Required |
+| ------------- | -------- | ----------------------------------------------- | ---------------- | -------- |
+| `replacement` | `string` | String to substitute the matched patterns with  | `"**REDACTED**"` | no       |
+| `source`      | `string` | Source of the data to parse.                    | `""`             | no       |
+| `minLength`   | `int`    | Minimum length of digits to consider            | `13`             | no       |

The `source` field defines the source of data to search. When `source` is
missing or empty, the stage parses the log line itself, but it can also be used
@@ -633,13 +635,13 @@ block. These are used to construct the nested set of stages to run if the
selector matches the labels and content of the log entries.
It supports all the same `stage.NAME` blocks as in the top level of the `loki.process` component.

-
If the specified action is `"drop"`, the metric `loki_process_dropped_lines_total`
is incremented with every line dropped. By default, the reason label is
`"match_stage"`, but a custom reason can be provided by using the
`drop_counter_reason` argument.

Let's see this in action, with the following log lines and stages:
+
```
{ "time":"2023-01-18T17:08:41+00:00", "app":"foo", "component": ["parser","type"], "level" : "WARN", "message" : "app1 log line" }
{ "time":"2023-01-18T17:08:42+00:00", "app":"bar", "component": ["parser","type"], "level" : "ERROR", "message" : "foo noisy error" }
@@ -717,22 +719,18 @@ The following blocks are supported inside the definition of `stage.metrics`:
| metric.gauge     | [metric.gauge][]     | Defines a `gauge` metric.     | no       |
| metric.histogram | [metric.histogram][] | Defines a `histogram` metric. | no       |

-{{< admonition type="note" >}}
-The metrics will be reset if you reload the {{< param "PRODUCT_ROOT_NAME" >}} configuration file.
-{{< /admonition >}}
-
[metric.counter]: #metriccounter-block
[metric.gauge]: #metricgauge-block
[metric.histogram]: #metrichistogram-block

-
#### metric.counter block
+
Defines a metric whose value only goes up.

The following arguments are supported:

| Name                | Type       | Description                                                                                                | Default                  | Required |
-|---------------------|------------|------------------------------------------------------------------------------------------------------------|--------------------------|----------|
+| ------------------- | ---------- | ---------------------------------------------------------------------------------------------------------- | ------------------------ | -------- |
| `name`              | `string`   | The metric name.                                                                                           |                          | yes      |
| `action`            | `string`   | The action to take. Valid actions are `inc` and `add`.                                                     |                          | yes      |
| `description`       | `string`   | The metric's description and help text.                                                                    | `""`                     | no       |
@@ -750,14 +748,14 @@ The valid `action` values are `inc` and `add`. The `inc` action increases the
metric value by 1 for each log line that passed the filter. The `add` action
converts the extracted value to a positive float and adds it to the metric.

-
#### metric.gauge block
+
Defines a gauge metric whose value can go up or down.

The following arguments are supported:

| Name                | Type       | Description                                                                           | Default                  | Required |
-|---------------------|------------|---------------------------------------------------------------------------------------|--------------------------|----------|
+| ------------------- | ---------- | ------------------------------------------------------------------------------------- | ------------------------ | -------- |
| `name`              | `string`   | The metric name.                                                                      |                          | yes      |
| `action`            | `string`   | The action to take. Valid actions are `inc`, `dec`, `set`, `add`, or `sub`.           |                          | yes      |
| `description`       | `string`   | The metric's description and help text.                                               | `""`                     | no       |
@@ -766,21 +764,19 @@ The following arguments are supported:
| `max_idle_duration` | `duration` | Maximum amount of time to wait until the metric is marked as 'stale' and removed.     | `"5m"`                   | no       |
| `value`             | `string`   | If set, the metric only changes if `source` exactly matches the `value`.              | `""`                     | no       |

-
The valid `action` values are `inc`, `dec`, `set`, `add`, or `sub`. `inc` and
`dec` increment and decrement the metric's value by 1 respectively.
If `set`, `add`, or `sub` is chosen, the extracted value must be convertible
to a positive float and is set, added to, or subtracted from the metric's value.

-
#### metric.histogram block

-Defines a histogram metric whose values are recorded in predefined buckets. 
+Defines a histogram metric whose values are recorded in predefined buckets.

The following arguments are supported:

| Name                | Type          | Description                                                                           | Default                  | Required |
-|---------------------|---------------|-----------------------------------------------------------------------------------------|--------------------------|----------|
+| ------------------- | ------------- | ------------------------------------------------------------------------------------- | ------------------------ | -------- |
| `name`              | `string`      | The metric name.                                                                      |                          | yes      |
| `buckets`           | `list(float)` | The predefined buckets to record values in.                                           |                          | yes      |
| `description`       | `string`      | The metric's description and help text.                                               | `""`                     | no       |
@@ -802,18 +798,19 @@ metrics which have not been updated within `max_idle_duration` are removed. The

The metric values extracted from the log data are internally converted to floats. The supported values are the following:

-* integer
-* floating point number
-* string - Two types of string format are supported:
-  * Strings that represent floating point numbers, for example, "0.804" is converted to 0.804.
-  * Duration format strings. Valid time units are “ns”, “us”, “ms”, “s”, “m”, “h”. A value in this format is converted to a floating point number of seconds, for example, "0.5ms" is converted to 0.0005.
-* boolean:
-  * true is converted to 1.
-  * false is converted to 0.
+- integer
+- floating point number
+- string - Two types of string format are supported:
+  - Strings that represent floating point numbers, for example, "0.804" is converted to 0.804.
+  - Duration format strings. Valid time units are “ns”, “us”, “ms”, “s”, “m”, “h”. A value in this format is converted to a floating point number of seconds, for example, "0.5ms" is converted to 0.0005.
+- boolean:
+  - true is converted to 1.
+  - false is converted to 0.

The following pipeline creates a counter which increments every time any log line is received by using the `match_all` parameter. The pipeline creates a second counter which adds the byte size of these log lines by using the `count_entry_bytes` parameter. These two metrics disappear after 24 hours if no new entries are received, to avoid building up metrics which no longer serve any use. These two metrics are a good starting point to track the volume of log streams in both the number of entries and their byte size, to identify sources of high-volume or high-cardinality data.
+
```river
stage.metrics {
  metric.counter {
@@ -912,10 +909,8 @@ The following arguments are supported:
| `max_wait_time` | `duration` | The maximum time to wait for a multiline block. | `"3s"`  | no       |
| `max_lines`     | `number`   | The maximum number of lines a block can have.   | `128`   | no       |

-
A new block is identified by the RE2 regular expression passed in `firstline`.

-
Any line that does _not_ match the expression is considered to be part of the
block of the previous match. If no new logs arrive within `max_wait_time`, the
block is sent on.
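+For example, a sketch that groups continuation lines under an entry beginning with an ISO 8601-like timestamp might look as follows (the regular expression and wait time are illustrative, with backslashes escaped per River's double-quoted string rules):
+
+```river
+stage.multiline {
+  firstline     = "^\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}"
+  max_wait_time = "10s"
+}
+```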
The `max_lines` field defines the maximum number of lines a
@@ -969,7 +964,6 @@ The following arguments are supported:
| -------- | -------- | -------------------------------------------------- | ------- | -------- |
| `source` | `string` | Name from extracted data to use for the log entry. |         | yes      |

-
Let's see how this works for the following log line and three-stage pipeline:

```
@@ -989,6 +983,7 @@ stage.output {
```

The first stage extracts the following key-value pairs into the shared map:
+
```
user: John Doe
message: hello, world!
@@ -1020,12 +1015,14 @@ The querying capabilities of Loki make it easy to still access this data so it c
be filtered and aggregated at query time.

For example, consider the following log entry:
+
```
log_line: "something went wrong"
labels: { "level" = "error", "env" = "dev", "user_id" = "f8fas0r" }
```

and this processing stage:
+
```river
stage.pack {
  labels = ["env", "user_id"]
@@ -1034,6 +1031,7 @@ stage.pack {

The stage transforms the log entry into the following JSON object, where the
two embedded labels are removed from the original log entry:
+
```json
{
  "_entry": "something went wrong",
@@ -1063,7 +1061,6 @@ The following arguments are supported:
| `expression` | `string` | A valid RE2 regular expression. Each capture group must be named.  |         | yes      |
| `source`     | `string` | Name from extracted data to parse. If empty, uses the log message. | `""`    | no       |

-
The `expression` field needs to be a RE2 regex string. Every matched capture
group is added to the extracted map, so it must be named like: `(?P<name>re)`. The
name of the capture group is then used as the key in the extracted map for
@@ -1096,6 +1093,7 @@ the value stored in the shared map under that name.

Let's see what happens when the following log line is put through this
two-stage pipeline:
+
```
{"timestamp":"2022-01-01T01:00:00.000000001Z"}

@@ -1109,12 +1107,14 @@ stage.regex {
```

The first stage adds the following key-value pair into the extracted map:
+
```
time: 2022-01-01T01:00:00.000000001Z
```

Then, the regex stage parses the value for time from the shared values and
appends the subsequent key-value pair back into the extracted values map:
+
```
year: 2022
```
@@ -1133,7 +1133,6 @@ The following arguments are supported:
| `source`     | `string` | Source of the data to parse. If empty, it uses the log message. |         | no       |
| `replace`    | `string` | Value replaced by the capture group.                            |         | no       |

-
The `source` field defines the source of data to parse using `expression`. When
`source` is missing or empty, the stage parses the log line itself, but it can
also be used to parse a previously extracted value. The replaced value is
@@ -1146,7 +1145,7 @@ Because of how River treats backslashes in double-quoted strings, note that all
backslashes in a regex expression must be escaped like `"\\w*"`.

Let's see how this works with the following log line and stage. Since `source`
-is omitted, the replacement occurs on the log line itself. 
+is omitted, the replacement occurs on the log line itself.

```
2023-01-01T01:00:00.000000001Z stderr P i'm a log message who has sensitive information with password xyz!
@@ -1158,6 +1157,7 @@ stage.replace {
```

The log line is transformed to
+
```
2023-01-01T01:00:00.000000001Z stderr P i'm a log message who has sensitive information with password *****!
```
@@ -1165,6 +1165,7 @@ The log line is transformed to

If `replace` is empty, then the captured value is omitted instead. In the
following example, `source` is defined.
+
```
{"time":"2023-01-01T01:00:00.000000001Z", "level": "info", "msg":"11.11.11.11 - \"POST /loki/api/push/ HTTP/1.1\" 200 932 \"-\" \"Mozilla/5.0\"}
@@ -1180,6 +1181,7 @@ stage.replace {
```

The JSON stage adds the following key-value pairs into the extracted map:
+
```
time: 2023-01-01T01:00:00.000000001Z
level: info
@@ -1190,6 +1192,7 @@ The `replace` stage acts on the `msg` value. The capture group matches against
`/loki/api/push` and is replaced by `redacted_url`. The `msg` value is finally
transformed into:
+
```
msg: "11.11.11.11 - "POST redacted_url HTTP/1.1" 200 932 "-" "Mozilla/5.0"
```
@@ -1199,6 +1202,7 @@ The `replace` field can use a set of templating functions, by utilizing Go's

Let's see how this works with named capture groups with a sample log line
and stage.
+
```
11.11.11.11 - agent [01/Jan/2023:00:00:01 +0200]
@@ -1211,6 +1215,7 @@ Since `source` is empty, the regex parses the log line itself and extracts the
named capture groups to the shared map of values. The `replace` field acts on
these extracted values and converts them to uppercase:
+
```
ip: 11.11.11.11
identd: -
@@ -1219,12 +1224,14 @@ timestamp: 01/JAN/2023:00:00:01 +0200
```

and the log line becomes:
+
```
11.11.11.11 - AGENT [01/JAN/2023:00:00:01 +0200]
```

The following list contains available functions with examples of more complex
`replace` fields.
+
```
ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight, TrimPrefix, TrimSuffix, TrimSpace, Hash, Sha2Hash, regexReplaceAll, regexReplaceAllLiteral
```
@@ -1234,19 +1241,19 @@ ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight, TrimPrefix, TrimSuffix, TrimSpace, Hash, Sha2Hash, regexReplaceAll, regexReplaceAllLiteral

### stage.sampling block

-The `sampling` stage is used to sample the logs. Configuring the value 
+The `sampling` stage is used to sample the logs. Configuring the value
`rate = 0.1` means that 10% of the logs will continue to be processed. The
remaining 90% of the logs will be dropped.

The following arguments are supported:

| Name                  | Type     | Description                                                                                          | Default        | Required |
-|-----------------------|----------|------------------------------------------------------------------------------------------------------|----------------|----------|
+| --------------------- | -------- | ---------------------------------------------------------------------------------------------------- | -------------- | -------- |
| `rate`                | `float`  | The sampling rate in a range of `[0, 1]`                                                             |                | yes      |
| `drop_counter_reason` | `string` | The label to add to `loki_process_dropped_lines_total` metric when logs are dropped by this stage.   | sampling_stage | no       |

-For example, the configuration below will sample 25% of the logs and drop the 
-remaining 75%. When logs are dropped, the `loki_process_dropped_lines_total` 
+For example, the configuration below will sample 25% of the logs and drop the
+remaining 75%. When logs are dropped, the `loki_process_dropped_lines_total`
metric is incremented with an additional `reason=logs_sampling` label.

```river
@@ -1267,7 +1274,6 @@ The following arguments are supported:
| -------- | ------------- | ---------------------------------------------- | ------- | -------- |
| `values` | `map(string)` | Configures a `static_labels` processing stage. | `{}`    | no       |

-
```river
stage.static_labels {
  values = {
@@ -1297,6 +1303,7 @@ The following arguments are supported:
| `template` | `string` | Go template string to use. |         | yes      |

The template string can be any valid template that can be used by Go's `text/template`.
It supports all functions from the [sprig package](http://masterminds.github.io/sprig/), as well as the following list of custom functions:
+
```
ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight, TrimPrefix, TrimSuffix, TrimSpace, Hash, Sha2Hash, regexReplaceAll, regexReplaceAllLiteral
```
@@ -1308,6 +1315,7 @@ functions][] section below.

Assuming no data is present on the extracted map, the following stage simply
adds the `new_key: "hello_world"` key-value pair to the shared map.
+
```river
stage.template {
    source = "new_key"
@@ -1318,6 +1326,7 @@ stage.template {

If the `source` value exists in the extract fields, its value can be referred
to as `.Value` in the template. The next stage takes the current value of
`app` from the extracted map, converts it to lowercase, and adds a suffix to
its value:
+
```river
stage.template {
    source = "app"
@@ -1328,6 +1337,7 @@ stage.template {

Any previously extracted keys are available for `template` to expand and use.
The next stage takes the current values for `level`, `app` and `module` and
creates a new key named `output_message`:
+
```river
stage.template {
    source = "output_msg"
@@ -1337,6 +1347,7 @@ stage.template {

A special key named `Entry` can be used to reference the current line; this
can be useful when you need to append/prepend something to the log line,
like this snippet:
+
```river
stage.template {
    source = "message"
@@ -1348,13 +1359,16 @@ stage.output {
```

#### Supported functions
+
In addition to supporting all functions from the [sprig
package](http://masterminds.github.io/sprig/), the `template` stage supports
the following custom functions.

##### ToLower and ToUpper
+
`ToLower` and `ToUpper` convert the entire string to lowercase and uppercase,
respectively.

Examples:
+
```river
stage.template {
    source = "out"
@@ -1367,6 +1381,7 @@ stage.template {
```

##### Replace
+
The `Replace` function syntax is defined as `{{ Replace <input> <old> <new> <n> }}`.

The function returns a copy of the input string, with instances of the `<old>`
@@ -1376,6 +1391,7 @@ there is no limit on the number of replacements. Finally, if `<old>` is empty,
it matches before and after every UTF-8 character in the string.

This example replaces the first two instances of the `loki` word by `Loki`:
+
```river
stage.template {
    source = "output"
@@ -1384,14 +1400,16 @@ stage.template {
```

##### Trim, TrimLeft, TrimRight, TrimSpace, TrimPrefix, TrimSuffix
+
-* `Trim` returns a slice of the string `s` with all leading and trailing Unicode
+- `Trim` returns a slice of the string `s` with all leading and trailing Unicode
  code points contained in `cutset` removed.
-* `TrimLeft` and `TrimRight` are the same as Trim except that they
+- `TrimLeft` and `TrimRight` are the same as Trim except that they
  trim only leading and trailing characters, respectively.
-* `TrimSpace` returns a slice of the string s, with all leading and trailing
-white space removed, as defined by Unicode.
-* `TrimPrefix` and `TrimSuffix` trim the supplied prefix or suffix, respectively.
-Examples:
+- `TrimSpace` returns a slice of the string s, with all leading and trailing
+  white space removed, as defined by Unicode.
+- `TrimPrefix` and `TrimSuffix` trim the supplied prefix or suffix, respectively.
+  Examples:
+
```river
stage.template {
    source = "output"
@@ -1408,6 +1426,7 @@ stage.template {
```

##### Regex
+
`regexReplaceAll` returns a copy of the input string, replacing matches of the
Regexp with the replacement string.
Inside the replacement string, `$` characters are interpreted as in Expand functions, so for instance, $1 represents the first captured @@ -1429,10 +1448,12 @@ stage.template { ``` ##### Hash and Sha2Hash + `Hash` returns a `Sha3_256` hash of the string, represented as a hexadecimal number of 64 digits. You can use it to obfuscate sensitive data and PII in the logs. It requires a (fixed) salt value, to add complexity to low input domains (e.g., all possible social security numbers). `Sha2Hash` returns a `Sha2_256` of the string which is faster and less CPU-intensive than `Hash`, however it is less secure. Examples: + ```river stage.template { source = "output" @@ -1462,6 +1483,7 @@ The following arguments are supported: The block expects only one of `label`, `source` or `value` to be provided. The following stage assigns the fixed value `team-a` as the tenant ID: + ```river stage.tenant { value = "team-a" @@ -1470,6 +1492,7 @@ stage.tenant { This stage extracts the tenant ID from the `customer_id` field after parsing the log entry as JSON in the shared extracted map: + ```river stage.json { expressions = { "customer_id" = "" } @@ -1480,6 +1503,7 @@ stage.tenant { ``` The final example extracts the tenant ID from a label set by a previous stage: + ```river stage.labels { "namespace" = "k8s_namespace" @@ -1518,6 +1542,7 @@ The `format` field defines _how_ that source should be parsed. First off, the `format` can be set to one of the following shorthand values for commonly-used forms: + ``` ANSIC: Mon Jan _2 15:04:05 2006 UnixDate: Mon Jan _2 15:04:05 MST 2006 @@ -1533,6 +1558,7 @@ RFC3339Nano: 2006-01-02T15:04:05.999999999-07:00 Additionally, support for common Unix timestamps is supported with the following format values: + ``` Unix: 1562708916 or with fractions 1562708916.000000123 UnixMs: 1562708916414 @@ -1557,7 +1583,7 @@ custom format. | ------------------- | ------------------------------------------------------------------------------------------------------------------------ | | Year | 06, 2006 | | Month | 1, 01, Jan, January | -| Day | 2, 02, _2 (two digits right justified) | +| Day | 2, 02, \_2 (two digits right justified) | | Day of the week | Mon, Monday | | Hour | 3 (12-hour), 03 (12-hour zero prefixed), 15 (24-hour) | | Minute | 4, 04 | @@ -1579,9 +1605,9 @@ doesn't exist in the shared extracted map, or if the timestamp parsing fails. The supported actions are: -* fudge (default): Change the timestamp to the last known timestamp, summing up +- fudge (default): Change the timestamp to the last known timestamp, summing up 1 nanosecond (to guarantee log entries ordering). -* skip: Do not change the timestamp and keep the time when the log entry was +- skip: Do not change the timestamp and keep the time when the log entry was scraped. The following stage fetches the `time` value from the shared values map, parses @@ -1600,13 +1626,12 @@ The `stage.geoip` inner block configures a processing stage that reads an IP add The following arguments are supported: -| Name | Type | Description | Default | Required | -| ---------------- | ------------- | -------------------------------------------------- | ------- | -------- | -| `db` | `string` | Path to the Maxmind DB file. | | yes | -| `source` | `string` | IP from extracted data to parse. | | yes | +| Name | Type | Description | Default | Required | +| ---------------- | ------------- | ------------------------------------------------------------- | ------- | -------- | +| `db` | `string` | Path to the Maxmind DB file. 
| | yes | +| `source` | `string` | IP from extracted data to parse. | | yes | | `db_type` | `string` | Maxmind DB type. Allowed values are "city", "asn", "country". | | no | -| `custom_lookups` | `map(string)` | Key-value pairs of JMESPath expressions. | | no | - +| `custom_lookups` | `map(string)` | Key-value pairs of JMESPath expressions. | | no | #### GeoIP with City database example: @@ -1642,7 +1667,7 @@ loki.process "example" { } ``` -The `json` stage extracts the IP address from the `client_ip` key in the log line. +The `json` stage extracts the IP address from the `client_ip` key in the log line. Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the following fields in the shared map which are added as labels using the `labels` stage. The extracted data from the IP used in this example: @@ -1682,7 +1707,7 @@ loki.process "example" { } ``` -The `json` stage extracts the IP address from the `client_ip` key in the log line. +The `json` stage extracts the IP address from the `client_ip` key in the log line. Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the shared map. The extracted data from the IP used in this example: @@ -1717,7 +1742,7 @@ loki.process "example" { } ``` -The `json` stage extracts the IP address from the `client_ip` key in the log line. +The `json` stage extracts the IP address from the `client_ip` key in the log line. Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the following fields in the shared map which are added as labels using the `labels` stage. The extracted data from the IP used in this example: @@ -1757,7 +1782,8 @@ loki.process "example" { } } ``` -The `json` stage extracts the IP address from the `client_ip` key in the log line. + +The `json` stage extracts the IP address from the `client_ip` key in the log line. Then the extracted `ip` value is given as source to geoip stage. The geoip stage performs a lookup on the IP and populates the shared map with the data from the city database results in addition to the custom lookups. Lastly, the custom lookup fields from the shared map are added as labels. ## Exported fields @@ -1777,8 +1803,9 @@ The following fields are exported and can be referenced by other components: `loki.process` does not expose any component-specific debug information. ## Debug metrics -* `loki_process_dropped_lines_total` (counter): Number of lines dropped as part of a processing stage. -* `loki_process_dropped_lines_by_label_total` (counter): Number of lines dropped when `by_label_name` is non-empty in [stage.limit][]. + +- `loki_process_dropped_lines_total` (counter): Number of lines dropped as part of a processing stage. +- `loki_process_dropped_lines_by_label_total` (counter): Number of lines dropped when `by_label_name` is non-empty in [stage.limit][]. 
## Example @@ -1798,6 +1825,7 @@ loki.process "local" { } } ``` + ## Compatible components diff --git a/docs/sources/flow/reference/components/loki.relabel.md b/docs/sources/flow/reference/components/loki.relabel.md index 04f548da514c..bfa03b88dd84 100644 --- a/docs/sources/flow/reference/components/loki.relabel.md +++ b/docs/sources/flow/reference/components/loki.relabel.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.relabel/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.relabel/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.relabel/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.relabel/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.relabel/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.relabel/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.relabel/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.relabel/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.relabel/ description: Learn about loki.relabel title: loki.relabel @@ -50,18 +50,18 @@ loki.relabel "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(receiver)` | Where to forward log entries after relabeling. | | yes -`max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache | 10,000 | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------------- | -------------------------------------------------------------- | ------- | -------- | +| `forward_to` | `list(receiver)` | Where to forward log entries after relabeling. | | yes | +| `max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache | 10,000 | no | ## Blocks The following blocks are supported inside the definition of `loki.relabel`: -Hierarchy | Name | Description | Required ---------- | ---- | ----------- | -------- -rule | [rule][] | Relabeling rules to apply to received log entries. | no +| Hierarchy | Name | Description | Required | +| --------- | -------- | -------------------------------------------------- | -------- | +| rule | [rule][] | Relabeling rules to apply to received log entries. | no | [rule]: #rule-block @@ -73,10 +73,10 @@ rule | [rule][] | Relabeling rules to apply to received log entries. | no The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`receiver` | `receiver` | The input receiver where log lines are sent to be relabeled. -`rules` | `RelabelRules` | The currently configured relabeling rules. +| Name | Type | Description | +| ---------- | -------------- | ------------------------------------------------------------ | +| `receiver` | `receiver` | The input receiver where log lines are sent to be relabeled. | +| `rules` | `RelabelRules` | The currently configured relabeling rules. | ## Component health @@ -89,11 +89,11 @@ In those cases, exported fields are kept at their last healthy values. ## Debug metrics -* `loki_relabel_entries_processed` (counter): Total number of log entries processed. -* `loki_relabel_entries_written` (counter): Total number of log entries forwarded. -* `loki_relabel_cache_misses` (counter): Total number of cache misses. 
-* `loki_relabel_cache_hits` (counter): Total number of cache hits. -* `loki_relabel_cache_size` (gauge): Total size of relabel cache. +- `loki_relabel_entries_processed` (counter): Total number of log entries processed. +- `loki_relabel_entries_written` (counter): Total number of log entries forwarded. +- `loki_relabel_cache_misses` (counter): Total number of cache misses. +- `loki_relabel_cache_hits` (counter): Total number of cache hits. +- `loki_relabel_cache_size` (gauge): Total size of relabel cache. ## Example diff --git a/docs/sources/flow/reference/components/loki.rules.kubernetes.md b/docs/sources/flow/reference/components/loki.rules.kubernetes.md index 314b7a41595c..4ff84a1dab33 100644 --- a/docs/sources/flow/reference/components/loki.rules.kubernetes.md +++ b/docs/sources/flow/reference/components/loki.rules.kubernetes.md @@ -11,12 +11,12 @@ labels: `loki.rules.kubernetes` discovers `PrometheusRule` Kubernetes resources and loads them into a Loki instance. -* You can specify multiple `loki.rules.kubernetes` components by giving them different labels. -* [Kubernetes label selectors][] can be used to limit the `Namespace` and +- You can specify multiple `loki.rules.kubernetes` components by giving them different labels. +- [Kubernetes label selectors][] can be used to limit the `Namespace` and `PrometheusRule` resources considered during reconciliation. -* Compatible with the Ruler APIs of Grafana Loki, Grafana Cloud, and Grafana Enterprise Metrics. -* Compatible with the `PrometheusRule` CRD from the [prometheus-operator][]. -* This component accesses the Kubernetes REST API from [within a Pod][]. +- Compatible with the Ruler APIs of Grafana Loki, Grafana Cloud, and Grafana Enterprise Metrics. +- Compatible with the `PrometheusRule` CRD from the [prometheus-operator][]. +- This component accesses the Kubernetes REST API from [within a Pod][]. {{< admonition type="note" >}} This component requires [Role-based access control (RBAC)][] to be set up @@ -41,27 +41,28 @@ loki.rules.kubernetes "LABEL" { `loki.rules.kubernetes` supports the following arguments: -Name | Type | Description | Default | Required --------------------------|------------|----------------------------------------------------------|---------|--------- -`address` | `string` | URL of the Loki ruler. | | yes -`tenant_id` | `string` | Loki tenant ID. | | no -`use_legacy_routes` | `bool` | Whether to use deprecated ruler API endpoints. | false | no -`sync_interval` | `duration` | Amount of time between reconciliations with Loki. | "30s" | no -`loki_namespace_prefix` | `string` | Prefix used to differentiate multiple {{< param "PRODUCT_ROOT_NAME" >}} deployments. | "agent" | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`proxy_url` | `string` | HTTP proxy to proxy requests through. | | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. 
- - [arguments]: #arguments +| Name | Type | Description | Default | Required | +| ----------------------- | ---------- | ------------------------------------------------------------------------------------ | ------- | -------- | +| `address` | `string` | URL of the Loki ruler. | | yes | +| `tenant_id` | `string` | Loki tenant ID. | | no | +| `use_legacy_routes` | `bool` | Whether to use deprecated ruler API endpoints. | false | no | +| `sync_interval` | `duration` | Amount of time between reconciliations with Loki. | "30s" | no | +| `loki_namespace_prefix` | `string` | Prefix used to differentiate multiple {{< param "PRODUCT_ROOT_NAME" >}} deployments. | "agent" | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `proxy_url` | `string` | HTTP proxy to proxy requests through. | | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. + +[arguments]: #arguments If no `tenant_id` is provided, the component assumes that the Loki instance at `address` is running in single-tenant mode and no `X-Scope-OrgID` header is sent. @@ -80,17 +81,17 @@ unique value for each deployment. The following blocks are supported inside the definition of `loki.rules.kubernetes`: -Hierarchy | Block | Description | Required --------------------------------------------|------------------------|----------------------------------------------------------|--------- -rule_namespace_selector | [label_selector][] | Label selector for `Namespace` resources. | no -rule_namespace_selector > match_expression | [match_expression][] | Label match expression for `Namespace` resources. | no -rule_selector | [label_selector][] | Label selector for `PrometheusRule` resources. | no -rule_selector > match_expression | [match_expression][] | Label match expression for `PrometheusRule` resources. | no -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------------------------------ | -------------------- | -------------------------------------------------------- | -------- | +| rule_namespace_selector | [label_selector][] | Label selector for `Namespace` resources. | no | +| rule_namespace_selector > match_expression | [match_expression][] | Label match expression for `Namespace` resources. | no | +| rule_selector | [label_selector][] | Label selector for `PrometheusRule` resources. | no | +| rule_selector > match_expression | [match_expression][] | Label match expression for `PrometheusRule` resources. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. 
| no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -109,9 +110,9 @@ The `label_selector` block describes a Kubernetes label selector for rule or nam The following arguments are supported: -Name | Type | Description | Default | Required ----------------|---------------|---------------------------------------------------|-----------------------------|--------- -`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | yes +| Name | Type | Description | Default | Required | +| -------------- | ------------- | ------------------------------------------------- | ------- | -------- | +| `match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | yes | When the `match_labels` argument is empty, all resources will be matched. @@ -121,17 +122,17 @@ The `match_expression` block describes a Kubernetes label match expression for r The following arguments are supported: -Name | Type | Description | Default | Required ------------|----------------|----------------------------------------------------|---------|--------- -`key` | `string` | The label name to match against. | | yes -`operator` | `string` | The operator to use when matching. | | yes -`values` | `list(string)` | The values used when matching. | | no +| Name | Type | Description | Default | Required | +| ---------- | -------------- | ---------------------------------- | ------- | -------- | +| `key` | `string` | The label name to match against. | | yes | +| `operator` | `string` | The operator to use when matching. | | yes | +| `values` | `list(string)` | The values used when matching. | | no | The `operator` argument should be one of the following strings: -* `"in"` -* `"notin"` -* `"exists"` +- `"in"` +- `"notin"` +- `"exists"` ### basic_auth block @@ -162,27 +163,29 @@ The `operator` argument should be one of the following strings: `loki.rules.kubernetes` exposes resource-level debug information. The following are exposed per discovered `PrometheusRule` resource: -* The Kubernetes namespace. -* The resource name. -* The resource uid. -* The number of rule groups. + +- The Kubernetes namespace. +- The resource name. +- The resource uid. +- The number of rule groups. The following are exposed per discovered Loki rule namespace resource: -* The namespace name. -* The number of rule groups. + +- The namespace name. +- The number of rule groups. Only resources managed by the component are exposed - regardless of how many actually exist. ## Debug metrics -Metric Name | Type | Description -----------------------------------------------|-------------|------------------------------------------------------------------------- -`loki_rules_config_updates_total` | `counter` | Number of times the configuration has been updated. -`loki_rules_events_total` | `counter` | Number of events processed, partitioned by event type. -`loki_rules_events_failed_total` | `counter` | Number of events that failed to be processed, partitioned by event type. 
-`loki_rules_events_retried_total` | `counter` | Number of events that were retried, partitioned by event type. -`loki_rules_client_request_duration_seconds` | `histogram` | Duration of requests to the Loki API. +| Metric Name | Type | Description | +| -------------------------------------------- | ----------- | ------------------------------------------------------------------------ | +| `loki_rules_config_updates_total` | `counter` | Number of times the configuration has been updated. | +| `loki_rules_events_total` | `counter` | Number of events processed, partitioned by event type. | +| `loki_rules_events_failed_total` | `counter` | Number of events that failed to be processed, partitioned by event type. | +| `loki_rules_events_retried_total` | `counter` | Number of events that were retried, partitioned by event type. | +| `loki_rules_client_request_duration_seconds` | `histogram` | Duration of requests to the Loki API. | ## Example @@ -238,21 +241,21 @@ kind: ClusterRole metadata: name: grafana-agent rules: -- apiGroups: [""] - resources: ["namespaces"] - verbs: ["get", "list", "watch"] -- apiGroups: ["monitoring.coreos.com"] - resources: ["prometheusrules"] - verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list", "watch"] + - apiGroups: ["monitoring.coreos.com"] + resources: ["prometheusrules"] + verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-agent subjects: -- kind: ServiceAccount - name: grafana-agent - namespace: default + - kind: ServiceAccount + name: grafana-agent + namespace: default roleRef: kind: ClusterRole name: grafana-agent diff --git a/docs/sources/flow/reference/components/loki.source.api.md b/docs/sources/flow/reference/components/loki.source.api.md index cc508ad976b7..371fe188f855 100644 --- a/docs/sources/flow/reference/components/loki.source.api.md +++ b/docs/sources/flow/reference/components/loki.source.api.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.api/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.api/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.api/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.api/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.api/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.api/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.api/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.api/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.api/ description: Learn about loki.source.api title: loki.source.api @@ -38,19 +38,18 @@ The component will start HTTP server on the configured port and address with the - `/api/v1/push` - internally reroutes to `/loki/api/v1/push` - `/api/v1/raw` - internally reroutes to `/loki/api/v1/raw` - [promtail-push-api]: /docs/loki/latest/clients/promtail/configuration/#loki_push_api ## Arguments `loki.source.api` supports the following arguments: -Name | Type | Description | Default | Required --------------------------|----------------------|------------------------------------------------------------|---------|--------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes -`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no -`labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no +| Name | Type | Description | Default | Required | +| ------------------------ | -------------------- | ---------------------------------------------------------- | ------- | -------- | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from request. | `false` | no | +| `labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | The `relabel_rules` field can make use of the `rules` export value from a [`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. @@ -61,9 +60,9 @@ The `relabel_rules` field can make use of the `rules` export value from a The following blocks are supported inside the definition of `loki.source.api`: -Hierarchy | Name | Description | Required -----------|----------|----------------------------------------------------|--------- -`http` | [http][] | Configures the HTTP server that receives requests. | no +| Hierarchy | Name | Description | Required | +| --------- | -------- | -------------------------------------------------- | -------- | +| `http` | [http][] | Configures the HTTP server that receives requests. | no | [http]: #http @@ -83,10 +82,10 @@ Hierarchy | Name | Description | Requ The following are some of the metrics that are exposed when this component is used. Note that the metrics include labels such as `status_code` where relevant, which can be used to measure request success rates. -* `loki_source_api_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. -* `loki_source_api_request_message_bytes` (histogram): Size (in bytes) of messages received in the request. -* `loki_source_api_response_message_bytes` (histogram): Size (in bytes) of messages sent in response. -* `loki_source_api_tcp_connections` (gauge): Current number of accepted TCP connections. +- `loki_source_api_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests. +- `loki_source_api_request_message_bytes` (histogram): Size (in bytes) of messages received in the request. +- `loki_source_api_response_message_bytes` (histogram): Size (in bytes) of messages sent in response. +- `loki_source_api_tcp_connections` (gauge): Current number of accepted TCP connections. ## Example @@ -125,7 +124,6 @@ loki.source.api "loki_push_api" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
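To make the `relabel_rules` wiring concrete, here is a minimal sketch. The `loki.relabel` component is declared only for its exported `rules`, the `tenant` label is a hypothetical label to drop, and `loki.write.local` is assumed to be defined elsewhere:

```river
// Declared only so its exported rules can be reused; it forwards nothing itself.
loki.relabel "scrub" {
  forward_to = []

  rule {
    action = "labeldrop"
    regex  = "tenant" // hypothetical label to remove from incoming entries
  }
}

loki.source.api "listener" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9999
  }

  forward_to    = [loki.write.local.receiver]
  relabel_rules = loki.relabel.scrub.rules
}
```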
diff --git a/docs/sources/flow/reference/components/loki.source.awsfirehose.md b/docs/sources/flow/reference/components/loki.source.awsfirehose.md index 2d43d6f82bb9..589280e95cf9 100644 --- a/docs/sources/flow/reference/components/loki.source.awsfirehose.md +++ b/docs/sources/flow/reference/components/loki.source.awsfirehose.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.awsfirehose/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.awsfirehose/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.awsfirehose/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.awsfirehose/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.awsfirehose/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.awsfirehose/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.awsfirehose/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.awsfirehose/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.awsfirehose/ description: Learn about loki.source.awsfirehose title: loki.source.awsfirehose @@ -36,21 +36,21 @@ the raw records to Loki. The decoding process goes as follows: The component exposes some internal labels, available for relabeling. The following tables describes internal labels available in records coming from any source. -| Name | Description | Example | -|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------| -| `__aws_firehose_request_id` | Firehose request ID. | `a1af4300-6c09-4916-ba8f-12f336176246` | -| `__aws_firehose_source_arn` | Firehose delivery stream ARN. | `arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream` | +| Name | Description | Example | +| --------------------------- | ----------------------------- | ------------------------------------------------------------------------ | +| `__aws_firehose_request_id` | Firehose request ID. | `a1af4300-6c09-4916-ba8f-12f336176246` | +| `__aws_firehose_source_arn` | Firehose delivery stream ARN. | `arn:aws:firehose:us-east-2:123:deliverystream/aws_firehose_test_stream` | If the source of the Firehose record is CloudWatch logs, the request is further decoded and enriched with even more labels, exposed as follows: -| Name | Description | Example | -|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------| -| `__aws_owner` | The AWS Account ID of the originating log data. | `111111111111` | -| `__aws_cw_log_group` | The log group name of the originating log data. | `CloudTrail/logs` | -| `__aws_cw_log_stream` | The log stream name of the originating log data. | `111111111111_CloudTrail/logs_us-east-1` | -| `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. 
| `Destination,Destination2` | -| `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | +| Name | Description | Example | +| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------- | +| `__aws_owner` | The AWS Account ID of the originating log data. | `111111111111` | +| `__aws_cw_log_group` | The log group name of the originating log data. | `CloudTrail/logs` | +| `__aws_cw_log_stream` | The log stream name of the originating log data. | `111111111111_CloudTrail/logs_us-east-1` | +| `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. | `Destination,Destination2` | +| `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | See [Examples](#example) for a full example configuration showing how to enrich each log entry with these labels. @@ -60,7 +60,7 @@ See [Examples](#example) for a full example configuration showing how to enrich loki.source.awsfirehose "LABEL" { http { listen_address = "LISTEN_ADDRESS" - listen_port = PORT + listen_port = PORT } forward_to = RECEIVER_LIST } @@ -93,12 +93,11 @@ to the list of receivers in `forward_to`. The following blocks are supported inside the definition of `loki.source.awsfirehose`: | Hierarchy | Name | Description | Required | - |-----------|----------|----------------------------------------------------|----------| +| --------- | -------- | -------------------------------------------------- | -------- | | `http` | [http][] | Configures the HTTP server that receives requests. | no | | `grpc` | [grpc][] | Configures the gRPC server that receives requests. | no | [http]: #http - [grpc]: #grpc ### http @@ -119,9 +118,9 @@ The following blocks are supported inside the definition of `loki.source.awsfire ## Debug metrics -The following are some of the metrics that are exposed when this component is used. +The following are some of the metrics that are exposed when this component is used. {{< admonition type="note" >}} -The metrics include labels such as `status_code` where relevant, which you can use to measure request success rates. +The metrics include labels such as `status_code` where relevant, which you can use to measure request success rates. {{< /admonition >}} - `loki_source_awsfirehose_request_errors` (counter): Count of errors while receiving a request. @@ -197,6 +196,7 @@ loki.relabel "logging_origin" { forward_to = [] } ``` + ## Compatible components @@ -205,7 +205,6 @@ loki.relabel "logging_origin" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
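For reference, a minimal end-to-end sketch of the component described above; the listen port and the Loki push URL are placeholders:

```river
loki.source.awsfirehose "firehose" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 9999
  }

  // Keep the timestamp attached by Firehose rather than the ingestion time.
  use_incoming_timestamp = true
  forward_to             = [loki.write.local.receiver]
}

loki.write "local" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push" // placeholder Loki endpoint
  }
}
```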
diff --git a/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md b/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md index 8a5c8fdfaa82..54bffdb6f74e 100644 --- a/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md +++ b/docs/sources/flow/reference/components/loki.source.azure_event_hubs.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.azure_event_hubs/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.azure_event_hubs/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.azure_event_hubs/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.azure_event_hubs/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.azure_event_hubs/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.azure_event_hubs/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.azure_event_hubs/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.azure_event_hubs/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.azure_event_hubs/ description: Learn about loki.source.azure_event_hubs title: loki.source.azure_event_hubs @@ -15,7 +15,7 @@ title: loki.source.azure_event_hubs endpoint on Event Hubs. For more information, see the [Azure Event Hubs documentation](https://learn.microsoft.com/en-us/azure/event-hubs/azure-event-hubs-kafka-overview). -To learn more about streaming Azure logs to an Azure Event Hubs, refer to +To learn more about streaming Azure logs to an Azure Event Hubs, refer to Microsoft's tutorial on how to [Stream Azure Active Directory logs to an Azure event hub](https://learn.microsoft.com/en-us/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub). Note that an Apache Kafka endpoint is not available within the Basic pricing plan. For more information, see @@ -42,18 +42,18 @@ loki.source.azure_event_hubs "LABEL" { `loki.source.azure_event_hubs` supports the following arguments: - Name | Type | Description | Default | Required ------------------------------|----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|---------- - `fully_qualified_namespace` | `string` | Event hub namespace. | | yes - `event_hubs` | `list(string)` | Event Hubs to consume. | | yes - `group_id` | `string` | The Kafka consumer group id. | `"loki.source.azure_event_hubs"` | no - `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Azure Event Hub. | `false` | no - `labels` | `map(string)` | The labels to associate with each received event. | `{}` | no - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no - `disallow_custom_messages` | `bool` | Whether to ignore messages that don't match the [schema](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-schema) for Azure resource logs. 
| `false` | no
- `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no
+| Name | Type | Description | Default | Required |
+| --------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------- | -------- |
+| `fully_qualified_namespace` | `string` | Event hub namespace. | | yes |
+| `event_hubs` | `list(string)` | Event Hubs to consume. | | yes |
+| `group_id` | `string` | The Kafka consumer group id. | `"loki.source.azure_event_hubs"` | no |
+| `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no |
+| `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Azure Event Hub. | `false` | no |
+| `labels` | `map(string)` | The labels to associate with each received event. | `{}` | no |
+| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes |
+| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no |
+| `disallow_custom_messages` | `bool` | Whether to ignore messages that don't match the [schema](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-schema) for Azure resource logs. | `false` | no |

The `fully_qualified_namespace` argument must refer to a full `HOST:PORT` that
points to your event hub, such as `NAMESPACE.servicebus.windows.net:9093`.
The `assignor` argument must be set to one of `"range"`, `"roundrobin"`, or `"sticky"`.
@@ -79,9 +79,9 @@ The following internal labels prefixed with `__` are available but are discarded

The following blocks are supported inside the definition of `loki.source.azure_event_hubs`:

- Hierarchy | Name | Description | Required
-----------------|------------------|----------------------------------------------------|----------
- authentication | [authentication] | Authentication configuration with Azure Event Hub. | yes
+| Hierarchy | Name | Description | Required |
+| -------------- | ---------------- | -------------------------------------------------- | -------- |
+| authentication | [authentication] | Authentication configuration with Azure Event Hub. | yes |

[authentication]: #authentication-block

@@ -89,11 +89,11 @@ The `authentication` block defines the authentication method when communicating with Azure Event Hub.

- Name | Type | Description | Default | Required
---------------------|----------------|---------------------------------------------------------------------------|---------|----------
- `mechanism` | `string` | Authentication mechanism. | | yes
- `connection_string` | `string` | Event Hubs ConnectionString for authentication on Azure Cloud. | | no
- `scopes` | `list(string)` | Access token scopes. Default is `fully_qualified_namespace` without port. | | no
+| Name | Type | Description | Default | Required |
+| ------------------- | -------------- | ------------------------------------------------------------------------- | ------- | -------- |
+| `mechanism` | `string` | Authentication mechanism. | | yes |
+| `connection_string` | `string` | Event Hubs ConnectionString for authentication on Azure Cloud. | | no |
+| `scopes` | `list(string)` | Access token scopes. 
Default is `fully_qualified_namespace` without port. | | no |

`mechanism` supports the values `"connection_string"` and `"oauth"`. If `"connection_string"` is used, you
must set the `connection_string` attribute. If `"oauth"` is used, you must configure one of the supported credential
diff --git a/docs/sources/flow/reference/components/loki.source.cloudflare.md b/docs/sources/flow/reference/components/loki.source.cloudflare.md
index dbbd2e57b1dc..696e5858de47 100644
--- a/docs/sources/flow/reference/components/loki.source.cloudflare.md
+++ b/docs/sources/flow/reference/components/loki.source.cloudflare.md
@@ -1,9 +1,9 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/loki.source.cloudflare/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.cloudflare/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.cloudflare/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.cloudflare/
+  - /docs/grafana-cloud/agent/flow/reference/components/loki.source.cloudflare/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.cloudflare/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.cloudflare/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.cloudflare/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.cloudflare/
 description: Learn about loki.source.cloudflare
 title: loki.source.cloudflare
@@ -36,47 +36,54 @@ loki.source.cloudflare "LABEL" {

`loki.source.cloudflare` supports the following arguments:

-Name | Type | Description | Default | Required
---------------- | -------------------- | -------------------- | ------- | --------
-`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
-`api_token` | `string` | The API token to authenticate with. | | yes
-`zone_id` | `string` | The Cloudflare zone ID to use. | | yes
-`labels` | `map(string)` | The labels to associate with incoming log entries. | `{}` | no
-`workers` | `int` | The number of workers to use for parsing logs. | `3` | no
-`pull_range` | `duration` | The timeframe to fetch for each pull request. | `"1m"` | no
-`fields_type` | `string` | The set of fields to fetch for log entries. | `"default"` | no
-`additional_fields` | `list(string)` | The additional list of fields to supplement those provided via `fields_type`. | | no
-
+| Name | Type | Description | Default | Required |
+| ------------------- | -------------------- | ----------------------------------------------------------------------------- | ----------- | -------- |
+| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes |
+| `api_token` | `string` | The API token to authenticate with. | | yes |
+| `zone_id` | `string` | The Cloudflare zone ID to use. | | yes |
+| `labels` | `map(string)` | The labels to associate with incoming log entries. | `{}` | no |
+| `workers` | `int` | The number of workers to use for parsing logs. 
| `3` | no | +| `pull_range` | `duration` | The timeframe to fetch for each pull request. | `"1m"` | no | +| `fields_type` | `string` | The set of fields to fetch for log entries. | `"default"` | no | +| `additional_fields` | `list(string)` | The additional list of fields to supplement those provided via `fields_type`. | | no | By default `loki.source.cloudflare` fetches logs with the `default` set of fields. Here are the different sets of `fields_type` available for selection, and the fields they include: -* `default` includes: +- `default` includes: + ``` "ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeRequestHost", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID" ``` + plus any extra fields provided via `additional_fields` argument. -* `minimal` includes all `default` fields and adds: +- `minimal` includes all `default` fields and adds: + ``` "ZoneID", "ClientSSLProtocol", "ClientRequestProtocol", "ClientRequestPath", "ClientRequestUserAgent", "ClientRequestReferer", "EdgeColoCode", "ClientCountry", "CacheCacheStatus", "CacheResponseStatus", "EdgeResponseContentType" ``` + plus any extra fields provided via `additional_fields` argument. -* `extended` includes all `minimal` fields and adds: +- `extended` includes all `minimal` fields and adds: + ``` "ClientSSLCipher", "ClientASN", "ClientIPClass", "CacheResponseBytes", "EdgePathingOp", "EdgePathingSrc", "EdgePathingStatus", "ParentRayID", "WorkerCPUTime", "WorkerStatus", "WorkerSubrequest", "WorkerSubrequestCount", "OriginIP", "OriginResponseStatus", "OriginSSLProtocol", "OriginResponseHTTPExpires", "OriginResponseHTTPLastModified" ``` + plus any extra fields provided via `additional_fields` argument. -* `all` includes all `extended` fields and adds: +- `all` includes all `extended` fields and adds: + ``` "BotScore", "BotScoreSrc", "BotTags", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources", "FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID", "RequestHeaders", "ResponseHeaders", "ClientRequestSource"` ``` + plus any extra fields provided via `additional_fields` argument (this is still relevant in this case if new fields are made available via Cloudflare API but are not yet included in `all`). -* `custom` includes only the fields defined in `additional_fields`. +- `custom` includes only the fields defined in `additional_fields`. The component saves the last successfully-fetched timestamp in its positions file. If a position is found in the file for a given zone ID, the component @@ -95,6 +102,7 @@ The last timestamp fetched by the component is recorded in the All incoming Cloudflare log entries are in JSON format. You can make use of the `loki.process` component and a JSON processing stage to extract more labels or change the log line format. A sample log looks like this: + ```json { "CacheCacheStatus": "miss", @@ -165,7 +173,6 @@ change the log line format. A sample log looks like this: } ``` - ## Exported fields `loki.source.cloudflare` does not export any fields. @@ -178,17 +185,19 @@ configuration. ## Debug information `loki.source.cloudflare` exposes the following debug information: -* Whether the target is ready and reading logs from the API. -* The Cloudflare zone ID. -* The last error reported, if any. 
-* The stored positions file entry, as the combination of zone_id, labels and + +- Whether the target is ready and reading logs from the API. +- The Cloudflare zone ID. +- The last error reported, if any. +- The stored positions file entry, as the combination of zone_id, labels and last fetched timestamp. -* The last timestamp fetched. -* The set of fields being fetched. +- The last timestamp fetched. +- The set of fields being fetched. ## Debug metrics -* `loki_source_cloudflare_target_entries_total` (counter): Total number of successful entries sent via the cloudflare target. -* `loki_source_cloudflare_target_last_requested_end_timestamp` (gauge): The last cloudflare request end timestamp fetched, for calculating how far behind the target is. + +- `loki_source_cloudflare_target_entries_total` (counter): Total number of successful entries sent via the cloudflare target. +- `loki_source_cloudflare_target_last_requested_end_timestamp` (gauge): The last cloudflare request end timestamp fetched, for calculating how far behind the target is. ## Example @@ -209,6 +218,7 @@ loki.write "local" { } } ``` + ## Compatible components @@ -217,7 +227,6 @@ loki.write "local" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/loki.source.docker.md b/docs/sources/flow/reference/components/loki.source.docker.md index 09b88a743645..ef86c4b46f14 100644 --- a/docs/sources/flow/reference/components/loki.source.docker.md +++ b/docs/sources/flow/reference/components/loki.source.docker.md @@ -1,10 +1,10 @@ --- aliases: -- /docs/agent/latest/flow/reference/components/loki.source.docker/ -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.docker/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.docker/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.docker/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.docker/ + - /docs/agent/latest/flow/reference/components/loki.source.docker/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.docker/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.docker/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.docker/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.docker/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.docker/ description: Learn about loki.source.docker title: loki.source.docker @@ -30,32 +30,33 @@ loki.source.docker "LABEL" { ``` ## Arguments + The component starts a new reader for each of the given `targets` and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.docker` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | -------------------- | -------------------- | ------- | -------- -`host` | `string` | Address of the Docker daemon. | | yes -`targets` | `list(map(string))` | List of containers to read logs from. | | yes -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes -`labels` | `map(string)` | The default set of labels to apply on entries. | `"{}"` | no -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `"{}"` | no -`refresh_interval` | `duration` | The refresh interval to use when connecting to the Docker daemon over HTTP(S). | `"60s"` | no +| Name | Type | Description | Default | Required | +| ------------------ | -------------------- | ------------------------------------------------------------------------------ | ------- | -------- | +| `host` | `string` | Address of the Docker daemon. | | yes | +| `targets` | `list(map(string))` | List of containers to read logs from. | | yes | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `labels` | `map(string)` | The default set of labels to apply on entries. | `"{}"` | no | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `"{}"` | no | +| `refresh_interval` | `duration` | The refresh interval to use when connecting to the Docker daemon over HTTP(S). | `"60s"` | no | ## Blocks The following blocks are supported inside the definition of `loki.source.docker`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | HTTP client settings when connecting to the endpoint. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ---------------------------- | ----------------- | -------------------------------------------------------- | -------- | +| client | [client][] | HTTP client settings when connecting to the endpoint. | no | +| client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > basic_auth` refers to an `basic_auth` block defined inside a `client` block. @@ -116,16 +117,18 @@ configuration. ## Debug information `loki.source.docker` exposes some debug information per target: -* Whether the target is ready to tail entries. -* The labels associated with the target. -* The most recent time a log line was read. + +- Whether the target is ready to tail entries. +- The labels associated with the target. +- The most recent time a log line was read. ## Debug metrics -* `loki_source_docker_target_entries_total` (gauge): Total number of successful entries sent to the Docker target. -* `loki_source_docker_target_parsing_errors_total` (gauge): Total number of parsing errors while receiving Docker messages. +- `loki_source_docker_target_entries_total` (gauge): Total number of successful entries sent to the Docker target. 
+- `loki_source_docker_target_parsing_errors_total` (gauge): Total number of parsing errors while receiving Docker messages. ## Component behavior + The component uses its data path, a directory named after the domain's fully qualified name, to store its _positions file_. The positions file is used to store read offsets, so that if a component or {{< param "PRODUCT_ROOT_NAME" >}} restarts, @@ -135,7 +138,7 @@ If the target's argument contains multiple entries with the same container ID (for example as a result of `discovery.docker` picking up multiple exposed ports or networks), `loki.source.docker` will deduplicate them, and only keep the first of each container ID instances, based on the -`__meta_docker_container_id` label. As such, the Docker daemon is queried +`__meta_docker_container_id` label. As such, the Docker daemon is queried for each container ID only once, and only one target will be available in the component's debug info. @@ -151,7 +154,7 @@ discovery.docker "linux" { loki.source.docker "default" { host = "unix:///var/run/docker.sock" - targets = discovery.docker.linux.targets + targets = discovery.docker.linux.targets forward_to = [loki.write.local.receiver] } @@ -171,7 +174,6 @@ loki.write "local" { - Components that export [Targets](../../compatibility/#targets-exporters) - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/loki.source.file.md b/docs/sources/flow/reference/components/loki.source.file.md index 8fe8354850bf..bed2fbf58a81 100644 --- a/docs/sources/flow/reference/components/loki.source.file.md +++ b/docs/sources/flow/reference/components/loki.source.file.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.file/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.file/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.file/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.file/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.file/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.file/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.file/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.file/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.file/ description: Learn about loki.source.file title: loki.source.file @@ -38,12 +38,12 @@ log entries to the list of receivers passed in `forward_to`. `loki.source.file` supports the following arguments: | Name | Type | Description | Default | Required | -| ------------------------| -------------------- | ----------------------------------------------------------------------------------- | ------- | -------- | +| ----------------------- | -------------------- | ----------------------------------------------------------------------------------- | ------- | -------- | | `targets` | `list(map(string))` | List of files to read from. | | yes | | `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes | | `encoding` | `string` | The encoding to convert from when reading files. | `""` | no | | `tail_from_end` | `bool` | Whether a log file should be tailed from the end if a stored position is not found. | `false` | no | -| `legacy_positions_file` | `string` | Allows conversion from legacy positions file. | `""` | no | +| `legacy_positions_file` | `string` | Allows conversion from legacy positions file. | `""` | no | The `encoding` argument must be a valid [IANA encoding][] name. If not set, it defaults to UTF-8. @@ -51,7 +51,6 @@ defaults to UTF-8. You can use the `tail_from_end` argument when you want to tail a large file without reading its entire content. When set to true, only new logs will be read, ignoring the existing ones. - {{< admonition type="note" >}} The `legacy_positions_file` argument is used when you are transitioning from legacy. The legacy positions file will be rewritten into the new format. This operation will only occur if the new positions file does not exist and the `legacy_positions_file` is valid. @@ -64,10 +63,10 @@ The legacy positions file did not have a concept of labels in the positions file The following blocks are supported inside the definition of `loki.source.file`: -| Hierarchy | Name | Description | Required | -| -------------- | ------------------ | ----------------------------------------------------------------- | -------- | -| decompression | [decompression][] | Configure reading logs from compressed files. | no | -| file_watch | [file_watch][] | Configure how often files should be polled from disk for changes. | no | +| Hierarchy | Name | Description | Required | +| ------------- | ----------------- | ----------------------------------------------------------------- | -------- | +| decompression | [decompression][] | Configure reading logs from compressed files. | no | +| file_watch | [file_watch][] | Configure how often files should be polled from disk for changes. | no | [decompression]: #decompression-block [file_watch]: #file_watch-block @@ -258,7 +257,6 @@ loki.write "local" { - Components that export [Targets](../../compatibility/#targets-exporters) - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
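A short sketch that combines the arguments and blocks covered above: `local.file_match` discovers the files, `tail_from_end` skips historical content on first start, and `file_watch` relaxes the default polling frequency. The path and the `loki.write.local` target are placeholders:

```river
local.file_match "app_logs" {
  path_targets = [{"__path__" = "/var/log/app/*.log"}] // placeholder glob
}

loki.source.file "app" {
  targets       = local.file_match.app_logs.targets
  forward_to    = [loki.write.local.receiver] // assumed to exist elsewhere
  tail_from_end = true // only read logs written after the first start

  file_watch {
    min_poll_frequency = "1s"
    max_poll_frequency = "5s"
  }
}
```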
diff --git a/docs/sources/flow/reference/components/loki.source.gcplog.md b/docs/sources/flow/reference/components/loki.source.gcplog.md index d57cf28cc06b..951b7df5af3d 100644 --- a/docs/sources/flow/reference/components/loki.source.gcplog.md +++ b/docs/sources/flow/reference/components/loki.source.gcplog.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.gcplog/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.gcplog/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.gcplog/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.gcplog/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.gcplog/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.gcplog/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.gcplog/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.gcplog/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.gcplog/ description: Learn about loki.source.gcplog title: loki.source.gcplog @@ -39,7 +39,7 @@ loki.source.gcplog "LABEL" { `loki.source.gcplog` supports the following arguments: | Name | Type | Description | Default | Required | -|-----------------|----------------------|-------------------------------------------|---------|----------| +| --------------- | -------------------- | ----------------------------------------- | ------- | -------- | | `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | | `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no | @@ -49,7 +49,7 @@ The following blocks are supported inside the definition of `loki.source.gcplog`: | Hierarchy | Name | Description | Required | -|-------------|----------|-------------------------------------------------------------------------------|----------| +| ----------- | -------- | ----------------------------------------------------------------------------- | -------- | | pull | [pull][] | Configures a target to pull logs from a GCP Pub/Sub subscription. | no | | push | [push][] | Configures a server to receive logs as GCP Pub/Sub push requests. | no | | push > http | [http][] | Configures the HTTP server that receives requests when using the `push` mode. | no | @@ -73,7 +73,7 @@ The following arguments can be used to configure the `pull` block. Any omitted fields take their default values. | Name | Type | Description | Default | Required | -|--------------------------|---------------|---------------------------------------------------------------------------|---------|----------| +| ------------------------ | ------------- | ------------------------------------------------------------------------- | ------- | -------- | | `project_id` | `string` | The GCP project id the subscription belongs to. | | yes | | `subscription` | `string` | The subscription to pull logs from. | | yes | | `labels` | `map(string)` | Additional labels to associate with incoming logs. | `"{}"` | no | @@ -100,7 +100,7 @@ The following arguments can be used to configure the `push` block. Any omitted fields take their default values. 
| Name | Type | Description | Default | Required | -|-----------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------| +| --------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | | `graceful_shutdown_timeout` | `duration` | Timeout for the server's graceful shutdown. If configured, it should be greater than zero. | `"30s"` | no | | `push_timeout` | `duration` | Sets a maximum processing time for each incoming GCP log entry. | `"0s"` | no | | `labels` | `map(string)` | Additional labels to associate with incoming entries. | `"{}"` | no | @@ -136,23 +136,25 @@ configuration. ## Debug information `loki.source.gcplog` exposes some debug information per gcplog listener: -* The configured strategy. -* Their label set. -* When using a `push` strategy, the listen address. + +- The configured strategy. +- Their label set. +- When using a `push` strategy, the listen address. ## Debug metrics When using the `pull` strategy, the component exposes the following debug metrics: -* `loki_source_gcplog_pull_entries_total` (counter): Number of entries received by the gcplog target. -* `loki_source_gcplog_pull_parsing_errors_total` (counter): Total number of parsing errors while receiving gcplog messages. -* `loki_source_gcplog_pull_last_success_scrape` (gauge): Timestamp of target's last successful poll. + +- `loki_source_gcplog_pull_entries_total` (counter): Number of entries received by the gcplog target. +- `loki_source_gcplog_pull_parsing_errors_total` (counter): Total number of parsing errors while receiving gcplog messages. +- `loki_source_gcplog_pull_last_success_scrape` (gauge): Timestamp of target's last successful poll. When using the `push` strategy, the component exposes the following debug metrics: -* `loki_source_gcplog_push_entries_total` (counter): Number of entries received by the gcplog target. -* `loki_source_gcplog_push_entries_total` (counter): Number of parsing errors while receiving gcplog messages. +- `loki_source_gcplog_push_entries_total` (counter): Number of entries received by the gcplog target. +- `loki_source_gcplog_push_parsing_errors_total` (counter): Number of parsing errors while receiving gcplog messages. ## Example @@ -193,6 +195,7 @@ loki.write "local" { } } ``` + ## Compatible components @@ -201,7 +204,6 @@ loki.write "local" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
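To tie the `push` block arguments together, here is a minimal push-mode sketch. The listen port, labels, and `loki.write` target are placeholder assumptions for illustration.

```river
loki.source.gcplog "push" {
  push {
    http {
      // GCP Pub/Sub delivers push requests to this port.
      listen_port = 8080
    }

    // Give in-flight push requests time to complete on shutdown.
    graceful_shutdown_timeout = "30s"

    labels = {
      region = "us-east1",
    }
  }

  forward_to = [loki.write.local.receiver]
}

loki.write "local" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }
}
```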
diff --git a/docs/sources/flow/reference/components/loki.source.gelf.md b/docs/sources/flow/reference/components/loki.source.gelf.md index eec3ef5c9af8..044ea6a63bb2 100644 --- a/docs/sources/flow/reference/components/loki.source.gelf.md +++ b/docs/sources/flow/reference/components/loki.source.gelf.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.gelf/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.gelf/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.gelf/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.gelf/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.gelf/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.gelf/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.gelf/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.gelf/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.gelf/ description: Learn about loki.source.gelf title: loki.source.gelf @@ -26,17 +26,17 @@ loki.source.gelf "LABEL" { ``` ## Arguments + The component starts a new UDP listener and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.gelf` supports the following arguments: -Name | Type | Description | Default | Required ------------- |----------------------|--------------------------------------------------------------------------------|----------------------------| -------- -`listen_address` | `string` | UDP address and port to listen for Graylog messages. | `0.0.0.0:12201` | no -`use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed | `false` | no -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no - +| Name | Type | Description | Default | Required | +| ------------------------ | -------------- | -------------------------------------------------------------------------- | --------------- | -------- | +| `listen_address` | `string` | UDP address and port to listen for Graylog messages. | `0.0.0.0:12201` | no | +| `use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed. | `false` | no | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no | > **NOTE**: GELF logs can be sent uncompressed or compressed with GZIP or ZLIB. > A `job` label is added with the full name of the component `loki.source.gelf.LABEL`. @@ -47,10 +47,10 @@ before they're forwarded to the list of receivers specified in `forward_to`. Incoming messages have the following internal labels available: -* `__gelf_message_level`: The GELF level as a string. -* `__gelf_message_host`: The host sending the GELF message. -* `__gelf_message_host`: The GELF level message version sent by the client. -* `__gelf_message_facility`: The GELF facility. +- `__gelf_message_level`: The GELF level as a string. +- `__gelf_message_host`: The host sending the GELF message. +- `__gelf_message_version`: The GELF message version sent by the client. +- `__gelf_message_facility`: The GELF facility. All labels starting with `__` are removed prior to forwarding log entries. To keep these labels, relabel them using a [loki.relabel][] component and pass its @@ -65,8 +65,8 @@ configuration.
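As a sketch of the relabeling pattern just described (assuming a hypothetical local Loki endpoint), the internal GELF level can be preserved as a `level` label before the `__`-prefixed labels are dropped:

```river
loki.relabel "gelf" {
  // Only the exported rules are used below, so no receiver is needed.
  forward_to = []

  rule {
    source_labels = ["__gelf_message_level"]
    target_label  = "level"
  }
}

loki.source.gelf "listener" {
  forward_to    = [loki.write.local.receiver]
  relabel_rules = loki.relabel.gelf.rules
}

loki.write "local" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }
}
```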
## Debug Metrics -* `gelf_target_entries_total` (counter): Total number of successful entries sent to the GELF target. -* `gelf_target_parsing_errors_total` (counter): Total number of parsing errors while receiving GELF messages. +- `gelf_target_entries_total` (counter): Total number of successful entries sent to the GELF target. +- `gelf_target_parsing_errors_total` (counter): Total number of parsing errors while receiving GELF messages. ## Example @@ -89,6 +89,7 @@ loki.write "endpoint" { } } ``` + ## Compatible components @@ -97,7 +98,6 @@ loki.write "endpoint" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/loki.source.heroku.md b/docs/sources/flow/reference/components/loki.source.heroku.md index 62aaff4db741..8521201e2d10 100644 --- a/docs/sources/flow/reference/components/loki.source.heroku.md +++ b/docs/sources/flow/reference/components/loki.source.heroku.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.heroku/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.heroku/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.heroku/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.heroku/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.heroku/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.heroku/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.heroku/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.heroku/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.heroku/ description: Learn about loki.source.heroku title: loki.source.heroku @@ -42,13 +42,13 @@ loki.source.heroku "LABEL" { `loki.source.heroku` supports the following arguments: -Name | Type | Description | Default | Required -----------------------------|----------------------|------------------------------------------------------------------------------------|---------|--------- -`use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Heroku. | `false` | no -`labels` | `map(string)` | The labels to associate with each received Heroku record. | `{}` | no -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no -`graceful_shutdown_timeout` | `duration` | Timeout for servers graceful shutdown. If configured, should be greater than zero. | "30s" | no +| Name | Type | Description | Default | Required | +| --------------------------- | -------------------- | ---------------------------------------------------------------------------------- | ------- | -------- | +| `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Heroku. | `false` | no | +| `labels` | `map(string)` | The labels to associate with each received Heroku record. | `{}` | no | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. 
| | yes | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | +| `graceful_shutdown_timeout` | `duration` | Timeout for the server's graceful shutdown. If configured, it should be greater than zero. | `"30s"` | no | The `relabel_rules` field can make use of the `rules` export value from a `loki.relabel` component to apply one or more relabeling rules to log entries @@ -58,10 +58,10 @@ before they're forwarded to the list of receivers in `forward_to`. The following blocks are supported inside the definition of `loki.source.heroku`: -Hierarchy | Name | Description | Required ----------|----------|----------------------------------------------------|--------- -`http` | [http][] | Configures the HTTP server that receives requests. | no -`grpc` | [grpc][] | Configures the gRPC server that receives requests. | no +| Hierarchy | Name | Description | Required | +| --------- | -------- | -------------------------------------------------- | -------- | +| `http` | [http][] | Configures the HTTP server that receives requests. | no | +| `grpc` | [grpc][] | Configures the gRPC server that receives requests. | no | [http]: #http [grpc]: #grpc @@ -79,6 +79,7 @@ Hierarchy | Name | Description | Requ The `labels` map is applied to every message that the component reads. The following internal labels all prefixed with `__` are available but will be discarded if not relabeled: + - `__heroku_drain_host` - `__heroku_drain_app` - `__heroku_drain_proc` @@ -100,12 +101,14 @@ configuration. ## Debug information `loki.source.heroku` exposes some debug information per Heroku listener: -* Whether the listener is currently running. -* The listen address. + +- Whether the listener is currently running. +- The listen address. ## Debug metrics -* `loki_source_heroku_drain_entries_total` (counter): Number of successful entries received by the Heroku target. -* `loki_source_heroku_drain_parsing_errors_total` (counter): Number of parsing errors while receiving Heroku messages. + +- `loki_source_heroku_drain_entries_total` (counter): Number of successful entries received by the Heroku target. +- `loki_source_heroku_drain_parsing_errors_total` (counter): Number of parsing errors while receiving Heroku messages. ## Example @@ -144,6 +147,7 @@ loki.write "local" { } } ``` + ## Compatible components @@ -152,7 +156,6 @@ loki.write "local" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
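As an illustration of the arguments above, a minimal Heroku drain sketch might look like the following. The listen port, labels, and shutdown timeout are placeholder assumptions, and a `loki.write "local"` component like the one in the earlier examples is assumed to exist.

```river
loki.source.heroku "drain" {
  http {
    listen_address = "0.0.0.0"
    // Heroku HTTPS drains would be pointed at this port.
    listen_port = 8080
  }

  use_incoming_timestamp = true
  labels                 = {source = "heroku"}

  // Allow in-flight drain requests up to one minute to finish on shutdown.
  graceful_shutdown_timeout = "1m"

  forward_to = [loki.write.local.receiver]
}
```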
diff --git a/docs/sources/flow/reference/components/loki.source.journal.md b/docs/sources/flow/reference/components/loki.source.journal.md index de776c97b7ab..624fff4e2229 100644 --- a/docs/sources/flow/reference/components/loki.source.journal.md +++ b/docs/sources/flow/reference/components/loki.source.journal.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.journal/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.journal/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.journal/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.journal/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.journal/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.journal/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.journal/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.journal/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.journal/ description: Learn about loki.source.journal title: loki.source.journal @@ -26,22 +26,23 @@ loki.source.journal "LABEL" { ``` ## Arguments + The component starts a new journal reader and fans out log entries to the list of receivers passed in `forward_to`. `loki.source.journal` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`format_as_json` | `bool` | Whether to forward the original journal entry as JSON. | `false` | no -`max_age` | `duration` | The oldest relative time from process start that will be read. | `"7h"` | no -`path` | `string` | Path to a directory to read entries from. | `""` | no -`matches` | `string` | Journal matches to filter. The `+` character is not supported, only logical AND matches will be added. | `""` | no -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no -`labels` | `map(string)` | The labels to apply to every log coming out of the journal. | `{}` | no +| Name | Type | Description | Default | Required | +| ---------------- | -------------------- | ------------------------------------------------------------------------------------------------------ | ------- | -------- | +| `format_as_json` | `bool` | Whether to forward the original journal entry as JSON. | `false` | no | +| `max_age` | `duration` | The oldest relative time from process start that will be read. | `"7h"` | no | +| `path` | `string` | Path to a directory to read entries from. | `""` | no | +| `matches` | `string` | Journal matches to filter. The `+` character is not supported; only logical AND matches will be added. | `""` | no | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | +| `labels` | `map(string)` | The labels to apply to every log coming out of the journal. | `{}` | no | -> **NOTE**: A `job` label is added with the full name of the component `loki.source.journal.LABEL`. +> **NOTE**: A `job` label is added with the full name of the component `loki.source.journal.LABEL`.
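To illustrate the `matches` syntax above, here is a minimal sketch. The unit name is a hypothetical example, and a downstream `loki.write "local"` component is assumed.

```river
loki.source.journal "sshd" {
  // Multiple space-separated matches are combined with logical AND.
  matches = "_SYSTEMD_UNIT=sshd.service"

  // Only read entries newer than 12 hours relative to process start.
  max_age = "12h"

  forward_to = [loki.write.local.receiver]
}
```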
When the `format_as_json` argument is true, log messages are passed through as JSON with all of the original fields from the journal entry. Otherwise, the log @@ -74,8 +75,8 @@ configuration. ## Debug Metrics -* `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages. -* `agent_loki_source_journal_target_lines_total` (counter): Total number of successful journal lines read. +- `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages. +- `agent_loki_source_journal_target_lines_total` (counter): Total number of successful journal lines read. ## Example @@ -101,6 +102,7 @@ loki.write "endpoint" { } } ``` + ## Compatible components @@ -109,7 +111,6 @@ loki.write "endpoint" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/loki.source.kafka.md b/docs/sources/flow/reference/components/loki.source.kafka.md index e7aaa2e59905..df7f5c7fee8d 100644 --- a/docs/sources/flow/reference/components/loki.source.kafka.md +++ b/docs/sources/flow/reference/components/loki.source.kafka.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.kafka/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.kafka/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.kafka/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.kafka/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.kafka/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.kafka/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.kafka/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.kafka/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kafka/ description: Learn about loki.source.kafka title: loki.source.kafka @@ -39,17 +39,17 @@ loki.source.kafka "LABEL" { `loki.source.kafka` supports the following arguments: - Name | Type | Description | Default | Required ---------------------------|----------------------|----------------------------------------------------------|-----------------------|---------- - `brokers` | `list(string)` | The list of brokers to connect to Kafka. | | yes - `topics` | `list(string)` | The list of Kafka topics to consume. | | yes - `group_id` | `string` | The Kafka consumer group id. | `"loki.source.kafka"` | no - `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no - `version` | `string` | Kafka version to connect to. | `"2.2.1"` | no - `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Kafka. | `false` | no - `labels` | `map(string)` | The labels to associate with each received Kafka event. | `{}` | no - `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes - `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. 
| `{}` | no +| Name | Type | Description | Default | Required | +| ------------------------ | -------------------- | -------------------------------------------------------- | --------------------- | -------- | +| `brokers` | `list(string)` | The list of brokers to connect to Kafka. | | yes | +| `topics` | `list(string)` | The list of Kafka topics to consume. | | yes | +| `group_id` | `string` | The Kafka consumer group id. | `"loki.source.kafka"` | no | +| `assignor` | `string` | The consumer group rebalancing strategy to use. | `"range"` | no | +| `version` | `string` | Kafka version to connect to. | `"2.2.1"` | no | +| `use_incoming_timestamp` | `bool` | Whether or not to use the timestamp received from Kafka. | `false` | no | +| `labels` | `map(string)` | The labels to associate with each received Kafka event. | `{}` | no | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no | `assignor` values can be either `"range"`, `"roundrobin"`, or `"sticky"`. @@ -78,29 +78,26 @@ keep these labels, relabel them using a [loki.relabel][] component and pass its The following blocks are supported inside the definition of `loki.source.kafka`: - Hierarchy | Name | Description | Required ----------------------------------------------|------------------|-----------------------------------------------------------|---------- - authentication | [authentication] | Optional authentication configuration with Kafka brokers. | no - authentication > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no - authentication > sasl_config | [sasl_config] | Optional authentication configuration with Kafka brokers. | no - authentication > sasl_config > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no - authentication > sasl_config > oauth_config | [oauth_config] | Optional authentication configuration with Kafka brokers. | no +| Hierarchy | Name | Description | Required | +| ------------------------------------------- | ---------------- | --------------------------------------------------------- | -------- | +| authentication | [authentication] | Optional authentication configuration with Kafka brokers. | no | +| authentication > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no | +| authentication > sasl_config | [sasl_config] | Optional authentication configuration with Kafka brokers. | no | +| authentication > sasl_config > tls_config | [tls_config] | Optional authentication configuration with Kafka brokers. | no | +| authentication > sasl_config > oauth_config | [oauth_config] | Optional authentication configuration with Kafka brokers. | no | [authentication]: #authentication-block - [tls_config]: #tls_config-block - [sasl_config]: #sasl_config-block - [oauth_config]: #oauth_config-block ### authentication block The `authentication` block defines the authentication method when communicating with the Kafka event brokers. - Name | Type | Description | Default | Required ---------|----------|-------------------------|----------|---------- - `type` | `string` | Type of authentication. | `"none"` | no +| Name | Type | Description | Default | Required | +| ------ | -------- | ----------------------- | -------- | -------- | +| `type` | `string` | Type of authentication. | `"none"` | no | `type` supports the values `"none"`, `"ssl"`, and `"sasl"`. 
If `"ssl"` is used, you must set the `tls_config` block. If `"sasl"` is used, you must set the `sasl_config` block. @@ -114,21 +111,21 @@ you must set the `tls_config` block. If `"sasl"` is used, you must set the `sasl The `sasl_config` block defines the listen address and port where the listener expects Kafka messages to be sent to. - Name | Type | Description | Default | Required --------------|----------|--------------------------------------------------------------------|----------|----------------------- - `mechanism` | `string` | Specifies the SASL mechanism the client uses to authenticate with the broker. | `"PLAIN""` | no - `user` | `string` | The user name to use for SASL authentication. | `""` | no - `password` | `secret` | The password to use for SASL authentication. | `""` | no - `use_tls` | `bool` | If true, SASL authentication is executed over TLS. | `false` | no +| Name | Type | Description | Default | Required | +| ----------- | -------- | ----------------------------------------------------------------------------- | ---------- | -------- | +| `mechanism` | `string` | Specifies the SASL mechanism the client uses to authenticate with the broker. | `"PLAIN""` | no | +| `user` | `string` | The user name to use for SASL authentication. | `""` | no | +| `password` | `secret` | The password to use for SASL authentication. | `""` | no | +| `use_tls` | `bool` | If true, SASL authentication is executed over TLS. | `false` | no | ### oauth_config block The `oauth_config` is required when the SASL mechanism is set to `OAUTHBEARER`. - Name | Type | Description | Default | Required -------------------|----------------|------------------------------------------------------------------------|---------|---------- - `token_provider` | `string` | The OAuth provider to be used. The only supported provider is `azure`. | `""` | yes - `scopes` | `list(string)` | The scopes to set in the access token | `[]` | yes +| Name | Type | Description | Default | Required | +| ---------------- | -------------- | ---------------------------------------------------------------------- | ------- | -------- | +| `token_provider` | `string` | The OAuth provider to be used. The only supported provider is `azure`. | `""` | yes | +| `scopes` | `list(string)` | The scopes to set in the access token | `[]` | yes | ## Exported fields @@ -182,7 +179,6 @@ loki.write "local" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
diff --git a/docs/sources/flow/reference/components/loki.source.kubernetes.md b/docs/sources/flow/reference/components/loki.source.kubernetes.md index eb79e6cf817d..42e65066c9a5 100644 --- a/docs/sources/flow/reference/components/loki.source.kubernetes.md +++ b/docs/sources/flow/reference/components/loki.source.kubernetes.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.kubernetes/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.kubernetes/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.kubernetes/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.kubernetes/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kubernetes/ description: Learn about loki.source.kubernetes labels: @@ -18,10 +18,10 @@ title: loki.source.kubernetes `loki.source.kubernetes` tails logs from Kubernetes containers using the Kubernetes API. It has the following benefits over `loki.source.file`: -* It works without a privileged container. -* It works without a root user. -* It works without needing access to the filesystem of the Kubernetes node. -* It doesn't require a DaemonSet to collect logs, so one {{< param "PRODUCT_ROOT_NAME" >}} could collect +- It works without a privileged container. +- It works without a root user. +- It works without needing access to the filesystem of the Kubernetes node. +- It doesn't require a DaemonSet to collect logs, so one {{< param "PRODUCT_ROOT_NAME" >}} could collect logs for the whole cluster. > **NOTE**: Because `loki.source.kubernetes` uses the Kubernetes API to tail @@ -47,20 +47,20 @@ log entries to the list of receivers passed in `forward_to`. `loki.source.kubernetes` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`targets` | `list(map(string))` | List of files to read from. | | yes -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +| Name | Type | Description | Default | Required | +| ------------ | -------------------- | ----------------------------------------- | ------- | -------- | +| `targets` | `list(map(string))` | List of targets to tail logs from. | | yes | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | Each target in `targets` must have the following labels: -* `__meta_kubernetes_namespace` or `__pod_namespace__` to specify the namespace +- `__meta_kubernetes_namespace` or `__pod_namespace__` to specify the namespace of the pod to tail. -* `__meta_kubernetes_pod_name` or `__pod_name__` to specify the name of the pod +- `__meta_kubernetes_pod_name` or `__pod_name__` to specify the name of the pod to tail. -* `__meta_kubernetes_pod_container_name` or `__pod_container_name__` to specify +- `__meta_kubernetes_pod_container_name` or `__pod_container_name__` to specify the container within the pod to tail.
-* `__meta_kubernetes_pod_uid` or `__pod_uid__` to specify the UID of the pod to +- `__meta_kubernetes_pod_uid` or `__pod_uid__` to specify the UID of the pod to tail. By default, all of these labels are present when the output @@ -75,15 +75,15 @@ before the container has permanently terminated. The following blocks are supported inside the definition of `loki.source.kubernetes`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to tail logs. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no +| Hierarchy | Block | Description | Required | +| ---------------------------- | ----------------- | ------------------------------------------------------------------------------------------- | -------- | +| client | [client][] | Configures Kubernetes client used to tail logs. | no | +| client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > basic_auth` refers to a `basic_auth` block defined @@ -105,25 +105,26 @@ used. The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument][client]. 
- - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `api_server` | `string` | URL of the Kubernetes API server. | | no | +| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][client]. +- [`bearer_token_file` argument][client]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -145,9 +146,9 @@ Name | Type | Description ### clustering block -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes +| Name | Type | Description | Default | Required | +| --------- | ------ | --------------------------------------------------- | ------- | -------- | +| `enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes | When {{< param "PRODUCT_ROOT_NAME" >}} is [using clustering][], and `enabled` is set to true, then this `loki.source.kubernetes` component instance opts-in to participating in the @@ -173,11 +174,11 @@ configuration. `loki.source.kubernetes` exposes some target-level debug information per target: -* The labels associated with the target. -* The full set of labels which were found during service discovery. -* The most recent time a log line was read and forwarded to the next components +- The labels associated with the target. +- The full set of labels which were found during service discovery. +- The most recent time a log line was read and forwarded to the next components in the pipeline. -* The most recent error from tailing, if any. +- The most recent error from tailing, if any. ## Debug metrics @@ -214,7 +215,6 @@ loki.write "local" { - Components that export [Targets](../../compatibility/#targets-exporters) - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
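Bringing the clustering block together with service discovery, a sketch might look like this. The `discovery.kubernetes` wiring shown here is one possible way to populate `targets`, and a `loki.write "local"` component is assumed.

```river
discovery.kubernetes "pods" {
  role = "pod"
}

loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.local.receiver]

  clustering {
    // Each clustered agent tails only its share of the discovered pods.
    enabled = true
  }
}
```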
diff --git a/docs/sources/flow/reference/components/loki.source.kubernetes_events.md b/docs/sources/flow/reference/components/loki.source.kubernetes_events.md index 85a1d59637fd..527e166d94b2 100644 --- a/docs/sources/flow/reference/components/loki.source.kubernetes_events.md +++ b/docs/sources/flow/reference/components/loki.source.kubernetes_events.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.kubernetes_events/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.kubernetes_events/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.kubernetes_events/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.kubernetes_events/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.kubernetes_events/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.kubernetes_events/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.kubernetes_events/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.kubernetes_events/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.kubernetes_events/ description: Learn about loki.source.kubernetes_events title: loki.source.kubernetes_events @@ -32,18 +32,18 @@ log entries to the list of receivers passed in `forward_to`. `loki.source.kubernetes_events` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`job_name` | `string` | Value to use for `job` label for generated logs. | `"loki.source.kubernetes_events"` | no -`log_format` | `string` | Format of the log. | `"logfmt"` | no -`namespaces` | `list(string)` | Namespaces to watch for Events in. | `[]` | no -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +| Name | Type | Description | Default | Required | +| ------------ | -------------------- | ------------------------------------------------ | --------------------------------- | -------- | +| `job_name` | `string` | Value to use for `job` label for generated logs. | `"loki.source.kubernetes_events"` | no | +| `log_format` | `string` | Format of the log. | `"logfmt"` | no | +| `namespaces` | `list(string)` | Namespaces to watch for Events in. | `[]` | no | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | By default, `loki.source.kubernetes_events` will watch for events in all namespaces. A list of explicit namespaces to watch can be provided in the `namespaces` argument. -By default, the generated log lines will be in the `logfmt` format. Use the +By default, the generated log lines will be in the `logfmt` format. Use the `log_format` argument to change it to `json`. These formats are also names of LogQL parsers, which can be used for processing the logs. @@ -55,9 +55,9 @@ LogQL parsers, which can be used for processing the logs. Log lines generated by `loki.source.kubernetes_events` have the following labels: -* `namespace`: Namespace of the Kubernetes object involved in the event. -* `job`: Value specified by the `job_name` argument. -* `instance`: Value matching the component ID. +- `namespace`: Namespace of the Kubernetes object involved in the event. +- `job`: Value specified by the `job_name` argument. +- `instance`: Value matching the component ID. 
If `job_name` argument is the empty string, the component will fail to load. To remove the job label, forward the output of `loki.source.kubernetes_events` to @@ -73,14 +73,14 @@ For compatibility with the `eventhandler` integration from static mode, The following blocks are supported inside the definition of `loki.source.kubernetes_events`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to tail logs. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ---------------------------- | ----------------- | -------------------------------------------------------- | -------- | +| client | [client][] | Configures Kubernetes client used to tail logs. | no | +| client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > basic_auth` refers to a `basic_auth` block defined @@ -101,25 +101,26 @@ used. The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. 
+| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `api_server` | `string` | URL of the Kubernetes API server. | | no | +| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][client]. +- [`bearer_token_file` argument][client]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -191,6 +192,7 @@ loki.write "local" { } } ``` + ## Compatible components @@ -199,7 +201,6 @@ loki.write "local" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/loki.source.podlogs.md b/docs/sources/flow/reference/components/loki.source.podlogs.md index 5220c43e373e..f3b352ce7f6e 100644 --- a/docs/sources/flow/reference/components/loki.source.podlogs.md +++ b/docs/sources/flow/reference/components/loki.source.podlogs.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.podlogs/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.podlogs/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.podlogs/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.podlogs/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.podlogs/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.podlogs/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.podlogs/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.podlogs/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.podlogs/ description: Learn about loki.source.podlogs labels: @@ -49,9 +49,9 @@ log entries to the list of receivers passed in `forward_to`. 
`loki.source.podlogs` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes +| Name | Type | Description | Default | Required | +| ------------ | -------------------- | ----------------------------------------- | ------- | -------- | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | `loki.source.podlogs` searches for `PodLogs` resources on Kubernetes. Each `PodLogs` resource describes a set of pods to tail logs from. @@ -64,12 +64,12 @@ The `PodLogs` resource describes a set of Pods to collect logs from. > `monitoring.grafana.com/v1alpha2`, and is not compatible with `PodLogs` from > the {{< param "PRODUCT_ROOT_NAME" >}} Operator, which are version `v1alpha1`. -Field | Type | Description ------ | ---- | ----------- -`apiVersion` | string | `monitoring.grafana.com/v1alpha2` -`kind` | string | `PodLogs` -`metadata` | [ObjectMeta][] | Metadata for the PodLogs. -`spec` | [PodLogsSpec][] | Definition of what Pods to collect logs from. +| Field | Type | Description | +| ------------ | --------------- | --------------------------------------------- | +| `apiVersion` | string | `monitoring.grafana.com/v1alpha2` | +| `kind` | string | `PodLogs` | +| `metadata` | [ObjectMeta][] | Metadata for the PodLogs. | +| `spec` | [PodLogsSpec][] | Definition of what Pods to collect logs from. | [ObjectMeta]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta [PodLogsSpec]: #podlogsspec @@ -78,11 +78,11 @@ `PodLogsSpec` describes a set of Pods to collect logs from. -Field | Type | Description ----- | ---- | ----------- -`selector` | [LabelSelector][] | Label selector of Pods to collect logs from. -`namespaceSelector` | [LabelSelector][] | Label selector of Namespaces that Pods can be discovered in. -`relabelings` | [RelabelConfig][] | Relabel rules to apply to discovered Pods. +| Field | Type | Description | +| ------------------- | ----------------- | ------------------------------------------------------------ | +| `selector` | [LabelSelector][] | Label selector of Pods to collect logs from. | +| `namespaceSelector` | [LabelSelector][] | Label selector of Namespaces that Pods can be discovered in. | +| `relabelings` | [RelabelConfig][] | Relabel rules to apply to discovered Pods. | If `selector` is left as the default value, all Pods are discovered. If `namespaceSelector` is left as the default value, all Namespaces are used for @@ -91,38 +91,38 @@ Pod discovery. The `relabelings` field can be used to modify labels from discovered Pods. The following meta labels are available for relabeling: -* `__meta_kubernetes_namespace`: The namespace of the Pod. -* `__meta_kubernetes_pod_name`: The name of the Pod. -* `__meta_kubernetes_pod_ip`: The pod IP of the Pod. -* `__meta_kubernetes_pod_label_<labelname>`: Each label from the Pod. -* `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from +- `__meta_kubernetes_namespace`: The namespace of the Pod. +- `__meta_kubernetes_pod_name`: The name of the Pod. +- `__meta_kubernetes_pod_ip`: The pod IP of the Pod. +- `__meta_kubernetes_pod_label_<labelname>`: Each label from the Pod. +- `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from the Pod. -* `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the +- `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the Pod.
-* `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each +- `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each annotation from the Pod. -* `__meta_kubernetes_pod_container_init`: `true` if the container is an +- `__meta_kubernetes_pod_container_init`: `true` if the container is an `InitContainer`. -* `__meta_kubernetes_pod_container_name`: Name of the container. -* `__meta_kubernetes_pod_container_image`: The image the container is using. -* `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the Pod's ready +- `__meta_kubernetes_pod_container_name`: Name of the container. +- `__meta_kubernetes_pod_container_image`: The image the container is using. +- `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the Pod's ready state. -* `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or +- `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle. -* `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled +- `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto. -* `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. -* `__meta_kubernetes_pod_uid`: The UID of the Pod. -* `__meta_kubernetes_pod_controller_kind`: Object kind of the Pod's controller. -* `__meta_kubernetes_pod_controller_name`: Name of the Pod's controller. +- `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object. +- `__meta_kubernetes_pod_uid`: The UID of the Pod. +- `__meta_kubernetes_pod_controller_kind`: Object kind of the Pod's controller. +- `__meta_kubernetes_pod_controller_name`: Name of the Pod's controller. In addition to the meta labels, the following labels are exposed to tell `loki.source.podlogs` which container to tail: -* `__pod_namespace__`: The namespace of the Pod. -* `__pod_name__`: The name of the Pod. -* `__pod_container_name__`: The container name within the Pod. -* `__pod_uid__`: The UID of the Pod. +- `__pod_namespace__`: The namespace of the Pod. +- `__pod_name__`: The name of the Pod. +- `__pod_container_name__`: The container name within the Pod. +- `__pod_uid__`: The UID of the Pod. [LabelSelector]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta [RelabelConfig]: https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig @@ -132,19 +132,19 @@ The following blocks are supported inside the definition of `loki.source.podlogs`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to tail logs. | no -client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -selector | [selector][] | Label selector for which `PodLogs` to discover. | no -selector > match_expression | [match_expression][] | Label selector expression for which `PodLogs` to discover.
| no -namespace_selector | [selector][] | Label selector for which namespaces to discover `PodLogs` in. | no -namespace_selector > match_expression | [match_expression][] | Label selector expression for which namespaces to discover `PodLogs` in. | no -clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_ROOT_NAME" >}} is running in clustered mode. | no +| Hierarchy | Block | Description | Required | +| ------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------ | -------- | +| client | [client][] | Configures Kubernetes client used to tail logs. | no | +| client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| selector | [selector][] | Label selector for which `PodLogs` to discover. | no | +| selector > match_expression | [match_expression][] | Label selector expression for which `PodLogs` to discover. | no | +| namespace_selector | [selector][] | Label selector for which namespaces to discover `PodLogs` in. | no | +| namespace_selector > match_expression | [match_expression][] | Label selector expression for which namespaces to discover `PodLogs` in. | no | +| clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_ROOT_NAME" >}} is running in clustered mode. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > basic_auth` refers to a `basic_auth` block defined @@ -168,25 +168,26 @@ used. The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | ------- | -------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. 
+| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `api_server` | `string` | URL of the Kubernetes API server. | | no | +| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][client]. +- [`bearer_token_file` argument][client]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -213,9 +214,9 @@ Namespace discovery. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no +| Name | Type | Description | Default | Required | +| -------------- | ------------- | ------------------------------------------------- | ------- | -------- | +| `match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no | When the `match_labels` argument is empty, all resources will be matched. @@ -226,27 +227,27 @@ The `match_expression` block describes a Kubernetes label match expression for The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`key` | `string` | The label name to match against. | | yes -`operator` | `string` | The operator to use when matching. | | yes -`values`| `list(string)` | The values used when matching. | | no +| Name | Type | Description | Default | Required | +| ---------- | -------------- | ---------------------------------- | ------- | -------- | +| `key` | `string` | The label name to match against. | | yes | +| `operator` | `string` | The operator to use when matching. | | yes | +| `values` | `list(string)` | The values used when matching. | | no | The `operator` argument must be one of the following strings: -* `"In"` -* `"NotIn"` -* `"Exists"` -* `"DoesNotExist"` +- `"In"` +- `"NotIn"` +- `"Exists"` +- `"DoesNotExist"` Both `selector` and `namespace_selector` can make use of multiple `match_expression` inner blocks which are treated as AND clauses. ### clustering block -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Distribute log collection with other cluster nodes. 
| | yes +| Name | Type | Description | Default | Required | +| --------- | ------ | --------------------------------------------------- | ------- | -------- | +| `enabled` | `bool` | Distribute log collection with other cluster nodes. | | yes | When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled` is set to true, then this `loki.source.podlogs` component instance opts-in to participating in the @@ -270,11 +271,11 @@ configuration. `loki.source.podlogs` exposes some target-level debug information per target: -* The labels associated with the target. -* The full set of labels which were found during service discovery. -* The most recent time a log line was read and forwarded to the next components +- The labels associated with the target. +- The full set of labels which were found during service discovery. +- The most recent time a log line was read and forwarded to the next components in the pipeline. -* The most recent error from tailing, if any. +- The most recent error from tailing, if any. ## Debug metrics @@ -296,6 +297,7 @@ loki.write "local" { } } ``` + ## Compatible components @@ -304,7 +306,6 @@ loki.write "local" { - Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/loki.source.syslog.md b/docs/sources/flow/reference/components/loki.source.syslog.md index b1b08bd67528..142bc3dbd635 100644 --- a/docs/sources/flow/reference/components/loki.source.syslog.md +++ b/docs/sources/flow/reference/components/loki.source.syslog.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.source.syslog/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.syslog/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.syslog/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.syslog/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.source.syslog/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.syslog/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.syslog/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.syslog/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.syslog/ description: Learn about loki.source.syslog title: loki.source.syslog @@ -38,10 +38,10 @@ loki.source.syslog "LABEL" { `loki.source.syslog` supports the following arguments: -Name | Type | Description | Default | Required ---------------- | ---------------------- | -------------------- | ------- | -------- -`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes -`relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | "{}" | no +| Name | Type | Description | Default | Required | +| --------------- | -------------------- | ----------------------------------------- | ------- | -------- | +| `forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes | +| `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. 
| "{}" | no | The `relabel_rules` field can make use of the `rules` export value from a [loki.relabel][] component to apply one or more relabeling rules to log entries @@ -54,10 +54,10 @@ before they're forwarded to the list of receivers in `forward_to`. The following blocks are supported inside the definition of `loki.source.syslog`: -Hierarchy | Name | Description | Required ---------- | ---- | ----------- | -------- -listener | [listener][] | Configures a listener for IETF Syslog (RFC5424) messages. | no -listener > tls_config | [tls_config][] | Configures TLS settings for connecting to the endpoint for TCP connections. | no +| Hierarchy | Name | Description | Required | +| --------------------- | -------------- | --------------------------------------------------------------------------- | -------- | +| listener | [listener][] | Configures a listener for IETF Syslog (RFC5424) messages. | no | +| listener > tls_config | [tls_config][] | Configures TLS settings for connecting to the endpoint for TCP connections. | no | The `>` symbol indicates deeper levels of nesting. For example, `config > tls_config` refers to a `tls_config` block defined inside a `config` block. @@ -75,16 +75,16 @@ The following arguments can be used to configure a `listener`. Only the `address` field is required and any omitted fields take their default values. -Name | Type | Description | Default | Required ------------------------- | ------------- | ----------- | ------- | -------- -`address` | `string` | The `` address to listen to for syslog messages. | | yes -`protocol` | `string` | The protocol to listen to for syslog messages. Must be either `tcp` or `udp`. | `tcp` | no -`idle_timeout` | `duration` | The idle timeout for tcp connections. | `"120s"` | no -`label_structured_data` | `bool` | Whether to translate syslog structured data to loki labels. | `false` | no -`labels` | `map(string)` | The labels to associate with each received syslog record. | `{}` | no -`use_incoming_timestamp` | `bool` | Whether to set the timestamp to the incoming syslog record timestamp. | `false` | no -`use_rfc5424_message` | `bool` | Whether to forward the full RFC5424-formatted syslog message. | `false` | no -`max_message_length` | `int` | The maximum limit to the length of syslog messages. | `8192` | no +| Name | Type | Description | Default | Required | +| ------------------------ | ------------- | ----------------------------------------------------------------------------- | -------- | -------- | +| `address` | `string` | The `` address to listen to for syslog messages. | | yes | +| `protocol` | `string` | The protocol to listen to for syslog messages. Must be either `tcp` or `udp`. | `tcp` | no | +| `idle_timeout` | `duration` | The idle timeout for tcp connections. | `"120s"` | no | +| `label_structured_data` | `bool` | Whether to translate syslog structured data to loki labels. | `false` | no | +| `labels` | `map(string)` | The labels to associate with each received syslog record. | `{}` | no | +| `use_incoming_timestamp` | `bool` | Whether to set the timestamp to the incoming syslog record timestamp. | `false` | no | +| `use_rfc5424_message` | `bool` | Whether to forward the full RFC5424-formatted syslog message. | `false` | no | +| `max_message_length` | `int` | The maximum limit to the length of syslog messages. | `8192` | no | By default, the component assigns the log entry timestamp as the time it was processed. @@ -96,7 +96,7 @@ internal labels, prefixed with `__syslog_`. 
If `label_structured_data` is set, structured data in the syslog header is
also translated to internal labels in the form of
-`__syslog_message_sd_<ID>_<KEY>`. For example, a structured data entry of
+`__syslog_message_sd_<ID>_<KEY>`. For example, a structured data entry of
`[example@99999 test="yes"]` becomes the label
`__syslog_message_sd_example_99999_test` with the value `"yes"`.

@@ -116,14 +116,16 @@
configuration.

## Debug information

`loki.source.syslog` exposes some debug information per syslog listener:
-* Whether the listener is currently running.
-* The listen address.
-* The labels that the listener applies to incoming log entries.
+
+- Whether the listener is currently running.
+- The listen address.
+- The labels that the listener applies to incoming log entries.

## Debug metrics

-* `loki_source_syslog_entries_total` (counter): Total number of successful entries sent to the syslog component.
-* `loki_source_syslog_parsing_errors_total` (counter): Total number of parsing errors while receiving syslog messages.
-* `loki_source_syslog_empty_messages_total` (counter): Total number of empty messages received from the syslog component.
+
+- `loki_source_syslog_entries_total` (counter): Total number of successful entries sent to the syslog component.
+- `loki_source_syslog_parsing_errors_total` (counter): Total number of parsing errors while receiving syslog messages.
+- `loki_source_syslog_empty_messages_total` (counter): Total number of empty messages received from the syslog component.

## Example

@@ -161,7 +163,6 @@ loki.write "local" {

- Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters)

-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
diff --git a/docs/sources/flow/reference/components/loki.source.windowsevent.md b/docs/sources/flow/reference/components/loki.source.windowsevent.md
index 58bc3431bd9b..b7140ee9376a 100644
--- a/docs/sources/flow/reference/components/loki.source.windowsevent.md
+++ b/docs/sources/flow/reference/components/loki.source.windowsevent.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/loki.source.windowsevent/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.windowsevent/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.windowsevent/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.windowsevent/
+  - /docs/grafana-cloud/agent/flow/reference/components/loki.source.windowsevent/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.source.windowsevent/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.source.windowsevent/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.source.windowsevent/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.source.windowsevent/
description: Learn about loki.windowsevent
title: loki.source.windowsevent
@@ -27,26 +27,26 @@
```

## Arguments
+
The component starts a new reader and fans out log entries to the list of receivers passed in `forward_to`.
`loki.source.windowsevent` supports the following arguments:

-Name | Type | Description | Default | Required
------------------------- |----------------------|-----------------------------------------------------------------------------|----------------------------| --------
-`locale` | `number` | Locale ID for event rendering. 0 default is Windows Locale. | `0` | no
-`eventlog_name` | `string` | Event log to read from. | | See below.
-`xpath_query` | `string` | Event log to read from. | `"*"` | See below.
-`bookmark_path` | `string` | Keeps position in event log. | `"DATA_PATH/bookmark.xml"` | no
-`poll_interval` | `duration` | How often to poll the event log. | `"3s"` | no
-`exclude_event_data` | `bool` | Exclude event data. | `false` | no
-`exclude_user_data` | `bool` | Exclude user data. | `false` | no
-`exclude_event_message` | `bool` | Exclude the human-friendly event message. | `false` | no
-`use_incoming_timestamp` | `bool` | When false, assigns the current timestamp to the log when it was processed. | `false` | no
-`forward_to` | `list(LogsReceiver)` | List of receivers to send log entries to. | | yes
-`labels` | `map(string)` | The labels to associate with incoming logs. | | no
-`legacy_bookmark_path` | `string` | The location of the Grafana Agent Static bookmark path. | `` | no
-
+| Name                     | Type                 | Description                                                                        | Default                    | Required   |
+| ------------------------ | -------------------- | ------------------------------------------------------------------------------------ | -------------------------- | ---------- |
+| `locale`                 | `number`             | Locale ID for event rendering. The default of `0` uses the Windows locale.            | `0`                        | no         |
+| `eventlog_name`          | `string`             | Event log to read from.                                                               |                            | See below. |
+| `xpath_query`            | `string`             | XPath query to select events to read. Can also specify the event log.                 | `"*"`                      | See below. |
+| `bookmark_path`          | `string`             | Keeps position in event log.                                                          | `"DATA_PATH/bookmark.xml"` | no         |
+| `poll_interval`          | `duration`           | How often to poll the event log.                                                      | `"3s"`                     | no         |
+| `exclude_event_data`     | `bool`               | Exclude event data.                                                                   | `false`                    | no         |
+| `exclude_user_data`      | `bool`               | Exclude user data.                                                                    | `false`                    | no         |
+| `exclude_event_message`  | `bool`               | Exclude the human-friendly event message.                                             | `false`                    | no         |
+| `use_incoming_timestamp` | `bool`               | When `false`, assigns the current timestamp to the log entry when it's processed.     | `false`                    | no         |
+| `forward_to`             | `list(LogsReceiver)` | List of receivers to send log entries to.                                             |                            | yes        |
+| `labels`                 | `map(string)`        | The labels to associate with incoming logs.                                           |                            | no         |
+| `legacy_bookmark_path`   | `string`             | The location of the Grafana Agent Static bookmark path.                               | ``                         | no         |
> **NOTE**: `eventlog_name` is required if `xpath_query` does not specify the event log.
> You can define `xpath_query` in [short or xml form](https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events).
@@ -79,6 +79,7 @@ loki.write "endpoint" {
 }
}
```
+
## Compatible components

@@ -87,7 +88,6 @@

- Components that export [Loki `LogsReceiver`](../../compatibility/#loki-logsreceiver-exporters)

-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.
diff --git a/docs/sources/flow/reference/components/loki.write.md b/docs/sources/flow/reference/components/loki.write.md index bb50817385e9..f1913842ac93 100644 --- a/docs/sources/flow/reference/components/loki.write.md +++ b/docs/sources/flow/reference/components/loki.write.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/loki.write/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.write/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.write/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.write/ + - /docs/grafana-cloud/agent/flow/reference/components/loki.write/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.write/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.write/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.write/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/loki.write/ description: Learn about loki.write title: loki.write @@ -31,25 +31,25 @@ loki.write "LABEL" { `loki.write` supports the following arguments: -Name | Type | Description | Default | Required ------------------ | ------------- | ------------------------------------------------ | ------- | -------- -`max_streams` | `int` | Maximum number of active streams. | 0 (no limit) | no -`external_labels` | `map(string)` | Labels to add to logs sent over the network. | | no +| Name | Type | Description | Default | Required | +| ----------------- | ------------- | -------------------------------------------- | ------------ | -------- | +| `max_streams` | `int` | Maximum number of active streams. | 0 (no limit) | no | +| `external_labels` | `map(string)` | Labels to add to logs sent over the network. | | no | ## Blocks The following blocks are supported inside the definition of `loki.write`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -endpoint | [endpoint][] | Location to send logs to. | no -wal | [wal][] | Write-ahead log configuration. | no -endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------------------ | ----------------- | -------------------------------------------------------- | -------- | +| endpoint | [endpoint][] | Location to send logs to. | no | +| wal | [wal][] | Write-ahead log configuration. | no | +| endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. 
| no | | endpoint > queue_config | [queue_config][] | When WAL is enabled, configures the queue client. | no | The `>` symbol indicates deeper levels of nesting. For example, `endpoint > @@ -71,34 +71,35 @@ The `endpoint` block describes a single location to send logs to. Multiple The following arguments are supported: -Name | Type | Description | Default | Required ------------------------- | ------------------- | ------------------------------------------------------------- | --------- | -------- -`url` | `string` | Full URL to send logs to. | | yes -`name` | `string` | Optional name to identify this endpoint with. | | no -`headers` | `map(string)` | Extra headers to deliver with the request. | | no -`batch_wait` | `duration` | Maximum amount of time to wait before sending a batch. | `"1s"` | no -`batch_size` | `string` | Maximum batch size of logs to accumulate before sending. | `"1MiB"` | no -`remote_timeout` | `duration` | Timeout for requests made to the URL. | `"10s"` | no -`tenant_id` | `string` | The tenant ID used by default to push logs. | | no -`min_backoff_period` | `duration` | Initial backoff time between retries. | `"500ms"` | no -`max_backoff_period` | `duration` | Maximum backoff time between retries. | `"5m"` | no -`max_backoff_retries` | `int` | Maximum number of retries. | 10 | no -`retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#endpoint-block). - - [`bearer_token_file` argument](#endpoint-block). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | --------- | -------- | +| `url` | `string` | Full URL to send logs to. | | yes | +| `name` | `string` | Optional name to identify this endpoint with. | | no | +| `headers` | `map(string)` | Extra headers to deliver with the request. | | no | +| `batch_wait` | `duration` | Maximum amount of time to wait before sending a batch. | `"1s"` | no | +| `batch_size` | `string` | Maximum batch size of logs to accumulate before sending. | `"1MiB"` | no | +| `remote_timeout` | `duration` | Timeout for requests made to the URL. | `"10s"` | no | +| `tenant_id` | `string` | The tenant ID used by default to push logs. | | no | +| `min_backoff_period` | `duration` | Initial backoff time between retries. | `"500ms"` | no | +| `max_backoff_period` | `duration` | Maximum backoff time between retries. | `"5m"` | no | +| `max_backoff_retries` | `int` | Maximum number of retries. 
| 10 | no | +| `retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#endpoint-block). +- [`bearer_token_file` argument](#endpoint-block). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -143,9 +144,9 @@ underlying client queues batches of logs to be sent to Loki. The following arguments are supported: -| Name | Type | Description | Default | Required | -| --------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | -| `capacity` | `string` | Controls the size of the underlying send queue buffer. This setting should be considered a worst-case scenario of memory consumption, in which all enqueued batches are full. | `10MiB` | no | +| Name | Type | Description | Default | Required | +| --------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `capacity` | `string` | Controls the size of the underlying send queue buffer. This setting should be considered a worst-case scenario of memory consumption, in which all enqueued batches are full. | `10MiB` | no | | `drain_timeout` | `duration` | Configures the maximum time the client can take to drain the send queue upon shutdown. During that time, it will enqueue pending batches and drain the send queue sending each. | `"1m"` | no | ### wal block (experimental) @@ -155,9 +156,10 @@ you must include the `wal` block in your configuration. When the WAL is enabled, component are first written to a WAL under the `dir` directory and then read into the remote-write client. This process provides durability guarantees when an entry reaches this component. The client knows when to read from the WAL using the following two mechanisms: + - The WAL-writer side of the `loki.write` component notifies the reader side that new data is available. - The WAL-reader side periodically checks if there is new data, increasing the wait time exponentially between -`min_read_frequency` and `max_read_frequency`. + `min_read_frequency` and `max_read_frequency`. The WAL is located inside a component-specific directory relative to the storage path {{< param "PRODUCT_NAME" >}} is configured to use. 
See the
@@ -165,13 +167,13 @@

The following arguments are supported:

-Name | Type | Description | Default | Required
---------------------- |------------|--------------------------------------------------------------------------------------------------------------------|-----------| --------
-`enabled` | `bool` | Whether to enable the WAL. | false | no
-`max_segment_age` | `duration` | Maximum time a WAL segment should be allowed to live. Segments older than this setting will be eventually deleted. | `"1h"` | no
-`min_read_frequency` | `duration` | Minimum backoff time in the backup read mechanism. | `"250ms"` | no
-`max_read_frequency` | `duration` | Maximum backoff time in the backup read mechanism. | `"1s"` | no
-`drain_timeout` | `duration` | Maximum time the WAL drain procedure can take, before being forcefully stopped. | `"30s"` | no
+| Name                 | Type       | Description                                                                                                      | Default   | Required |
+| -------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------ | --------- | -------- |
+| `enabled`            | `bool`     | Whether to enable the WAL.                                                                                          | false     | no       |
+| `max_segment_age`    | `duration` | Maximum time a WAL segment should be allowed to live. Segments older than this setting are eventually deleted.      | `"1h"`    | no       |
+| `min_read_frequency` | `duration` | Minimum backoff time in the backup read mechanism.                                                                  | `"250ms"` | no       |
+| `max_read_frequency` | `duration` | Maximum backoff time in the backup read mechanism.                                                                  | `"1s"`    | no       |
+| `drain_timeout`      | `duration` | Maximum time the WAL drain procedure can take before being forcefully stopped.                                      | `"30s"`   | no       |

[run]: {{< relref "../cli/run.md" >}}

@@ -179,9 +181,9 @@

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`receiver` | `LogsReceiver` | A value that other components can use to send log entries to.
+| Name       | Type           | Description                                                    |
+| ---------- | -------------- | -------------------------------------------------------------- |
+| `receiver` | `LogsReceiver` | A value that other components can use to send log entries to.  |

## Component health

@@ -194,14 +196,15 @@

information.

## Debug metrics

-* `loki_write_encoded_bytes_total` (counter): Number of bytes encoded and ready to send.
-* `loki_write_sent_bytes_total` (counter): Number of bytes sent.
-* `loki_write_dropped_bytes_total` (counter): Number of bytes dropped because failed to be sent to the ingester after all retries.
-* `loki_write_sent_entries_total` (counter): Number of log entries sent to the ingester.
-* `loki_write_dropped_entries_total` (counter): Number of log entries dropped because they failed to be sent to the ingester after all retries.
-* `loki_write_request_duration_seconds` (histogram): Duration of sent requests.
-* `loki_write_batch_retries_total` (counter): Number of times batches have had to be retried.
-* `loki_write_stream_lag_seconds` (gauge): Difference between current time and last batch timestamp for successful sends.
+
+- `loki_write_encoded_bytes_total` (counter): Number of bytes encoded and ready to send.
+- `loki_write_sent_bytes_total` (counter): Number of bytes sent.
+- `loki_write_dropped_bytes_total` (counter): Number of bytes dropped because they failed to be sent to the ingester after all retries.
+- `loki_write_sent_entries_total` (counter): Number of log entries sent to the ingester.
+- `loki_write_dropped_entries_total` (counter): Number of log entries dropped because they failed to be sent to the ingester after all retries.
+- `loki_write_request_duration_seconds` (histogram): Duration of sent requests.
+- `loki_write_batch_retries_total` (counter): Number of times batches have had to be retried.
+- `loki_write_stream_lag_seconds` (gauge): Difference between current time and last batch timestamp for successful sends.

## Examples

@@ -234,9 +237,10 @@ loki.write "default" {
 }
}
```
+
## Technical details

-`loki.write` uses [snappy](https://en.wikipedia.org/wiki/Snappy_(compression)) for compression.
+`loki.write` uses [Snappy](https://en.wikipedia.org/wiki/Snappy_(compression)) for compression.

Any labels that start with `__` will be removed before sending to the endpoint.

diff --git a/docs/sources/flow/reference/components/mimir.rules.kubernetes.md b/docs/sources/flow/reference/components/mimir.rules.kubernetes.md
index 17bb3c63fc37..70eac8821d23 100644
--- a/docs/sources/flow/reference/components/mimir.rules.kubernetes.md
+++ b/docs/sources/flow/reference/components/mimir.rules.kubernetes.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/mimir.rules.kubernetes/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/mimir.rules.kubernetes/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/mimir.rules.kubernetes/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/mimir.rules.kubernetes/
+  - /docs/grafana-cloud/agent/flow/reference/components/mimir.rules.kubernetes/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/mimir.rules.kubernetes/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/mimir.rules.kubernetes/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/mimir.rules.kubernetes/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/mimir.rules.kubernetes/
description: Learn about mimir.rules.kubernetes
labels:
@@ -18,13 +18,13 @@ title: mimir.rules.kubernetes

`mimir.rules.kubernetes` discovers `PrometheusRule` Kubernetes resources and
loads them into a Mimir instance.

-* Multiple `mimir.rules.kubernetes` components can be specified by giving them
+- Multiple `mimir.rules.kubernetes` components can be specified by giving them
  different labels.
-* [Kubernetes label selectors][] can be used to limit the `Namespace` and
+- [Kubernetes label selectors][] can be used to limit the `Namespace` and
  `PrometheusRule` resources considered during reconciliation.
-* Compatible with the Ruler APIs of Grafana Mimir, Grafana Cloud, and Grafana Enterprise Metrics.
-* Compatible with the `PrometheusRule` CRD from the [prometheus-operator][].
-* This component accesses the Kubernetes REST API from [within a Pod][].
+- Compatible with the Ruler APIs of Grafana Mimir, Grafana Cloud, and Grafana Enterprise Metrics.
+- Compatible with the `PrometheusRule` CRD from the [prometheus-operator][].
+- This component accesses the Kubernetes REST API from [within a Pod][].

> **NOTE**: This component requires [Role-based access control (RBAC)][] to be set up
> in Kubernetes in order for the Agent to access it via the Kubernetes REST API.
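+As a hedged sketch of how the selector blocks described below combine (the
+ruler address and the `team` label are placeholder values, not taken from this
+page), a component that only reconciles `PrometheusRule` resources labelled
+`team=ops` could look like:
+
+```river
+mimir.rules.kubernetes "default" {
+  // Assumed in-cluster ruler URL; replace with your Mimir ruler address.
+  address = "http://mimir-ruler.mimir.svc:8080"
+
+  rule_selector {
+    match_expression {
+      key      = "team"
+      operator = "In"
+      values   = ["ops"]
+    }
+  }
+}
+```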
@@ -47,31 +47,32 @@ mimir.rules.kubernetes "LABEL" { `mimir.rules.kubernetes` supports the following arguments: -Name | Type | Description | Default | Required ------------------------- | ------------------- | --------------------------------------------------------------- | ------------- | -------- -`address` | `string` | URL of the Mimir ruler. | | yes -`tenant_id` | `string` | Mimir tenant ID. | | no -`use_legacy_routes` | `bool` | Whether to use [deprecated][gem-2_2] ruler API endpoints. | false | no -`prometheus_http_prefix` | `string` | Path prefix for [Mimir's Prometheus endpoint][gem-path-prefix]. | `/prometheus` | no -`sync_interval` | `duration` | Amount of time between reconciliations with Mimir. | "5m" | no -`mimir_namespace_prefix` | `string` | Prefix used to differentiate multiple {{< param "PRODUCT_NAME" >}} deployments. | "agent" | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. - - [arguments]: #arguments +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------------- | -------- | +| `address` | `string` | URL of the Mimir ruler. | | yes | +| `tenant_id` | `string` | Mimir tenant ID. | | no | +| `use_legacy_routes` | `bool` | Whether to use [deprecated][gem-2_2] ruler API endpoints. | false | no | +| `prometheus_http_prefix` | `string` | Path prefix for [Mimir's Prometheus endpoint][gem-path-prefix]. | `/prometheus` | no | +| `sync_interval` | `duration` | Amount of time between reconciliations with Mimir. | "5m" | no | +| `mimir_namespace_prefix` | `string` | Prefix used to differentiate multiple {{< param "PRODUCT_NAME" >}} deployments. | "agent" | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. 
| | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument](#arguments). +- [`bearer_token_file` argument](#arguments). +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. + +[arguments]: #arguments {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -99,17 +100,17 @@ This is useful if you configure Mimir to use a different [prefix][gem-path-prefi The following blocks are supported inside the definition of `mimir.rules.kubernetes`: -Hierarchy | Block | Description | Required --------------------------------------------|------------------------|----------------------------------------------------------|--------- -rule_namespace_selector | [label_selector][] | Label selector for `Namespace` resources. | no -rule_namespace_selector > match_expression | [match_expression][] | Label match expression for `Namespace` resources. | no -rule_selector | [label_selector][] | Label selector for `PrometheusRule` resources. | no -rule_selector > match_expression | [match_expression][] | Label match expression for `PrometheusRule` resources. | no -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------------------------------ | -------------------- | -------------------------------------------------------- | -------- | +| rule_namespace_selector | [label_selector][] | Label selector for `Namespace` resources. | no | +| rule_namespace_selector > match_expression | [match_expression][] | Label match expression for `Namespace` resources. | no | +| rule_selector | [label_selector][] | Label selector for `PrometheusRule` resources. | no | +| rule_selector > match_expression | [match_expression][] | Label match expression for `PrometheusRule` resources. | no | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside @@ -128,9 +129,9 @@ The `label_selector` block describes a Kubernetes label selector for rule or nam The following arguments are supported: -Name | Type | Description | Default | Required ----------------|---------------|---------------------------------------------------|-----------------------------|--------- -`match_labels` | `map(string)` | Label keys and values used to discover resources. 
| `{}` | yes +| Name | Type | Description | Default | Required | +| -------------- | ------------- | ------------------------------------------------- | ------- | -------- | +| `match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | yes | When the `match_labels` argument is empty, all resources will be matched. @@ -140,18 +141,18 @@ The `match_expression` block describes a Kubernetes label match expression for r The following arguments are supported: -Name | Type | Description | Default | Required ------------|----------------|----------------------------------------------------|---------|--------- -`key` | `string` | The label name to match against. | | yes -`operator` | `string` | The operator to use when matching. | | yes -`values` | `list(string)` | The values used when matching. | | no +| Name | Type | Description | Default | Required | +| ---------- | -------------- | ---------------------------------- | ------- | -------- | +| `key` | `string` | The label name to match against. | | yes | +| `operator` | `string` | The operator to use when matching. | | yes | +| `values` | `list(string)` | The values used when matching. | | no | The `operator` argument should be one of the following strings: -* `"In"` -* `"NotIn"` -* `"Exists"` -* `"DoesNotExist"` +- `"In"` +- `"NotIn"` +- `"Exists"` +- `"DoesNotExist"` The `values` argument must not be provided when `operator` is set to `"Exists"` or `"DoesNotExist"`. @@ -184,27 +185,29 @@ The `values` argument must not be provided when `operator` is set to `"Exists"` `mimir.rules.kubernetes` exposes resource-level debug information. The following are exposed per discovered `PrometheusRule` resource: -* The Kubernetes namespace. -* The resource name. -* The resource uid. -* The number of rule groups. + +- The Kubernetes namespace. +- The resource name. +- The resource uid. +- The number of rule groups. The following are exposed per discovered Mimir rule namespace resource: -* The namespace name. -* The number of rule groups. + +- The namespace name. +- The number of rule groups. Only resources managed by the component are exposed - regardless of how many actually exist. ## Debug metrics -Metric Name | Type | Description -----------------------------------------------|-------------|------------------------------------------------------------------------- -`mimir_rules_config_updates_total` | `counter` | Number of times the configuration has been updated. -`mimir_rules_events_total` | `counter` | Number of events processed, partitioned by event type. -`mimir_rules_events_failed_total` | `counter` | Number of events that failed to be processed, partitioned by event type. -`mimir_rules_events_retried_total` | `counter` | Number of events that were retried, partitioned by event type. -`mimir_rules_client_request_duration_seconds` | `histogram` | Duration of requests to the Mimir API. +| Metric Name | Type | Description | +| --------------------------------------------- | ----------- | ------------------------------------------------------------------------ | +| `mimir_rules_config_updates_total` | `counter` | Number of times the configuration has been updated. | +| `mimir_rules_events_total` | `counter` | Number of events processed, partitioned by event type. | +| `mimir_rules_events_failed_total` | `counter` | Number of events that failed to be processed, partitioned by event type. | +| `mimir_rules_events_retried_total` | `counter` | Number of events that were retried, partitioned by event type. 
| +| `mimir_rules_client_request_duration_seconds` | `histogram` | Duration of requests to the Mimir API. | ## Example @@ -260,21 +263,21 @@ kind: ClusterRole metadata: name: grafana-agent rules: -- apiGroups: [""] - resources: ["namespaces"] - verbs: ["get", "list", "watch"] -- apiGroups: ["monitoring.coreos.com"] - resources: ["prometheusrules"] - verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list", "watch"] + - apiGroups: ["monitoring.coreos.com"] + resources: ["prometheusrules"] + verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-agent subjects: -- kind: ServiceAccount - name: grafana-agent - namespace: default + - kind: ServiceAccount + name: grafana-agent + namespace: default roleRef: kind: ClusterRole name: grafana-agent diff --git a/docs/sources/flow/reference/components/module.file.md b/docs/sources/flow/reference/components/module.file.md index a16d5b8d23f7..0e8c8b056b75 100644 --- a/docs/sources/flow/reference/components/module.file.md +++ b/docs/sources/flow/reference/components/module.file.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/module.file/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.file/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.file/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/module.file/ + - /docs/grafana-cloud/agent/flow/reference/components/module.file/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.file/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.file/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/module.file/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/module.file/ description: Learn about module.file labels: @@ -20,7 +20,7 @@ Starting with release v0.40, `module.string` is deprecated and is replaced by `i {{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}} -`module.file` is a *module loader* component. A module loader is a {{< param "PRODUCT_NAME" >}} +`module.file` is a _module loader_ component. A module loader is a {{< param "PRODUCT_NAME" >}} component which retrieves a [module][] and runs the components defined inside of it. 
`module.file` simplifies the configurations for modules loaded from a file by embedding @@ -49,12 +49,12 @@ module.file "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`filename` | `string` | Path of the file on disk to watch | | yes -`detector` | `string` | Which file change detector to use (fsnotify, poll) | `"fsnotify"` | no -`poll_frequency` | `duration` | How often to poll for file changes | `"1m"` | no -`is_secret` | `bool` | Marks the file as containing a [secret][] | `false` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | -------------------------------------------------- | ------------ | -------- | +| `filename` | `string` | Path of the file on disk to watch | | yes | +| `detector` | `string` | Which file change detector to use (fsnotify, poll) | `"fsnotify"` | no | +| `poll_frequency` | `duration` | How often to poll for file changes | `"1m"` | no | +| `is_secret` | `bool` | Marks the file as containing a [secret][] | `false` | no | [secret]: {{< relref "../../concepts/config-language/expressions/types_and_values.md#secrets" >}} @@ -64,9 +64,9 @@ Name | Type | Description | Default | Required The following blocks are supported inside the definition of `module.file`: -Hierarchy | Block | Description | Required ----------------- | ---------- | ----------- | -------- -arguments | [arguments][] | Arguments to pass to the module. | no +| Hierarchy | Block | Description | Required | +| --------- | ------------- | -------------------------------- | -------- | +| arguments | [arguments][] | Arguments to pass to the module. | no | [arguments]: #arguments-block @@ -78,10 +78,10 @@ module. The attributes provided in the `arguments` block are validated based on the [argument blocks][] defined in the module source: -* If a module source marks one of its arguments as required, it must be +- If a module source marks one of its arguments as required, it must be provided as an attribute in the `arguments` block of the module loader. -* Attributes in the `argument` block of the module loader will be rejected if +- Attributes in the `argument` block of the module loader will be rejected if they are not defined in the module source. [argument blocks]: {{< relref "../config-blocks/argument.md" >}} @@ -90,9 +90,9 @@ The attributes provided in the `arguments` block are validated based on the The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`exports` | `map(any)` | The exports of the Module loader. +| Name | Type | Description | +| --------- | ---------- | --------------------------------- | +| `exports` | `map(any)` | The exports of the Module loader. | `exports` exposes the `export` config block inside a module. It can be accessed from the parent config via `module.file.LABEL.exports.EXPORT_LABEL`. 
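+As a brief, hedged illustration of the loader (the file path and the
+`scrape_interval` argument are assumptions for the example, not values defined
+on this page):
+
+```river
+module.file "metrics" {
+  // Hypothetical module file; module.file watches it and reloads on change.
+  filename = "/etc/agent/modules/metrics.river"
+
+  arguments {
+    // Only valid if the module source declares a matching argument block.
+    scrape_interval = "60s"
+  }
+}
+```
+
+With this sketch, the module's exports would be reachable as
+`module.file.metrics.exports.EXPORT_LABEL`.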
diff --git a/docs/sources/flow/reference/components/module.git.md b/docs/sources/flow/reference/components/module.git.md index 98544a13f2fe..bcf28142a058 100644 --- a/docs/sources/flow/reference/components/module.git.md +++ b/docs/sources/flow/reference/components/module.git.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/module.git/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.git/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.git/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/module.git/ + - /docs/grafana-cloud/agent/flow/reference/components/module.git/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.git/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.git/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/module.git/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/module.git/ description: Learn about module.git labels: @@ -20,7 +20,7 @@ Starting with release v0.40, `module.git` is deprecated and is replaced by `impo {{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}} -`module.git` is a *module loader* component. A module loader is a {{< param "PRODUCT_NAME" >}} +`module.git` is a _module loader_ component. A module loader is a {{< param "PRODUCT_NAME" >}} component which retrieves a [module][] and runs the components defined inside of it. `module.git` retrieves a module source from a file in a Git repository. @@ -46,12 +46,12 @@ module.git "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------|------------|---------------------------------------------------------|----------|--------- -`repository` | `string` | The Git repository address to retrieve the module from. | | yes -`revision` | `string` | The Git revision to retrieve the module from. | `"HEAD"` | no -`path` | `string` | The path in the repository where the module is stored. | | yes -`pull_frequency` | `duration` | The frequency to pull the repository for updates. | `"60s"` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | ------------------------------------------------------- | -------- | -------- | +| `repository` | `string` | The Git repository address to retrieve the module from. | | yes | +| `revision` | `string` | The Git revision to retrieve the module from. | `"HEAD"` | no | +| `path` | `string` | The path in the repository where the module is stored. | | yes | +| `pull_frequency` | `duration` | The frequency to pull the repository for updates. | `"60s"` | no | The `repository` attribute must be set to a repository address that would be recognized by Git with a `git clone REPOSITORY_ADDRESS` command, such as @@ -71,11 +71,11 @@ the retrieved changes. The following blocks are supported inside the definition of `module.git`: -Hierarchy | Block | Description | Required ----------------- | ---------- | ----------- | -------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the repo. | no -ssh_key | [ssh_key][] | Configure a SSH Key for authenticating to the repo. | no -arguments | [arguments][] | Arguments to pass to the module. 
| no
+| Hierarchy  | Block          | Description                                           | Required |
+| ---------- | -------------- | ----------------------------------------------------- | -------- |
+| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the repo.  | no       |
+| ssh_key    | [ssh_key][]    | Configure an SSH key for authenticating to the repo.  | no       |
+| arguments  | [arguments][]  | Arguments to pass to the module.                      | no       |

[basic_auth]: #basic_auth-block
[ssh_key]: #ssh_key-block
@@ -87,12 +87,12 @@

### ssh_key block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`username` | `string` | SSH username. | | yes
-`key` | `secret` | SSH private key | | no
-`key_file` | `string` | SSH private key path. | | no
-`passphrase` | `secret` | Passphrase for SSH key if needed. | | no
+| Name         | Type     | Description                       | Default | Required |
+| ------------ | -------- | --------------------------------- | ------- | -------- |
+| `username`   | `string` | SSH username.                     |         | yes      |
+| `key`        | `secret` | SSH private key.                  |         | no       |
+| `key_file`   | `string` | SSH private key path.             |         | no       |
+| `passphrase` | `secret` | Passphrase for SSH key if needed. |         | no       |

### arguments block

@@ -102,10 +102,10 @@
module.

The attributes provided in the `arguments` block are validated based on the
[argument blocks][] defined in the module source:

-* If a module source marks one of its arguments as required, it must be
+- If a module source marks one of its arguments as required, it must be
  provided as an attribute in the `arguments` block of the module loader.

-* Attributes in the `argument` block of the module loader will be rejected if
+- Attributes in the `argument` block of the module loader will be rejected if
  they are not defined in the module source.

[argument blocks]: {{< relref "../config-blocks/argument.md" >}}

@@ -114,9 +114,9 @@

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`exports` | `map(any)` | The exports of the Module loader.
+| Name      | Type       | Description                        |
+| --------- | ---------- | ---------------------------------- |
+| `exports` | `map(any)` | The exports of the Module loader.  |

`exports` exposes the `export` config block inside a module. It can be accessed
from the parent config via `module.git.COMPONENT_LABEL.exports.EXPORT_LABEL`.

@@ -135,8 +135,8 @@
and most recent load of the module was successful.

`module.git` includes debug information for:

-* The full SHA of the currently checked out revision.
-* The most recent error when trying to fetch the repository, if any.
+- The full SHA of the currently checked out revision.
+- The most recent error when trying to fetch the repository, if any.
## Debug metrics @@ -160,6 +160,7 @@ module.git "add" { ``` The same example as above using basic auth: + ```river module.git "add" { repository = "https://github.com/rfratto/agent-modules.git" @@ -179,6 +180,7 @@ module.git "add" { ``` Using SSH Key from another component: + ```river local.file "ssh_key" { filename = "PATH/TO/SSH.KEY" @@ -203,6 +205,7 @@ module.git "add" { ``` The same example as above using SSH Key auth: + ```river module.git "add" { repository = "github.com:rfratto/agent-modules.git" diff --git a/docs/sources/flow/reference/components/module.http.md b/docs/sources/flow/reference/components/module.http.md index fde6e883ac56..8ab678edabe7 100644 --- a/docs/sources/flow/reference/components/module.http.md +++ b/docs/sources/flow/reference/components/module.http.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/module.http/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.http/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.http/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/module.http/ + - /docs/grafana-cloud/agent/flow/reference/components/module.http/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.http/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.http/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/module.http/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/module.http/ description: Learn about module.http labels: @@ -48,14 +48,14 @@ module.http "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`url` | `string` | URL to poll. | | yes -`method` | `string` | Define HTTP method for the request | `"GET"` | no -`headers` | `map(string)` | Custom headers for the request. | `{}` | no -`poll_frequency` | `duration` | Frequency to poll the URL. | `"1m"` | no -`poll_timeout` | `duration` | Timeout when polling the URL. | `"10s"` | no -`is_secret` | `bool` | Whether the response body should be treated as a secret. | false | no +| Name | Type | Description | Default | Required | +| ---------------- | ------------- | -------------------------------------------------------- | ------- | -------- | +| `url` | `string` | URL to poll. | | yes | +| `method` | `string` | Define HTTP method for the request | `"GET"` | no | +| `headers` | `map(string)` | Custom headers for the request. | `{}` | no | +| `poll_frequency` | `duration` | Frequency to poll the URL. | `"1m"` | no | +| `poll_timeout` | `duration` | Timeout when polling the URL. | `"10s"` | no | +| `is_secret` | `bool` | Whether the response body should be treated as a secret. | false | no | [secret]: {{< relref "../../concepts/config-language/expressions/types_and_values.md#secrets" >}} @@ -63,9 +63,9 @@ Name | Type | Description | Default | Required The following blocks are supported inside the definition of `module.http`: -Hierarchy | Block | Description | Required ----------------- | ---------- | ----------- | -------- -arguments | [arguments][] | Arguments to pass to the module. | no +| Hierarchy | Block | Description | Required | +| --------- | ------------- | -------------------------------- | -------- | +| arguments | [arguments][] | Arguments to pass to the module. | no | [arguments]: #arguments-block @@ -77,10 +77,10 @@ module. 
The attributes provided in the `arguments` block are validated based on the [argument blocks][] defined in the module source: -* If a module source marks one of its arguments as required, it must be +- If a module source marks one of its arguments as required, it must be provided as an attribute in the `arguments` block of the module loader. -* Attributes in the `argument` block of the module loader are rejected if +- Attributes in the `argument` block of the module loader are rejected if they are not defined in the module source. [argument blocks]: {{< relref "../config-blocks/argument.md" >}} @@ -89,9 +89,9 @@ The attributes provided in the `arguments` block are validated based on the The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`exports` | `map(any)` | The exports of the Module loader. +| Name | Type | Description | +| --------- | ---------- | --------------------------------- | +| `exports` | `map(any)` | The exports of the Module loader. | `exports` exposes the `export` config block inside a module. It can be accessed from the parent config via `module.http.LABEL.exports.EXPORT_LABEL`. @@ -127,7 +127,6 @@ HTTP server, polling for changes once every minute. The module sets up a Redis exporter and exports the list of targets to the parent config to scrape and remote write. - Parent: ```river @@ -155,10 +154,12 @@ prometheus.remote_write "default" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. + +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. Module: @@ -172,6 +173,8 @@ export "redis_targets" { value = prometheus.exporter.redis.local_redis.targets } ``` + Replace the following: - - `REDIS_ADDR`: The address of your Redis instance. - - `REDIS_PASSWORD_FILE`: The path to a file containing the password for your Redis instance. + +- `REDIS_ADDR`: The address of your Redis instance. +- `REDIS_PASSWORD_FILE`: The path to a file containing the password for your Redis instance. 
diff --git a/docs/sources/flow/reference/components/module.string.md b/docs/sources/flow/reference/components/module.string.md index 62198d6ec2b8..7f9c4174e29d 100644 --- a/docs/sources/flow/reference/components/module.string.md +++ b/docs/sources/flow/reference/components/module.string.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/module.string/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.string/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.string/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/module.string/ + - /docs/grafana-cloud/agent/flow/reference/components/module.string/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/module.string/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/module.string/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/module.string/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/module.string/ description: Learn about module.string labels: @@ -20,7 +20,7 @@ Starting with release v0.40, `module.string` is deprecated and is replaced by `i {{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}} -`module.string` is a *module loader* component. A module loader is a {{< param "PRODUCT_NAME" >}} +`module.string` is a _module loader_ component. A module loader is a {{< param "PRODUCT_NAME" >}} component which retrieves a [module][] and runs the components defined inside of it. [module]: {{< relref "../../concepts/modules.md" >}} @@ -43,9 +43,9 @@ module.string "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`content` | `secret` or `string` | The contents of the module to load as a secret or string. | | yes +| Name | Type | Description | Default | Required | +| --------- | -------------------- | --------------------------------------------------------- | ------- | -------- | +| `content` | `secret` or `string` | The contents of the module to load as a secret or string. | | yes | `content` is a string that contains the configuration of the module to load. `content` is typically loaded by using the exports of another component. For example, @@ -58,9 +58,9 @@ Name | Type | Description | Default | Required The following blocks are supported inside the definition of `module.string`: -Hierarchy | Block | Description | Required ----------------- | ---------- | ----------- | -------- -arguments | [arguments][] | Arguments to pass to the module. | no +| Hierarchy | Block | Description | Required | +| --------- | ------------- | -------------------------------- | -------- | +| arguments | [arguments][] | Arguments to pass to the module. | no | [arguments]: #arguments-block @@ -72,10 +72,10 @@ module. The attributes provided in the `arguments` block are validated based on the [argument blocks][] defined in the module source: -* If a module source marks one of its arguments as required, it must be +- If a module source marks one of its arguments as required, it must be provided as an attribute in the `arguments` block of the module loader. -* Attributes in the `argument` block of the module loader will be rejected if +- Attributes in the `argument` block of the module loader will be rejected if they are not defined in the module source. 
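To illustrate these rules, here is a minimal sketch of a `module.string` loader that reads its module text from a `local.file` component. The file path and the `log_level` argument are hypothetical placeholders.

```river
// Load the module text from disk. The path is hypothetical.
local.file "module_source" {
  filename = "/etc/agent/modules/example.river"
}

module.string "example" {
  content = local.file.module_source.content

  arguments {
    // Rejected unless the module source declares `argument "log_level"`.
    log_level = "debug"
  }
}
```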
[argument blocks]: {{< relref "../config-blocks/argument.md" >}} @@ -84,9 +84,9 @@ The attributes provided in the `arguments` block are validated based on the The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`exports` | `map(any)` | The exports of the Module loader. +| Name | Type | Description | +| --------- | ---------- | --------------------------------- | +| `exports` | `map(any)` | The exports of the Module loader. | `exports` exposes the `export` config block inside a module. It can be accessed from the parent config via `module.string.LABEL.exports.EXPORT_LABEL`. diff --git a/docs/sources/flow/reference/components/otelcol.auth.basic.md b/docs/sources/flow/reference/components/otelcol.auth.basic.md index 885eb53f09fa..8474a81e982e 100644 --- a/docs/sources/flow/reference/components/otelcol.auth.basic.md +++ b/docs/sources/flow/reference/components/otelcol.auth.basic.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.basic/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.basic/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.basic/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.basic/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.basic/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.basic/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.basic/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.basic/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.basic/ description: Learn about otelcol.auth.basic title: otelcol.auth.basic @@ -36,18 +36,18 @@ otelcol.auth.basic "LABEL" { `otelcol.auth.basic` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`username` | `string` | Username to use for basic authentication requests. | | yes -`password` | `secret` | Password to use for basic authentication requests. | | yes +| Name | Type | Description | Default | Required | +| ---------- | -------- | -------------------------------------------------- | ------- | -------- | +| `username` | `string` | Username to use for basic authentication requests. | | yes | +| `password` | `secret` | Password to use for basic authentication requests. | | yes | ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. +| Name | Type | Description | +| --------- | -------------------------- | --------------------------------------------------------------- | +| `handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. 
| ## Component health diff --git a/docs/sources/flow/reference/components/otelcol.auth.bearer.md b/docs/sources/flow/reference/components/otelcol.auth.bearer.md index 718789603b49..898d584a0447 100644 --- a/docs/sources/flow/reference/components/otelcol.auth.bearer.md +++ b/docs/sources/flow/reference/components/otelcol.auth.bearer.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.bearer/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.bearer/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.bearer/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.bearer/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.bearer/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.bearer/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.bearer/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.bearer/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.bearer/ description: Learn about otelcol.auth.bearer title: otelcol.auth.bearer @@ -35,10 +35,10 @@ otelcol.auth.bearer "LABEL" { `otelcol.auth.bearer` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`token` | `secret` | Bearer token to use for authenticating requests. | | yes -`scheme` | `string` | Authentication scheme name. | "Bearer" | no +| Name | Type | Description | Default | Required | +| -------- | -------- | ------------------------------------------------ | -------- | -------- | +| `token` | `secret` | Bearer token to use for authenticating requests. | | yes | +| `scheme` | `string` | Authentication scheme name. | "Bearer" | no | When sending the token, the value of `scheme` is prepended to the `token` value. The string is then sent out as either a header (in case of HTTP) or as metadata (in case of gRPC). @@ -47,9 +47,9 @@ The string is then sent out as either a header (in case of HTTP) or as metadata The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. +| Name | Type | Description | +| --------- | -------------------------- | --------------------------------------------------------------- | +| `handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. | ## Component health @@ -66,7 +66,7 @@ configuration. The example below configures [otelcol.exporter.otlp][] to use a bearer token authentication. -If we assume that the value of the `API_KEY` environment variable is `SECRET_API_KEY`, then +If we assume that the value of the `API_KEY` environment variable is `SECRET_API_KEY`, then the `Authorization` RPC metadata is set to `Bearer SECRET_API_KEY`. ```river @@ -86,7 +86,7 @@ otelcol.auth.bearer "creds" { The example below configures [otelcol.exporter.otlphttp][] to use a bearer token authentication. -If we assume that the value of the `API_KEY` environment variable is `SECRET_API_KEY`, then +If we assume that the value of the `API_KEY` environment variable is `SECRET_API_KEY`, then the `Authorization` HTTP header is set to `MyScheme SECRET_API_KEY`. 
```river diff --git a/docs/sources/flow/reference/components/otelcol.auth.headers.md b/docs/sources/flow/reference/components/otelcol.auth.headers.md index 6b70a021de35..fac2ae8a0728 100644 --- a/docs/sources/flow/reference/components/otelcol.auth.headers.md +++ b/docs/sources/flow/reference/components/otelcol.auth.headers.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.headers/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.headers/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.headers/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.headers/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.headers/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.headers/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.headers/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.headers/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.headers/ description: Learn about otelcol.auth.headers title: otelcol.auth.headers @@ -42,9 +42,9 @@ through inner blocks. The following blocks are supported inside the definition of `otelcol.auth.headers`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -header | [header][] | Custom header to attach to requests. | no +| Hierarchy | Block | Description | Required | +| --------- | ---------- | ------------------------------------ | -------- | +| header | [header][] | Custom header to attach to requests. | no | [header]: #header-block @@ -53,18 +53,19 @@ header | [header][] | Custom header to attach to requests. | no The `header` block defines a custom header to attach to requests. It is valid to provide multiple `header` blocks to set more than one header. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`key` | `string` | Name of the header to set. | | yes -`value` | `string` or `secret` | Value of the header. | | no -`from_context` | `string` | Metadata name to get header value from. | | no -`action` | `string` | An action to perform on the header | "upsert" | no +| Name | Type | Description | Default | Required | +| -------------- | -------------------- | --------------------------------------- | -------- | -------- | +| `key` | `string` | Name of the header to set. | | yes | +| `value` | `string` or `secret` | Value of the header. | | no | +| `from_context` | `string` | Metadata name to get header value from. | | no | +| `action` | `string` | An action to perform on the header | "upsert" | no | The supported values for `action` are: -* `insert`: Inserts the new header if it does not exist. -* `update`: Updates the header value if it exists. -* `upsert`: Inserts a header if it does not exist and updates the header if it exists. -* `delete`: Deletes the header. + +- `insert`: Inserts the new header if it does not exist. +- `update`: Updates the header value if it exists. +- `upsert`: Inserts a header if it does not exist and updates the header if it exists. +- `delete`: Deletes the header. Exactly one of `value` or `from_context` must be provided for each `header` block. @@ -73,17 +74,18 @@ The `value` attribute sets the value of the header directly. 
Alternatively, `from_context` can be used to dynamically retrieve the header value from request metadata. For `from_context` to work, other components in the pipeline also need to be configured appropriately: -* If an `otelcol.processor.batch` is present in the pipeline, it must be configured to preserve client metadata. + +- If an `otelcol.processor.batch` is present in the pipeline, it must be configured to preserve client metadata. Do this by adding the value that `from_context` needs to the `metadata_keys` of the batch processor. -* `otelcol` receivers must be configured with `include_metadata` set to `true` so that metadata keys are available to the pipeline. +- `otelcol` receivers must be configured with `include_metadata` set to `true` so that metadata keys are available to the pipeline. ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. +| Name | Type | Description | +| --------- | -------------------------- | --------------------------------------------------------------- | +| `handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. | ## Component health diff --git a/docs/sources/flow/reference/components/otelcol.auth.oauth2.md b/docs/sources/flow/reference/components/otelcol.auth.oauth2.md index c58d93e56df2..eed1a86767d1 100644 --- a/docs/sources/flow/reference/components/otelcol.auth.oauth2.md +++ b/docs/sources/flow/reference/components/otelcol.auth.oauth2.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.oauth2/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.oauth2/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.oauth2/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.oauth2/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.oauth2/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.oauth2/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.oauth2/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.oauth2/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.oauth2/ description: Learn about otelcol.auth.oauth2 title: otelcol.auth.oauth2 @@ -37,16 +37,16 @@ otelcol.auth.oauth2 "LABEL" { ## Arguments -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`client_id` | `string` | The client identifier issued to the client. | | no -`client_id_file` | `string` | The file path to retrieve the client identifier issued to the client. | | no -`client_secret` | `secret` | The secret string associated with the client identifier. | | no -`client_secret_file` | `secret` | The file path to retrieve the secret string associated with the client identifier. | | no -`token_url` | `string` | The server endpoint URL from which to get tokens. | | yes -`endpoint_params` | `map(list(string))` | Additional parameters that are sent to the token endpoint. | `{}` | no -`scopes` | `list(string)` | Requested permissions associated for the client. | `[]` | no -`timeout` | `duration` | The timeout on the client connecting to `token_url`. 
| `"0s"` | no +| Name | Type | Description | Default | Required | +| -------------------- | ------------------- | ---------------------------------------------------------------------------------- | ------- | -------- | +| `client_id` | `string` | The client identifier issued to the client. | | no | +| `client_id_file` | `string` | The file path to retrieve the client identifier issued to the client. | | no | +| `client_secret` | `secret` | The secret string associated with the client identifier. | | no | +| `client_secret_file` | `secret` | The file path to retrieve the secret string associated with the client identifier. | | no | +| `token_url` | `string` | The server endpoint URL from which to get tokens. | | yes | +| `endpoint_params` | `map(list(string))` | Additional parameters that are sent to the token endpoint. | `{}` | no | +| `scopes` | `list(string)` | Requested permissions associated for the client. | `[]` | no | +| `timeout` | `duration` | The timeout on the client connecting to `token_url`. | `"0s"` | no | The `timeout` argument is used both for requesting initial tokens and for refreshing tokens. `"0s"` implies no timeout. @@ -62,15 +62,15 @@ precedence. The following blocks are supported inside the definition of `otelcol.auth.oauth2`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -tls | [tls][] | TLS settings for the token client. | no +| Hierarchy | Block | Description | Required | +| --------- | ------- | ---------------------------------- | -------- | +| tls | [tls][] | TLS settings for the token client. | no | [tls]: #tls-block ### tls block -The `tls` block configures TLS settings used for connecting to the token client. If the `tls` block isn't provided, +The `tls` block configures TLS settings used for connecting to the token client. If the `tls` block isn't provided, TLS won't be used for communication. {{< docs/shared lookup="flow/reference/components/otelcol-tls-config-block.md" source="agent" version="" >}} @@ -79,9 +79,9 @@ TLS won't be used for communication. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. +| Name | Type | Description | +| --------- | -------------------------- | --------------------------------------------------------------- | +| `handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. 
| ## Component health @@ -112,6 +112,7 @@ otelcol.auth.oauth2 "creds" { ``` Here is another example with some optional attributes specified: + ```river otelcol.exporter.otlp "example" { client { diff --git a/docs/sources/flow/reference/components/otelcol.auth.sigv4.md b/docs/sources/flow/reference/components/otelcol.auth.sigv4.md index e4fc91df2832..bd20483aa1e5 100644 --- a/docs/sources/flow/reference/components/otelcol.auth.sigv4.md +++ b/docs/sources/flow/reference/components/otelcol.auth.sigv4.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.sigv4/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.sigv4/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.sigv4/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.sigv4/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.auth.sigv4/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.auth.sigv4/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.auth.sigv4/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.auth.sigv4/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.auth.sigv4/ description: Learn about otelcol.auth.sigv4 title: otelcol.auth.sigv4 @@ -12,8 +12,8 @@ title: otelcol.auth.sigv4 # otelcol.auth.sigv4 `otelcol.auth.sigv4` exposes a `handler` that can be used by other `otelcol` -components to authenticate requests to AWS services using the AWS Signature Version 4 (SigV4) protocol. -For more information about SigV4 see the AWS documentation about +components to authenticate requests to AWS services using the AWS Signature Version 4 (SigV4) protocol. +For more information about SigV4 see the AWS documentation about [Signing AWS API requests](https://docs.aws.amazon.com/general/latest/gr/signing-aws-api-requests.html) . > **NOTE**: `otelcol.auth.sigv4` is a wrapper over the upstream OpenTelemetry @@ -23,8 +23,8 @@ For more information about SigV4 see the AWS documentation about Multiple `otelcol.auth.sigv4` components can be specified by giving them different labels. -> **NOTE**: The Agent must have valid AWS credentials as used by the -[AWS SDK for Go](https://aws.github.io/aws-sdk-go-v2/docs/configuring-sdk/#specifying-credentials). +> **NOTE**: The Agent must have valid AWS credentials as used by the +> [AWS SDK for Go](https://aws.github.io/aws-sdk-go-v2/docs/configuring-sdk/#specifying-credentials). ## Usage @@ -35,22 +35,22 @@ otelcol.auth.sigv4 "LABEL" { ## Arguments -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`region` | `string` | The AWS region to sign with. | "" | no -`service` | `string` | The AWS service to sign with. | "" | no +| Name | Type | Description | Default | Required | +| --------- | -------- | ----------------------------- | ------- | -------- | +| `region` | `string` | The AWS region to sign with. | "" | no | +| `service` | `string` | The AWS service to sign with. | "" | no | If `region` and `service` are left empty, their values are inferred from the URL of the exporter using the following rules: -* If the exporter URL starts with `aps-workspaces` and `service` is empty, `service` will be set to `aps`. -* If the exporter URL starts with `search-` and `service` is empty, `service` will be set to `es`. 
-* If the exporter URL starts with either `aps-workspaces` or `search-` and `region` is empty, `region` .
-will be set to the value between the first and second `.` character in the exporter URL.
+- If the exporter URL starts with `aps-workspaces` and `service` is empty, `service` will be set to `aps`.
+- If the exporter URL starts with `search-` and `service` is empty, `service` will be set to `es`.
+- If the exporter URL starts with either `aps-workspaces` or `search-` and `region` is empty, `region`
+  will be set to the value between the first and second `.` character in the exporter URL.

If none of the above rules apply, then `region` and `service` must be specified.

-A list of valid AWS regions can be found on Amazon's documentation for
+A list of valid AWS regions can be found in Amazon's documentation for
[Regions, Availability Zones, and Local Zones](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html).

## Blocks

@@ -58,9 +58,9 @@ A list of valid AWS regions can be found on Amazon's documentation for
The following blocks are supported inside the definition of
`otelcol.auth.sigv4`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-assume_role | [assume_role][] | Configuration for assuming a role. | no
+| Hierarchy   | Block           | Description                        | Required |
+| ----------- | --------------- | ---------------------------------- | -------- |
+| assume_role | [assume_role][] | Configuration for assuming a role. | no       |

[assume_role]: #assume_role-block

@@ -68,13 +68,13 @@ assume_role | [assume_role][] | Configuration for assuming a role. | no

The `assume_role` block specifies the configuration needed to assume a role.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`arn` | `string` | The Amazon Resource Name (ARN) of a role to assume. | "" | no
-`session_name` | `string` | The name of a role session. | "" | no
-`sts_region` | `string` | The AWS region where STS is used to assume the configured role. | "" | no
+| Name           | Type     | Description                                                      | Default | Required |
+| -------------- | -------- | ---------------------------------------------------------------- | ------- | -------- |
+| `arn`          | `string` | The Amazon Resource Name (ARN) of a role to assume.               | ""      | no       |
+| `session_name` | `string` | The name of a role session.                                       | ""      | no       |
+| `sts_region`   | `string` | The AWS region where STS is used to assume the configured role.   | ""      | no       |

-If the `assume_role` block is specified in the config and `sts_region` is not set, then `sts_region`
+If the `assume_role` block is specified in the config and `sts_region` is not set, then `sts_region`
will default to the value of `region`.
For cross-region authentication, `region` and `sts_region` can be set to different values.

@@ -83,9 +83,9 @@ For cross region authentication, `region` and `sts_region` can be set different

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests.
+| Name      | Type                       | Description                                                      |
+| --------- | -------------------------- | ---------------------------------------------------------------- |
+| `handler` | `capsule(otelcol.Handler)` | A value that other components can use to authenticate requests. 
| ## Component health @@ -153,7 +153,7 @@ otelcol.auth.sigv4 "creds" { ### Specifying "region" and "service" explicitly and adding a "role" to assume -In this example we have also specified configuration to assume a role. `sts_region` has not been +In this example we have also specified configuration to assume a role. `sts_region` has not been provided, so it will default to the value of `region` which is `example_region`. ```river @@ -167,7 +167,7 @@ otelcol.exporter.otlp "example" { otelcol.auth.sigv4 "creds" { region = "example_region" service = "example_service" - + assume_role { session_name = "role_session_name" } diff --git a/docs/sources/flow/reference/components/otelcol.connector.servicegraph.md b/docs/sources/flow/reference/components/otelcol.connector.servicegraph.md index a37a300111e7..04bb38fdb123 100644 --- a/docs/sources/flow/reference/components/otelcol.connector.servicegraph.md +++ b/docs/sources/flow/reference/components/otelcol.connector.servicegraph.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.connector.servicegraph/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.connector.servicegraph/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.connector.servicegraph/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.connector.servicegraph/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.connector.servicegraph/ description: Learn about otelcol.connector.servicegraph labels: @@ -13,10 +13,10 @@ title: otelcol.connector.servicegraph {{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}} -`otelcol.connector.servicegraph` accepts span data from other `otelcol` components and +`otelcol.connector.servicegraph` accepts span data from other `otelcol` components and outputs metrics representing the relationship between various services in a system. A metric represents an edge in the service graph. -Those metrics can then be used by a data visualization application (e.g. +Those metrics can then be used by a data visualization application (e.g. [Grafana](/docs/grafana/latest/explore/trace-integration/#service-graph)) to draw the service graph. @@ -31,11 +31,11 @@ This component is based on [Grafana Tempo's service graph processor](https://git Service graphs are useful for a number of use-cases: -* Infer the topology of a distributed system. As distributed systems grow, they become more complex. +- Infer the topology of a distributed system. As distributed systems grow, they become more complex. Service graphs can help you understand the structure of the system. -* Provide a high level overview of the health of your system. +- Provide a high level overview of the health of your system. Service graphs show error rates, latencies, and other relevant data. -* Provide a historic view of a system’s topology. +- Provide a historic view of a system’s topology. Distributed systems change very frequently, and service graphs offer a way of seeing how these systems have evolved over time. @@ -59,36 +59,37 @@ otelcol.connector.servicegraph "LABEL" { `otelcol.connector.servicegraph` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`latency_histogram_buckets` | `list(duration)` | Buckets for latency histogram metrics. 
| `["2ms", "4ms", "6ms", "8ms", "10ms", "50ms", "100ms", "200ms", "400ms", "800ms", "1s", "1400ms", "2s", "5s", "10s", "15s"]` | no -`dimensions` | `list(string)` | A list of dimensions to add with the default dimensions. | `[]` | no -`cache_loop` | `duration` | Configures how often to delete series which have not been updated. | `"1m"` | no -`store_expiration_loop` | `duration` | The time to expire old entries from the store periodically. | `"2s"` | no -`metrics_flush_interval` | `duration` | The interval at which metrics are flushed to downstream components. | `"0s"` | no +| Name | Type | Description | Default | Required | +| --------------------------- | ---------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------- | +| `latency_histogram_buckets` | `list(duration)` | Buckets for latency histogram metrics. | `["2ms", "4ms", "6ms", "8ms", "10ms", "50ms", "100ms", "200ms", "400ms", "800ms", "1s", "1400ms", "2s", "5s", "10s", "15s"]` | no | +| `dimensions` | `list(string)` | A list of dimensions to add with the default dimensions. | `[]` | no | +| `cache_loop` | `duration` | Configures how often to delete series which have not been updated. | `"1m"` | no | +| `store_expiration_loop` | `duration` | The time to expire old entries from the store periodically. | `"2s"` | no | +| `metrics_flush_interval` | `duration` | The interval at which metrics are flushed to downstream components. | `"0s"` | no | -Service graphs work by inspecting traces and looking for spans with +Service graphs work by inspecting traces and looking for spans with parent-children relationship that represent a request. -`otelcol.connector.servicegraph` uses OpenTelemetry semantic conventions +`otelcol.connector.servicegraph` uses OpenTelemetry semantic conventions to detect a myriad of requests. The following requests are currently supported: -* A direct request between two services, where the outgoing and the incoming span +- A direct request between two services, where the outgoing and the incoming span must have a [Span Kind][] value of `client` and `server` respectively. -* A request across a messaging system, where the outgoing and the incoming span +- A request across a messaging system, where the outgoing and the incoming span must have a [Span Kind][] value of `producer` and `consumer` respectively. -* A database request, where spans have a [Span Kind][] with a value of `client`, +- A database request, where spans have a [Span Kind][] with a value of `client`, as well as an attribute with a key of `db.name`. Every span which can be paired up to form a request is kept in an in-memory store: -* If the TTL of the span expires before it can be paired, it is deleted from the store. + +- If the TTL of the span expires before it can be paired, it is deleted from the store. TTL is configured in the [store][] block. -* If the span is paired prior to its expiration, a metric is recorded and the span is deleted from the store. +- If the span is paired prior to its expiration, a metric is recorded and the span is deleted from the store. 
The following metrics are emitted by the processor:

| Metric                                       | Type      | Labels                          | Description                                                   |
-|---------------------------------------------|-----------|---------------------------------|--------------------------------------------------------------|
+| ------------------------------------------- | --------- | ------------------------------- | ------------------------------------------------------------ |
| traces_service_graph_request_total          | Counter   | client, server, connection_type | Total count of requests between two nodes                    |
| traces_service_graph_request_failed_total   | Counter   | client, server, connection_type | Total count of failed requests between two nodes             |
| traces_service_graph_request_server_seconds | Histogram | client, server, connection_type | Time for a request between two nodes as seen from the server |
@@ -98,20 +99,21 @@ The following metrics are emitted by the processor:

Duration is measured both from the client and the server sides.

-The `latency_histogram_buckets` argument controls the buckets for
+The `latency_histogram_buckets` argument controls the buckets for
`traces_service_graph_request_server_seconds` and `traces_service_graph_request_client_seconds`.

-Each emitted metrics series have a `client` and a `server` label corresponding with the
-service doing the request and the service receiving the request. The value of the label
+Each emitted metric series has a `client` and a `server` label corresponding to the
+service making the request and the service receiving the request. The value of the label
is derived from the `service.name` resource attribute of the two spans.

The `connection_type` label might not be set. If it is set, its value will be either `messaging_system` or `database`.

Additional labels can be included using the `dimensions` configuration option:
+
+- Those labels will have a prefix to mark where they originate (client or server span kinds).
  The `client_` prefix relates to the dimensions coming from spans with a [Span Kind][] of `client`.
  The `server_` prefix relates to the dimensions coming from spans with a [Span Kind][] of `server`.
-* Firstly the resource attributes will be searched. If the attribute is not found,
+- The resource attributes will be searched first. If the attribute is not found,
  the span attributes will be searched.

When `metrics_flush_interval` is set to `0s`, metrics will be flushed on every received batch of traces.

@@ -123,10 +125,10 @@ When `metrics_flush_interval` is set to `0s`, metrics will be flushed on every r

The following blocks are supported inside the definition of
`otelcol.connector.servicegraph`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-store | [store][] | Configures the in-memory store for spans. | no
-output | [output][] | Configures where to send telemetry data. | yes
+| Hierarchy | Block      | Description                               | Required |
+| --------- | ---------- | ----------------------------------------- | -------- |
+| store     | [store][]  | Configures the in-memory store for spans. | no       |
+| output    | [output][] | Configures where to send telemetry data.  | yes      |

[store]: #store-block
[output]: #output-block

@@ -135,10 +137,10 @@ output | [output][] | Configures where to send telemetry data. | yes

The `store` block configures the in-memory store for spans. 
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`max_items` | `number` | Maximum number of items to keep in the store. | `1000` | no
-`ttl` | `duration` | The time to live for spans in the store. | `"2s"` | no
+| Name        | Type       | Description                                   | Default | Required |
+| ----------- | ---------- | --------------------------------------------- | ------- | -------- |
+| `max_items` | `number`   | Maximum number of items to keep in the store. | `1000`  | no       |
+| `ttl`       | `duration` | The time to live for spans in the store.      | `"2s"`  | no       |

### output block

@@ -148,9 +150,9 @@ Name | Type | Description | Default | Required

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+| Name    | Type               | Description                                                       |
+| ------- | ------------------ | ----------------------------------------------------------------- |
+| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.  |

`input` accepts `otelcol.Consumer` traces telemetry data. It does not accept metrics or logs.

@@ -177,7 +179,7 @@ otelcol.receiver.otlp "default" {
   grpc {
     endpoint = "0.0.0.0:4320"
   }
-
+
   output {
     traces = [otelcol.connector.servicegraph.default.input,otelcol.exporter.otlp.grafana_cloud_tempo.input]
   }
@@ -197,7 +199,7 @@ otelcol.exporter.prometheus "default" {
 prometheus.remote_write "mimir" {
   endpoint {
     url = "https://prometheus-xxx.grafana.net/api/prom/push"
-
+
     basic_auth {
       username = env("PROMETHEUS_USERNAME")
       password = env("GRAFANA_CLOUD_API_KEY")
@@ -219,7 +221,8 @@ otelcol.auth.basic "grafana_cloud_tempo" {
 }
 ```

 Some of the metrics in Mimir may look like this:
+
 ```
 traces_service_graph_request_total{client="shop-backend",failed="false",server="article-service",client_http_method="DELETE",server_http_method="DELETE"}
 traces_service_graph_request_failed_total{client="shop-backend",client_http_method="POST",failed="false",server="auth-service",server_http_method="POST"}
 ```
diff --git a/docs/sources/flow/reference/components/otelcol.connector.spanlogs.md b/docs/sources/flow/reference/components/otelcol.connector.spanlogs.md
index ec49e0509c7a..9fa919609bfb 100644
--- a/docs/sources/flow/reference/components/otelcol.connector.spanlogs.md
+++ b/docs/sources/flow/reference/components/otelcol.connector.spanlogs.md
@@ -1,9 +1,9 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.connector.spanlogs/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.connector.spanlogs/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.connector.spanlogs/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.connector.spanlogs/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.connector.spanlogs/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.connector.spanlogs/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.connector.spanlogs/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.connector.spanlogs/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.connector.spanlogs/
 description: Learn about otelcol.connector.spanlogs
 title: otelcol.connector.spanlogs
@@ -279,6 +279,7 @@ For an input trace like this...
   ]
 }
 ```
+

 ## Compatible components

@@ -296,4 +297,4 @@ Connecting some components may not be sensible or components may require further
 Refer to the linked documentation for more details.
 {{< /admonition >}}

-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/otelcol.connector.spanmetrics.md b/docs/sources/flow/reference/components/otelcol.connector.spanmetrics.md
index bfbcec6a129a..a96532dfaf31 100644
--- a/docs/sources/flow/reference/components/otelcol.connector.spanmetrics.md
+++ b/docs/sources/flow/reference/components/otelcol.connector.spanmetrics.md
@@ -1,9 +1,9 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.connector.spanmetrics/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.connector.spanmetrics/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.connector.spanmetrics/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.connector.spanmetrics/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.connector.spanmetrics/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.connector.spanmetrics/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.connector.spanmetrics/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.connector.spanmetrics/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.connector.spanmetrics/
 description: Learn about otelcol.connector.spanmetrics
 labels:
@@ -23,24 +23,27 @@ aggregates Request, Error and Duration (R.E.D) OpenTelemetry metrics from the sp
   view call counts just on `service.name` and `span.name`.

   Requests are tracked using a `calls` metric with a `status.code` datapoint attribute set to `Ok`:
+
   ```
   calls { service.name="shipping", span.name="get_shipping/{shippingId}", span.kind="SERVER", status.code="Ok" }
   ```

- **Error** counts are computed from the number of spans with an `Error` status code. 
-  Errors are tracked using a `calls` metric with a `status.code` datapoint attribute set to `Error`:
-  ```
-  calls { service.name="shipping", span.name="get_shipping/{shippingId}, span.kind="SERVER", status.code="Error" }
-  ```
+  Errors are tracked using a `calls` metric with a `status.code` datapoint attribute set to `Error`:
+
+  ```
+  calls { service.name="shipping", span.name="get_shipping/{shippingId}", span.kind="SERVER", status.code="Error" }
+  ```

- **Duration** is computed from the difference between the span start and end times and inserted
-  into the relevant duration histogram time bucket for each unique set dimensions.
+  into the relevant duration histogram time bucket for each unique set of dimensions.

-  Span durations are tracked using a `duration` histogram metric:
-  ```
-  duration { service.name="shipping", span.name="get_shipping/{shippingId}", span.kind="SERVER", status.code="Ok" }
-  ```
+  Span durations are tracked using a `duration` histogram metric:
+
+  ```
+  duration { service.name="shipping", span.name="get_shipping/{shippingId}", span.kind="SERVER", status.code="Ok" }
+  ```

 > **NOTE**: `otelcol.connector.spanmetrics` is a wrapper over the upstream
 > OpenTelemetry Collector `spanmetrics` connector. Bug reports or feature requests
@@ -67,15 +70,15 @@ otelcol.connector.spanmetrics "LABEL" {

 `otelcol.connector.spanmetrics` supports the following arguments:

-| Name                              | Type           | Description                                                                                  | Default        | Required |
-| --------------------------------- | -------------- | ------------------------------------------------------------------------------------------ | -------------- | -------- |
-| `aggregation_temporality`         | `string`       | Configures whether to reset the metrics after flushing.                                      | `"CUMULATIVE"` | no       |
-| `dimensions_cache_size`           | `number`       | How many dimensions to cache.                                                                | `1000`         | no       |
-| `exclude_dimensions`              | `list(string)` | List of dimensions to be excluded from the default set of dimensions.                        | `[]`           | no       |
-| `metrics_flush_interval`          | `duration`     | How often to flush generated metrics.                                                        | `"15s"`        | no       |
-| `namespace`                       | `string`       | Metric namespace.                                                                            | `""`           | no       |
-| `resource_metrics_cache_size`     | `number`       | The size of the cache holding metrics for a service.                                         | `1000`         | no       |
-| `resource_metrics_key_attributes` | `list(string)` | Limits the resource attributes used to create the metrics.                                   | `[]`           | no       |
+| Name                              | Type           | Description                                                            | Default        | Required |
+| --------------------------------- | -------------- | ---------------------------------------------------------------------- | -------------- | -------- |
+| `aggregation_temporality`         | `string`       | Configures whether to reset the metrics after flushing.                 | `"CUMULATIVE"` | no       |
+| `dimensions_cache_size`           | `number`       | How many dimensions to cache.                                           | `1000`         | no       |
+| `exclude_dimensions`              | `list(string)` | List of dimensions to be excluded from the default set of dimensions.   | `[]`           | no       |
+| `metrics_flush_interval`          | `duration`     | How often to flush generated metrics.                                   | `"15s"`        | no       |
+| `namespace`                       | `string`       | Metric namespace.                                                       | `""`           | no       |
+| `resource_metrics_cache_size`     | `number`       | The size of the cache holding metrics for a service.                    | `1000`         | no       |
+| `resource_metrics_key_attributes` | `list(string)` | Limits the resource attributes used to create the metrics.              | `[]`           | no       |

 Adjusting `dimensions_cache_size` can improve the Agent process' memory usage. 
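As a point of reference, a minimal configuration that combines several of these arguments might look like the following sketch. The exporter label in `output` is hypothetical; a `histogram` block (with either an `explicit` or an `exponential` child block) and an `output` block are required.

```river
otelcol.connector.spanmetrics "default" {
  // Bound the dimensions cache to limit memory usage.
  dimensions_cache_size  = 500
  metrics_flush_interval = "30s"

  histogram {
    explicit {
      buckets = ["50ms", "100ms", "250ms", "1s"]
    }
  }

  output {
    // Hypothetical downstream exporter for the generated metrics.
    metrics = [otelcol.exporter.otlp.default.input]
  }
}
```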
@@ -88,8 +91,8 @@ If `namespace` is set, the generated metric name will be added a `namespace.` pr `resource_metrics_cache_size` is mostly relevant for cumulative temporality. It helps avoid issues with increasing memory and with incorrect metric timestamp resets. -`resource_metrics_key_attributes` can be used to avoid situations where resource attributes may change across service restarts, -causing metric counters to break (and duplicate). A resource does not need to have all of the attributes. +`resource_metrics_key_attributes` can be used to avoid situations where resource attributes may change across service restarts, +causing metric counters to break (and duplicate). A resource does not need to have all of the attributes. The list must include enough attributes to properly identify unique resources or risk aggregating data from more than one service and span. For example, `["service.name", "telemetry.sdk.language", "telemetry.sdk.name"]`. @@ -98,16 +101,16 @@ For example, `["service.name", "telemetry.sdk.language", "telemetry.sdk.name"]`. The following blocks are supported inside the definition of `otelcol.connector.spanmetrics`: -| Hierarchy | Block | Description | Required | -| ----------------------- | --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -| dimension | [dimension][] | Dimensions to be added in addition to the default ones. | no | -| events | [events][] | Configures the events metric. | no | +| Hierarchy | Block | Description | Required | +| ----------------------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | +| dimension | [dimension][] | Dimensions to be added in addition to the default ones. | no | +| events | [events][] | Configures the events metric. | no | | events > dimension | [dimension][] | Span event attributes to add as dimensions to the events metric, _on top of_ the default ones and the ones configured in the top-level `dimension` block. | no | -| exemplars | [exemplars][] | Configures how to attach exemplars to histograms. | no | -| histogram | [histogram][] | Configures the histogram derived from spans durations. | yes | -| histogram > explicit | [explicit][] | Configuration for a histogram with explicit buckets. | no | -| histogram > exponential | [exponential][] | Configuration for a histogram with exponential buckets. | no | -| output | [output][] | Configures where to send telemetry data. | yes | +| exemplars | [exemplars][] | Configures how to attach exemplars to histograms. | no | +| histogram | [histogram][] | Configures the histogram derived from spans durations. | yes | +| histogram > explicit | [explicit][] | Configuration for a histogram with explicit buckets. | no | +| histogram > exponential | [exponential][] | Configuration for a histogram with exponential buckets. | no | +| output | [output][] | Configures where to send telemetry data. | yes | It is necessary to specify either a "[exponential][]" or an "[explicit][]" block: @@ -656,6 +659,7 @@ This problem can be solved by doing **either** of the following: - **Recommended approach:** Prior to `otelcol.connector.spanmetrics`, remove all resource attributes from the incoming spans which are not needed by `otelcol.connector.spanmetrics`. 
{{< collapse title="Example River configuration to remove unnecessary resource attributes." >}} + ```river otelcol.receiver.otlp "default" { http {} @@ -716,14 +720,16 @@ This problem can be solved by doing **either** of the following: } } ``` + {{< /collapse >}} - Or, after `otelcol.connector.spanmetrics`, copy each of the resource attributes as a metric datapoint attribute. -This has the advantage that the resource attributes will be visible as metric labels. -However, the {{< term "cardinality" >}}cardinality{{< /term >}} of the metrics may be much higher, which could increase the cost of storing and querying them. -The example below uses the [merge_maps][] OTTL function. + This has the advantage that the resource attributes will be visible as metric labels. + However, the {{< term "cardinality" >}}cardinality{{< /term >}} of the metrics may be much higher, which could increase the cost of storing and querying them. + The example below uses the [merge_maps][] OTTL function. {{< collapse title="Example River configuration to add all resource attributes as metric datapoint attributes." >}} + ```river otelcol.receiver.otlp "default" { http {} @@ -775,6 +781,7 @@ The example below uses the [merge_maps][] OTTL function. } } ``` + {{< /collapse >}} If the resource attributes are not treated in either of the ways described above, an error such as this one could be logged by `prometheus.remote_write`: diff --git a/docs/sources/flow/reference/components/otelcol.exporter.debug.md b/docs/sources/flow/reference/components/otelcol.exporter.debug.md deleted file mode 100644 index a3006d9d9bf2..000000000000 --- a/docs/sources/flow/reference/components/otelcol.exporter.debug.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.debug/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.debug/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.debug/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.debug/ -canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.debug/ -description: Learn about otelcol.exporter.debug -labels: - stage: experimental -title: otelcol.exporter.debug ---- - -# otelcol.exporter.debug - -`otelcol.exporter.debug` accepts telemetry data from other `otelcol` components and writes them to the console (stderr). -You can control the verbosity of the logs. - -{{< admonition type="note" >}} -`otelcol.exporter.debug` is a wrapper over the upstream OpenTelemetry Collector `debug` exporter. -If necessary, bug reports or feature requests are redirected to the upstream repository. -{{< /admonition >}} - -Multiple `otelcol.exporter.debug` components can be specified by giving them different labels. - -## Usage - -```river -otelcol.exporter.debug "LABEL" { } -``` - -## Arguments - -`otelcol.exporter.debug` supports the following arguments: - -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`verbosity` | `string` | Verbosity of the generated logs. | `"normal"` | no -`sampling_initial` | `int` | Number of messages initially logged each second. | `2` | no -`sampling_thereafter` | `int` | Sampling rate after the initial messages are logged. | `500` | no - -The `verbosity` argument must be one of `"basic"`, `"normal"`, or `"detailed"`. 
- -## Exported fields - -The following fields are exported and can be referenced by other components: - -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. - -`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, -logs, or traces). - -## Component health - -`otelcol.exporter.debug` is only reported as unhealthy if given an invalid -configuration. - -## Debug information - -`otelcol.exporter.debug` does not expose any component-specific debug -information. - -## Example - -This example scrapes Prometheus UNIX metrics and writes them to the console: - -```river -prometheus.exporter.unix "default" { } - -prometheus.scrape "default" { - targets = prometheus.exporter.unix.default.targets - forward_to = [otelcol.receiver.prometheus.default.receiver] -} - -otelcol.receiver.prometheus "default" { - output { - metrics = [otelcol.exporter.debug.default.input] - } -} - -otelcol.exporter.debug "default" { - verbosity = "detailed" - sampling_initial = 1 - sampling_thereafter = 1 -} -``` - - -## Compatible components - -`otelcol.exporter.debug` has exports that can be consumed by the following components: - -- Components that consume [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-consumers) - -{{< admonition type="note" >}} -Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. -Refer to the linked documentation for more details. -{{< /admonition >}} - - \ No newline at end of file diff --git a/docs/sources/flow/reference/components/otelcol.exporter.loadbalancing.md b/docs/sources/flow/reference/components/otelcol.exporter.loadbalancing.md index f25e28bfa345..9272baef51b1 100644 --- a/docs/sources/flow/reference/components/otelcol.exporter.loadbalancing.md +++ b/docs/sources/flow/reference/components/otelcol.exporter.loadbalancing.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.loadbalancing/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.loadbalancing/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.loadbalancing/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.loadbalancing/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.loadbalancing/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.loadbalancing/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.loadbalancing/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.loadbalancing/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.loadbalancing/ description: Learn about otelcol.exporter.loadbalancing labels: @@ -18,7 +18,7 @@ title: otelcol.exporter.loadbalancing `otelcol.exporter.loadbalancing` accepts logs and traces from other `otelcol` components -and writes them over the network using the OpenTelemetry Protocol (OTLP) protocol. +and writes them over the network using the OpenTelemetry Protocol (OTLP) protocol. > **NOTE**: `otelcol.exporter.loadbalancing` is a wrapper over the upstream > OpenTelemetry Collector `loadbalancing` exporter. 
Bug reports or feature requests will
@@ -27,20 +27,20 @@ and writes them over the network using the OpenTelemetry Protocol (OTLP) protoco
 Multiple `otelcol.exporter.loadbalancing` components can be specified by giving them
 different labels.

-The decision which backend to use depends on the trace ID or the service name.
-The backend load doesn't influence the choice. Even though this load-balancer won't do
-round-robin balancing of the batches, the load distribution should be very similar among backends,
+The decision of which backend to use depends on the trace ID or the service name.
+The backend load doesn't influence the choice. Even though this load balancer won't do
+round-robin balancing of the batches, the load distribution should be very similar among backends,
 with a standard deviation under 5% with the current configuration.

 `otelcol.exporter.loadbalancing` is especially useful for backends configured with tail-based
 samplers, which choose a backend based on the view of the full trace.

-When a list of backends is updated, some of the signals will be rerouted to different backends.
+When the list of backends is updated, some of the signals will be rerouted to different backends.
 Around R/N of the "routes" will be rerouted differently, where:

-* A "route" is either a trace ID or a service name mapped to a certain backend.
-* "R" is the total number of routes.
-* "N" is the total number of backends.
+- A "route" is either a trace ID or a service name mapped to a certain backend.
+- "R" is the total number of routes.
+- "N" is the total number of backends.

 This should be stable enough for most cases, and the larger the number of backends, the less
 disruption it should cause.

@@ -63,35 +63,36 @@ otelcol.exporter.loadbalancing "LABEL" {

 `otelcol.exporter.loadbalancing` supports the following arguments:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`routing_key` | `string` | Routing strategy for load balancing. | `"traceID"` | no
+| Name          | Type     | Description                           | Default     | Required |
+| ------------- | -------- | ------------------------------------- | ----------- | -------- |
+| `routing_key` | `string` | Routing strategy for load balancing.  | `"traceID"` | no       |

 The `routing_key` attribute determines how to route signals across endpoints. Its value can be one of the following:
+
-* `"service"`: spans with the same `service.name` will be exported to the same backend.
-This is useful when using processors like the span metrics, so all spans for each service are sent to consistent Agent instances
-for metric collection. Otherwise, metrics for the same services would be sent to different Agents, making aggregations inaccurate.
-* `"traceID"`: spans belonging to the same traceID will be exported to the same backend.
+- `"service"`: spans with the same `service.name` will be exported to the same backend.
+  This is useful when using processors like the span metrics, so all spans for each service are sent to consistent Agent instances
+  for metric collection. Otherwise, metrics for the same services would be sent to different Agents, making aggregations inaccurate.
+- `"traceID"`: spans belonging to the same traceID will be exported to the same backend.

 ## Blocks

 The following blocks are supported inside the definition of
 `otelcol.exporter.loadbalancing`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-resolver | [resolver][] | Configures discovering the endpoints to export to. 
| yes -resolver > static | [static][] | Static list of endpoints to export to. | no -resolver > dns | [dns][] | DNS-sourced list of endpoints to export to. | no -resolver > kubernetes | [kubernetes][] | Kubernetes-sourced list of endpoints to export to. | no -protocol | [protocol][] | Protocol settings. Only OTLP is supported at the moment. | no -protocol > otlp | [otlp][] | Configures an OTLP exporter. | no -protocol > otlp > client | [client][] | Configures the exporter gRPC client. | no -protocol > otlp > client > tls | [tls][] | Configures TLS for the gRPC client. | no -protocol > otlp > client > keepalive | [keepalive][] | Configures keepalive settings for the gRPC client. | no -protocol > otlp > queue | [queue][] | Configures batching of data before sending. | no -protocol > otlp > retry | [retry][] | Configures retry mechanism for failed requests. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no +| Hierarchy | Block | Description | Required | +| ------------------------------------ | ----------------- | -------------------------------------------------------------------------- | -------- | +| resolver | [resolver][] | Configures discovering the endpoints to export to. | yes | +| resolver > static | [static][] | Static list of endpoints to export to. | no | +| resolver > dns | [dns][] | DNS-sourced list of endpoints to export to. | no | +| resolver > kubernetes | [kubernetes][] | Kubernetes-sourced list of endpoints to export to. | no | +| protocol | [protocol][] | Protocol settings. Only OTLP is supported at the moment. | no | +| protocol > otlp | [otlp][] | Configures an OTLP exporter. | no | +| protocol > otlp > client | [client][] | Configures the exporter gRPC client. | no | +| protocol > otlp > client > tls | [tls][] | Configures TLS for the gRPC client. | no | +| protocol > otlp > client > keepalive | [keepalive][] | Configures keepalive settings for the gRPC client. | no | +| protocol > otlp > queue | [queue][] | Configures batching of data before sending. | no | +| protocol > otlp > retry | [retry][] | Configures retry mechanism for failed requests. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | The `>` symbol indicates deeper levels of nesting. For example, `resolver > static` refers to a `static` block defined inside a `resolver` block. @@ -113,7 +114,7 @@ refers to a `static` block defined inside a `resolver` block. The `resolver` block configures how to retrieve the endpoint to which this exporter will send data. -Inside the `resolver` block, either the [dns][] block or the [static][] block +Inside the `resolver` block, either the [dns][] block or the [static][] block should be specified. If both `dns` and `static` are specified, `dns` takes precedence. ### static block @@ -122,42 +123,42 @@ The `static` block configures a list of endpoints which this exporter will send The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`hostnames` | `list(string)` | List of endpoints to export to. | | yes +| Name | Type | Description | Default | Required | +| ----------- | -------------- | ------------------------------- | ------- | -------- | +| `hostnames` | `list(string)` | List of endpoints to export to. | | yes | ### dns block -The `dns` block periodically resolves an IP address via the DNS `hostname` attribute. 
This IP address
-and the port specified via the `port` attribute will then be used by the gRPC exporter
+The `dns` block periodically resolves an IP address via the DNS `hostname` attribute. This IP address
+and the port specified via the `port` attribute will then be used by the gRPC exporter
 as the endpoint to which to export data.
 
 The following arguments are supported:
 
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`hostname` | `string` | DNS hostname to resolve. | | yes
-`interval` | `duration` | Resolver interval. | `"5s"` | no
-`timeout` | `duration` | Resolver timeout. | `"1s"` | no
-`port` | `string` | Port to be used with the IP addresses resolved from the DNS hostname. | `"4317"` | no
+| Name       | Type       | Description                                                            | Default  | Required |
+| ---------- | ---------- | ---------------------------------------------------------------------- | -------- | -------- |
+| `hostname` | `string`   | DNS hostname to resolve.                                                | | yes      |
+| `interval` | `duration` | Resolver interval.                                                      | `"5s"`   | no       |
+| `timeout`  | `duration` | Resolver timeout.                                                       | `"1s"`   | no       |
+| `port`     | `string`   | Port to be used with the IP addresses resolved from the DNS hostname.  | `"4317"` | no       |
 
 ### kubernetes block
 
-You can use the `kubernetes` block to load balance across the pods of a Kubernetes service. 
-The Kubernetes API notifies {{< param "PRODUCT_NAME" >}} whenever a new pod is added or removed from the service. 
+You can use the `kubernetes` block to load balance across the pods of a Kubernetes service.
+The Kubernetes API notifies {{< param "PRODUCT_NAME" >}} whenever a new pod is added or removed from the service.
 The `kubernetes` resolver has a much faster response time than the `dns` resolver because it doesn't require polling.
 
 The following arguments are supported:
 
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`service` | `string` | Kubernetes service to resolve. | | yes
-`ports` | `list(number)` | Ports to use with the IP addresses resolved from `service`. | `[4317]` | no
+| Name      | Type           | Description                                                  | Default  | Required |
+| --------- | -------------- | ------------------------------------------------------------ | -------- | -------- |
+| `service` | `string`       | Kubernetes service to resolve.                                | | yes      |
+| `ports`   | `list(number)` | Ports to use with the IP addresses resolved from `service`.  | `[4317]` | no       |
 
-If no namespace is specified inside `service`, an attempt will be made to infer the namespace for this Agent. 
+If no namespace is specified inside `service`, an attempt will be made to infer the namespace for this Agent.
 If this fails, the `default` namespace will be used.
 
-Each of the ports listed in `ports` will be used with each of the IPs resolved from `service`. 
+Each of the ports listed in `ports` will be used with each of the IPs resolved from `service`.
 
 The "get", "list", and "watch" [roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-example) must be granted in Kubernetes
 for the resolver to work.
 
@@ -173,21 +174,21 @@ The `otlp` block configures OTLP-related settings for exporting.
 
 ### client block
 
-The `client` block configures the gRPC client used by the component. 
+The `client` block configures the gRPC client used by the component.
The endpoints used by the `client` block are the ones from the `resolver` block.
 
 The following arguments are supported:
 
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no
-`read_buffer_size` | `string` | Size of the read buffer the gRPC client to use for reading server responses. | | no
-`write_buffer_size` | `string` | Size of the write buffer the gRPC client to use for writing requests. | `"512KiB"` | no
-`wait_for_ready` | `boolean` | Waits for gRPC connection to be in the `READY` state before sending data. | `false` | no
-`headers` | `map(string)` | Additional headers to send with the request. | `{}` | no
-`balancer_name` | `string` | Which gRPC client-side load balancer to use for requests. | `pick_first` | no
-`authority` | `string` | Overrides the default `:authority` header in gRPC requests from the gRPC client. | | no
-`auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no
+| Name                | Type                       | Description                                                                        | Default      | Required |
+| ------------------- | -------------------------- | ---------------------------------------------------------------------------------- | ------------ | -------- |
+| `compression`       | `string`                   | Compression mechanism to use for requests.                                          | `"gzip"`     | no       |
+| `read_buffer_size`  | `string`                   | Size of the read buffer the gRPC client uses for reading server responses.          | | no       |
+| `write_buffer_size` | `string`                   | Size of the write buffer the gRPC client uses for writing requests.                 | `"512KiB"`   | no       |
+| `wait_for_ready`    | `boolean`                  | Waits for gRPC connection to be in the `READY` state before sending data.           | `false`      | no       |
+| `headers`           | `map(string)`              | Additional headers to send with the request.                                        | `{}`         | no       |
+| `balancer_name`     | `string`                   | Which gRPC client-side load balancer to use for requests.                           | `pick_first` | no       |
+| `authority`         | `string`                   | Overrides the default `:authority` header in gRPC requests from the gRPC client.    | | no       |
+| `auth`              | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests.        | | no       |
 
 {{< docs/shared lookup="flow/reference/components/otelcol-compression-field.md" source="agent" version="" >}}
 
@@ -197,8 +198,8 @@ Name | Type | Description | Default | Required
 
 You can configure an HTTP proxy with the following environment variables:
 
-* `HTTPS_PROXY`
-* `NO_PROXY`
+- `HTTPS_PROXY`
+- `NO_PROXY`
 
 The `HTTPS_PROXY` environment variable specifies a URL to use for proxying
 requests. Connections to the proxy are established via [the `HTTP CONNECT`
@@ -231,11 +232,11 @@ connections.
 
 The following arguments are supported:
 
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`ping_wait` | `duration` | How often to ping the server after no activity. | | no
-`ping_response_timeout` | `duration` | Time to wait before closing inactive connections if the server does not respond to a ping. | | no
-`ping_without_stream` | `boolean` | Send pings even if there is no active stream request. | | no
+| Name                    | Type       | Description                                                                                  | Default | Required |
+| ----------------------- | ---------- | -------------------------------------------------------------------------------------------- | ------- | -------- |
+| `ping_wait`             | `duration` | How often to ping the server after no activity.
| | no | +| `ping_response_timeout` | `duration` | Time to wait before closing inactive connections if the server does not respond to a ping. | | no | +| `ping_without_stream` | `boolean` | Send pings even if there is no active stream request. | | no | ### queue block @@ -259,13 +260,14 @@ retried. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` OTLP-formatted data for telemetry signals of these types: -* logs -* traces + +- logs +- traces ## Choose a load balancing strategy @@ -278,39 +280,46 @@ The use of `otelcol.exporter.loadbalancing` is only necessary for [stateful Flow [stateful-and-stateless-components]: {{< relref "../../get-started/deploy-agent.md#stateful-and-stateless-components" >}} ### otelcol.processor.tail_sampling + + All spans for a given trace ID must go to the same tail sampling {{< param "PRODUCT_ROOT_NAME" >}} instance. -* This can be done by configuring `otelcol.exporter.loadbalancing` with `routing_key = "traceID"`. -* If you do not configure `routing_key = "traceID"`, the sampling decision may be incorrect. + +- This can be done by configuring `otelcol.exporter.loadbalancing` with `routing_key = "traceID"`. +- If you do not configure `routing_key = "traceID"`, the sampling decision may be incorrect. The tail sampler must have a full view of the trace when making a sampling decision. - For example, a `rate_limiting` tail sampling strategy may incorrectly pass through + For example, a `rate_limiting` tail sampling strategy may incorrectly pass through more spans than expected if the spans for the same trace are spread out to more than one {{< param "PRODUCT_NAME" >}} instance. ### otelcol.connector.spanmetrics + All spans for a given `service.name` must go to the same spanmetrics {{< param "PRODUCT_ROOT_NAME" >}}. -* This can be done by configuring `otelcol.exporter.loadbalancing` with `routing_key = "service"`. -* If you do not configure `routing_key = "service"`, metrics generated from spans might be incorrect. -For example, if similar spans for the same `service.name` end up on different {{< param "PRODUCT_ROOT_NAME" >}} instances, the two {{< param "PRODUCT_ROOT_NAME" >}}s will have identical metric series for calculating span latency, errors, and number of requests. -When both {{< param "PRODUCT_ROOT_NAME" >}} instances attempt to write the metrics to a database such as Mimir, the series may clash with each other. -At best, this will lead to an error in {{< param "PRODUCT_ROOT_NAME" >}} and a rejected write to the metrics database. -At worst, it could lead to inaccurate data due to overlapping samples for the metric series. + +- This can be done by configuring `otelcol.exporter.loadbalancing` with `routing_key = "service"`. +- If you do not configure `routing_key = "service"`, metrics generated from spans might be incorrect. + For example, if similar spans for the same `service.name` end up on different {{< param "PRODUCT_ROOT_NAME" >}} instances, the two {{< param "PRODUCT_ROOT_NAME" >}}s will have identical metric series for calculating span latency, errors, and number of requests. 
+  When both {{< param "PRODUCT_ROOT_NAME" >}} instances attempt to write the metrics to a database such as Mimir, the series may clash with each other.
+  At best, this will lead to an error in {{< param "PRODUCT_ROOT_NAME" >}} and a rejected write to the metrics database.
+  At worst, it could lead to inaccurate data due to overlapping samples for the metric series.
 
 However, there are ways to scale `otelcol.connector.spanmetrics` without the need for a load balancer:
+
 1. Each {{< param "PRODUCT_ROOT_NAME" >}} could add an attribute such as `collector.id` to make its series unique.
    Then, for example, you could use a `sum by` PromQL query to aggregate the metrics from different {{< param "PRODUCT_ROOT_NAME" >}}s.
    Unfortunately, an extra `collector.id` attribute has the downside that the metrics stored in the database will have higher {{< term "cardinality" >}}cardinality{{< /term >}}.
 2. Spanmetrics could be generated in the backend database instead of in {{< param "PRODUCT_ROOT_NAME" >}}.
-   For example, span metrics can be [generated][tempo-spanmetrics] in Grafana Cloud by the Tempo traces database. 
+   For example, span metrics can be [generated][tempo-spanmetrics] in Grafana Cloud by the Tempo traces database.
 
 [tempo-spanmetrics]: https://grafana.com/docs/tempo/latest/metrics-generator/span_metrics/
 
 ### otelcol.connector.servicegraph
+
 It is challenging to scale `otelcol.connector.servicegraph` over multiple {{< param "PRODUCT_ROOT_NAME" >}} instances.
 For `otelcol.connector.servicegraph` to work correctly, each "client" span must be paired with a "server" span to calculate metrics such as span duration.
-If a "client" span goes to one {{< param "PRODUCT_ROOT_NAME" >}}, but a "server" span goes to another {{< param "PRODUCT_ROOT_NAME" >}}, then no single {{< param "PRODUCT_ROOT_NAME" >}} will be able to pair the spans and a metric won't be generated. 
+If a "client" span goes to one {{< param "PRODUCT_ROOT_NAME" >}}, but a "server" span goes to another {{< param "PRODUCT_ROOT_NAME" >}}, then no single {{< param "PRODUCT_ROOT_NAME" >}} will be able to pair the spans and a metric won't be generated.
 
 `otelcol.exporter.loadbalancing` can solve this problem partially if it is configured with `routing_key = "traceID"`.
 Each {{< param "PRODUCT_ROOT_NAME" >}} will then be able to calculate a service graph for each "client"/"server" pair in a trace.
@@ -321,26 +330,28 @@ You could differentiate the series by adding an attribute such as `"collector.id"
 The series from different {{< param "PRODUCT_ROOT_NAME" >}}s can be aggregated using PromQL queries on the backend metrics database.
 If the metrics are stored in Grafana Mimir, cardinality issues due to `"collector.id"` labels can be solved using [Adaptive Metrics][adaptive-metrics].
 
-A simpler, more scalable alternative to generating service graph metrics in {{< param "PRODUCT_ROOT_NAME" >}} is to generate them entirely in the backend database. 
+A simpler, more scalable alternative to generating service graph metrics in {{< param "PRODUCT_ROOT_NAME" >}} is to generate them entirely in the backend database.
 For example, service graphs can be [generated][tempo-servicegraphs] in Grafana Cloud by the Tempo traces database.
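If you do run `otelcol.connector.servicegraph` behind `otelcol.exporter.loadbalancing` anyway, the following is a minimal sketch of the `routing_key = "traceID"` setup described above. The backend hostnames are placeholders; a `dns` or `kubernetes` resolver can be used in the same position.

```river
otelcol.exporter.loadbalancing "servicegraph" {
  // Route by trace ID so that every "client"/"server" span pair of a
  // trace lands on the same backend instance and can be matched there.
  routing_key = "traceID"

  resolver {
    static {
      // Placeholder backends.
      hostnames = ["agent-0.example.net:4317", "agent-1.example.net:4317"]
    }
  }

  protocol {
    otlp {
      client {}
    }
  }
}
```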
[tempo-servicegraphs]: https://grafana.com/docs/tempo/latest/metrics-generator/service_graphs/ [adaptive-metrics]: https://grafana.com/docs/grafana-cloud/cost-management-and-billing/reduce-costs/metrics-costs/control-metrics-usage-via-adaptive-metrics/ ### Mixing stateful components + + Different {{< param "PRODUCT_NAME" >}} components may require a different `routing_key` for `otelcol.exporter.loadbalancing`. For example, `otelcol.processor.tail_sampling` requires `routing_key = "traceID"` whereas `otelcol.connector.spanmetrics` requires `routing_key = "service"`. To load balance both types of components, two different sets of load balancers have to be set up: -* One set of `otelcol.exporter.loadbalancing` with `routing_key = "traceID"`, sending spans to {{< param "PRODUCT_ROOT_NAME" >}}s doing tail sampling and no span metrics. -* Another set of `otelcol.exporter.loadbalancing` with `routing_key = "service"`, sending spans to {{< param "PRODUCT_ROOT_NAME" >}}s doing span metrics and no service graphs. +- One set of `otelcol.exporter.loadbalancing` with `routing_key = "traceID"`, sending spans to {{< param "PRODUCT_ROOT_NAME" >}}s doing tail sampling and no span metrics. +- Another set of `otelcol.exporter.loadbalancing` with `routing_key = "service"`, sending spans to {{< param "PRODUCT_ROOT_NAME" >}}s doing span metrics and no service graphs. Unfortunately, this can also lead to side effects. For example, if `otelcol.connector.spanmetrics` is configured to generate exemplars, the tail sampling {{< param "PRODUCT_ROOT_NAME" >}}s might drop the trace that the exemplar points to. There is no coordination between the tail sampling {{< param "PRODUCT_ROOT_NAME" >}}s and the span metrics {{< param "PRODUCT_ROOT_NAME" >}}s to make sure trace IDs for exemplars are kept. - + ```bash k3d cluster create grafana-agent-lb-test kubectl apply -f kubernetes_config.yaml @@ -658,7 +672,7 @@ k3d cluster delete grafana-agent-lb-test ### Kubernetes resolver -When you configure `otelcol.exporter.loadbalancing` with a `kubernetes` resolver, the Kubernetes API notifies {{< param "PRODUCT_NAME" >}} whenever a new pod is added or removed from the service. +When you configure `otelcol.exporter.loadbalancing` with a `kubernetes` resolver, the Kubernetes API notifies {{< param "PRODUCT_NAME" >}} whenever a new pod is added or removed from the service. Spans are exported to the addresses from the Kubernetes API, combined with all the possible `ports`. ```river @@ -678,18 +692,20 @@ otelcol.exporter.loadbalancing "default" { ``` The following example shows a Kubernetes configuration that sets up two sets of {{< param "PRODUCT_ROOT_NAME" >}}s: -* A pool of load-balancer {{< param "PRODUCT_ROOT_NAME" >}}s: - * Spans are received from instrumented applications via `otelcol.receiver.otlp` - * Spans are exported via `otelcol.exporter.loadbalancing`. - * The load-balancer {{< param "PRODUCT_ROOT_NAME" >}}s will get notified by the Kubernetes API any time a pod + +- A pool of load-balancer {{< param "PRODUCT_ROOT_NAME" >}}s: + - Spans are received from instrumented applications via `otelcol.receiver.otlp` + - Spans are exported via `otelcol.exporter.loadbalancing`. + - The load-balancer {{< param "PRODUCT_ROOT_NAME" >}}s will get notified by the Kubernetes API any time a pod is added or removed from the pool of sampling {{< param "PRODUCT_ROOT_NAME" >}}s. -* A pool of sampling {{< param "PRODUCT_ROOT_NAME" >}}s: - * The sampling {{< param "PRODUCT_ROOT_NAME" >}}s do not need to run behind a headless service. 
- * Spans are received from the load-balancer {{< param "PRODUCT_ROOT_NAME" >}}s via `otelcol.receiver.otlp`
- * Traces are sampled via `otelcol.processor.tail_sampling`.
- * The traces are exported via `otelcol.exporter.otlp` to a an OTLP-compatible database such as Tempo.
+- A pool of sampling {{< param "PRODUCT_ROOT_NAME" >}}s:
+  - The sampling {{< param "PRODUCT_ROOT_NAME" >}}s do not need to run behind a headless service.
+  - Spans are received from the load-balancer {{< param "PRODUCT_ROOT_NAME" >}}s via `otelcol.receiver.otlp`
+  - Traces are sampled via `otelcol.processor.tail_sampling`.
+  - The traces are exported via `otelcol.exporter.otlp` to an OTLP-compatible database such as Tempo.
+
 {{< collapse title="Example Kubernetes configuration" >}}
 
 ```yaml
@@ -710,14 +726,14 @@ metadata:
   name: agent-traces-role
   namespace: grafana-cloud-monitoring
 rules:
-- apiGroups:
-  - ""
-  resources:
-  - endpoints
-  verbs:
-  - list
-  - watch
-  - get
+  - apiGroups:
+      - ""
+    resources:
+      - endpoints
+    verbs:
+      - list
+      - watch
+      - get
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
@@ -729,9 +745,9 @@ roleRef:
   kind: Role
   name: agent-traces-role
 subjects:
-- kind: ServiceAccount
-  name: agent-traces
-  namespace: grafana-cloud-monitoring
+  - kind: ServiceAccount
+    name: agent-traces
+    namespace: grafana-cloud-monitoring
 ---
 apiVersion: apps/v1
 kind: Deployment
@@ -751,12 +767,12 @@ spec:
         name: k6-trace-generator
     spec:
       containers:
-      - env:
-        - name: ENDPOINT
-          value: agent-traces-lb.grafana-cloud-monitoring.svc.cluster.local:9411
-        image: ghcr.io/grafana/xk6-client-tracing:v0.0.2
-        imagePullPolicy: IfNotPresent
-        name: k6-trace-generator
+        - env:
+            - name: ENDPOINT
+              value: agent-traces-lb.grafana-cloud-monitoring.svc.cluster.local:9411
+          image: ghcr.io/grafana/xk6-client-tracing:v0.0.2
+          imagePullPolicy: IfNotPresent
+          name: k6-trace-generator
 ---
 apiVersion: v1
 kind: Service
@@ -766,10 +782,10 @@ metadata:
 spec:
   clusterIP: None
   ports:
-  - name: agent-traces-otlp-grpc
-    port: 9411
-    protocol: TCP
-    targetPort: 9411
+    - name: agent-traces-otlp-grpc
+      port: 9411
+      protocol: TCP
+      targetPort: 9411
   selector:
     name: agent-traces-lb
  type: ClusterIP
@@ -792,29 +808,29 @@ spec:
         name: agent-traces-lb
     spec:
       containers:
-      - args:
-        - run
-        - /etc/agent/agent_lb.river
-        command:
-        - /bin/grafana-agent
-        env:
-        - name: AGENT_MODE
-          value: flow
-        image: grafana/agent:v0.38.0
-        imagePullPolicy: IfNotPresent
-        name: agent-traces
-        ports:
-        - containerPort: 9411
-          name: otlp-grpc
-          protocol: TCP
-        volumeMounts:
-        - mountPath: /etc/agent
+        - args:
+            - run
+            - /etc/agent/agent_lb.river
+          command:
+            - /bin/grafana-agent
+          env:
+            - name: AGENT_MODE
+              value: flow
+          image: grafana/agent:v0.38.0
+          imagePullPolicy: IfNotPresent
           name: agent-traces
+          ports:
+            - containerPort: 9411
+              name: otlp-grpc
+              protocol: TCP
+          volumeMounts:
+            - mountPath: /etc/agent
+              name: agent-traces
       serviceAccount: agent-traces
       volumes:
-      - configMap:
+        - configMap:
+            name: agent-traces
           name: agent-traces
-        name: agent-traces
 ---
 apiVersion: v1
 kind: Service
@@ -823,10 +839,10 @@ metadata:
   namespace: grafana-cloud-monitoring
 spec:
   ports:
-  - name: agent-lb
-    port: 34621
-    protocol: TCP
-    targetPort: agent-lb
+    - name: agent-lb
+      port: 34621
+      protocol: TCP
+      targetPort: agent-lb
   selector:
    name: agent-traces-sampling
  type: ClusterIP
@@ -849,28 +865,28 @@ spec:
         name: agent-traces-sampling
     spec:
       containers:
-      - args:
-        - run
-        - /etc/agent/agent_sampling.river
-        command:
-        - /bin/grafana-agent
-        env:
-        - name: AGENT_MODE
-          value: flow
-
image: grafana/agent:v0.38.0 - imagePullPolicy: IfNotPresent - name: agent-traces - ports: - - containerPort: 34621 - name: agent-lb - protocol: TCP - volumeMounts: - - mountPath: /etc/agent + - args: + - run + - /etc/agent/agent_sampling.river + command: + - /bin/grafana-agent + env: + - name: AGENT_MODE + value: flow + image: grafana/agent:v0.38.0 + imagePullPolicy: IfNotPresent name: agent-traces + ports: + - containerPort: 34621 + name: agent-lb + protocol: TCP + volumeMounts: + - mountPath: /etc/agent + name: agent-traces volumes: - - configMap: + - configMap: + name: agent-traces name: agent-traces - name: agent-traces --- apiVersion: v1 kind: ConfigMap diff --git a/docs/sources/flow/reference/components/otelcol.exporter.logging.md b/docs/sources/flow/reference/components/otelcol.exporter.logging.md index 51a044b130e6..b9576c011d8d 100644 --- a/docs/sources/flow/reference/components/otelcol.exporter.logging.md +++ b/docs/sources/flow/reference/components/otelcol.exporter.logging.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.logging/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.logging/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.logging/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.logging/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.logging/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.logging/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.logging/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.logging/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.logging/ description: Learn about otelcol.exporter.logging title: otelcol.exporter.logging @@ -36,11 +36,11 @@ otelcol.exporter.logging "LABEL" { } `otelcol.exporter.logging` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`verbosity` | `string` | Verbosity of the generated logs. | `"normal"` | no -`sampling_initial` | `int` | Number of messages initially logged each second. | `2` | no -`sampling_thereafter` | `int` | Sampling rate after the initial messages are logged. | `500` | no +| Name | Type | Description | Default | Required | +| --------------------- | -------- | ---------------------------------------------------- | ---------- | -------- | +| `verbosity` | `string` | Verbosity of the generated logs. | `"normal"` | no | +| `sampling_initial` | `int` | Number of messages initially logged each second. | `2` | no | +| `sampling_thereafter` | `int` | Sampling rate after the initial messages are logged. | `500` | no | The `verbosity` argument must be one of `"basic"`, `"normal"`, or `"detailed"`. @@ -49,9 +49,9 @@ The `verbosity` argument must be one of `"basic"`, `"normal"`, or `"detailed"`. The following blocks are supported inside the definition of `otelcol.exporter.logging`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. 
| no +| Hierarchy | Block | Description | Required | +| ------------- | ----------------- | -------------------------------------------------------------------------- | -------- | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > tls` refers to a `tls` block defined inside a `client` block. @@ -66,9 +66,9 @@ refers to a `tls` block defined inside a `client` block. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). @@ -107,6 +107,7 @@ otelcol.exporter.logging "default" { sampling_thereafter = 1 } ``` + ## Compatible components @@ -120,4 +121,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.exporter.loki.md b/docs/sources/flow/reference/components/otelcol.exporter.loki.md index 8fe0d1ec8368..631ab80f62d0 100644 --- a/docs/sources/flow/reference/components/otelcol.exporter.loki.md +++ b/docs/sources/flow/reference/components/otelcol.exporter.loki.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.loki/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.loki/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.loki/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.loki/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.loki/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.loki/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.loki/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.loki/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.loki/ description: Learn about otelcol.exporter.loki title: otelcol.exporter.loki @@ -20,13 +20,14 @@ to `loki` components. The attributes of the OTLP log are not converted to Loki attributes by default. To convert them, the OTLP log should contain special "hint" attributes: -* To convert OTLP resource attributes to Loki labels, + +- To convert OTLP resource attributes to Loki labels, use the `loki.resource.labels` hint attribute. -* To convert OTLP log attributes to Loki labels, +- To convert OTLP log attributes to Loki labels, use the `loki.attribute.labels` hint attribute. -Labels will be translated to a [Prometheus format][], which is more constrained -than the OTLP format. For examples on label translation, see the +Labels will be translated to a [Prometheus format][], which is more constrained +than the OTLP format. For examples on label translation, see the [Converting OTLP attributes to Loki labels][] section. 
Multiple `otelcol.exporter.loki` components can be specified by giving them @@ -46,17 +47,17 @@ otelcol.exporter.loki "LABEL" { `otelcol.exporter.loki` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(receiver)` | Where to forward converted Loki logs. | | yes +| Name | Type | Description | Default | Required | +| ------------ | ---------------- | ------------------------------------- | ------- | -------- | +| `forward_to` | `list(receiver)` | Where to forward converted Loki logs. | | yes | ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for logs. Other telemetry signals are ignored. @@ -103,21 +104,22 @@ loki.write "local" { ### Converting OTLP attributes to Loki labels The example below will convert the following attributes to Loki labels: -* The `service.name` and `service.namespace` OTLP resource attributes. -* The `event.domain` and `event.name` OTLP log attributes. + +- The `service.name` and `service.namespace` OTLP resource attributes. +- The `event.domain` and `event.name` OTLP log attributes. Labels will be translated to a [Prometheus format][]. For example: -| OpenTelemetry Attribute | Prometheus Label | -|---|---| -| `name` | `name` | -| `host.name` | `host_name` | -| `host_name` | `host_name` | -| `name (of the host)` | `name__of_the_host_` | -| `2 cents` | `key_2_cents` | -| `__name` | `__name` | -| `_name` | `key_name` | -| `_name` | `_name` (if `PermissiveLabelSanitization` is enabled) | +| OpenTelemetry Attribute | Prometheus Label | +| ----------------------- | ----------------------------------------------------- | +| `name` | `name` | +| `host.name` | `host_name` | +| `host_name` | `host_name` | +| `name (of the host)` | `name__of_the_host_` | +| `2 cents` | `key_2_cents` | +| `__name` | `__name` | +| `_name` | `key_name` | +| `_name` | `_name` (if `PermissiveLabelSanitization` is enabled) | ```river otelcol.receiver.otlp "default" { @@ -134,13 +136,13 @@ otelcol.processor.attributes "default" { action = "insert" value = "event.domain, event.name" } - + action { key = "loki.resource.labels" action = "insert" value = "service.name, service.namespace" } - + output { logs = [otelcol.exporter.loki.default.input] } diff --git a/docs/sources/flow/reference/components/otelcol.exporter.otlp.md b/docs/sources/flow/reference/components/otelcol.exporter.otlp.md index 58b428070367..793277f41fdf 100644 --- a/docs/sources/flow/reference/components/otelcol.exporter.otlp.md +++ b/docs/sources/flow/reference/components/otelcol.exporter.otlp.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.otlp/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.otlp/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.otlp/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.otlp/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.otlp/ 
+ - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.otlp/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.otlp/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.otlp/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.otlp/ description: Learn about otelcol.exporter.otlp title: otelcol.exporter.otlp @@ -35,23 +35,23 @@ otelcol.exporter.otlp "LABEL" { `otelcol.exporter.otlp` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`timeout` | `duration` | Time to wait before marking a request as failed. | `"5s"` | no +| Name | Type | Description | Default | Required | +| --------- | ---------- | ------------------------------------------------ | ------- | -------- | +| `timeout` | `duration` | Time to wait before marking a request as failed. | `"5s"` | no | ## Blocks The following blocks are supported inside the definition of `otelcol.exporter.otlp`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures the gRPC server to send telemetry data to. | yes -client > tls | [tls][] | Configures TLS for the gRPC client. | no -client > keepalive | [keepalive][] | Configures keepalive settings for the gRPC client. | no -sending_queue | [sending_queue][] | Configures batching of data before sending. | no -retry_on_failure | [retry_on_failure][] | Configures retry mechanism for failed requests. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no +| Hierarchy | Block | Description | Required | +| ------------------ | -------------------- | -------------------------------------------------------------------------- | -------- | +| client | [client][] | Configures the gRPC server to send telemetry data to. | yes | +| client > tls | [tls][] | Configures TLS for the gRPC client. | no | +| client > keepalive | [keepalive][] | Configures keepalive settings for the gRPC client. | no | +| sending_queue | [sending_queue][] | Configures batching of data before sending. | no | +| retry_on_failure | [retry_on_failure][] | Configures retry mechanism for failed requests. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > tls` refers to a `tls` block defined inside a `client` block. @@ -69,17 +69,17 @@ The `client` block configures the gRPC client used by the component. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to send telemetry data to. | | yes -`compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no -`read_buffer_size` | `string` | Size of the read buffer the gRPC client to use for reading server responses. | | no -`write_buffer_size` | `string` | Size of the write buffer the gRPC client to use for writing requests. | `"512KiB"` | no -`wait_for_ready` | `boolean` | Waits for gRPC connection to be in the `READY` state before sending data. | `false` | no -`headers` | `map(string)` | Additional headers to send with the request. | `{}` | no -`balancer_name` | `string` | Which gRPC client-side load balancer to use for requests. 
| `pick_first` | no
-`authority` | `string` | Overrides the default `:authority` header in gRPC requests from the gRPC client. | | no
-`auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no
+| Name                | Type                       | Description                                                                        | Default      | Required |
+| ------------------- | -------------------------- | ---------------------------------------------------------------------------------- | ------------ | -------- |
+| `endpoint`          | `string`                   | `host:port` to send telemetry data to.                                              | | yes      |
+| `compression`       | `string`                   | Compression mechanism to use for requests.                                          | `"gzip"`     | no       |
+| `read_buffer_size`  | `string`                   | Size of the read buffer the gRPC client uses for reading server responses.          | | no       |
+| `write_buffer_size` | `string`                   | Size of the write buffer the gRPC client uses for writing requests.                 | `"512KiB"`   | no       |
+| `wait_for_ready`    | `boolean`                  | Waits for gRPC connection to be in the `READY` state before sending data.           | `false`      | no       |
+| `headers`           | `map(string)`              | Additional headers to send with the request.                                        | `{}`         | no       |
+| `balancer_name`     | `string`                   | Which gRPC client-side load balancer to use for requests.                           | `pick_first` | no       |
+| `authority`         | `string`                   | Overrides the default `:authority` header in gRPC requests from the gRPC client.    | | no       |
+| `auth`              | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests.        | | no       |
 
 {{< docs/shared lookup="flow/reference/components/otelcol-compression-field.md" source="agent" version="" >}}
 
@@ -89,8 +89,8 @@
 
 An HTTP proxy can be configured through the following environment variables:
 
-* `HTTPS_PROXY`
-* `NO_PROXY`
+- `HTTPS_PROXY`
+- `NO_PROXY`
 
 The `HTTPS_PROXY` environment variable specifies a URL to use for proxying
 requests. Connections to the proxy are established via [the `HTTP CONNECT`
@@ -128,11 +128,11 @@ connections.
 
 The following arguments are supported:
 
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`ping_wait` | `duration` | How often to ping the server after no activity. | | no
-`ping_response_timeout` | `duration` | Time to wait before closing inactive connections if the server does not respond to a ping. | | no
-`ping_without_stream` | `boolean` | Send pings even if there is no active stream request. | | no
+| Name                    | Type       | Description                                                                                  | Default | Required |
+| ----------------------- | ---------- | -------------------------------------------------------------------------------------------- | ------- | -------- |
+| `ping_wait`             | `duration` | How often to ping the server after no activity.                                               | | no       |
+| `ping_response_timeout` | `duration` | Time to wait before closing inactive connections if the server does not respond to a ping.    | | no       |
+| `ping_without_stream`   | `boolean`  | Send pings even if there is no active stream request.                                         | | no       |
 
 ### sending_queue block
 
@@ -156,9 +156,9 @@ retried.
 
 The following fields are exported and can be referenced by other components:
 
-Name | Type | Description
----- | ---- | -----------
-`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+| Name    | Type               | Description                                                       |
+| ------- | ------------------ | ----------------------------------------------------------------- |
+| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
| `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). @@ -175,15 +175,15 @@ information. ## Debug metrics -* `exporter_sent_spans_ratio_total` (counter): Number of spans successfully sent to destination. -* `exporter_send_failed_spans_ratio_total` (counter): Number of spans in failed attempts to send to destination. -* `exporter_queue_capacity_ratio` (gauge): Fixed capacity of the retry queue (in batches) -* `exporter_queue_size_ratio` (gauge): Current size of the retry queue (in batches) -* `rpc_client_duration_milliseconds` (histogram): Measures the duration of inbound RPC. -* `rpc_client_request_size_bytes` (histogram): Measures size of RPC request messages (uncompressed). -* `rpc_client_requests_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs. -* `rpc_client_response_size_bytes` (histogram): Measures size of RPC response messages (uncompressed). -* `rpc_client_responses_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs. +- `exporter_sent_spans_ratio_total` (counter): Number of spans successfully sent to destination. +- `exporter_send_failed_spans_ratio_total` (counter): Number of spans in failed attempts to send to destination. +- `exporter_queue_capacity_ratio` (gauge): Fixed capacity of the retry queue (in batches) +- `exporter_queue_size_ratio` (gauge): Current size of the retry queue (in batches) +- `rpc_client_duration_milliseconds` (histogram): Measures the duration of inbound RPC. +- `rpc_client_request_size_bytes` (histogram): Measures size of RPC request messages (uncompressed). +- `rpc_client_requests_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs. +- `rpc_client_response_size_bytes` (histogram): Measures size of RPC response messages (uncompressed). +- `rpc_client_responses_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs. 
## Examples @@ -221,6 +221,7 @@ otelcol.auth.basic "grafana_cloud_tempo" { password = env("GRAFANA_CLOUD_API_KEY") } ``` + ## Compatible components diff --git a/docs/sources/flow/reference/components/otelcol.exporter.otlphttp.md b/docs/sources/flow/reference/components/otelcol.exporter.otlphttp.md index bf743e547912..b3bab82cb5e2 100644 --- a/docs/sources/flow/reference/components/otelcol.exporter.otlphttp.md +++ b/docs/sources/flow/reference/components/otelcol.exporter.otlphttp.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.otlphttp/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.otlphttp/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.otlphttp/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.otlphttp/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.otlphttp/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.otlphttp/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.otlphttp/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.otlphttp/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.otlphttp/ description: Learn about otelcol.exporter.otlphttp title: otelcol.exporter.otlphttp @@ -35,11 +35,11 @@ otelcol.exporter.otlphttp "LABEL" { `otelcol.exporter.otlphttp` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`metrics_endpoint` | `string` | The endpoint to send metrics to. | `client.endpoint + "/v1/metrics"` | no -`logs_endpoint` | `string` | The endpoint to send logs to. | `client.endpoint + "/v1/logs"` | no -`traces_endpoint` | `string` | The endpoint to send traces to. | `client.endpoint + "/v1/traces"` | no +| Name | Type | Description | Default | Required | +| ------------------ | -------- | -------------------------------- | --------------------------------- | -------- | +| `metrics_endpoint` | `string` | The endpoint to send metrics to. | `client.endpoint + "/v1/metrics"` | no | +| `logs_endpoint` | `string` | The endpoint to send logs to. | `client.endpoint + "/v1/logs"` | no | +| `traces_endpoint` | `string` | The endpoint to send traces to. | `client.endpoint + "/v1/traces"` | no | The default value depends on the `endpoint` field set in the required `client` block. If set, these arguments override the `client.endpoint` field for the @@ -50,13 +50,13 @@ corresponding signal. The following blocks are supported inside the definition of `otelcol.exporter.otlphttp`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures the HTTP server to send telemetry data to. | yes -client > tls | [tls][] | Configures TLS for the HTTP client. | no -sending_queue | [sending_queue][] | Configures batching of data before sending. | no -retry_on_failure | [retry_on_failure][] | Configures retry mechanism for failed requests. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. 
| no +| Hierarchy | Block | Description | Required | +| ---------------- | -------------------- | -------------------------------------------------------------------------- | -------- | +| client | [client][] | Configures the HTTP server to send telemetry data to. | yes | +| client > tls | [tls][] | Configures TLS for the HTTP client. | no | +| sending_queue | [sending_queue][] | Configures batching of data before sending. | no | +| retry_on_failure | [retry_on_failure][] | Configures retry mechanism for failed requests. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > tls` refers to a `tls` block defined inside a `client` block. @@ -73,27 +73,28 @@ The `client` block configures the HTTP client used by the component. The following arguments are supported: -Name | Type | Description | Default | Required -------------------------- | -------------------------- | ----------- | ------- | -------- -`endpoint` | `string` | The target URL to send telemetry data to. | | yes -`encoding` | `string` | The encoding to use for messages. Should be either `"proto"` or `"json"`. | `"proto"` | no -`read_buffer_size` | `string` | Size of the read buffer the HTTP client uses for reading server responses. | `0` | no -`write_buffer_size` | `string` | Size of the write buffer the HTTP client uses for writing requests. | `"512KiB"` | no -`timeout` | `duration` | Time to wait before marking a request as failed. | `"30s"` | no -`headers` | `map(string)` | Additional headers to send with the request. | `{}` | no -`compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no -`max_idle_conns` | `int` | Limits the number of idle HTTP connections the client can keep open. | `100` | no -`max_idle_conns_per_host` | `int` | Limits the number of idle HTTP connections the host can keep open. | `0` | no -`max_conns_per_host` | `int` | Limits the total (dialing,active, and idle) number of connections per host. | `0` | no -`idle_conn_timeout` | `duration` | Time to wait before an idle connection closes itself. | `"90s"` | no -`disable_keep_alives` | `bool` | Disable HTTP keep-alive. | `false` | no -`http2_read_idle_timeout` | `duration` | Timeout after which a health check using ping frame will be carried out if no frame is received on the connection. | `0s` | no -`http2_ping_timeout` | `duration` | Timeout after which the connection will be closed if a response to Ping is not received. | `15s` | no -`auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no +| Name | Type | Description | Default | Required | +| ------------------------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------ | ---------- | -------- | +| `endpoint` | `string` | The target URL to send telemetry data to. | | yes | +| `encoding` | `string` | The encoding to use for messages. Should be either `"proto"` or `"json"`. | `"proto"` | no | +| `read_buffer_size` | `string` | Size of the read buffer the HTTP client uses for reading server responses. | `0` | no | +| `write_buffer_size` | `string` | Size of the write buffer the HTTP client uses for writing requests. | `"512KiB"` | no | +| `timeout` | `duration` | Time to wait before marking a request as failed. 
| `"30s"` | no | +| `headers` | `map(string)` | Additional headers to send with the request. | `{}` | no | +| `compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no | +| `max_idle_conns` | `int` | Limits the number of idle HTTP connections the client can keep open. | `100` | no | +| `max_idle_conns_per_host` | `int` | Limits the number of idle HTTP connections the host can keep open. | `0` | no | +| `max_conns_per_host` | `int` | Limits the total (dialing,active, and idle) number of connections per host. | `0` | no | +| `idle_conn_timeout` | `duration` | Time to wait before an idle connection closes itself. | `"90s"` | no | +| `disable_keep_alives` | `bool` | Disable HTTP keep-alive. | `false` | no | +| `http2_read_idle_timeout` | `duration` | Timeout after which a health check using ping frame will be carried out if no frame is received on the connection. | `0s` | no | +| `http2_ping_timeout` | `duration` | Timeout after which the connection will be closed if a response to Ping is not received. | `15s` | no | +| `auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no | When setting `headers`, note that: - - Certain headers such as `Content-Length` and `Connection` are automatically written when needed and values in `headers` may be ignored. - - The `Host` header is automatically derived from the `endpoint` value. However, this automatic assignment can be overridden by explicitly setting a `Host` header in `headers`. + +- Certain headers such as `Content-Length` and `Connection` are automatically written when needed and values in `headers` may be ignored. +- The `Host` header is automatically derived from the `endpoint` value. However, this automatic assignment can be overridden by explicitly setting a `Host` header in `headers`. Setting `disable_keep_alives` to `true` will result in significant overhead establishing a new HTTP(s) connection for every request. Before enabling this option, consider whether changes to idle connection settings can achieve your goal. @@ -133,9 +134,9 @@ retried. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). 
@@ -166,6 +167,7 @@ otelcol.exporter.otlphttp "tempo" { } } ``` + ## Compatible components diff --git a/docs/sources/flow/reference/components/otelcol.exporter.prometheus.md b/docs/sources/flow/reference/components/otelcol.exporter.prometheus.md index 33328e6d2a5c..66919041adcc 100644 --- a/docs/sources/flow/reference/components/otelcol.exporter.prometheus.md +++ b/docs/sources/flow/reference/components/otelcol.exporter.prometheus.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.prometheus/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.prometheus/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.prometheus/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.prometheus/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.exporter.prometheus/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.exporter.prometheus/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.exporter.prometheus/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.exporter.prometheus/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.exporter.prometheus/ description: Learn about otelcol.exporter.prometheus title: otelcol.exporter.prometheus @@ -38,32 +38,32 @@ otelcol.exporter.prometheus "LABEL" { `otelcol.exporter.prometheus` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- |-----------------------------------------------------------| ------- | -------- -`include_target_info` | `boolean` | Whether to include `target_info` metrics. | `true` | no -`include_scope_info` | `boolean` | Whether to include `otel_scope_info` metrics. | `false` | no -`include_scope_labels` | `boolean` | Whether to include additional OTLP labels in all metrics. | `true` | no -`add_metric_suffixes` | `boolean` | Whether to add type and unit suffixes to metrics names. | `true` | no -`gc_frequency` | `duration` | How often to clean up stale metrics from memory. | `"5m"` | no -`forward_to` | `list(MetricsReceiver)` | Where to forward converted Prometheus metrics. | | yes -`resource_to_telemetry_conversion` | `boolean` | Whether to convert OTel resource attributes to Prometheus labels. | `false` | no - -By default, OpenTelemetry resources are converted into `target_info` metrics. +| Name | Type | Description | Default | Required | +| ---------------------------------- | ----------------------- | ----------------------------------------------------------------- | ------- | -------- | +| `include_target_info` | `boolean` | Whether to include `target_info` metrics. | `true` | no | +| `include_scope_info` | `boolean` | Whether to include `otel_scope_info` metrics. | `false` | no | +| `include_scope_labels` | `boolean` | Whether to include additional OTLP labels in all metrics. | `true` | no | +| `add_metric_suffixes` | `boolean` | Whether to add type and unit suffixes to metrics names. | `true` | no | +| `gc_frequency` | `duration` | How often to clean up stale metrics from memory. | `"5m"` | no | +| `forward_to` | `list(MetricsReceiver)` | Where to forward converted Prometheus metrics. | | yes | +| `resource_to_telemetry_conversion` | `boolean` | Whether to convert OTel resource attributes to Prometheus labels. 
| `false` | no |
+
+By default, OpenTelemetry resources are converted into `target_info` metrics.
 OpenTelemetry instrumentation scopes are converted into `otel_scope_info` metrics.
 Set the `include_target_info` and `include_scope_info` arguments to `false`, respectively, to disable these metrics.
 
-When `include_scope_labels` is `true` the `otel_scope_name` and 
+When `include_scope_labels` is `true`, the `otel_scope_name` and
 `otel_scope_version` labels are added to every converted metric sample.
 
 When `include_target_info` is `true`, OpenTelemetry Collector resources are converted into `target_info` metrics.
 
 {{< admonition type="note" >}}
-OTLP metrics can have a lot of resource attributes. 
+OTLP metrics can have a lot of resource attributes.
 Setting `resource_to_telemetry_conversion` to `true` would convert all of them to Prometheus labels, which may not be what you want.
 
-Instead of using `resource_to_telemetry_conversion`, most users need to use `otelcol.processor.transform` 
-to convert OTLP resource attributes to OTLP metric datapoint attributes before using `otelcol.exporter.prometheus`. 
+Instead of using `resource_to_telemetry_conversion`, most users need to use `otelcol.processor.transform`
+to convert OTLP resource attributes to OTLP metric datapoint attributes before using `otelcol.exporter.prometheus`.
 See [Creating Prometheus labels from OTLP resource attributes][] for an example.
 
 [Creating Prometheus labels from OTLP resource attributes]: #create-prometheus-labels-from-otlp-resource-attributes
 {{< /admonition >}}
 
@@ -74,9 +74,9 @@ See [Creating Prometheus labels from OTLP resource attributes][] for an example.
 
 The following fields are exported and can be referenced by other components:
 
-Name | Type | Description
----- | ---- | -----------
-`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+| Name    | Type               | Description                                                       |
+| ------- | ------------------ | ----------------------------------------------------------------- |
+| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.  |
 
 `input` accepts `otelcol.Consumer` data for metrics. Other telemetry signals
 are ignored.
 
@@ -85,7 +85,7 @@ are forwarded to the `forward_to` argument.
 
 The following are dropped during the conversion process:
 
-* Metrics that use the delta aggregation temporality
+- Metrics that use the delta aggregation temporality.
 
 ## Component health
 
@@ -127,7 +127,7 @@ prometheus.remote_write "mimir" {
 ## Create Prometheus labels from OTLP resource attributes
 
 This example uses `otelcol.processor.transform` to add extra `key1` and `key2` OTLP metric datapoint attributes from the
-`key1` and `key2` OTLP resource attributes. 
+`key1` and `key2` OTLP resource attributes.
 `otelcol.exporter.prometheus` then converts `key1` and `key2` to Prometheus labels along with any other OTLP metric datapoint attributes.
 
@@ -188,4 +188,4 @@ Connecting some components may not be sensible or components may require further
 Refer to the linked documentation for more details.
{{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.extension.jaeger_remote_sampling.md b/docs/sources/flow/reference/components/otelcol.extension.jaeger_remote_sampling.md index f229db000c05..238cc815de8e 100644 --- a/docs/sources/flow/reference/components/otelcol.extension.jaeger_remote_sampling.md +++ b/docs/sources/flow/reference/components/otelcol.extension.jaeger_remote_sampling.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.extension.jaeger_remote_sampling/ description: Learn about otelcol.extension.jaeger_remote_sampling label: @@ -44,20 +44,20 @@ through inner blocks. The following blocks are supported inside the definition of `otelcol.extension.jaeger_remote_sampling`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -http | [http][] | Configures the http server to serve Jaeger remote sampling. | no -http > tls | [tls][] | Configures TLS for the HTTP server. | no -http > cors | [cors][] | Configures CORS for the HTTP server. | no -grpc | [grpc][] | Configures the grpc server to serve Jaeger remote sampling. | no -grpc > tls | [tls][] | Configures TLS for the gRPC server. | no -grpc > keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no -grpc > keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no -grpc > keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no -source | [source][] | Configures the Jaeger remote sampling document. | yes -source > remote | [remote][] | Configures the gRPC client used to retrieve the Jaeger remote sampling document. | no -source > remote > tls | [tls][] | Configures TLS for the gRPC client. | no -source > remote > keepalive | [keepalive][] | Configures keepalive settings for the gRPC client. | no +| Hierarchy | Block | Description | Required | +| ------------------------------------- | ---------------------- | -------------------------------------------------------------------------------- | -------- | +| http | [http][] | Configures the http server to serve Jaeger remote sampling. | no | +| http > tls | [tls][] | Configures TLS for the HTTP server. | no | +| http > cors | [cors][] | Configures CORS for the HTTP server. | no | +| grpc | [grpc][] | Configures the grpc server to serve Jaeger remote sampling. | no | +| grpc > tls | [tls][] | Configures TLS for the gRPC server. 
| no | +| grpc > keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no | +| grpc > keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no | +| grpc > keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no | +| source | [source][] | Configures the Jaeger remote sampling document. | yes | +| source > remote | [remote][] | Configures the gRPC client used to retrieve the Jaeger remote sampling document. | no | +| source > remote > tls | [tls][] | Configures TLS for the gRPC client. | no | +| source > remote > keepalive | [keepalive][] | Configures keepalive settings for the gRPC client. | no | The `>` symbol indicates deeper levels of nesting. For example, `grpc > tls` refers to a `tls` block defined inside a `grpc` block. @@ -76,16 +76,16 @@ refers to a `tls` block defined inside a `grpc` block. ### http block -The `http` block configures an HTTP server which serves the Jaeger remote +The `http` block configures an HTTP server which serves the Jaeger remote sampling document. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:5778"` | no -`max_request_body_size` | `string` | Maximum request body size the server will allow. No limit when unset. | | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ----------------------- | --------- | --------------------------------------------------------------------- | ---------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:5778"` | no | +| `max_request_body_size` | `string` | Maximum request body size the server will allow. No limit when unset. | | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | ### tls block @@ -100,38 +100,38 @@ The `cors` block configures CORS settings for an HTTP server. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no -`allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no -`max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | -------------------------------------------------------- | ---------------------- | -------- | +| `allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no | +| `allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no | +| `max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no | The `allowed_headers` specifies which headers are acceptable from a CORS request. The following headers are always implicitly allowed: -* `Accept` -* `Accept-Language` -* `Content-Type` -* `Content-Language` +- `Accept` +- `Accept-Language` +- `Content-Type` +- `Content-Language` If `allowed_headers` includes `"*"`, all headers will be permitted. ### grpc block The `grpc` block configures a gRPC server which serves the Jaeger remote - sampling document. 
+sampling document. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14250"` | no -`transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no -`max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no -`max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no -`read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no -`write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | --------- | -------------------------------------------------------------------------- | ----------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14250"` | no | +| `transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no | +| `max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no | +| `max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no | +| `read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no | +| `write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | ### keepalive block @@ -148,13 +148,13 @@ servers. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no -`max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no -`max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no -`time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no -`timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no +| Name | Type | Description | Default | Required | +| -------------------------- | ---------- | ------------------------------------------------------------------------------------ | ------------ | -------- | +| `max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no | +| `max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no | +| `max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no | +| `time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no | +| `timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no | ### enforcement_policy block @@ -164,10 +164,10 @@ configured policy. 
The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no -`permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no +| Name | Type | Description | Default | Required | +| ----------------------- | ---------- | ----------------------------------------------------------------------- | ------- | -------- | +| `min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no | +| `permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no | ### source block @@ -176,13 +176,13 @@ that is served by the servers specified in the `grpc` and `http` blocks. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`file` | `string` | A local file containing a Jaeger remote sampling document. | `""` | no -`reload_interval` | `duration` | The interval at which to reload the specified file. Leave at 0 to never reload. | `0` | no -`content` | `string` | A string containing the Jaeger remote sampling contents directly. | `""` | no +| Name | Type | Description | Default | Required | +| ----------------- | ---------- | ------------------------------------------------------------------------------- | ------- | -------- | +| `file` | `string` | A local file containing a Jaeger remote sampling document. | `""` | no | +| `reload_interval` | `duration` | The interval at which to reload the specified file. Leave at 0 to never reload. | `0` | no | +| `content` | `string` | A string containing the Jaeger remote sampling contents directly. | `""` | no | -Exactly one of the `file` argument, `content` argument or `remote` block must be specified. +Exactly one of the `file` argument, `content` argument or `remote` block must be specified. ### remote block @@ -190,17 +190,17 @@ The `remote` block configures the gRPC client used by the component. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to send telemetry data to. | | yes -`compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no -`read_buffer_size` | `string` | Size of the read buffer the gRPC client to use for reading server responses. | | no -`write_buffer_size` | `string` | Size of the write buffer the gRPC client to use for writing requests. | `"512KiB"` | no -`wait_for_ready` | `boolean` | Waits for gRPC connection to be in the `READY` state before sending data. | `false` | no -`headers` | `map(string)` | Additional headers to send with the request. | `{}` | no -`balancer_name` | `string` | Which gRPC client-side load balancer to use for requests. | `pick_first` | no -`authority` | `string` | Overrides the default `:authority` header in gRPC requests from the gRPC client. | | no -`auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no +| Name | Type | Description | Default | Required | +| ------------------- | -------------------------- | -------------------------------------------------------------------------------- | ------------ | -------- | +| `endpoint` | `string` | `host:port` to send telemetry data to. 
| | yes |
+| `compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no |
+| `read_buffer_size` | `string` | Size of the read buffer the gRPC client will use for reading server responses. | | no |
+| `write_buffer_size` | `string` | Size of the write buffer the gRPC client will use for writing requests. | `"512KiB"` | no |
+| `wait_for_ready` | `boolean` | Waits for gRPC connection to be in the `READY` state before sending data. | `false` | no |
+| `headers` | `map(string)` | Additional headers to send with the request. | `{}` | no |
+| `balancer_name` | `string` | Which gRPC client-side load balancer to use for requests. | `pick_first` | no |
+| `authority` | `string` | Overrides the default `:authority` header in gRPC requests from the gRPC client. | | no |
+| `auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no |

{{< docs/shared lookup="flow/reference/components/otelcol-compression-field.md" source="agent" version="" >}}

@@ -210,8 +210,8 @@ Name | Type | Description | Default | Required

An HTTP proxy can be configured through the following environment variables:

-* `HTTPS_PROXY`
-* `NO_PROXY`
+- `HTTPS_PROXY`
+- `NO_PROXY`

The `HTTPS_PROXY` environment variable specifies a URL to use for proxying
requests. Connections to the proxy are established via [the `HTTP CONNECT`
@@ -244,11 +244,11 @@ connections.

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`ping_wait` | `duration` | How often to ping the server after no activity. | | no
-`ping_response_timeout` | `duration` | Time to wait before closing inactive connections if the server does not respond to a ping. | | no
-`ping_without_stream` | `boolean` | Send pings even if there is no active stream request. | | no
+| Name | Type | Description | Default | Required |
+| ----------------------- | ---------- | ------------------------------------------------------------------------------------------ | ------- | -------- |
+| `ping_wait` | `duration` | How often to ping the server after no activity. | | no |
+| `ping_response_timeout` | `duration` | Time to wait before closing inactive connections if the server does not respond to a ping. | | no |
+| `ping_without_stream` | `boolean` | Send pings even if there is no active stream request. 
| | no | ## Component health @@ -280,9 +280,8 @@ otelcol.extension.jaeger_remote_sampling "example" { ### Serving from another component - This example uses the output of a component to determine what sampling -rules to serve: +rules to serve: ```river local.file "sampling" { diff --git a/docs/sources/flow/reference/components/otelcol.processor.attributes.md b/docs/sources/flow/reference/components/otelcol.processor.attributes.md index 6c07d1c713e0..ac0d469c82ee 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.attributes.md +++ b/docs/sources/flow/reference/components/otelcol.processor.attributes.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.attributes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.attributes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.attributes/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.attributes/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.attributes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.attributes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.attributes/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.attributes/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.attributes/ description: Learn about otelcol.processor.attributes title: otelcol.processor.attributes @@ -13,7 +13,7 @@ title: otelcol.processor.attributes `otelcol.processor.attributes` accepts telemetry data from other `otelcol` components and modifies attributes of a span, log, or metric. -It also supports the ability to filter and match input data to determine if +It also supports the ability to filter and match input data to determine if it should be included or excluded for attribute modifications. > **NOTE**: `otelcol.processor.attributes` is a wrapper over the upstream @@ -82,68 +82,68 @@ The `action` block configures how to modify the span, log, or metric. The following attributes are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`key` | `string` | The attribute that the action relates to. | | yes -`action` | `string` | The type of action performed. | | yes -`value` | `any` | The value to populate for the key. | | no -`pattern` | `string` | A regex pattern. | `""` | no -`from_attribute` | `string` | The attribute from the input data used to populate the attribute value. | `""` | no -`from_context` | `string` | The context value used to populate the attribute value. | `""` | no -`converted_type` | `string` | The type to convert the attribute value to. | `""` | no +| Name | Type | Description | Default | Required | +| ---------------- | -------- | ----------------------------------------------------------------------- | ------- | -------- | +| `key` | `string` | The attribute that the action relates to. | | yes | +| `action` | `string` | The type of action performed. | | yes | +| `value` | `any` | The value to populate for the key. | | no | +| `pattern` | `string` | A regex pattern. | `""` | no | +| `from_attribute` | `string` | The attribute from the input data used to populate the attribute value. 
| `""` | no |
+| `from_context` | `string` | The context value used to populate the attribute value. | `""` | no |
+| `converted_type` | `string` | The type to convert the attribute value to. | `""` | no |

The `value` data type must be either a number, string, or boolean.

The supported values for `action` are:

-* `insert`: Inserts a new attribute in input data where the key does not already exist.
+- `insert`: Inserts a new attribute in input data where the key does not already exist.

-  * The `key` attribute is required. It specifies the attribute to act upon.
-  * One of the `value`, `from_attribute` or `from_context` attributes is required.
+  - The `key` attribute is required. It specifies the attribute to act upon.
+  - One of the `value`, `from_attribute`, or `from_context` attributes is required.

-* `update`: Updates an attribute in input data where the key does exist.
+- `update`: Updates an attribute in input data where the key does exist.

-  * The `key`attribute is required. It specifies the attribute to act upon.
-  * One of the `value`, `from_attribute` or `from_context` attributes is required.
+  - The `key` attribute is required. It specifies the attribute to act upon.
+  - One of the `value`, `from_attribute`, or `from_context` attributes is required.

-* `upsert`: Either inserts a new attribute in input data where the key does not already exist
-  or updates an attribute in input data where the key does exist.
+- `upsert`: Either inserts a new attribute in input data where the key does not already exist
+  or updates an attribute in input data where the key does exist.

-  * The `key`attribute is required. It specifies the attribute to act upon.
-  * One of the `value`, `from_attribute` or `from_context`attributes is required:
-    * `value` specifies the value to populate for the key.
-    * `from_attribute` specifies the attribute from the input data to use to populate
-      the value. If the attribute doesn't exist, no action is performed.
-    * `from_context` specifies the context value used to populate the attribute value.
-      If the key is prefixed with `metadata.`, the values are searched
-      in the receiver's transport protocol for additional information like gRPC Metadata or HTTP Headers.
-      If the key is prefixed with `auth.`, the values are searched
-      in the authentication information set by the server authenticator.
-      Refer to the server authenticator's documentation part of your pipeline
-      for more information about which attributes are available.
-      If the key doesn't exist, no action is performed.
-      If the key has multiple values the values will be joined with a `;` separator.
+  - The `key` attribute is required. It specifies the attribute to act upon.
+  - One of the `value`, `from_attribute`, or `from_context` attributes is required:
+    - `value` specifies the value to populate for the key.
+    - `from_attribute` specifies the attribute from the input data to use to populate
+      the value. If the attribute doesn't exist, no action is performed.
+    - `from_context` specifies the context value used to populate the attribute value.
+      If the key is prefixed with `metadata.`, the values are searched
+      in the receiver's transport protocol for additional information like gRPC Metadata or HTTP Headers.
+      If the key is prefixed with `auth.`, the values are searched
+      in the authentication information set by the server authenticator.
+      Refer to the documentation for the server authenticator used in your pipeline
+      for more information about which attributes are available.
+      If the key doesn't exist, no action is performed.
+      If the key has multiple values, the values are joined with a `;` separator.

-* `hash`: Hashes (SHA1) an existing attribute value.
+- `hash`: Hashes (SHA1) an existing attribute value.

-  * The `key` attribute and/or the `pattern` attributes is required.
+  - The `key` attribute and/or the `pattern` attribute is required.

-* `extract`: Extracts values using a regular expression rule from the input key to target keys specified in the rule.
-  If a target key already exists, it will be overridden. Note: It behaves similarly to the Span Processor `to_attributes`
+- `extract`: Extracts values using a regular expression rule from the input key to target keys specified in the rule.
+  If a target key already exists, it will be overridden. Note: It behaves similarly to the Span Processor `to_attributes`
   setting with the existing attribute as the source.

-  * The `key` attribute is required. It specifies the attribute to extract values from. The value of `key` is NOT altered.
-  * The `pattern` attribute is required. It is the regex pattern used to extract attributes from the value of `key`.
-    The submatchers must be named. If attributes already exist, they will be overwritten.
+  - The `key` attribute is required. It specifies the attribute to extract values from. The value of `key` is NOT altered.
+  - The `pattern` attribute is required. It is the regex pattern used to extract attributes from the value of `key`.
+    The submatchers must be named. If attributes already exist, they will be overwritten.

-* `convert`: Converts an existing attribute to a specified type.
+- `convert`: Converts an existing attribute to a specified type.

-  * The `key` attribute is required. It specifies the attribute to act upon.
-  * The `converted_type` attribute is required and must be one of int, double or string.
+  - The `key` attribute is required. It specifies the attribute to act upon.
+  - The `converted_type` attribute is required and must be one of `int`, `double`, or `string`.

-* `delete`: Deletes an attribute from the input data.
+- `delete`: Deletes an attribute from the input data.

-  * The `key` attribute and/or the `pattern` attribute is required. It specifies the attribute to act upon.
+  - The `key` attribute and/or the `pattern` attribute is required. It specifies the attribute to act upon.

### include block

@@ -152,11 +152,12 @@ The `include` block provides an option to include data being fed into the [actio

{{< docs/shared lookup="flow/reference/components/match-properties-block.md" source="agent" version="" >}}

One of the following is also required:
-* For spans, one of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified
+
+- For spans, one of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified
   with a non-empty value for a valid configuration. The `log_bodies`, `log_severity_texts`, `log_severity`, and
   `metric_names` attributes are invalid.
-* For logs, one of `log_bodies`, `log_severity_texts`, `log_severity`, [attribute][], [resource][], or [library][] must be
+- For logs, one of `log_bodies`, `log_severity_texts`, `log_severity`, [attribute][], [resource][], or [library][] must be
   specified with a non-empty value for a valid configuration. The `span_names`, `span_kinds`, `metric_names`, and `services`
   attributes are invalid.
-* For metrics, `metric_names` must be specified with a valid non-empty value for a valid configuration. 
The `span_names`, +- For metrics, `metric_names` must be specified with a valid non-empty value for a valid configuration. The `span_names`, `span_kinds`, `log_bodies`, `log_severity_texts`, `log_severity`, `services`, [attribute][], [resource][], and [library][] attributes are invalid. If the configuration includes filters which are specific to a particular signal type, it is best to include only that signal type in the component's output. @@ -175,11 +176,12 @@ consider a processor such as [otelcol.processor.tail_sampling]({{< relref "./ote {{< docs/shared lookup="flow/reference/components/match-properties-block.md" source="agent" version="" >}} One of the following is also required: -* For spans, one of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified + +- For spans, one of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified with a non-empty value for a valid configuration. The `log_bodies`, `log_severity_texts`, `log_severity`, and `metric_names` attributes are invalid. -* For logs, one of `log_bodies`, `log_severity_texts`, `log_severity`, [attribute][], [resource][], or [library][] must be +- For logs, one of `log_bodies`, `log_severity_texts`, `log_severity`, [attribute][], [resource][], or [library][] must be specified with a non-empty value for a valid configuration. The `span_names`, `span_kinds`, `metric_names`, and `services` attributes are invalid. -* For metrics, `metric_names` must be specified with a valid non-empty value for a valid configuration. The `span_names`, +- For metrics, `metric_names` must be specified with a valid non-empty value for a valid configuration. The `span_names`, `span_kinds`, `log_bodies`, `log_severity_texts`, `log_severity`, `services`, [attribute][], [resource][], and [library][] attributes are invalid. If the configuration includes filters which are specific to a particular signal type, it is best to include only that signal type in the component's output. @@ -213,9 +215,9 @@ For example, adding a `span_names` filter could cause the component to error if The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). 
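To make the signal-specific filtering rules above concrete, here is a minimal metrics-only sketch. The metric name patterns, the `region` attribute, and the `otelcol.exporter.otlp.default` exporter are illustrative assumptions rather than values prescribed by this reference:

```river
otelcol.processor.attributes "metrics_only" {
  // `metric_names` is a metrics-only filter, so this include block is
  // only valid for the metrics signal.
  include {
    match_type   = "regexp"
    metric_names = ["http_.*", "queue_length"]
  }

  // Insert or update an illustrative `region` attribute on matched metrics.
  action {
    key    = "region"
    value  = "us-east-1"
    action = "upsert"
  }

  output {
    // Because the include filter above is specific to metrics, only
    // metrics are configured in the output block.
    metrics = [otelcol.exporter.otlp.default.input]
  }
}
```

Routing only metrics through the `output` block follows the guidance above: when a filter is specific to one signal type, include only that signal type in the component's output.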
@@ -337,12 +339,14 @@ otelcol.exporter.otlp "default" { ### Excluding spans based on attributes For example, the following spans match the properties and won't be processed by the processor: -* Span1 Name: "svcB", Attributes: {env: "dev", test_request: 123, credit_card: 1234} -* Span2 Name: "svcA", Attributes: {env: "dev", test_request: false} + +- Span1 Name: "svcB", Attributes: {env: "dev", test_request: 123, credit_card: 1234} +- Span2 Name: "svcA", Attributes: {env: "dev", test_request: false} The following spans do not match the properties and the processor actions are applied to it: -* Span3 Name: "svcB", Attributes: {env: 1, test_request: "dev", credit_card: 1234} -* Span4 Name: "svcC", Attributes: {env: "dev", test_request: false} + +- Span3 Name: "svcB", Attributes: {env: 1, test_request: "dev", credit_card: 1234} +- Span4 Name: "svcC", Attributes: {env: "dev", test_request: false} Note that due to the presence of the `services` attribute, this configuration works only for trace signals. This is why only traces are configured in the `output` block. @@ -438,10 +442,10 @@ otelcol.processor.attributes "default" { ### Including and excluding spans based on regex and services -This processor will remove the "token" attribute and will obfuscate the "password" attribute +This processor will remove the "token" attribute and will obfuscate the "password" attribute in spans where the service name matches `"auth.*"` and where the span name does not match `"login.*"`. -Note that due to the presence of the `services` and `span_names` attributes, this configuration +Note that due to the presence of the `services` and `span_names` attributes, this configuration works only for trace signals. This is why only traces are configured in the `output` block. ```river @@ -487,11 +491,11 @@ matches a regex pattern. ```river otelcol.processor.attributes "default" { include { - // "match_type" of "regexp" defines that the "value" attributes + // "match_type" of "regexp" defines that the "value" attributes // in the "attribute" blocks are regexp-es. match_type = "regexp" - // This attribute ('db.statement') must exist in the span and match + // This attribute ('db.statement') must exist in the span and match // the regex ('SELECT \* FROM USERS.*') for a match. attribute { key = "db.statement" @@ -516,7 +520,7 @@ otelcol.processor.attributes "default" { ### Including spans based on regex of log body This processor will remove the "token" attribute and will obfuscate the "password" -attribute in spans where the log body matches "AUTH.*". +attribute in spans where the log body matches "AUTH.\*". Note that due to the presence of the `log_bodies` attribute, this configuration works only for log signals. This is why only logs are configured in the `output` block. @@ -546,7 +550,7 @@ otelcol.processor.attributes "default" { ### Including spans based on regex of log severity The following demonstrates how to process logs that have a severity level which is equal to or higher than -the level specified in the `log_severity` block. This processor will remove the "token" attribute and will +the level specified in the `log_severity` block. This processor will remove the "token" attribute and will obfuscate the "password" attribute in logs where the severity is at least "INFO". 
Note that due to the presence of the `log_severity` attribute, this configuration works only for @@ -634,6 +638,7 @@ otelcol.processor.attributes "default" { } } ``` + ## Compatible components @@ -651,4 +656,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.processor.batch.md b/docs/sources/flow/reference/components/otelcol.processor.batch.md index 7b461c1168bc..94ff64be6bb1 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.batch.md +++ b/docs/sources/flow/reference/components/otelcol.processor.batch.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.batch/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.batch/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.batch/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.batch/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.batch/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.batch/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.batch/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.batch/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.batch/ description: Learn about otelcol.processor.batch title: otelcol.processor.batch @@ -17,9 +17,9 @@ data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. We strongly recommend that you configure the batch processor on every Agent that -uses OpenTelemetry (otelcol) Flow components. The batch processor should be -defined in the pipeline after the `otelcol.processor.memory_limiter` as well -as any sampling processors. This is because batching should happen after any +uses OpenTelemetry (otelcol) Flow components. The batch processor should be +defined in the pipeline after the `otelcol.processor.memory_limiter` as well +as any sampling processors. This is because batching should happen after any data drops such as sampling. > **NOTE**: `otelcol.processor.batch` is a wrapper over the upstream @@ -45,33 +45,35 @@ otelcol.processor.batch "LABEL" { `otelcol.processor.batch` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`timeout` | `duration` | How long to wait before flushing the batch. | `"200ms"` | no -`send_batch_size` | `number` | Amount of data to buffer before flushing the batch. | `8192` | no -`send_batch_max_size` | `number` | Upper limit of a batch size. | `0` | no -`metadata_keys` | `list(string)` | Creates a different batcher for each key/value combination of metadata. | `[]` | no -`metadata_cardinality_limit` | `number` | Limit of the unique metadata key/value combinations. | `1000` | no +| Name | Type | Description | Default | Required | +| ---------------------------- | -------------- | ----------------------------------------------------------------------- | --------- | -------- | +| `timeout` | `duration` | How long to wait before flushing the batch. 
| `"200ms"` | no | +| `send_batch_size` | `number` | Amount of data to buffer before flushing the batch. | `8192` | no | +| `send_batch_max_size` | `number` | Upper limit of a batch size. | `0` | no | +| `metadata_keys` | `list(string)` | Creates a different batcher for each key/value combination of metadata. | `[]` | no | +| `metadata_cardinality_limit` | `number` | Limit of the unique metadata key/value combinations. | `1000` | no | `otelcol.processor.batch` accumulates data into a batch until one of the following events happens: -* The duration specified by `timeout` elapses since the time the last batch was +- The duration specified by `timeout` elapses since the time the last batch was sent. -* The number of spans, log lines, or metric samples processed is greater than +- The number of spans, log lines, or metric samples processed is greater than or equal to the number specified by `send_batch_size`. Logs, traces, and metrics are processed independently. For example, if `send_batch_size` is set to `1000`: -* The processor may, at the same time, buffer 1,000 spans, + +- The processor may, at the same time, buffer 1,000 spans, 1,000 log lines, and 1,000 metric samples before flushing them. -* If there are enough spans for a batch of spans (1,000 or more), but not enough for a +- If there are enough spans for a batch of spans (1,000 or more), but not enough for a batch of metric samples (less than 1,000) then only the spans will be flushed. Use `send_batch_max_size` to limit the amount of data contained in a single batch: -* When set to `0`, batches can be any size. -* When set to a non-zero value, `send_batch_max_size` must be greater than or equal to `send_batch_size`. + +- When set to `0`, batches can be any size. +- When set to a non-zero value, `send_batch_max_size` must be greater than or equal to `send_batch_size`. Every batch will contain up to the `send_batch_max_size` number of spans, log lines, or metric samples. The excess spans, log lines, or metric samples will not be lost - instead, they will be added to the next batch. @@ -79,22 +81,23 @@ Use `send_batch_max_size` to limit the amount of data contained in a single batc For example, assume `send_batch_size` is set to the default `8192` and there are currently 8,000 batched spans. If the batch processor receives 8,000 more spans at once, its behavior depends on how `send_batch_max_size` is configured: -* If `send_batch_max_size` is set to `0`, the total batch size would be 16,000 - which would then be flushed as a single batch. -* If `send_batch_max_size` is set to `10000`, then the total batch size will be + +- If `send_batch_max_size` is set to `0`, the total batch size would be 16,000 + which would then be flushed as a single batch. +- If `send_batch_max_size` is set to `10000`, then the total batch size will be 10,000 and the remaining 6,000 spans will be flushed in a subsequent batch. `metadata_cardinality_limit` applies for the lifetime of the process. -Receivers should be configured with `include_metadata = true` so that metadata +Receivers should be configured with `include_metadata = true` so that metadata keys are available to the processor. -Each distinct combination of metadata triggers the allocation of a new -background task in the Agent that runs for the lifetime of the process, and each -background task holds one pending batch of up to `send_batch_size` records. 
Batching +Each distinct combination of metadata triggers the allocation of a new +background task in the Agent that runs for the lifetime of the process, and each +background task holds one pending batch of up to `send_batch_size` records. Batching by metadata can therefore substantially increase the amount of memory dedicated to batching. -The maximum number of distinct combinations is limited to the configured `metadata_cardinality_limit`, +The maximum number of distinct combinations is limited to the configured `metadata_cardinality_limit`, which defaults to 1000 to limit memory impact. ## Blocks @@ -102,9 +105,9 @@ which defaults to 1000 to limit memory impact. The following blocks are supported inside the definition of `otelcol.processor.batch`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -output | [output][] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| --------- | ---------- | ------------------------------------------------- | -------- | +| output | [output][] | Configures where to send received telemetry data. | yes | [output]: #output-block @@ -116,9 +119,9 @@ output | [output][] | Configures where to send received telemetry data. | yes The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). @@ -135,10 +138,10 @@ information. ## Debug metrics -* `processor_batch_batch_send_size_ratio` (histogram): Number of units in the batch. -* `processor_batch_metadata_cardinality_ratio` (gauge): Number of distinct metadata value combinations being processed. -* `processor_batch_timeout_trigger_send_ratio_total` (counter): Number of times the batch was sent due to a timeout trigger. -* `processor_batch_batch_size_trigger_send_ratio_total` (counter): Number of times the batch was sent due to a size trigger. +- `processor_batch_batch_send_size_ratio` (histogram): Number of units in the batch. +- `processor_batch_metadata_cardinality_ratio` (gauge): Number of distinct metadata value combinations being processed. +- `processor_batch_timeout_trigger_send_ratio_total` (counter): Number of times the batch was sent due to a timeout trigger. +- `processor_batch_batch_size_trigger_send_ratio_total` (counter): Number of times the batch was sent due to a size trigger. ## Examples @@ -189,7 +192,7 @@ otelcol.exporter.otlp "production" { ### Batching based on metadata -Batching by metadata enables support for multi-tenant OpenTelemetry pipelines +Batching by metadata enables support for multi-tenant OpenTelemetry pipelines with batching over groups of data having the same authorization metadata. ```river @@ -227,6 +230,7 @@ otelcol.exporter.otlp "production" { ``` [otelcol.exporter.otlp]: {{< relref "./otelcol.exporter.otlp.md" >}} + ## Compatible components @@ -244,4 +248,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. 
{{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.processor.discovery.md b/docs/sources/flow/reference/components/otelcol.processor.discovery.md index a294c8440d9c..e1977e0492dc 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.discovery.md +++ b/docs/sources/flow/reference/components/otelcol.processor.discovery.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.discovery/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.discovery/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.discovery/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.discovery/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.discovery/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.discovery/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.discovery/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.discovery/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.discovery/ description: Learn about otelcol.processor.discovery title: otelcol.processor.discovery @@ -28,11 +28,12 @@ different labels. {{< admonition type="note" >}} It can be difficult to follow [OpenTelemetry semantic conventions][OTEL sem conv] when adding resource attributes via `otelcol.processor.discovery`: -* `discovery.relabel` and most `discovery.*` processes such as `discovery.kubernetes` + +- `discovery.relabel` and most `discovery.*` processes such as `discovery.kubernetes` can only emit [Prometheus-compatible labels][Prometheus data model]. -* Prometheus labels use underscores (`_`) in labels names, whereas +- Prometheus labels use underscores (`_`) in labels names, whereas [OpenTelemetry semantic conventions][OTEL sem conv] use dots (`.`). -* Although `otelcol.processor.discovery` is able to work with non-Prometheus labels +- Although `otelcol.processor.discovery` is able to work with non-Prometheus labels such as ones containing dots, the fact that `discovery.*` components are generally only compatible with Prometheus naming conventions makes it hard to follow OpenTelemetry semantic conventions in `otelcol.processor.discovery`. @@ -40,12 +41,14 @@ adding resource attributes via `otelcol.processor.discovery`: If your use case is to add resource attributes which contain Kubernetes metadata, consider using `otelcol.processor.k8sattributes` instead. ------- +--- + The main use case for `otelcol.processor.discovery` is for users who migrate to {{< param "PRODUCT_NAME" >}} from Static mode's `prom_sd_operation_type`/`prom_sd_pod_associations` [configuration options][Traces]. [Prometheus data model]: https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels [OTEL sem conv]: https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md + [Traces]: {{< relref "../../../static/configuration/traces-config.md" >}} {{< /admonition >}} @@ -64,33 +67,36 @@ otelcol.processor.discovery "LABEL" { `otelcol.processor.discovery` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`targets` | `list(map(string))` | List of target labels to apply to the spans. 
| | yes -`operation_type` | `string` | Configures whether to update a span's attribute if it already exists. | `upsert` | no -`pod_associations` | `list(string)` | Configures how to decide the hostname of the span. | `["ip", "net.host.ip", "k8s.pod.ip", "hostname", "connection"]` | no +| Name | Type | Description | Default | Required | +| ------------------ | ------------------- | --------------------------------------------------------------------- | --------------------------------------------------------------- | -------- | +| `targets` | `list(map(string))` | List of target labels to apply to the spans. | | yes | +| `operation_type` | `string` | Configures whether to update a span's attribute if it already exists. | `upsert` | no | +| `pod_associations` | `list(string)` | Configures how to decide the hostname of the span. | `["ip", "net.host.ip", "k8s.pod.ip", "hostname", "connection"]` | no | `targets` could come from `discovery.*` components: + 1. The `__address__` label will be matched against the IP address of incoming spans. - * If `__address__` contains a port, it is ignored. + - If `__address__` contains a port, it is ignored. 2. If a match is found, then relabeling rules are applied. - * Note that labels starting with `__` will not be added to the spans. + - Note that labels starting with `__` will not be added to the spans. The supported values for `operation_type` are: -* `insert`: Inserts a new resource attribute if the key does not already exist. -* `update`: Updates a resource attribute if the key already exists. -* `upsert`: Either inserts a new resource attribute if the key does not already exist, - or updates a resource attribute if the key does exist. + +- `insert`: Inserts a new resource attribute if the key does not already exist. +- `update`: Updates a resource attribute if the key already exists. +- `upsert`: Either inserts a new resource attribute if the key does not already exist, + or updates a resource attribute if the key does exist. The supported values for `pod_associations` are: -* `ip`: The hostname will be sourced from an `ip` resource attribute. -* `net.host.ip`: The hostname will be sourced from a `net.host.ip` resource attribute. -* `k8s.pod.ip`: The hostname will be sourced from a `k8s.pod.ip` resource attribute. -* `hostname`: The hostname will be sourced from a `host.name` resource attribute. -* `connection`: The hostname will be sourced from the context from the incoming requests (gRPC and HTTP). - -If multiple `pod_associations` methods are enabled, the order of evaluation is honored. -For example, when `pod_associations` is `["ip", "net.host.ip"]`, `"net.host.ip"` may be matched + +- `ip`: The hostname will be sourced from an `ip` resource attribute. +- `net.host.ip`: The hostname will be sourced from a `net.host.ip` resource attribute. +- `k8s.pod.ip`: The hostname will be sourced from a `k8s.pod.ip` resource attribute. +- `hostname`: The hostname will be sourced from a `host.name` resource attribute. +- `connection`: The hostname will be sourced from the context from the incoming requests (gRPC and HTTP). + +If multiple `pod_associations` methods are enabled, the order of evaluation is honored. +For example, when `pod_associations` is `["ip", "net.host.ip"]`, `"net.host.ip"` may be matched only if `"ip"` has not already matched. ## Blocks @@ -98,9 +104,9 @@ only if `"ip"` has not already matched. 
The following blocks are supported inside the definition of `otelcol.processor.discovery`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -output | [output][] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| --------- | ---------- | ------------------------------------------------- | -------- | +| output | [output][] | Configures where to send received telemetry data. | yes | [output]: #output-block @@ -112,12 +118,13 @@ output | [output][] | Configures where to send received telemetry data. | yes The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` OTLP-formatted data for telemetry signals of these types: -* traces + +- traces ## Component health @@ -132,6 +139,7 @@ information. ## Examples ### Basic usage + ```river discovery.http "dynamic_targets" { url = "https://example.com/scrape_targets" @@ -173,15 +181,15 @@ otelcol.processor.discovery "default" { ### Using a preconfigured list of attributes -It is not necessary to use a discovery component. In the example below, both a `test_label` and -a `test.label.with.dots` resource attributes will be added to a span if its IP address is -"1.2.2.2". The `__internal_label__` will be not be added to the span, because it begins with +It is not necessary to use a discovery component. In the example below, both a `test_label` and +a `test.label.with.dots` resource attributes will be added to a span if its IP address is +"1.2.2.2". The `__internal_label__` will be not be added to the span, because it begins with a double underscore (`__`). 
```river otelcol.processor.discovery "default" { targets = [{ - "__address__" = "1.2.2.2", + "__address__" = "1.2.2.2", "__internal_label__" = "test_val", "test_label" = "test_val2", "test.label.with.dots" = "test.val2.with.dots"}] diff --git a/docs/sources/flow/reference/components/otelcol.processor.filter.md b/docs/sources/flow/reference/components/otelcol.processor.filter.md index c5392a6037eb..6715606d3f33 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.filter.md +++ b/docs/sources/flow/reference/components/otelcol.processor.filter.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.filter/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.filter/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.filter/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.filter/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.filter/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.filter/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.filter/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.filter/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.filter/ description: Learn about otelcol.processor.filter labels: @@ -21,27 +21,30 @@ If any of the OTTL statements evaluates to true, the telemetry data is dropped. OTTL statements consist of [OTTL Converter functions][], which act on paths. A path is a reference to a telemetry data such as: -* Resource attributes. -* Instrumentation scope name. -* Span attributes. -In addition to the [standard OTTL Converter functions][OTTL Converter functions], +- Resource attributes. +- Instrumentation scope name. +- Span attributes. + +In addition to the [standard OTTL Converter functions][OTTL Converter functions], the following metrics-only functions are used exclusively by the processor: -* [HasAttrKeyOnDataPoint][] -* [HasAttrOnDataPoint][] + +- [HasAttrKeyOnDataPoint][] +- [HasAttrOnDataPoint][] [OTTL][] statements used in `otelcol.processor.filter` mostly contain constructs such as: -* [Booleans][OTTL booleans]: - * `not true` - * `not IsMatch(name, "http_.*")` -* [Math expressions][OTTL math expressions]: - * `1 + 1` - * `end_time_unix_nano - start_time_unix_nano` - * `sum([1, 2, 3, 4]) + (10 / 1) - 1` + +- [Booleans][OTTL booleans]: + - `not true` + - `not IsMatch(name, "http_.*")` +- [Math expressions][OTTL math expressions]: + - `1 + 1` + - `end_time_unix_nano - start_time_unix_nano` + - `sum([1, 2, 3, 4]) + (10 / 1) - 1` {{< admonition type="note" >}} Raw River strings can be used to write OTTL statements. -For example, the OTTL statement `attributes["grpc"] == true` +For example, the OTTL statement `attributes["grpc"] == true` is written in River as \`attributes["grpc"] == true\` {{< /admonition >}} @@ -57,13 +60,14 @@ You can specify multiple `otelcol.processor.filter` components by giving them di {{< admonition type="warning" >}} Exercise caution when using `otelcol.processor.filter`: -- Make sure you understand schema/format of the incoming data and test the configuration thoroughly. +- Make sure you understand schema/format of the incoming data and test the configuration thoroughly. 
  In general, use a configuration that is as specific as possible to ensure you retain only the data you want to keep.
-- [Orphaned Telemetry][]: The processor allows dropping spans. Dropping a span may lead to
-  orphaned spans if the dropped span is a parent. Dropping a span may lead to orphaned logs
+- [Orphaned Telemetry][]: The processor allows dropping spans. Dropping a span may lead to
+  orphaned spans if the dropped span is a parent. Dropping a span may lead to orphaned logs
   if the log references the dropped span.

[Orphaned Telemetry]: https://github.com/open-telemetry/opentelemetry-collector/blob/v0.85.0/docs/standard-warnings.md#orphaned-telemetry
+
{{< /admonition >}}

## Usage

@@ -82,48 +86,50 @@ otelcol.processor.filter "LABEL" {

`otelcol.processor.filter` supports the following arguments:

-Name | Type | Description | Default | Required
------------- | -------- | ------------------------------------------------------------------ | ------------- | --------
-`error_mode` | `string` | How to react to errors if they occur while processing a statement. | `"propagate"` | no
+| Name | Type | Description | Default | Required |
+| ------------ | -------- | ------------------------------------------------------------------ | ------------- | -------- |
+| `error_mode` | `string` | How to react to errors if they occur while processing a statement. | `"propagate"` | no |

The supported values for `error_mode` are:
-* `ignore`: Ignore errors returned by conditions, log them, and continue on to the next condition. This is the recommended mode.
-* `silent`: Ignore errors returned by conditions, do not log them, and continue on to the next condition.
-* `propagate`: Return the error up the pipeline. This will result in the payload being dropped from {{< param "PRODUCT_ROOT_NAME" >}}.
+
+- `ignore`: Ignore errors returned by conditions, log them, and continue on to the next condition. This is the recommended mode.
+- `silent`: Ignore errors returned by conditions, do not log them, and continue on to the next condition.
+- `propagate`: Return the error up the pipeline. This will result in the payload being dropped from {{< param "PRODUCT_ROOT_NAME" >}}.

## Blocks

The following blocks are supported inside the definition of
`otelcol.processor.filter`:

-Hierarchy | Block | Description | Required
---------- | ----------- | ------------------------------------------------- | --------
-traces | [traces][] | Statements which filter traces. | no
-metrics | [metrics][] | Statements which filter metrics. | no
-logs | [logs][] | Statements which filter logs. | no
-output | [output][] | Configures where to send received telemetry data. | yes
+| Hierarchy | Block | Description | Required |
+| --------- | ----------- | ------------------------------------------------- | -------- |
+| traces | [traces][] | Statements which filter traces. | no |
+| metrics | [metrics][] | Statements which filter metrics. | no |
+| logs | [logs][] | Statements which filter logs. | no |
+| output | [output][] | Configures where to send received telemetry data. | yes |

[traces]: #traces-block
[metrics]: #metrics-block
[logs]: #logs-block
[output]: #output-block

-
### traces block

The `traces` block specifies statements that filter trace telemetry signals.
Only one `traces` block can be specified.

-Name | Type | Description | Default | Required
------------ | -------------- | --------------------------------------------------- | ------- | --------
-`span` | `list(string)` | List of OTTL statements filtering OTLP spans. 
-`spanevent` | `list(string)` | List of OTTL statements filtering OTLP span events. | | no
+| Name | Type | Description | Default | Required |
+| ----------- | -------------- | --------------------------------------------------- | ------- | -------- |
+| `span` | `list(string)` | List of OTTL statements filtering OTLP spans. | | no |
+| `spanevent` | `list(string)` | List of OTTL statements filtering OTLP span events. | | no |

 The syntax of OTTL statements depends on the OTTL context. See the OpenTelemetry documentation for more information:
-* [OTTL span context][]
-* [OTTL spanevent context][]
+
+- [OTTL span context][]
+- [OTTL spanevent context][]

 Statements are checked in order from "high level" to "low level" telemetry, in this order:
+
 1. `span`
 2. `spanevent`

@@ -134,20 +140,22 @@ If all span events for a span are dropped, the span will be left intact.

 ### metrics block

-The `metrics` block specifies statements that filter metric telemetry signals.
+The `metrics` block specifies statements that filter metric telemetry signals.
 Only one `metrics` block can be specified.

-Name | Type | Description | Default | Required
------------ | -------------- | --------------------------------------------------------- | ------- | --------
-`metric` | `list(string)` | List of OTTL statements filtering OTLP metric. | | no
-`datapoint` | `list(string)` | List of OTTL statements filtering OTLP metric datapoints. | | no
+| Name | Type | Description | Default | Required |
+| ----------- | -------------- | --------------------------------------------------------- | ------- | -------- |
+| `metric` | `list(string)` | List of OTTL statements filtering OTLP metrics. | | no |
+| `datapoint` | `list(string)` | List of OTTL statements filtering OTLP metric datapoints. | | no |

-The syntax of OTTL statements depends on the OTTL context. See the OpenTelemetry
+The syntax of OTTL statements depends on the OTTL context. See the OpenTelemetry
 documentation for more information:
-* [OTTL metric context][]
-* [OTTL datapoint context][]
+
+- [OTTL metric context][]
+- [OTTL datapoint context][]

 Statements are checked in order from "high level" to "low level" telemetry, in this order:
+
 1. `metric`
 2. `datapoint`

@@ -158,19 +166,19 @@ If all datapoints for a metric are dropped, the metric will also be dropped.

 ### logs block

-The `logs` block specifies statements that filter log telemetry signals.
+The `logs` block specifies statements that filter log telemetry signals.
 Only one `logs` block can be specified.

-Name | Type | Description | Default | Required
---------------- | -------------- | ---------------------------------------------- | ------- | --------
-`log_record` | `list(string)` | List of OTTL statements filtering OTLP metric. | | no
+| Name | Type | Description | Default | Required |
+| ------------ | -------------- | ---------------------------------------------- | ------- | -------- |
+| `log_record` | `list(string)` | List of OTTL statements filtering OTLP log records. | | no |

-The syntax of OTTL statements depends on the OTTL context. See the OpenTelemetry
+The syntax of OTTL statements depends on the OTTL context. See the OpenTelemetry
 documentation for more information:

-* [OTTL log context][]
-Only one of the statements inside the list of statements has to be satisfied.
+- [OTTL log context][]
+Only one of the statements in the list needs to be satisfied for a log record to be dropped.

 ### output block

@@ -180,9 +188,9 @@ Only one of the statements inside the list of statements has to be satisfied.
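As a minimal end-to-end sketch of the blocks above, the following configuration drops all debug-level log records. The `drop_debug` label and the downstream `otelcol.exporter.otlp.default` component are assumptions for illustration; `severity_number` and `SEVERITY_NUMBER_INFO` come from the OTTL log context:

```river
// Sketch: drop log records below INFO severity before they reach the exporter.
otelcol.processor.filter "drop_debug" {
  error_mode = "ignore"

  logs {
    log_record = [
      "severity_number < SEVERITY_NUMBER_INFO",
    ]
  }

  output {
    // Hypothetical downstream component; substitute your own exporter.
    logs = [otelcol.exporter.otlp.default.input]
  }
}
```

Setting `error_mode` to `"ignore"` follows the recommendation above: statements that error out are logged and skipped instead of causing the payload to be dropped.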
The following fields are exported and can be referenced by other components: -Name | Type | Description -------- | ------------------ | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). @@ -230,8 +238,9 @@ Each `"` is [escaped][river-strings] with `\"` inside the River string. ### Drop metrics based on either of two criteria This example drops metrics which satisfy at least one of two OTTL statements: -* The metric name is `my.metric` and there is a `my_label` resource attribute with a value of `abc123 `. -* The metric is a histogram. + +- The metric name is `my.metric` and there is a `my_label` resource attribute with a value of `abc123 `. +- The metric is a histogram. ```river otelcol.processor.filter "default" { @@ -252,10 +261,10 @@ otelcol.processor.filter "default" { } ``` - Some values in the River string are [escaped][river-strings]: -* `\` is escaped with `\\` -* `"` is escaped with `\"` + +- `\` is escaped with `\\` +- `"` is escaped with `\"` ### Drop non-HTTP spans and sensitive logs @@ -286,15 +295,15 @@ otelcol.processor.filter "default" { Each `"` is [escaped][river-strings] with `\"` inside the River string. - Some values in the River strings are [escaped][river-strings]: -* `\` is escaped with `\\` -* `"` is escaped with `\"` -[river-strings]: {{< relref "../../concepts/config-language/expressions/types_and_values.md/#strings" >}} +- `\` is escaped with `\\` +- `"` is escaped with `\"` +[river-strings]: {{< relref "../../concepts/config-language/expressions/types_and_values.md/#strings" >}} [OTTL]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.85.0/pkg/ottl/README.md + [OTTL span context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/contexts/ottlspan/README.md [OTTL spanevent context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/contexts/ottlspanevent/README.md [OTTL metric context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/contexts/ottlmetric/README.md @@ -305,6 +314,7 @@ Some values in the River strings are [escaped][river-strings]: [HasAttrOnDataPoint]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README.md#hasattrondatapoint [OTTL booleans]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/pkg/ottl#booleans [OTTL math expressions]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.85.0/pkg/ottl#math-expressions + ## Compatible components @@ -322,4 +332,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. 
 {{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md b/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md
index 4328a9746ecf..0285cbfb1dfe 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.k8sattributes.md
@@ -1,9 +1,9 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.k8sattributes/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.k8sattributes/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.k8sattributes/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.k8sattributes/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.k8sattributes/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.k8sattributes/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.k8sattributes/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.k8sattributes/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.k8sattributes/
 description: Learn about otelcol.processor.k8sattributes
 title: otelcol.processor.k8sattributes
@@ -39,27 +39,29 @@ otelcol.processor.k8sattributes "LABEL" {

 The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- |--------------------------------------------|-----------------| --------
-`auth_type` | `string` | Authentication method when connecting to the Kubernetes API. | `serviceAccount` | no
-`passthrough` | `bool` | Passthrough signals as-is, only adding a `k8s.pod.ip` resource attribute. | `false` | no
+| Name | Type | Description | Default | Required |
+| ------------- | -------- | ------------------------------------------------------------------------- | ---------------- | -------- |
+| `auth_type` | `string` | Authentication method when connecting to the Kubernetes API. | `serviceAccount` | no |
+| `passthrough` | `bool` | Passes through signals as-is, only adding a `k8s.pod.ip` resource attribute. | `false` | no |

 The supported values for `auth_type` are:
-* `none`: No authentication is required.
-* `serviceAccount`: Use the built-in service account that Kubernetes automatically provisions for each pod.
-* `kubeConfig`: Use local credentials like those used by kubectl.
-* `tls`: Use client TLS authentication.
+
+- `none`: No authentication is required.
+- `serviceAccount`: Use the built-in service account that Kubernetes automatically provisions for each pod.
+- `kubeConfig`: Use local credentials like those used by `kubectl`.
+- `tls`: Use client TLS authentication.

 Setting `passthrough` to `true` enables the "passthrough mode" of `otelcol.processor.k8sattributes`:
-* Only a `k8s.pod.ip` resource attribute will be added.
-* No other metadata will be added.
-* The Kubernetes API will not be accessed.
-* To correctly detect the pod IPs, {{< param "PRODUCT_ROOT_NAME" >}} must receive spans directly from services.
-* The `passthrough` setting is useful when configuring the Agent as a Kubernetes Deployment.
-A {{< param "PRODUCT_ROOT_NAME" >}} running as a Deployment cannot detect the IP addresses of pods generating telemetry
-data without any of the well-known IP attributes. If the Deployment {{< param "PRODUCT_ROOT_NAME" >}} receives telemetry from
-{{< param "PRODUCT_ROOT_NAME" >}}s deployed as DaemonSet, then some of those attributes might be missing. As a workaround,
-you can configure the DaemonSet {{< param "PRODUCT_ROOT_NAME" >}}s with `passthrough` set to `true`.
+
+- Only a `k8s.pod.ip` resource attribute will be added.
+- No other metadata will be added.
+- The Kubernetes API will not be accessed.
+- To correctly detect the pod IPs, {{< param "PRODUCT_ROOT_NAME" >}} must receive spans directly from services.
+- The `passthrough` setting is useful when configuring the Agent as a Kubernetes Deployment.
+  A {{< param "PRODUCT_ROOT_NAME" >}} running as a Deployment cannot detect the IP addresses of pods generating telemetry
+  data without any of the well-known IP attributes. If the Deployment {{< param "PRODUCT_ROOT_NAME" >}} receives telemetry from
+  {{< param "PRODUCT_ROOT_NAME" >}}s deployed as a DaemonSet, then some of those attributes might be missing. As a workaround,
+  you can configure the DaemonSet {{< param "PRODUCT_ROOT_NAME" >}}s with `passthrough` set to `true`.

 ## Blocks

@@ -79,7 +81,6 @@ pod_association > source | [source][] | Source information to identify a pod. |
 exclude | [exclude][] | Exclude pods from being processed. | no
 exclude > pod | [pod][] | Pod information. | no

-
 The `>` symbol indicates deeper levels of nesting. For example, `extract > annotation`
 refers to an `annotation` block defined inside an `extract` block.

@@ -101,43 +102,43 @@ The `extract` block configures which metadata, annotations, and labels to extrac

 The following attributes are supported:

-Name | Type | Description | Default | Required
---- |----------------|--------------------------------------|-------------| --------
-`metadata` | `list(string)` | Pre-configured metadata keys to add. | _See below_ | no
+| Name | Type | Description | Default | Required |
+| ---------- | -------------- | ------------------------------------ | ----------- | -------- |
+| `metadata` | `list(string)` | Pre-configured metadata keys to add. 
| _See below_ | no | The currently supported `metadata` keys are: -* `k8s.pod.name` -* `k8s.pod.uid` -* `k8s.deployment.name` -* `k8s.node.name` -* `k8s.namespace.name` -* `k8s.pod.start_time` -* `k8s.replicaset.name` -* `k8s.replicaset.uid` -* `k8s.daemonset.name` -* `k8s.daemonset.uid` -* `k8s.job.name` -* `k8s.job.uid` -* `k8s.cronjob.name` -* `k8s.statefulset.name` -* `k8s.statefulset.uid` -* `k8s.container.name` -* `container.image.name` -* `container.image.tag` -* `container.id` +- `k8s.pod.name` +- `k8s.pod.uid` +- `k8s.deployment.name` +- `k8s.node.name` +- `k8s.namespace.name` +- `k8s.pod.start_time` +- `k8s.replicaset.name` +- `k8s.replicaset.uid` +- `k8s.daemonset.name` +- `k8s.daemonset.uid` +- `k8s.job.name` +- `k8s.job.uid` +- `k8s.cronjob.name` +- `k8s.statefulset.name` +- `k8s.statefulset.uid` +- `k8s.container.name` +- `container.image.name` +- `container.image.tag` +- `container.id` By default, if `metadata` is not specified, the following fields are extracted and added to spans, metrics, and logs as resource attributes: -* `k8s.pod.name` -* `k8s.pod.uid` -* `k8s.pod.start_time` -* `k8s.namespace.name` -* `k8s.node.name` -* `k8s.deployment.name` (if the pod is controlled by a deployment) -* `k8s.container.name` (requires an additional attribute to be set: `container.id`) -* `container.image.name` (requires one of the following additional attributes to be set: `container.id` or `k8s.container.name`) -* `container.image.tag` (requires one of the following additional attributes to be set: `container.id` or `k8s.container.name`) +- `k8s.pod.name` +- `k8s.pod.uid` +- `k8s.pod.start_time` +- `k8s.namespace.name` +- `k8s.node.name` +- `k8s.deployment.name` (if the pod is controlled by a deployment) +- `k8s.container.name` (requires an additional attribute to be set: `container.id`) +- `container.image.name` (requires one of the following additional attributes to be set: `container.id` or `k8s.container.name`) +- `container.image.tag` (requires one of the following additional attributes to be set: `container.id` or `k8s.container.name`) ### annotation block @@ -157,10 +158,10 @@ The `filter` block configures which nodes to get data from and which fields and The following attributes are supported: -Name | Type | Description | Default | Required ----- |----------|-------------------------------------------------------------------------| ------- | -------- -`node` | `string` | Configures a Kubernetes node name or host name. | `""` | no -`namespace` | `string` | Filters all pods by the provided namespace. All other pods are ignored. | `""` | no +| Name | Type | Description | Default | Required | +| ----------- | -------- | ----------------------------------------------------------------------- | ------- | -------- | +| `node` | `string` | Configures a Kubernetes node name or host name. | `""` | no | +| `namespace` | `string` | Filters all pods by the provided namespace. All other pods are ignored. | `""` | no | If `node` is specified, then any pods not running on the specified node will be ignored by `otelcol.processor.k8sattributes`. @@ -186,6 +187,7 @@ fully through child blocks. The `pod_association` block can be repeated multiple times, to configure additional rules. Example: + ```river pod_association { source { @@ -215,11 +217,10 @@ pod to be associated with the telemetry signal. 
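For instance, a hypothetical association rule that matches pods by the address of the incoming connection, rather than by a resource attribute, could look like this sketch (the enclosing `otelcol.processor.k8sattributes` block is omitted):

```river
// Sketch: look up the pod by the connection's IP address when
// the telemetry carries no usable IP resource attribute.
pod_association {
  source {
    from = "connection"
  }
}
```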
The following attributes are supported: -Name | Type | Description | Default | Required ----- |----------|----------------------------------------------------------------------------------| ------- | -------- -`from` | `string` | The association method. Currently supports `resource_attribute` and `connection` | | yes -`name` | `string` | Name represents extracted key name. For example, `ip`, `pod_uid`, `k8s.pod.ip` | | no - +| Name | Type | Description | Default | Required | +| ------ | -------- | -------------------------------------------------------------------------------- | ------- | -------- | +| `from` | `string` | The association method. Currently supports `resource_attribute` and `connection` | | yes | +| `name` | `string` | Name represents extracted key name. For example, `ip`, `pod_uid`, `k8s.pod.ip` | | no | ### exclude block @@ -235,9 +236,9 @@ The `pod` block configures a pod to be excluded from the processor. The following attributes are supported: -Name | Type | Description | Default | Required ----- |----------|---------------------| ------- | -------- -`name` | `string` | The name of the pod | | yes +| Name | Type | Description | Default | Required | +| ------ | -------- | ------------------- | ------- | -------- | +| `name` | `string` | The name of the pod | | yes | ### output block @@ -247,9 +248,9 @@ Name | Type | Description | Default | Required The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). @@ -266,14 +267,15 @@ information. ## Examples ### Basic usage + In most cases, this is enough to get started. It'll add these resource attributes to all logs, metrics, and traces: -* `k8s.namespace.name` -* `k8s.pod.name` -* `k8s.pod.uid` -* `k8s.pod.start_time` -* `k8s.deployment.name` -* `k8s.node.name` +- `k8s.namespace.name` +- `k8s.pod.name` +- `k8s.pod.uid` +- `k8s.pod.start_time` +- `k8s.deployment.name` +- `k8s.node.name` Example: @@ -414,6 +416,7 @@ prometheus.remote_write "mimir" { } } ``` + ## Compatible components @@ -431,4 +434,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. 
 {{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/otelcol.processor.memory_limiter.md b/docs/sources/flow/reference/components/otelcol.processor.memory_limiter.md
index a7c5a90ab39c..7b9ef9cf41ed 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.memory_limiter.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.memory_limiter.md
@@ -1,9 +1,9 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.memory_limiter/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.memory_limiter/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.memory_limiter/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.memory_limiter/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.memory_limiter/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.memory_limiter/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.memory_limiter/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.memory_limiter/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.memory_limiter/
 description: Learn about otelcol.processor.memory_limiter
 title: otelcol.processor.memory_limiter
@@ -36,7 +36,7 @@ giving them different labels.

 ```river
 otelcol.processor.memory_limiter "LABEL" {
   check_interval = "1s"
-  
+
   limit = "50MiB" // alternatively, set `limit_percentage` and `spike_limit_percentage`

   output {
@@ -51,14 +51,13 @@ otelcol.processor.memory_limiter "LABEL" {

 `otelcol.processor.memory_limiter` supports the following arguments:

-
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`check_interval` | `duration` | How often to check memory usage. | | yes
-`limit` | `string` | Maximum amount of memory targeted to be allocated by the process heap. | `"0MiB"` | no
-`spike_limit` | `string` | Maximum spike expected between the measurements of memory usage. | 20% of `limit` | no
-`limit_percentage` | `int` | Maximum amount of total available memory targeted to be allocated by the process heap. | `0` | no
-`spike_limit_percentage` |` int` | Maximum spike expected between the measurements of memory usage. | `0` | no
+| Name | Type | Description | Default | Required |
+| ------------------------ | ---------- | -------------------------------------------------------------------------------------- | -------------- | -------- |
+| `check_interval` | `duration` | How often to check memory usage. | | yes |
+| `limit` | `string` | Maximum amount of memory targeted to be allocated by the process heap. | `"0MiB"` | no |
+| `spike_limit` | `string` | Maximum spike expected between the measurements of memory usage. | 20% of `limit` | no |
+| `limit_percentage` | `int` | Maximum amount of total available memory targeted to be allocated by the process heap. | `0` | no |
+| `spike_limit_percentage` | `int` | Maximum spike expected between the measurements of memory usage. | `0` | no |

 The arguments must define either `limit` or the `limit_percentage, spike_limit_percentage` pair, but not both.

@@ -79,9 +78,9 @@ The `limit` and `spike_limit` values must be larger than 1 MiB.
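As a sketch of the percentage-based alternative described above, the following caps the heap at 80% of available memory and allows 25% spikes between checks. The specific percentages and the downstream `otelcol.exporter.otlp.default` component are assumptions for illustration:

```river
// Sketch: relative limits instead of a fixed `limit`.
otelcol.processor.memory_limiter "default" {
  check_interval         = "1s"
  limit_percentage       = 80
  spike_limit_percentage = 25

  output {
    // Hypothetical downstream component; substitute your own exporter.
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```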
 The following blocks are supported inside the definition of `otelcol.processor.memory_limiter`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-output | [output][] | Configures where to send received telemetry data. | yes
+| Hierarchy | Block | Description | Required |
+| --------- | ---------- | ------------------------------------------------- | -------- |
+| output | [output][] | Configures where to send received telemetry data. | yes |

 [output]: #output-block

@@ -93,9 +92,9 @@ output | [output][] | Configures where to send received telemetry data. | yes

 The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+| Name | Type | Description |
+| ------- | ------------------ | ---------------------------------------------------------------- |
+| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. |

 `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces).

@@ -109,6 +108,7 @@ configuration.

 `otelcol.processor.memory_limiter` does not expose any component-specific debug
 information.
+

 ## Compatible components

@@ -126,4 +126,4 @@ Connecting some components may not be sensible or components may require further

 Refer to the linked documentation for more details.

 {{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/otelcol.processor.probabilistic_sampler.md b/docs/sources/flow/reference/components/otelcol.processor.probabilistic_sampler.md
index 70dfbf8ba6e7..a105ad90688d 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.probabilistic_sampler.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.probabilistic_sampler.md
@@ -1,7 +1,7 @@
 ---
 aliases:
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.probabilistic_sampler/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.probabilistic_sampler/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.probabilistic_sampler/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.probabilistic_sampler/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.probabilistic_sampler/
 description: Learn about otelcol.processor.probabilistic_sampler
 labels:
@@ -17,7 +17,7 @@ title: otelcol.processor.probabilistic_sampler

 {{< admonition type="note" >}}
 `otelcol.processor.probabilistic_sampler` is a wrapper over the upstream
-OpenTelemetry Collector Contrib `probabilistic_sampler` processor. If necessary,
+OpenTelemetry Collector Contrib `probabilistic_sampler` processor. If necessary,
 bug reports or feature requests will be redirected to the upstream repository.
 {{< /admonition >}}

@@ -39,28 +39,29 @@ otelcol.processor.probabilistic_sampler "LABEL" {

 `otelcol.processor.probabilistic_sampler` supports the following arguments:

-Name | Type | Description | Default | Required
---- |-----------|----------------------------------------------------------------------------------------------------------------------|-------------| --------
-`hash_seed` | `uint32` | An integer used to compute the hash algorithm. | `0` | no
-`sampling_percentage` | `float32` | Percentage of traces or logs sampled. 
| `0` | no -`attribute_source` | `string` | Defines where to look for the attribute in `from_attribute`. | `"traceID"` | no -`from_attribute` | `string` | The name of a log record attribute used for sampling purposes. | `""` | no -`sampling_priority` | `string` | The name of a log record attribute used to set a different sampling priority from the `sampling_percentage` setting. | `""` | no +| Name | Type | Description | Default | Required | +| --------------------- | --------- | -------------------------------------------------------------------------------------------------------------------- | ----------- | -------- | +| `hash_seed` | `uint32` | An integer used to compute the hash algorithm. | `0` | no | +| `sampling_percentage` | `float32` | Percentage of traces or logs sampled. | `0` | no | +| `attribute_source` | `string` | Defines where to look for the attribute in `from_attribute`. | `"traceID"` | no | +| `from_attribute` | `string` | The name of a log record attribute used for sampling purposes. | `""` | no | +| `sampling_priority` | `string` | The name of a log record attribute used to set a different sampling priority from the `sampling_percentage` setting. | `""` | no | `hash_seed` determines an integer to compute the hash algorithm. This argument could be used for both traces and logs. When used for logs, it computes the hash of a log record. -For hashing to work, all collectors for a given tier, for example, behind the same load balancer, must have the same `hash_seed`. -It is also possible to leverage a different `hash_seed` at different collector tiers to support additional sampling requirements. +For hashing to work, all collectors for a given tier, for example, behind the same load balancer, must have the same `hash_seed`. +It is also possible to leverage a different `hash_seed` at different collector tiers to support additional sampling requirements. `sampling_percentage` determines the percentage at which traces or logs are sampled. All traces or logs are sampled if you set this argument to a value greater than or equal to 100. -`attribute_source` (logs only) determines where to look for the attribute in `from_attribute`. The allowed values are `traceID` or `record`. +`attribute_source` (logs only) determines where to look for the attribute in `from_attribute`. The allowed values are `traceID` or `record`. `from_attribute` (logs only) determines the name of a log record attribute used for sampling purposes, such as a unique log record ID. The value of the attribute is only used if the trace ID is absent or if `attribute_source` is set to `record`. `sampling_priority` (logs only) determines the name of a log record attribute used to set a different sampling priority from the `sampling_percentage` setting. 0 means to never sample the log record, and greater than or equal to 100 means to always sample the log record. The `probabilistic_sampler` supports two types of sampling for traces: + 1. `sampling.priority` [semantic convention](https://github.com/opentracing/specification/blob/master/semantic_conventions.md#span-tags-table) as defined by OpenTracing. 2. Trace ID hashing. @@ -74,13 +75,14 @@ The `probabilistic_sampler` supports sampling logs according to their trace ID, The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. 
+| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` OTLP-formatted data for any telemetry signal of these types: -* logs -* traces + +- logs +- traces ## Component health @@ -133,7 +135,7 @@ otelcol.processor.probabilistic_sampler "default" { } ``` -### Sample logs according to a "priority" attribute +### Sample logs according to a "priority" attribute ```river otelcol.processor.probabilistic_sampler "default" { @@ -145,6 +147,7 @@ otelcol.processor.probabilistic_sampler "default" { } } ``` + ## Compatible components @@ -162,4 +165,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.processor.resourcedetection.md b/docs/sources/flow/reference/components/otelcol.processor.resourcedetection.md index 9f4f5d882e68..7e755150b0f0 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.resourcedetection.md +++ b/docs/sources/flow/reference/components/otelcol.processor.resourcedetection.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.resourcedetection/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.resourcedetection/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.resourcedetection/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.resourcedetection/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.resourcedetection/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.resourcedetection/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.resourcedetection/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.resourcedetection/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.resourcedetection/ labels: stage: beta @@ -44,28 +44,29 @@ otelcol.processor.resourcedetection "LABEL" { `otelcol.processor.resourcedetection` supports the following arguments: -Name | Type | Description | Default | Required ------------ | -------------- | ----------------------------------------------------------------------------------- |---------- | -------- -`detectors` | `list(string)` | An ordered list of named detectors used to detect resource information. | `["env"]` | no -`override` | `bool` | Configures whether existing resource attributes should be overridden or preserved. | `true` | no -`timeout` | `duration` | Timeout by which all specified detectors must complete. | `"5s"` | no +| Name | Type | Description | Default | Required | +| ----------- | -------------- | ---------------------------------------------------------------------------------- | --------- | -------- | +| `detectors` | `list(string)` | An ordered list of named detectors used to detect resource information. | `["env"]` | no | +| `override` | `bool` | Configures whether existing resource attributes should be overridden or preserved. | `true` | no | +| `timeout` | `duration` | Timeout by which all specified detectors must complete. 
| `"5s"` | no | `detectors` could contain the following values: -* `env` -* `ec2` -* `ecs` -* `eks` -* `elasticbeanstalk` -* `lambda` -* `azure` -* `aks` -* `consul` -* `docker` -* `gcp` -* `heroku` -* `system` -* `openshift` -* `kubernetes_node` + +- `env` +- `ec2` +- `ecs` +- `eks` +- `elasticbeanstalk` +- `lambda` +- `azure` +- `aks` +- `consul` +- `docker` +- `gcp` +- `heroku` +- `system` +- `openshift` +- `kubernetes_node` `env` is the only detector that is not configured through a River block. The `env` detector reads resource information from the `OTEL_RESOURCE_ATTRIBUTES` environment variable. @@ -80,33 +81,34 @@ If multiple detectors are inserting the same attribute name, the first detector For example, if you had `detectors = ["eks", "ec2"]` then `cloud.platform` will be `aws_eks` instead of `ec2`. The following order is recommended for AWS: - 1. [lambda][] - 1. [elasticbeanstalk][] - 1. [eks][] - 1. [ecs][] - 1. [ec2][] + +1. [lambda][] +1. [elasticbeanstalk][] +1. [eks][] +1. [ecs][] +1. [ec2][] ## Blocks The following blocks are supported inside the definition of `otelcol.processor.resourcedetection`: -Hierarchy | Block | Description | Required ------------------ | --------------------- | ------------------------------------------------- | -------- -output | [output][] | Configures where to send received telemetry data. | yes -ec2 | [ec2][] | | no -ecs | [ecs][] | | no -eks | [eks][] | | no -elasticbeanstalk | [elasticbeanstalk][] | | no -lambda | [lambda][] | | no -azure | [azure][] | | no -aks | [aks][] | | no -consul | [consul][] | | no -docker | [docker][] | | no -gcp | [gcp][] | | no -heroku | [heroku][] | | no -system | [system][] | | no -openshift | [openshift][] | | no -kubernetes_node | [kubernetes_node][] | | no +| Hierarchy | Block | Description | Required | +| ---------------- | -------------------- | ------------------------------------------------- | -------- | +| output | [output][] | Configures where to send received telemetry data. | yes | +| ec2 | [ec2][] | | no | +| ecs | [ecs][] | | no | +| eks | [eks][] | | no | +| elasticbeanstalk | [elasticbeanstalk][] | | no | +| lambda | [lambda][] | | no | +| azure | [azure][] | | no | +| aks | [aks][] | | no | +| consul | [consul][] | | no | +| docker | [docker][] | | no | +| gcp | [gcp][] | | no | +| heroku | [heroku][] | | no | +| system | [system][] | | no | +| openshift | [openshift][] | | no | +| kubernetes_node | [kubernetes_node][] | | no | [output]: #output [ec2]: #ec2 @@ -123,7 +125,6 @@ kubernetes_node | [kubernetes_node][] | [system]: #system [openshift]: #openshift [kubernetes_node]: #kubernetes_node - [res-attr-cfg]: #resource-attribute-config ### output @@ -136,9 +137,9 @@ The `ec2` block reads resource information from the [EC2 instance metadata API] The `ec2` block supports the following attributes: -Attribute | Type | Description | Default | Required ------------ |----------------| --------------------------------------------------------------------------- |-------------| -------- -`tags` | `list(string)` | A list of regular expressions to match against tag keys of an EC2 instance. | `[]` | no +| Attribute | Type | Description | Default | Required | +| --------- | -------------- | --------------------------------------------------------------------------- | ------- | -------- | +| `tags` | `list(string)` | A list of regular expressions to match against tag keys of an EC2 instance. 
| `[]` | no | If you are using a proxy server on your EC2 instance, it's important that you exempt requests for instance metadata as described in the [AWS cli user guide][]. Failing to do so can result in proxied or missing instance data. @@ -155,25 +156,25 @@ To fetch EC2 tags, the IAM role assigned to the EC2 instance must have a policy The `ec2` block supports the following blocks: -Block | Description | Required ----------------------------------------------- | ------------------------------------------------- | -------- -[resource_attributes](#ec2--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------ | -------------------------------------------- | -------- | +| [resource_attributes](#ec2--resource_attributes) | Configures which resource attributes to add. | no | ##### ec2 > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ---------------------------------------- | --------------------------------------------------------------------------------------------------- | -------- -[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no -[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no -[host.image.id][res-attr-cfg] | Toggles the `host.image.id` resource attribute.
Sets `enabled` to `true` by default. | no -[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no -[host.type][res-attr-cfg] | Toggles the `host.type` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| --------------------------------------- | --------------------------------------------------------------------------------------------------- | -------- | +| [cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.image.id][res-attr-cfg] | Toggles the `host.image.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.type][res-attr-cfg] | Toggles the `host.type` resource attribute.
Sets `enabled` to `true` by default. | no | ### ecs @@ -183,31 +184,31 @@ The `ecs` block queries the Task Metadata Endpoint (TMDE) to record information The `ecs` block supports the following blocks: -Block | Description | Required --------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#ecs--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------ | -------------------------------------------- | -------- | +| [resource_attributes](#ecs--resource_attributes) | Configures which resource attributes to add. | no | #### ecs > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ---------------------------------------- | --------------------------------------------------------------------------------------------------- | -------- -[aws.ecs.cluster.arn][res-attr-cfg] | Toggles the `aws.ecs.cluster.arn` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.ecs.launchtype][res-attr-cfg] | Toggles the `aws.ecs.launchtype` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.ecs.task.arn][res-attr-cfg] | Toggles the `aws.ecs.task.arn` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.ecs.task.family][res-attr-cfg] | Toggles the `aws.ecs.task.family` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.ecs.task.id][res-attr-cfg] | Toggles the `aws.ecs.task.id` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.ecs.task.revision][res-attr-cfg] | Toggles the `aws.ecs.task.revision` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.log.group.arns][res-attr-cfg] | Toggles the `aws.log.group.arns` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.log.group.names][res-attr-cfg] | Toggles the `aws.log.group.names` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.log.stream.arns][res-attr-cfg] | Toggles the `aws.log.stream.arns` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.log.stream.names][res-attr-cfg] | Toggles the `aws.log.stream.names` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| --------------------------------------- | --------------------------------------------------------------------------------------------------- | -------- | +| [aws.ecs.cluster.arn][res-attr-cfg] | Toggles the `aws.ecs.cluster.arn` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.ecs.launchtype][res-attr-cfg] | Toggles the `aws.ecs.launchtype` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.ecs.task.arn][res-attr-cfg] | Toggles the `aws.ecs.task.arn` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.ecs.task.family][res-attr-cfg] | Toggles the `aws.ecs.task.family` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.ecs.task.id][res-attr-cfg] | Toggles the `aws.ecs.task.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.ecs.task.revision][res-attr-cfg] | Toggles the `aws.ecs.task.revision` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.log.group.arns][res-attr-cfg] | Toggles the `aws.log.group.arns` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.log.group.names][res-attr-cfg] | Toggles the `aws.log.group.names` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.log.stream.arns][res-attr-cfg] | Toggles the `aws.log.stream.arns` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.log.stream.names][res-attr-cfg] | Toggles the `aws.log.stream.names` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | ### eks @@ -215,23 +216,24 @@ The `eks` block adds resource attributes for Amazon EKS. The `eks` block supports the following blocks: -Block | Description | Required --------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#eks--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------ | -------------------------------------------- | -------- | +| [resource_attributes](#eks--resource_attributes) | Configures which resource attributes to add. | no | #### eks > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required --------------------------------- | ---------------------------------------------------------------------------------------------- | -------- -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `false` by default. | no +| Block | Description | Required | +| -------------------------------- | --------------------------------------------------------------------------------------------- | -------- | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `false` by default. | no | Example values: -* `cloud.provider`: `"aws"` -* `cloud.platform`: `"aws_eks"` + +- `cloud.provider`: `"aws"` +- `cloud.platform`: `"aws_eks"` ### elasticbeanstalk @@ -241,25 +243,26 @@ The `elasticbeanstalk` block reads the AWS X-Ray configuration file available on The `elasticbeanstalk` block supports the following blocks: -Block | Description | Required ---------------------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#elasticbeanstalk--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#elasticbeanstalk--resource_attributes) | Configures which resource attributes to add. | no | #### elasticbeanstalk > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ---------------------------------- | --------------------------------------------------------------------------------------------- | -------- -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[deployment.envir][res-attr-cfg] | Toggles the `deployment.envir` resource attribute.
Sets `enabled` to `true` by default. | no -[service.instance][res-attr-cfg] | Toggles the `service.instance` resource attribute.
Sets `enabled` to `true` by default. | no -[service.version][res-attr-cfg] | Toggles the `service.version` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| -------------------------------- | -------------------------------------------------------------------------------------------- | -------- | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [deployment.envir][res-attr-cfg] | Toggles the `deployment.envir` resource attribute.
Sets `enabled` to `true` by default. | no | +| [service.instance][res-attr-cfg] | Toggles the `service.instance` resource attribute.
Sets `enabled` to `true` by default. | no | +| [service.version][res-attr-cfg] | Toggles the `service.version` resource attribute.
Sets `enabled` to `true` by default. | no | Example values: -* `cloud.provider`: `"aws"` -* `cloud.platform`: `"aws_elastic_beanstalk"` + +- `cloud.provider`: `"aws"` +- `cloud.platform`: `"aws_elastic_beanstalk"` ### lambda @@ -269,40 +272,43 @@ The `lambda` block uses the AWS Lambda [runtime environment variables][lambda-en The `lambda` block supports the following blocks: -Block | Description | Required -----------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#lambda--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| --------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#lambda--resource_attributes) | Configures which resource attributes to add. | no | #### lambda > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required -------------------------------------- | --------------------------------------------------------------------------------------------------- | -------- -[aws.log.group.names][res-attr-cfg] | Toggles the `aws.log.group.names` resource attribute.
Sets `enabled` to `true` by default. | no -[aws.log.stream.names][res-attr-cfg] | Toggles the `aws.log.stream.names` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.instance][res-attr-cfg] | Toggles the `faas.instance` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.max_memory][res-attr-cfg] | Toggles the `faas.max_memory` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.name][res-attr-cfg] | Toggles the `faas.name` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.version][res-attr-cfg] | Toggles the `faas.version` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| ------------------------------------ | ------------------------------------------------------------------------------------------------ | -------- | +| [aws.log.group.names][res-attr-cfg] | Toggles the `aws.log.group.names` resource attribute.
Sets `enabled` to `true` by default. | no | +| [aws.log.stream.names][res-attr-cfg] | Toggles the `aws.log.stream.names` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.instance][res-attr-cfg] | Toggles the `faas.instance` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.max_memory][res-attr-cfg] | Toggles the `faas.max_memory` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.name][res-attr-cfg] | Toggles the `faas.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.version][res-attr-cfg] | Toggles the `faas.version` resource attribute.
Sets `enabled` to `true` by default. | no | [Cloud semantic conventions][]: -* `cloud.provider`: `"aws"` -* `cloud.platform`: `"aws_lambda"` -* `cloud.region`: `$AWS_REGION` + +- `cloud.provider`: `"aws"` +- `cloud.platform`: `"aws_lambda"` +- `cloud.region`: `$AWS_REGION` [Function as a Service semantic conventions][] and [AWS Lambda semantic conventions][]: -* `faas.name`: `$AWS_LAMBDA_FUNCTION_NAME` -* `faas.version`: `$AWS_LAMBDA_FUNCTION_VERSION` -* `faas.instance`: `$AWS_LAMBDA_LOG_STREAM_NAME` -* `faas.max_memory`: `$AWS_LAMBDA_FUNCTION_MEMORY_SIZE` + +- `faas.name`: `$AWS_LAMBDA_FUNCTION_NAME` +- `faas.version`: `$AWS_LAMBDA_FUNCTION_VERSION` +- `faas.instance`: `$AWS_LAMBDA_LOG_STREAM_NAME` +- `faas.max_memory`: `$AWS_LAMBDA_FUNCTION_MEMORY_SIZE` [AWS Logs semantic conventions][]: -* `aws.log.group.names`: `$AWS_LAMBDA_LOG_GROUP_NAME` -* `aws.log.stream.names`: `$AWS_LAMBDA_LOG_STREAM_NAME` + +- `aws.log.group.names`: `$AWS_LAMBDA_LOG_GROUP_NAME` +- `aws.log.stream.names`: `$AWS_LAMBDA_LOG_STREAM_NAME` [Cloud semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/cloud.md [Function as a Service semantic conventions]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/faas.md @@ -317,30 +323,31 @@ The `azure` block queries the [Azure Instance Metadata Service][] to retrieve va The `azure` block supports the following blocks: -Block | Description | Required ----------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#azure--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| -------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#azure--resource_attributes) | Configures which resource attributes to add. | no | #### azure > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ------------------------------------------|------------------------------------------------------------------------------------------------------|--------- -[azure.resourcegroup.name][res-attr-cfg] | Toggles the `azure.resourcegroup.name` resource attribute.
Sets `enabled` to `true` by default. | no -[azure.vm.name][res-attr-cfg] | Toggles the `azure.vm.name` resource attribute.
Sets `enabled` to `true` by default. | no -[azure.vm.scaleset.name][res-attr-cfg] | Toggles the `azure.vm.scaleset.name` resource attribute.
Sets `enabled` to `true` by default. | no -[azure.vm.size][res-attr-cfg] | Toggles the `azure.vm.size` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no -[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no -[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| ---------------------------------------- | ---------------------------------------------------------------------------------------------------- | -------- | +| [azure.resourcegroup.name][res-attr-cfg] | Toggles the `azure.resourcegroup.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [azure.vm.name][res-attr-cfg] | Toggles the `azure.vm.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [azure.vm.scaleset.name][res-attr-cfg] | Toggles the `azure.vm.scaleset.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [azure.vm.size][res-attr-cfg] | Toggles the `azure.vm.size` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no | Example values: -* `cloud.provider`: `"azure"` -* `cloud.platform`: `"azure_vm"` + +- `cloud.provider`: `"azure"` +- `cloud.platform`: `"azure_vm"` ### aks @@ -348,36 +355,38 @@ The `aks` block adds resource attributes related to Azure AKS. The `aks` block supports the following blocks: -Block | Description | Required --------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#aks--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------ | -------------------------------------------- | -------- | +| [resource_attributes](#aks--resource_attributes) | Configures which resource attributes to add. | no | #### aks > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required --------------------------------- | ---------------------------------------------------------------------------------------------- | -------- -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `false` by default. | no +| Block | Description | Required | +| -------------------------------- | --------------------------------------------------------------------------------------------- | -------- | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `false` by default. | no |

 Example values:
-* `cloud.provider`: `"azure"`
-* `cloud.platform`: `"azure_vm"`
-Azure AKS cluster name is derived from the Azure Instance Metadata Service's (IMDS) infrastructure resource group field.
+- `cloud.provider`: `"azure"`
+- `cloud.platform`: `"azure_aks"`
+
+Azure AKS cluster name is derived from the Azure Instance Metadata Service's (IMDS) infrastructure resource group field.
This field contains the resource group and name of the cluster, separated by underscores. For example: `MC_<resource group>_<cluster name>_<location>`.

 Example:
- - Resource group: `my-resource-group`
- - Cluster name: `my-cluster`
- - Location: `eastus`
- - Generated name: `MC_my-resource-group_my-cluster_eastus`
+
+- Resource group: `my-resource-group`
+- Cluster name: `my-cluster`
+- Location: `eastus`
+- Generated name: `MC_my-resource-group_my-cluster_eastus`

 The cluster name is detected if it does not contain underscores and if a custom infrastructure resource group name was not used.
-If accurate parsing cannot be performed, the infrastructure resource group value is returned.
+If accurate parsing cannot be performed, the infrastructure resource group value is returned.
This value can be used to uniquely identify the cluster, because Azure will not allow users to create multiple clusters with the same infrastructure resource group name.

 ### consul

@@ -386,13 +395,13 @@ The `consul` block queries a Consul agent and reads its configuration endpoint t

 The `consul` block supports the following attributes:

-Attribute | Type | Description | Default | Required
-------------|----------------|-----------------------------------------------------------------------------------|---------|---------
-`address` | `string` | The address of the Consul server | `""` | no
-`datacenter` | `string` | Datacenter to use. If not provided, the default agent datacenter is used. | `""` | no
-`token` | `secret` | A per-request ACL token which overrides the Consul agent's default (empty) token. | `""` | no
-`namespace` | `string` | The name of the namespace to send along for the request. | `""` | no
-`meta` | `list(string)` | Allowlist of [Consul Metadata][] keys to use as resource attributes. | `[]` | no
+| Attribute | Type | Description | Default | Required |
+| ------------ | -------------- | --------------------------------------------------------------------------------- | ------- | -------- |
+| `address` | `string` | The address of the Consul server | `""` | no |
+| `datacenter` | `string` | Datacenter to use. If not provided, the default agent datacenter is used. | `""` | no |
+| `token` | `secret` | A per-request ACL token which overrides the Consul agent's default (empty) token. | `""` | no |
+| `namespace` | `string` | The name of the namespace to send along for the request. | `""` | no |
+| `meta` | `list(string)` | Allowlist of [Consul Metadata][] keys to use as resource attributes. | `[]` | no |

 `token` is only required if [Consul's ACL System][] is enabled.

@@ -401,19 +410,19 @@ Attribute | Type | Description

 The `consul` block supports the following blocks:

-Block | Description | Required
-----------------------------------------------------|----------------------------------------------|---------
-[resource_attributes](#consul--resource_attributes) | Configures which resource attributes to add.
| no +| Block | Description | Required | +| --------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#consul--resource_attributes) | Configures which resource attributes to add. | no | #### consul > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ------------------------------|------------------------------------------------------------------------------------------|--------- -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no -[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no -[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| ---------------------------- | ---------------------------------------------------------------------------------------- | -------- | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no | ### docker @@ -424,18 +433,18 @@ Docker detection does not work on MacOS. The `docker` block supports the following blocks: -Block | Description | Required -----------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#docker--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| --------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#docker--resource_attributes) | Configures which resource attributes to add. | no | #### docker > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ---------------------------|---------------------------------------------------------------------------------------|--------- -[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no -[os.type][res-attr-cfg] | Toggles the `os.type` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| ------------------------- | ------------------------------------------------------------------------------------- | -------- | +| [host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [os.type][res-attr-cfg] | Toggles the `os.type` resource attribute.
Sets `enabled` to `true` by default. | no | ### gcp @@ -449,61 +458,62 @@ Use the `gcp` detector regardless of the GCP platform {{< param "PRODUCT_ROOT_NA The `gcp` block supports the following blocks: -Block | Description | Required --------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#gcp--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------ | -------------------------------------------- | -------- | +| [resource_attributes](#gcp--resource_attributes) | Configures which resource attributes to add. | no | #### gcp > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ----------------------------------------------|----------------------------------------------------------------------------------------------------------|--------- -[cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.id][res-attr-cfg] | Toggles the `faas.id` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.instance][res-attr-cfg] | Toggles the `faas.instance` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.name][res-attr-cfg] | Toggles the `faas.name` resource attribute.
Sets `enabled` to `true` by default. | no -[faas.version][res-attr-cfg] | Toggles the `faas.version` resource attribute.
Sets `enabled` to `true` by default. | no -[gcp.cloud_run.job.execution][res-attr-cfg] | Toggles the `gcp.cloud_run.job.execution` resource attribute.
Sets `enabled` to `true` by default. | no -[gcp.cloud_run.job.task_index][res-attr-cfg] | Toggles the `gcp.cloud_run.job.task_index` resource attribute.
Sets `enabled` to `true` by default. | no -[gcp.gce.instance.hostname][res-attr-cfg] | Toggles the `gcp.gce.instance.hostname` resource attribute.
Sets `enabled` to `false` by default. | no -[gcp.gce.instance.name][res-attr-cfg] | Toggles the `gcp.gce.instance.name` resource attribute.
Sets `enabled` to `false` by default. | no -[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no -[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no -[host.type][res-attr-cfg] | Toggles the `host.type` resource attribute.
Sets `enabled` to `true` by default. | no -[k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| -------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -------- | +| [cloud.account.id][res-attr-cfg] | Toggles the `cloud.account.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.availability_zone][res-attr-cfg] | Toggles the `cloud.availability_zone` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.id][res-attr-cfg] | Toggles the `faas.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.instance][res-attr-cfg] | Toggles the `faas.instance` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.name][res-attr-cfg] | Toggles the `faas.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [faas.version][res-attr-cfg] | Toggles the `faas.version` resource attribute.
Sets `enabled` to `true` by default. | no | +| [gcp.cloud_run.job.execution][res-attr-cfg] | Toggles the `gcp.cloud_run.job.execution` resource attribute.
Sets `enabled` to `true` by default. | no | +| [gcp.cloud_run.job.task_index][res-attr-cfg] | Toggles the `gcp.cloud_run.job.task_index` resource attribute.
Sets `enabled` to `true` by default. | no | +| [gcp.gce.instance.hostname][res-attr-cfg] | Toggles the `gcp.gce.instance.hostname` resource attribute.
Sets `enabled` to `false` by default. | no | +| [gcp.gce.instance.name][res-attr-cfg] | Toggles the `gcp.gce.instance.name` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [host.type][res-attr-cfg] | Toggles the `host.type` resource attribute.
Sets `enabled` to `true` by default. | no | +| [k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `true` by default. | no | #### Google Compute Engine (GCE) metadata -* `cloud.provider`: `"gcp"` -* `cloud.platform`: `"gcp_compute_engine"` -* `cloud.account.id`: project id -* `cloud.region`: e.g. `"us-central1"` -* `cloud.availability_zone`: e.g. `"us-central1-c"` -* `host.id`: instance id -* `host.name`: instance name -* `host.type`: machine type -* (optional) `gcp.gce.instance.hostname` -* (optional) `gcp.gce.instance.name` +- `cloud.provider`: `"gcp"` +- `cloud.platform`: `"gcp_compute_engine"` +- `cloud.account.id`: project id +- `cloud.region`: e.g. `"us-central1"` +- `cloud.availability_zone`: e.g. `"us-central1-c"` +- `host.id`: instance id +- `host.name`: instance name +- `host.type`: machine type +- (optional) `gcp.gce.instance.hostname` +- (optional) `gcp.gce.instance.name` #### Google Kubernetes Engine (GKE) metadata -* `cloud.provider`: `"gcp"` -* `cloud.platform`: `"gcp_kubernetes_engine"` -* `cloud.account.id`: project id -* `cloud.region`: only for regional GKE clusters; e.g. `"us-central1"` -* `cloud.availability_zone`: only for zonal GKE clusters; e.g. `"us-central1-c"` -* `k8s.cluster.name` -* `host.id`: instance id -* `host.name`: instance name; only when workload identity is disabled +- `cloud.provider`: `"gcp"` +- `cloud.platform`: `"gcp_kubernetes_engine"` +- `cloud.account.id`: project id +- `cloud.region`: only for regional GKE clusters; e.g. `"us-central1"` +- `cloud.availability_zone`: only for zonal GKE clusters; e.g. `"us-central1-c"` +- `k8s.cluster.name` +- `host.id`: instance id +- `host.name`: instance name; only when workload identity is disabled One known issue happens when GKE workload identity is enabled. The GCE metadata endpoints won't be available, and the GKE resource detector won't be able to determine `host.name`. If this happens, you can set `host.name` from one of the following resources: + - Get the `node.name` through the [downward API][] with the `env` detector. - Get the Kubernetes node name from the Kubernetes API (with `k8s.io/client-go`). @@ -511,45 +521,45 @@ If this happens, you can set `host.name` from one of the following resources: #### Google Cloud Run Services metadata -* `cloud.provider`: `"gcp"` -* `cloud.platform`: `"gcp_cloud_run"` -* `cloud.account.id`: project id -* `cloud.region`: e.g. `"us-central1"` -* `faas.id`: instance id -* `faas.name`: service name -* `faas.version`: service revision +- `cloud.provider`: `"gcp"` +- `cloud.platform`: `"gcp_cloud_run"` +- `cloud.account.id`: project id +- `cloud.region`: e.g. `"us-central1"` +- `faas.id`: instance id +- `faas.name`: service name +- `faas.version`: service revision #### Cloud Run Jobs metadata -* `cloud.provider`: `"gcp"` -* `cloud.platform`: `"gcp_cloud_run"` -* `cloud.account.id`: project id -* `cloud.region`: e.g. `"us-central1"` -* `faas.id`: instance id -* `faas.name`: service name -* `gcp.cloud_run.job.execution`: e.g. `"my-service-ajg89"` -* `gcp.cloud_run.job.task_index`: e.g. `"0"` +- `cloud.provider`: `"gcp"` +- `cloud.platform`: `"gcp_cloud_run"` +- `cloud.account.id`: project id +- `cloud.region`: e.g. `"us-central1"` +- `faas.id`: instance id +- `faas.name`: service name +- `gcp.cloud_run.job.execution`: e.g. `"my-service-ajg89"` +- `gcp.cloud_run.job.task_index`: e.g. `"0"` #### Google Cloud Functions metadata -* `cloud.provider`: `"gcp"` -* `cloud.platform`: `"gcp_cloud_functions"` -* `cloud.account.id`: project id -* `cloud.region`: e.g. 
`"us-central1"` -* `faas.id`: instance id -* `faas.name`: function name -* `faas.version`: function version +- `cloud.provider`: `"gcp"` +- `cloud.platform`: `"gcp_cloud_functions"` +- `cloud.account.id`: project id +- `cloud.region`: e.g. `"us-central1"` +- `faas.id`: instance id +- `faas.name`: function name +- `faas.version`: function version #### Google App Engine metadata -* `cloud.provider`: `"gcp"` -* `cloud.platform`: `"gcp_app_engine"` -* `cloud.account.id`: project id -* `cloud.region`: e.g. `"us-central1"` -* `cloud.availability_zone`: e.g. `"us-central1-c"` -* `faas.id`: instance id -* `faas.name`: service name -* `faas.version`: service version +- `cloud.provider`: `"gcp"` +- `cloud.platform`: `"gcp_app_engine"` +- `cloud.account.id`: project id +- `cloud.region`: e.g. `"us-central1"` +- `cloud.availability_zone`: e.g. `"us-central1-c"` +- `faas.id`: instance id +- `faas.name`: service name +- `faas.version`: service version ### heroku @@ -557,30 +567,30 @@ The `heroku` block adds resource attributes derived from [Heroku dyno metadata][ The `heroku` block supports the following blocks: -Block | Description | Required -----------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#heroku--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| --------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#heroku--resource_attributes) | Configures which resource attributes to add. | no | #### heroku > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ---------------------------------------------------|---------------------------------------------------------------------------------------------------------------|--------- -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[heroku.app.id][res-attr-cfg] | Toggles the `heroku.app.id` resource attribute.
Sets `enabled` to `true` by default. | no -[heroku.dyno.id][res-attr-cfg] | Toggles the `heroku.dyno.id` resource attribute.
Sets `enabled` to `true` by default. | no -[heroku.release.commit][res-attr-cfg] | Toggles the `heroku.release.commit` resource attribute.
Sets `enabled` to `true` by default. | no -[heroku.release.creation_timestamp][res-attr-cfg] | Toggles the `heroku.release.creation_timestamp` resource attribute.
Sets `enabled` to `true` by default. | no -[service.instance.id][res-attr-cfg] | Toggles the `service.instance.id` resource attribute.
Sets `enabled` to `true` by default. | no -[service.name][res-attr-cfg] | Toggles the `service.name` resource attribute.
Sets `enabled` to `true` by default. | no -[service.version][res-attr-cfg] | Toggles the `service.version` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| ------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -------- | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [heroku.app.id][res-attr-cfg] | Toggles the `heroku.app.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [heroku.dyno.id][res-attr-cfg] | Toggles the `heroku.dyno.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [heroku.release.commit][res-attr-cfg] | Toggles the `heroku.release.commit` resource attribute.
Sets `enabled` to `true` by default. | no | +| [heroku.release.creation_timestamp][res-attr-cfg] | Toggles the `heroku.release.creation_timestamp` resource attribute.
Sets `enabled` to `true` by default. | no | +| [service.instance.id][res-attr-cfg] | Toggles the `service.instance.id` resource attribute.
Sets `enabled` to `true` by default. | no | +| [service.name][res-attr-cfg] | Toggles the `service.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [service.version][res-attr-cfg] | Toggles the `service.version` resource attribute.
Sets `enabled` to `true` by default. | no | When [Heroku dyno metadata][] is active, Heroku applications publish information through environment variables. We map these environment variables to resource attributes as follows: | Dyno metadata environment variable | Resource attribute | -|------------------------------------|-------------------------------------| +| ---------------------------------- | ----------------------------------- | | `HEROKU_APP_ID` | `heroku.app.id` | | `HEROKU_APP_NAME` | `service.name` | | `HEROKU_DYNO_ID` | `service.instance.id` | @@ -606,47 +616,48 @@ Use the [Docker](#docker) detector if running {{< param "PRODUCT_ROOT_NAME" >}} The `system` block supports the following attributes: -Attribute | Type | Description | Default | Required ------------------- | --------------- | --------------------------------------------------------------------------- |---------------- | -------- -`hostname_sources` | `list(string)` | A priority list of sources from which the hostname will be fetched. | `["dns", "os"]` | no +| Attribute | Type | Description | Default | Required | +| ------------------ | -------------- | ------------------------------------------------------------------- | --------------- | -------- | +| `hostname_sources` | `list(string)` | A priority list of sources from which the hostname will be fetched. | `["dns", "os"]` | no | The valid options for `hostname_sources` are: -* `"dns"`: Uses multiple sources to get the fully qualified domain name. -Firstly, it looks up the host name in the local machine's `hosts` file. If that fails, it looks up the CNAME. -Lastly, if that fails, it does a reverse DNS query. Note: this hostname source may produce unreliable results on Windows. -To produce a FQDN, Windows hosts might have better results using the "lookup" hostname source, which is mentioned below. -* `"os"`: Provides the hostname provided by the local machine's kernel. -* `"cname"`: Provides the canonical name, as provided by `net.LookupCNAME` in the Go standard library. -Note: this hostname source may produce unreliable results on Windows. -* `"lookup"`: Does a reverse DNS lookup of the current host's IP address. + +- `"dns"`: Uses multiple sources to get the fully qualified domain name. + Firstly, it looks up the host name in the local machine's `hosts` file. If that fails, it looks up the CNAME. + Lastly, if that fails, it does a reverse DNS query. Note: this hostname source may produce unreliable results on Windows. + To produce a FQDN, Windows hosts might have better results using the "lookup" hostname source, which is mentioned below. +- `"os"`: Provides the hostname provided by the local machine's kernel. +- `"cname"`: Provides the canonical name, as provided by `net.LookupCNAME` in the Go standard library. + Note: this hostname source may produce unreliable results on Windows. +- `"lookup"`: Does a reverse DNS lookup of the current host's IP address. In case of an error in fetching a hostname from a source, the next source from the list of `hostname_sources` will be considered. The `system` block supports the following blocks: -Block | Description | Required -----------------------------------------------------|----------------------------------------------|--------- -[resource_attributes](#system--resource_attributes) | Configures which resource attributes to add. 
| no +| Block | Description | Required | +| --------------------------------------------------- | -------------------------------------------- | -------- | +| [resource_attributes](#system--resource_attributes) | Configures which resource attributes to add. | no | #### system > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ----------------------------------------|-----------------------------------------------------------------------------------------------------|--------- -[host.arch][res-attr-cfg] | Toggles the `host.arch` resource attribute.
Sets `enabled` to `false` by default. | no -[host.cpu.cache.l2.size][res-attr-cfg] | Toggles the `host.cpu.cache.l2.size` resource attribute.
Sets `enabled` to `false` by default. | no -[host.cpu.family][res-attr-cfg] | Toggles the `host.cpu.family` resource attribute.
Sets `enabled` to `false` by default. | no -[host.cpu.model.id][res-attr-cfg] | Toggles the `host.cpu.model.id` resource attribute.
Sets `enabled` to `false` by default. | no -[host.cpu.model.name][res-attr-cfg] | Toggles the `host.cpu.model.name` resource attribute.
Sets `enabled` to `false` by default. | no -[host.cpu.stepping][res-attr-cfg] | Toggles the `host.cpu.stepping` resource attribute.
Sets `enabled` to `false` by default. | no -[host.cpu.vendor.id][res-attr-cfg] | Toggles the `host.cpu.vendor.id` resource attribute.
Sets `enabled` to `false` by default. | no -[host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `false` by default. | no -[host.ip][res-attr-cfg] | Toggles the `host.ip` resource attribute.
Sets `enabled` to `false` by default. | no -[host.mac][res-attr-cfg] | Toggles the `host.mac` resource attribute.
Sets `enabled` to `false` by default. | no -[host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no -[os.description][res-attr-cfg] | Toggles the `os.description` resource attribute.
Sets `enabled` to `false` by default. | no -[os.type][res-attr-cfg] | Toggles the `os.type` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| -------------------------------------- | --------------------------------------------------------------------------------------------------- | -------- | +| [host.arch][res-attr-cfg] | Toggles the `host.arch` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.cpu.cache.l2.size][res-attr-cfg] | Toggles the `host.cpu.cache.l2.size` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.cpu.family][res-attr-cfg] | Toggles the `host.cpu.family` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.cpu.model.id][res-attr-cfg] | Toggles the `host.cpu.model.id` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.cpu.model.name][res-attr-cfg] | Toggles the `host.cpu.model.name` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.cpu.stepping][res-attr-cfg] | Toggles the `host.cpu.stepping` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.cpu.vendor.id][res-attr-cfg] | Toggles the `host.cpu.vendor.id` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.id][res-attr-cfg] | Toggles the `host.id` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.ip][res-attr-cfg] | Toggles the `host.ip` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.mac][res-attr-cfg] | Toggles the `host.mac` resource attribute.
Sets `enabled` to `false` by default. | no | +| [host.name][res-attr-cfg] | Toggles the `host.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [os.description][res-attr-cfg] | Toggles the `os.description` resource attribute.
Sets `enabled` to `false` by default. | no | +| [os.type][res-attr-cfg] | Toggles the `os.type` resource attribute.
Sets `enabled` to `true` by default. | no | ### openshift @@ -654,10 +665,10 @@ The `openshift` block queries the OpenShift and Kubernetes APIs to retrieve vari The `openshift` block supports the following attributes: -Attribute | Type | Description | Default | Required ----------- |---------- | ------------------------------------------------------- |-------------| -------- -`address` | `string` | Address of the OpenShift API server. | _See below_ | no -`token` | `string` | Token used to identify against the OpenShift API server.| "" | no +| Attribute | Type | Description | Default | Required | +| --------- | -------- | -------------------------------------------------------- | ----------- | -------- | +| `address` | `string` | Address of the OpenShift API server. | _See below_ | no | +| `token` | `string` | Token used to identify against the OpenShift API server. | "" | no | The "get", "watch", and "list" permissions are required: @@ -666,9 +677,9 @@ kind: ClusterRole metadata: name: grafana-agent rules: -- apiGroups: ["config.openshift.io"] - resources: ["infrastructures", "infrastructures/status"] - verbs: ["get", "watch", "list"] + - apiGroups: ["config.openshift.io"] + resources: ["infrastructures", "infrastructures/status"] + verbs: ["get", "watch", "list"] ``` By default, the API address is determined from the environment variables `KUBERNETES_SERVICE_HOST`, @@ -678,10 +689,10 @@ The determination of the API address, `ca_file`, and the service token is skippe The `openshift` block supports the following blocks: -Block | Description | Required ----------------------------------------------- | ---------------------------------------------------- | -------- -[resource_attributes](#openshift--resource_attributes) | Configures which resource attributes to add. | no -[tls](#openshift--tls) | TLS settings for the connection with the OpenShift API. | yes +| Block | Description | Required | +| ------------------------------------------------------ | ------------------------------------------------------- | -------- | +| [resource_attributes](#openshift--resource_attributes) | Configures which resource attributes to add. | no | +| [tls](#openshift--tls) | TLS settings for the connection with the OpenShift API. | yes | #### openshift > tls @@ -694,12 +705,12 @@ server. The `resource_attributes` block supports the following blocks: -Block | Description | Required ---------------------------------- | --------------------------------------------------------------------------------------------- | -------- -[cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no -[cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no -[k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| -------------------------------- | -------------------------------------------------------------------------------------------- | -------- | +| [cloud.platform][res-attr-cfg] | Toggles the `cloud.platform` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.provider][res-attr-cfg] | Toggles the `cloud.provider` resource attribute.
Sets `enabled` to `true` by default. | no | +| [cloud.region][res-attr-cfg] | Toggles the `cloud.region` resource attribute.
Sets `enabled` to `true` by default. | no | +| [k8s.cluster.name][res-attr-cfg] | Toggles the `k8s.cluster.name` resource attribute.
Sets `enabled` to `true` by default. | no | ### kubernetes_node @@ -707,11 +718,11 @@ The `kubernetes_node` block queries the Kubernetes API server to retrieve variou The `kubernetes_node` block supports the following attributes: -Attribute | Type | Description | Default | Required -------------------- |--------- | ------------------------------------------------------------------------- |------------------ | -------- -`auth_type` | `string` | Configures how to authenticate to the K8s API server. | `"none"` | no -`context` | `string` | Override the current context when `auth_type` is set to `"kubeConfig"`. | `""` | no -`node_from_env_var` | `string` | The name of an environment variable from which to retrieve the node name. | `"K8S_NODE_NAME"` | no +| Attribute | Type | Description | Default | Required | +| ------------------- | -------- | ------------------------------------------------------------------------- | ----------------- | -------- | +| `auth_type` | `string` | Configures how to authenticate to the K8s API server. | `"none"` | no | +| `context` | `string` | Override the current context when `auth_type` is set to `"kubeConfig"`. | `""` | no | +| `node_from_env_var` | `string` | The name of an environment variable from which to retrieve the node name. | `"K8S_NODE_NAME"` | no | The "get" and "list" permissions are required: @@ -726,24 +737,25 @@ rules: ``` `auth_type` can be set to one of the following: -* `none`: no authentication. -* `serviceAccount`: use the standard service account token provided to the {{< param "PRODUCT_ROOT_NAME" >}} pod. -* `kubeConfig`: use credentials from `~/.kube/config`. + +- `none`: no authentication. +- `serviceAccount`: use the standard service account token provided to the {{< param "PRODUCT_ROOT_NAME" >}} pod. +- `kubeConfig`: use credentials from `~/.kube/config`. The `kubernetes_node` block supports the following blocks: -Block | Description | Required ----------------------------------------------- | ------------------------------------------------- | -------- -[resource_attributes](#kubernetes_node--resource_attributes) | Configures which resource attributes to add. | no +| Block | Description | Required | +| ------------------------------------------------------------ | -------------------------------------------- | -------- | +| [resource_attributes](#kubernetes_node--resource_attributes) | Configures which resource attributes to add. | no | #### kubernetes_node > resource_attributes The `resource_attributes` block supports the following blocks: -Block | Description | Required ------------------------------- | ------------------------------------------------------------------------------------------ | -------- -[k8s.node.name][res-attr-cfg] | Toggles the `k8s.node.name` resource attribute.
Sets `enabled` to `true` by default. | no -[k8s.node.uid][res-attr-cfg] | Toggles the `k8s.node.uid` resource attribute.
Sets `enabled` to `true` by default. | no +| Block | Description | Required | +| ----------------------------- | ----------------------------------------------------------------------------------------- | -------- | +| [k8s.node.name][res-attr-cfg] | Toggles the `k8s.node.name` resource attribute.
Sets `enabled` to `true` by default. | no | +| [k8s.node.uid][res-attr-cfg] | Toggles the `k8s.node.uid` resource attribute.
Sets `enabled` to `true` by default. | no | ## Common configuration @@ -756,9 +768,9 @@ For example, some resource attributes have `enabled` set to `true` by default, w The following attributes are supported: -Attribute | Type | Description | Default | Required ---------- | ------- | ----------------------------------------------------------------------------------- |------------- | -------- -`enabled` | `bool` | Toggles whether to add the resource attribute to the span, log, or metric resource. | _See below_ | no +| Attribute | Type | Description | Default | Required | +| --------- | ------ | ----------------------------------------------------------------------------------- | ----------- | -------- | +| `enabled` | `bool` | Toggles whether to add the resource attribute to the span, log, or metric resource. | _See below_ | no | To see the default value for `enabled`, refer to the tables in the sections above which list the resource attributes blocks. The "Description" column will state either... @@ -773,14 +785,15 @@ The "Description" column will state either... The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` OTLP-formatted data for any telemetry signal of these types: -* logs -* metrics -* traces + +- logs +- metrics +- traces ## Component health @@ -895,11 +908,11 @@ otelcol.processor.resourcedetection "default" { You need to add this to your workload: ```yaml - env: - - name: K8S_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName +env: + - name: K8S_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName ``` ### kubernetes_node with a custom environment variable @@ -924,12 +937,13 @@ otelcol.processor.resourcedetection "default" { You need to add this to your workload: ```yaml - env: - - name: my_custom_var - valueFrom: - fieldRef: - fieldPath: spec.nodeName +env: + - name: my_custom_var + valueFrom: + fieldRef: + fieldPath: spec.nodeName ``` + ## Compatible components @@ -947,4 +961,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. 
{{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.processor.span.md b/docs/sources/flow/reference/components/otelcol.processor.span.md index 71c7357fec82..e6113369508c 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.span.md +++ b/docs/sources/flow/reference/components/otelcol.processor.span.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.span/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.span/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.span/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.span/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.span/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.span/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.span/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.span/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.span/ description: Learn about otelcol.processor.span labels: @@ -17,7 +17,7 @@ title: otelcol.processor.span `otelcol.processor.span` accepts traces telemetry data from other `otelcol` components and modifies the names and attributes of the spans. -It also supports the ability to filter input data to determine if +It also supports the ability to filter input data to determine if it should be included or excluded from this processor. > **NOTE**: `otelcol.processor.span` is a wrapper over the upstream @@ -81,28 +81,29 @@ If both an `include` block and an `exclude`block are specified, the `include` pr ### name block -The `name` block configures how to rename a span and add attributes. +The `name` block configures how to rename a span and add attributes. The following attributes are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`from_attributes` | `list(string)` | Attribute keys to pull values from, to generate a new span name. | `[]` | no -`separator` | `string` | Separates attributes values in the new span name. | `""` | no +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | ---------------------------------------------------------------- | ------- | -------- | +| `from_attributes` | `list(string)` | Attribute keys to pull values from, to generate a new span name. | `[]` | no | +| `separator` | `string` | Separates attributes values in the new span name. | `""` | no | Firstly `from_attributes` rules are applied, then [to-attributes][] are applied. At least one of these 2 fields must be set. `from_attributes` represents the attribute keys to pull the values from to generate the new span name: -* All attribute keys are required in the span to rename a span. -If any attribute is missing from the span, no rename will occur. -* The new span name is constructed in order of the `from_attributes` -specified in the configuration. + +- All attribute keys are required in the span to rename a span. + If any attribute is missing from the span, no rename will occur. +- The new span name is constructed in order of the `from_attributes` + specified in the configuration. `separator` is the string used to separate attributes values in the new span name. 
If no value is set, no separator is used between attribute -values. `separator` is used with `from_attributes` only; +values. `separator` is used with `from_attributes` only; it is not used with [to-attributes][]. ### to_attributes block @@ -111,18 +112,19 @@ The `to_attributes` block configures how to create attributes from a span name. The following attributes are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`rules` | `list(string)` | A list of regex rules to extract attribute values from span name. | | yes -`break_after_match` | `bool` | Configures if processing of rules should stop after the first match. | `false` | no +| Name | Type | Description | Default | Required | +| ------------------- | -------------- | -------------------------------------------------------------------- | ------- | -------- | +| `rules` | `list(string)` | A list of regex rules to extract attribute values from span name. | | yes | +| `break_after_match` | `bool` | Configures if processing of rules should stop after the first match. | `false` | no | Each rule in the `rules` list is a regex pattern string. -1. The span name is checked against each regex in the list. -2. If it matches, then all named subexpressions of the regex are extracted as attributes and are added to the span. -3. Each subexpression name becomes an attribute name and the subexpression matched portion becomes the attribute value. -4. The matched portion in the span name is replaced by extracted attribute name. -5. If the attributes already exist in the span then they will be overwritten. -6. The process is repeated for all rules in the order they are specified. + +1. The span name is checked against each regex in the list. +2. If it matches, then all named subexpressions of the regex are extracted as attributes and are added to the span. +3. Each subexpression name becomes an attribute name and the subexpression matched portion becomes the attribute value. +4. The matched portion in the span name is replaced by extracted attribute name. +5. If the attributes already exist in the span then they will be overwritten. +6. The process is repeated for all rules in the order they are specified. 7. Each subsequent rule works on the span name that is the output after processing the previous rule. `break_after_match` specifies if processing of rules should stop after the first @@ -135,58 +137,59 @@ The `status` block specifies a status which should be set for this span. The following attributes are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`code` | `string` | A status code. | | yes -`description` | `string` | An optional field documenting Error status codes. | `""` | no +| Name | Type | Description | Default | Required | +| ------------- | -------- | ------------------------------------------------- | ------- | -------- | +| `code` | `string` | A status code. | | yes | +| `description` | `string` | An optional field documenting Error status codes. | `""` | no | The supported values for `code` are: -* `Ok` -* `Error` -* `Unset` + +- `Ok` +- `Error` +- `Unset` `description` should only be specified if `code` is set to `Error`. ### include block -The `include` block provides an option to include data being fed into the +The `include` block provides an option to include data being fed into the [name][] and [status][] blocks based on the properties of a span. 
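As a quick orientation before the argument reference below, here is a minimal sketch of an `include` block paired with a `name` block. The service-name pattern, the attribute keys, and the `otelcol.exporter.otlp.default` target are illustrative assumptions, not values defined by this component:

```river
otelcol.processor.span "default" {
  // Only spans from services whose name matches this regular expression
  // are renamed; all other spans pass through unchanged.
  include {
    match_type = "regexp"
    services   = ["payment-.*"]
  }

  // Rebuild the span name from these two attribute values, joined by "::".
  name {
    from_attributes = ["http.method", "http.route"]
    separator       = "::"
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```

Because `match_type` is set to `"regexp"`, a span matches when at least one entry in `services` matches its service name.
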
The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`match_type` | `string` | Controls how items to match against are interpreted. | | yes -`services` | `list(string)` | A list of items to match the service name against. | `[]` | no -`span_names` | `list(string)` | A list of items to match the span name against. | `[]` | no -`span_kinds` | `list(string)` | A list of items to match the span kind against. | `[]` | no +| Name | Type | Description | Default | Required | +| ------------ | -------------- | ---------------------------------------------------- | ------- | -------- | +| `match_type` | `string` | Controls how items to match against are interpreted. | | yes | +| `services` | `list(string)` | A list of items to match the service name against. | `[]` | no | +| `span_names` | `list(string)` | A list of items to match the span name against. | `[]` | no | +| `span_kinds` | `list(string)` | A list of items to match the span kind against. | `[]` | no | `match_type` is required and must be set to either `"regexp"` or `"strict"`. A match occurs if at least one item in the lists matches. -One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified +One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified with a non-empty value for a valid configuration. ### exclude block -The `exclude` block provides an option to exclude data from being fed into the +The `exclude` block provides an option to exclude data from being fed into the [name][] and [status][] blocks based on the properties of a span. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`match_type` | `string` | Controls how items to match against are interpreted. | | yes -`services` | `list(string)` | A list of items to match the service name against. | `[]` | no -`span_names` | `list(string)` | A list of items to match the span name against. | `[]` | no -`span_kinds` | `list(string)` | A list of items to match the span kind against. | `[]` | no +| Name | Type | Description | Default | Required | +| ------------ | -------------- | ---------------------------------------------------- | ------- | -------- | +| `match_type` | `string` | Controls how items to match against are interpreted. | | yes | +| `services` | `list(string)` | A list of items to match the service name against. | `[]` | no | +| `span_names` | `list(string)` | A list of items to match the span name against. | `[]` | no | +| `span_kinds` | `list(string)` | A list of items to match the span kind against. | `[]` | no | `match_type` is required and must be set to either `"regexp"` or `"strict"`. A match occurs if at least one item in the lists matches. -One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified +One of `services`, `span_names`, `span_kinds`, [attribute][], [resource][], or [library][] must be specified with a non-empty value for a valid configuration. ### regexp block @@ -213,11 +216,11 @@ with a non-empty value for a valid configuration. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. 
+| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | -`input` accepts `otelcol.Consumer` OTLP-formatted data for traces telemetry signals. +`input` accepts `otelcol.Consumer` OTLP-formatted data for traces telemetry signals. Logs and metrics are not supported. ## Component health @@ -235,7 +238,7 @@ information. ### Creating a new span name from attribute values This example creates a new span name from the values of attributes `db.svc`, -`operation`, and `id`, in that order, separated by the value `::`. +`operation`, and `id`, in that order, separated by the value `::`. All attribute keys need to be specified in the span for the processor to rename it. ```river @@ -253,20 +256,22 @@ otelcol.processor.span "default" { For a span with the following attributes key/value pairs, the above Flow configuration will change the span name to `"location::get::1234"`: + ```json -{ - "db.svc": "location", - "operation": "get", +{ + "db.svc": "location", + "operation": "get", "id": "1234" } ``` -For a span with the following attributes key/value pairs, the above -Flow configuration will not change the span name. +For a span with the following attributes key/value pairs, the above +Flow configuration will not change the span name. This is because the attribute key `operation` isn't set: + ```json -{ - "db.svc": "location", +{ + "db.svc": "location", "id": "1234" } ``` @@ -287,10 +292,11 @@ otelcol.processor.span "default" { For a span with the following attributes key/value pairs, the above Flow configuration will change the span name to `"locationget1234"`: + ```json -{ - "db.svc": "location", - "operation": "get", +{ + "db.svc": "location", + "operation": "get", "id": "1234" } ``` @@ -298,6 +304,7 @@ Flow configuration will change the span name to `"locationget1234"`: ### Renaming a span name and adding attributes Example input and output using the Flow configuration below: + 1. Let's assume input span name is `/api/v1/document/12345678/update` 2. The span name will be changed to `/api/v1/document/{documentId}/update` 3. A new attribute `"documentId"="12345678"` will be added to the span. @@ -321,6 +328,7 @@ otelcol.processor.span "default" { This example renames the span name to `{operation_website}` and adds the attribute `{Key: operation_website, Value: }` if the span has the following properties: + - Service name contains the word `banks`. - The span name contains `/` anywhere in the string. - The span name is not `donot/change`. @@ -367,7 +375,7 @@ otelcol.processor.span "default" { ### Setting a status depending on an attribute value -This example sets the status to success only when attribute `http.status_code` +This example sets the status to success only when attribute `http.status_code` is equal to `400`. ```river @@ -388,6 +396,7 @@ otelcol.processor.span "default" { } } ``` + ## Compatible components @@ -405,4 +414,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. 
{{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md b/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md index baeb5593db53..6603dac221f3 100644 --- a/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md +++ b/docs/sources/flow/reference/components/otelcol.processor.tail_sampling.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.tail_sampling/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.tail_sampling/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.tail_sampling/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.tail_sampling/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.tail_sampling/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.tail_sampling/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.tail_sampling/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.tail_sampling/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.tail_sampling/ description: Learn about otelcol.processor.tail_sampling labels: @@ -16,7 +16,7 @@ title: otelcol.processor.tail_sampling {{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}} `otelcol.processor.tail_sampling` samples traces based on a set of defined -policies. All spans for a given trace *must* be received by the same collector +policies. All spans for a given trace _must_ be received by the same collector instance for effective sampling decisions. The `tail_sampling` component uses both soft and hard limits, where the hard limit @@ -53,11 +53,11 @@ otelcol.processor.tail_sampling "LABEL" { `otelcol.processor.tail_sampling` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`decision_wait` | `duration` | Wait time since the first span of a trace before making a sampling decision. | `"30s"` | no -`num_traces` | `int` | Number of traces kept in memory. | `50000` | no -`expected_new_traces_per_sec` | `int` | Expected number of new traces (helps in allocating data structures). | `0` | no +| Name | Type | Description | Default | Required | +| ----------------------------- | ---------- | ---------------------------------------------------------------------------- | ------- | -------- | +| `decision_wait` | `duration` | Wait time since the first span of a trace before making a sampling decision. | `"30s"` | no | +| `num_traces` | `int` | Number of traces kept in memory. | `50000` | no | +| `expected_new_traces_per_sec` | `int` | Expected number of new traces (helps in allocating data structures). | `0` | no | `decision_wait` determines the number of batches to maintain on a channel. Its value must convert to a number of seconds greater than zero. @@ -70,44 +70,44 @@ Name | Type | Description | Default | Required The following blocks are supported inside the definition of `otelcol.processor.tail_sampling`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -policy | [policy] [] | Policies used to make a sampling decision. 
| yes -policy > latency | [latency] | The policy will sample based on the duration of the trace. | no -policy > numeric_attribute | [numeric_attribute] | The policy will sample based on number attributes (resource and record). | no -policy > probabilistic | [probabilistic] | The policy will sample a percentage of traces. | no -policy > status_code | [status_code] | The policy will sample based upon the status code. | no -policy > string_attribute | [string_attribute] | The policy will sample based on string attributes (resource and record) value matches. | no -policy > rate_limiting | [rate_limiting] | The policy will sample based on rate. | no -policy > span_count | [span_count] | The policy will sample based on the minimum number of spans within a batch. | no -policy > boolean_attribute | [boolean_attribute] | The policy will sample based on a boolean attribute (resource and record). | no -policy > ottl_condition | [ottl_condition] | The policy will sample based on a given boolean OTTL condition (span and span event).| no -policy > trace_state | [trace_state] | The policy will sample based on TraceState value matches. | no -policy > and | [and] | The policy will sample based on multiple policies, creates an `and` policy. | no -policy > and > and_sub_policy | [and_sub_policy] [] | A set of policies underneath an `and` policy type. | no -policy > and > and_sub_policy > latency | [latency] | The policy will sample based on the duration of the trace. | no -policy > and > and_sub_policy > numeric_attribute | [numeric_attribute] | The policy will sample based on number attributes (resource and record). | no -policy > and > and_sub_policy > probabilistic | [probabilistic] | The policy will sample a percentage of traces. | no -policy > and > and_sub_policy > status_code | [status_code] | The policy will sample based upon the status code. | no -policy > and > and_sub_policy > string_attribute | [string_attribute] | The policy will sample based on string attributes (resource and record) value matches. | no -policy > and > and_sub_policy > rate_limiting | [rate_limiting] | The policy will sample based on rate. | no -policy > and > and_sub_policy > span_count | [span_count] | The policy will sample based on the minimum number of spans within a batch. | no -policy > and > and_sub_policy > boolean_attribute | [boolean_attribute] | The policy will sample based on a boolean attribute (resource and record). | no -policy > and > and_sub_policy > ottl_condition | [ottl_condition] | The policy will sample based on a given boolean OTTL condition (span and span event). | no -policy > and > and_sub_policy > trace_state | [trace_state] | The policy will sample based on TraceState value matches. | no -policy > composite | [composite] | This policy will sample based on a combination of above samplers, with ordering and rate allocation per sampler. | no -policy > composite > composite_sub_policy | [composite_sub_policy] [] | A set of policies underneath a `composite` policy type. | no -policy > composite > composite_sub_policy > latency | [latency] | The policy will sample based on the duration of the trace. | no -policy > composite > composite_sub_policy > numeric_attribute | [numeric_attribute] | The policy will sample based on number attributes (resource and record). | no -policy > composite > composite_sub_policy > probabilistic | [probabilistic] | The policy will sample a percentage of traces. 
| no -policy > composite > composite_sub_policy > status_code | [status_code] | The policy will sample based upon the status code. | no -policy > composite > composite_sub_policy > string_attribute | [string_attribute] | The policy will sample based on string attributes (resource and record) value matches. | no -policy > composite > composite_sub_policy > rate_limiting | [rate_limiting] | The policy will sample based on rate. | no -policy > composite > composite_sub_policy > span_count | [span_count] | The policy will sample based on the minimum number of spans within a batch. | no -policy > composite > composite_sub_policy > boolean_attribute | [boolean_attribute] | The policy will sample based on a boolean attribute (resource and record). | no -policy > composite > composite_sub_policy > ottl_condition | [ottl_condition] | The policy will sample based on a given boolean OTTL condition (span and span event). | no -policy > composite > composite_sub_policy > trace_state | [trace_state] | The policy will sample based on TraceState value matches. | no -output | [output] [] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| ------------------------------------------------------------- | ------------------------- | ---------------------------------------------------------------------------------------------------------------- | -------- | +| policy | [policy] [] | Policies used to make a sampling decision. | yes | +| policy > latency | [latency] | The policy will sample based on the duration of the trace. | no | +| policy > numeric_attribute | [numeric_attribute] | The policy will sample based on number attributes (resource and record). | no | +| policy > probabilistic | [probabilistic] | The policy will sample a percentage of traces. | no | +| policy > status_code | [status_code] | The policy will sample based upon the status code. | no | +| policy > string_attribute | [string_attribute] | The policy will sample based on string attributes (resource and record) value matches. | no | +| policy > rate_limiting | [rate_limiting] | The policy will sample based on rate. | no | +| policy > span_count | [span_count] | The policy will sample based on the minimum number of spans within a batch. | no | +| policy > boolean_attribute | [boolean_attribute] | The policy will sample based on a boolean attribute (resource and record). | no | +| policy > ottl_condition | [ottl_condition] | The policy will sample based on a given boolean OTTL condition (span and span event). | no | +| policy > trace_state | [trace_state] | The policy will sample based on TraceState value matches. | no | +| policy > and | [and] | The policy will sample based on multiple policies, creates an `and` policy. | no | +| policy > and > and_sub_policy | [and_sub_policy] [] | A set of policies underneath an `and` policy type. | no | +| policy > and > and_sub_policy > latency | [latency] | The policy will sample based on the duration of the trace. | no | +| policy > and > and_sub_policy > numeric_attribute | [numeric_attribute] | The policy will sample based on number attributes (resource and record). | no | +| policy > and > and_sub_policy > probabilistic | [probabilistic] | The policy will sample a percentage of traces. | no | +| policy > and > and_sub_policy > status_code | [status_code] | The policy will sample based upon the status code. 
| no | +| policy > and > and_sub_policy > string_attribute | [string_attribute] | The policy will sample based on string attributes (resource and record) value matches. | no | +| policy > and > and_sub_policy > rate_limiting | [rate_limiting] | The policy will sample based on rate. | no | +| policy > and > and_sub_policy > span_count | [span_count] | The policy will sample based on the minimum number of spans within a batch. | no | +| policy > and > and_sub_policy > boolean_attribute | [boolean_attribute] | The policy will sample based on a boolean attribute (resource and record). | no | +| policy > and > and_sub_policy > ottl_condition | [ottl_condition] | The policy will sample based on a given boolean OTTL condition (span and span event). | no | +| policy > and > and_sub_policy > trace_state | [trace_state] | The policy will sample based on TraceState value matches. | no | +| policy > composite | [composite] | This policy will sample based on a combination of above samplers, with ordering and rate allocation per sampler. | no | +| policy > composite > composite_sub_policy | [composite_sub_policy] [] | A set of policies underneath a `composite` policy type. | no | +| policy > composite > composite_sub_policy > latency | [latency] | The policy will sample based on the duration of the trace. | no | +| policy > composite > composite_sub_policy > numeric_attribute | [numeric_attribute] | The policy will sample based on number attributes (resource and record). | no | +| policy > composite > composite_sub_policy > probabilistic | [probabilistic] | The policy will sample a percentage of traces. | no | +| policy > composite > composite_sub_policy > status_code | [status_code] | The policy will sample based upon the status code. | no | +| policy > composite > composite_sub_policy > string_attribute | [string_attribute] | The policy will sample based on string attributes (resource and record) value matches. | no | +| policy > composite > composite_sub_policy > rate_limiting | [rate_limiting] | The policy will sample based on rate. | no | +| policy > composite > composite_sub_policy > span_count | [span_count] | The policy will sample based on the minimum number of spans within a batch. | no | +| policy > composite > composite_sub_policy > boolean_attribute | [boolean_attribute] | The policy will sample based on a boolean attribute (resource and record). | no | +| policy > composite > composite_sub_policy > ottl_condition | [ottl_condition] | The policy will sample based on a given boolean OTTL condition (span and span event). | no | +| policy > composite > composite_sub_policy > trace_state | [trace_state] | The policy will sample based on TraceState value matches. | no | +| output | [output] [] | Configures where to send received telemetry data. | yes | [policy]: #policy-block [latency]: #latency-block @@ -125,6 +125,7 @@ output | [output] [] | Co [composite]: #composite-block [composite_sub_policy]: #composite_sub_policy-block [output]: #output-block + [otelcol.exporter.otlp]: {{< relref "./otelcol.exporter.otlp.md" >}} ### policy block @@ -133,17 +134,17 @@ The `policy` block configures a sampling policy used by the component. At least The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | The custom name given to the policy. | | yes -`type` | `string` | The valid policy type for this policy. 
| | yes
+| Name   | Type     | Description                             | Default | Required |
+| ------ | -------- | --------------------------------------- | ------- | -------- |
+| `name` | `string` | The custom name given to the policy.    |         | yes      |
+| `type` | `string` | The valid policy type for this policy.  |         | yes      |

Each policy results in a decision, and the processor evaluates them to make a final decision:

- When there's an "inverted not sample" decision, the trace is not sampled.
- When there's a "sample" decision, the trace is sampled.
- When there's an "inverted sample" decision and no "not sample" decisions, the trace is sampled.
-- In all other cases, the trace is *not* sampled.
+- In all other cases, the trace is _not_ sampled.

An "inverted" decision is one made based on the "invert_match" attribute, such as the one from the `string_attribute` policy.

@@ -153,10 +154,10 @@ The `latency` block configures a policy of type `latency`. The policy samples ba

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`threshold_ms` | `number` | Lower latency threshold for sampling, in milliseconds. | | yes
-`upper_threshold_ms` | `number` | Upper latency threshold for sampling, in milliseconds. | `0` | no
+| Name                 | Type     | Description                                             | Default | Required |
+| -------------------- | -------- | ------------------------------------------------------- | ------- | -------- |
+| `threshold_ms`       | `number` | Lower latency threshold for sampling, in milliseconds.  |         | yes      |
+| `upper_threshold_ms` | `number` | Upper latency threshold for sampling, in milliseconds.  | `0`     | no       |

For a trace to be sampled, its latency must be greater than `threshold_ms` and lower than or equal to `upper_threshold_ms`.
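For example, a minimal sketch of a latency policy that keeps only traces slower than five seconds. The `otelcol.exporter.otlp.default` component it forwards to is an assumption, not part of this reference:

```river
otelcol.processor.tail_sampling "slow_traces" {
  // Keep a trace only when its overall duration exceeds 5 seconds.
  policy {
    name = "sample-slow-traces"
    type = "latency"

    latency {
      threshold_ms = 5000
    }
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```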
@@ -168,12 +169,12 @@ The `numeric_attribute` block configures a policy of type `numeric_attribute`. T

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ------- | ----------- | ------- | --------
-`key` | `string` | Tag that the filter is matched against. | | yes
-`min_value` | `number` | The minimum value of the attribute to be considered a match. | | yes
-`max_value` | `number` | The maximum value of the attribute to be considered a match. | | yes
-`invert_match` | `bool` | Indicates that values must not match against attribute values. | `false` | no
+| Name           | Type     | Description                                                     | Default | Required |
+| -------------- | -------- | ---------------------------------------------------------------- | ------- | -------- |
+| `key`          | `string` | Tag that the filter is matched against.                           |         | yes      |
+| `min_value`    | `number` | The minimum value of the attribute to be considered a match.      |         | yes      |
+| `max_value`    | `number` | The maximum value of the attribute to be considered a match.      |         | yes      |
+| `invert_match` | `bool`   | Indicates that values must not match against attribute values.    | `false` | no       |

### probabilistic block

@@ -181,10 +182,10 @@ The `probabilistic` block configures a policy of type `probabilistic`. The polic

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`sampling_percentage` | `number` | The percentage rate at which traces are sampled. | | yes
-`hash_salt` | `string` | See below. | | no
+| Name                  | Type     | Description                                       | Default | Required |
+| --------------------- | -------- | ------------------------------------------------- | ------- | -------- |
+| `sampling_percentage` | `number` | The percentage rate at which traces are sampled.  |         | yes      |
+| `hash_salt`           | `string` | See below.                                        |         | no       |

Use `hash_salt` to configure the hashing salts. This is important in scenarios where multiple layers of collectors
have different sampling rates. If multiple collectors use the same salt with different sampling rates, passing one
@@ -196,9 +197,9 @@ The `status_code` block configures a policy of type `status_code`. The policy sa

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`status_codes` | `list(string)` | Holds the configurable settings to create a status code filter sampling policy evaluator. | | yes
+| Name           | Type           | Description                                                                                | Default | Required |
+| -------------- | -------------- | ------------------------------------------------------------------------------------------ | ------- | -------- |
+| `status_codes` | `list(string)` | Holds the configurable settings to create a status code filter sampling policy evaluator.  |         | yes      |

`status_codes` values must be "OK", "ERROR", or "UNSET".

@@ -208,13 +209,13 @@ The `string_attribute` block configures a policy of type `string_attribute`. The

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`key` | `string` | Tag that the filter is matched against. | | yes
-`values` | `list(string)` | Set of values or regular expressions to use when matching against attribute values. | | yes
-`enabled_regex_matching` | `bool` | Determines whether to match attribute values by regexp string. | false | no
-`cache_max_size` | `string` | The maximum number of attribute entries of Least Recently Used (LRU) Cache that stores the matched result from the regular expressions defined in `values.` | | no
-`invert_match` | `bool` | Indicates that values or regular expressions must not match against attribute values. | false | no
+| Name                     | Type           | Description                                                                                                                                       | Default | Required |
+| ------------------------ | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- |
+| `key`                    | `string`       | Tag that the filter is matched against.                                                                                                              |         | yes      |
+| `values`                 | `list(string)` | Set of values or regular expressions to use when matching against attribute values.                                                                  |         | yes      |
+| `enabled_regex_matching` | `bool`         | Determines whether to match attribute values by regexp string.                                                                                       | `false` | no       |
+| `cache_max_size`         | `string`       | The maximum number of attribute entries stored in the Least Recently Used (LRU) cache of match results from the regular expressions in `values`.     |         | no       |
+| `invert_match`           | `bool`         | Indicates that values or regular expressions must not match against attribute values.                                                                | `false` | no       |

### rate_limiting block

@@ -222,9 +223,9 @@ The `rate_limiting` block configures a policy of type `rate_limiting`. The polic

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`spans_per_second` | `number` | Sets the maximum number of spans that can be processed each second. | | yes
+| Name               | Type     | Description                                                          | Default | Required |
+| ------------------ | -------- | ---------------------------------------------------------------------- | ------- | -------- |
+| `spans_per_second` | `number` | Sets the maximum number of spans that can be processed each second.    |         | yes      |
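As a sketch of how this policy is wired up (the `otelcol.exporter.otlp.default` component is assumed to exist elsewhere in the configuration):

```river
otelcol.processor.tail_sampling "rate_limited" {
  // Hypothetical policy: never pass more than 1500 spans per second downstream.
  policy {
    name = "limit-throughput"
    type = "rate_limiting"

    rate_limiting {
      spans_per_second = 1500
    }
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```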

### span_count block

@@ -232,42 +233,43 @@ The `span_count` block configures a policy of type `span_count`. The policy samp

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`min_spans` | `number` | Minimum number of spans in a trace. | | yes
-`max_spans` | `number` | Maximum number of spans in a trace. | `0` | no
+| Name        | Type     | Description                          | Default | Required |
+| ----------- | -------- | ------------------------------------ | ------- | -------- |
+| `min_spans` | `number` | Minimum number of spans in a trace.  |         | yes      |
+| `max_spans` | `number` | Maximum number of spans in a trace.  | `0`     | no       |

Set `max_spans` to `0` if you don't want the policy to limit sampling based on the maximum number of spans in a trace.

### boolean_attribute block

-The `boolean_attribute` block configures a policy of type `boolean_attribute`.
+The `boolean_attribute` block configures a policy of type `boolean_attribute`.
The policy samples based on a boolean attribute (resource and record).

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`key` | `string` | Attribute key to match against. | | yes
-`value` | `bool` | The bool value (`true` or `false`) to use when matching against attribute values. | | yes
+| Name    | Type     | Description                                                                        | Default | Required |
+| ------- | -------- | ------------------------------------------------------------------------------------ | ------- | -------- |
+| `key`   | `string` | Attribute key to match against.                                                        |         | yes      |
+| `value` | `bool`   | The bool value (`true` or `false`) to use when matching against attribute values.      |         | yes      |

### ottl_condition block

-The `ottl_condition` block configures a policy of type `ottl_condition`. The policy samples based on a given boolean
+The `ottl_condition` block configures a policy of type `ottl_condition`. The policy samples based on a given boolean
[OTTL](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl) condition (span and span event).

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`error_mode` | `string` | Error handling if OTTL conditions fail to evaluate. | | yes
-`span` | `list(string)` | OTTL conditions for spans. | `[]` | no
-`spanevent` | `list(string)` | OTTL conditions for span events. | `[]` | no
+| Name         | Type           | Description                                           | Default | Required |
+| ------------ | -------------- | ------------------------------------------------------ | ------- | -------- |
+| `error_mode` | `string`       | Error handling if OTTL conditions fail to evaluate.     |         | yes      |
+| `span`       | `list(string)` | OTTL conditions for spans.                              | `[]`    | no       |
+| `spanevent`  | `list(string)` | OTTL conditions for span events.                        | `[]`    | no       |

The supported values for `error_mode` are:
-* `ignore`: Ignore errors returned by conditions, log them, and continue on to the next condition. This is the recommended mode.
-* `silent`: Ignore errors returned by conditions, do not log them, and continue on to the next condition.
-* `propagate`: Return the error up the pipeline. 
This will result in the payload being dropped from {{< param "PRODUCT_ROOT_NAME" >}}. + +- `ignore`: Ignore errors returned by conditions, log them, and continue on to the next condition. This is the recommended mode. +- `silent`: Ignore errors returned by conditions, do not log them, and continue on to the next condition. +- `propagate`: Return the error up the pipeline. This will result in the payload being dropped from {{< param "PRODUCT_ROOT_NAME" >}}. At least one of `span` or `spanevent` should be specified. Both `span` and `spanevent` can also be specified. @@ -277,10 +279,10 @@ The `trace_state` block configures a policy of type `trace_state`. The policy sa The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`key` | `string` | Tag that the filter is matched against. | | yes -`values` | `list(string)` | Set of values to use when matching against trace_state values. | | yes +| Name | Type | Description | Default | Required | +| -------- | -------------- | -------------------------------------------------------------- | ------- | -------- | +| `key` | `string` | Tag that the filter is matched against. | | yes | +| `values` | `list(string)` | Set of values to use when matching against trace_state values. | | yes | ### and block @@ -292,10 +294,10 @@ The `and_sub_policy` block configures a sampling policy used by the `and` block. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | The custom name given to the policy. | | yes -`type` | `string` | The valid policy type for this policy. | | yes +| Name | Type | Description | Default | Required | +| ------ | -------- | -------------------------------------- | ------- | -------- | +| `name` | `string` | The custom name given to the policy. | | yes | +| `type` | `string` | The valid policy type for this policy. | | yes | ### composite block @@ -311,10 +313,10 @@ The `composite_sub_policy` block configures a sampling policy used by the `compo The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`name` | `string` | The custom name given to the policy. | | yes -`type` | `string` | The valid policy type for this policy. | | yes +| Name | Type | Description | Default | Required | +| ------ | -------- | -------------------------------------- | ------- | -------- | +| `name` | `string` | The custom name given to the policy. | | yes | +| `type` | `string` | The valid policy type for this policy. | | yes | ### output block @@ -324,9 +326,9 @@ Name | Type | Description | Default | Required The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. +| Name | Type | Description | +| ------- | ------------------ | ---------------------------------------------------------------- | +| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. | `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). @@ -563,6 +565,7 @@ otelcol.exporter.otlp "production" { } } ``` + ## Compatible components @@ -580,4 +583,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. 
{{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/otelcol.processor.transform.md b/docs/sources/flow/reference/components/otelcol.processor.transform.md
index 06ecc32e044a..cd5c7fa4613f 100644
--- a/docs/sources/flow/reference/components/otelcol.processor.transform.md
+++ b/docs/sources/flow/reference/components/otelcol.processor.transform.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.transform/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.transform/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.transform/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.transform/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.transform/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.transform/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.transform/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.transform/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.processor.transform/
description: Learn about otelcol.processor.transform
labels:
@@ -19,40 +19,45 @@ title: otelcol.processor.transform
components and modifies it using the [OpenTelemetry Transformation Language (OTTL)][OTTL].
OTTL statements consist of [OTTL functions][], which act on paths.
A path is a reference to telemetry data, such as:
-* Resource attributes.
-* Instrumentation scope name.
-* Span attributes.
-In addition to the [standard OTTL functions][OTTL functions],
+- Resource attributes.
+- Instrumentation scope name.
+- Span attributes.
+
+In addition to the [standard OTTL functions][OTTL functions],
there is also a set of metrics-only functions:
-* [convert_sum_to_gauge][]
-* [convert_gauge_to_sum][]
-* [convert_summary_count_val_to_sum][]
-* [convert_summary_sum_val_to_sum][]
-* [copy_metric][]
+
+- [convert_sum_to_gauge][]
+- [convert_gauge_to_sum][]
+- [convert_summary_count_val_to_sum][]
+- [convert_summary_sum_val_to_sum][]
+- [copy_metric][]

[OTTL][] statements can also contain constructs such as:
-* [Booleans][OTTL booleans]:
-  * `not true`
-  * `not IsMatch(name, "http_.*")`
-* [Boolean Expressions][OTTL boolean expressions] consisting of a `where` followed by one or more booleans:
-  * `set(attributes["whose_fault"], "ours") where attributes["http.status"] == 500`
-  * `set(attributes["whose_fault"], "theirs") where attributes["http.status"] == 400 or attributes["http.status"] == 404`
-* [Math expressions][OTTL math expressions]:
-  * `1 + 1`
-  * `end_time_unix_nano - start_time_unix_nano`
-  * `sum([1, 2, 3, 4]) + (10 / 1) - 1`
+
+- [Booleans][OTTL booleans]:
+  - `not true`
+  - `not IsMatch(name, "http_.*")`
+- [Boolean Expressions][OTTL boolean expressions] consisting of a `where` followed by one or more booleans:
+  - `set(attributes["whose_fault"], "ours") where attributes["http.status"] == 500`
+  - `set(attributes["whose_fault"], "theirs") where attributes["http.status"] == 400 or attributes["http.status"] == 404`
+- [Math expressions][OTTL math expressions]:
+  - `1 + 1`
+  - `end_time_unix_nano - start_time_unix_nano`
+  - `sum([1, 2, 3, 4]) + (10 / 1) - 1`

{{< admonition type="note" >}}
There are two ways of inputting strings in River configuration files:
-* Using quotation marks ([normal River strings][river-strings]). Characters such as `\` and
+
+- Using quotation marks ([normal River strings][river-strings]). Characters such as `\` and
`"` must be escaped by preceding them with a `\` character.
-* Using backticks ([raw River strings][river-raw-strings]). No characters must be escaped.
+- Using backticks ([raw River strings][river-raw-strings]). No characters need to be escaped.
However, it's not possible to have backticks inside the string.

-For example, the OTTL statement `set(description, "Sum") where type == "Sum"` can be written as:
-* A normal River string: `"set(description, \"Sum\") where type == \"Sum\""`.
-* A raw River string: ``` `set(description, "Sum") where type == "Sum"` ```.
+For example, the OTTL statement `set(description, "Sum") where type == "Sum"` can be written as:
+
+- A normal River string: `"set(description, \"Sum\") where type == \"Sum\""`.
+- A raw River string: `` `set(description, "Sum") where type == "Sum"` ``.

Raw strings are generally more convenient for writing OTTL statements.

@@ -69,25 +74,25 @@ will be redirected to the upstream repository.

You can specify multiple `otelcol.processor.transform` components by giving them different labels.

{{< admonition type="warning" >}}
-`otelcol.processor.transform` allows you to modify all aspects of your telemetry. Some specific risks are given below,
-but this is not an exhaustive list. It is important to understand your data before using this processor.
+`otelcol.processor.transform` allows you to modify all aspects of your telemetry. Some specific risks are given below,
+but this is not an exhaustive list. It is important to understand your data before using this processor.

-- [Unsound Transformations][]: Transformations between metric data types are not defined in the [metrics data model][]. 
-To use these functions, you must understand the incoming data and know that it can be meaningfully converted
-to a new metric data type or can be used to create new metrics.
-  - Although OTTL allows you to use the `set` function with `metric.data_type`,
+- [Unsound Transformations][]: Transformations between metric data types are not defined in the [metrics data model][].
+  To use these functions, you must understand the incoming data and know that it can be meaningfully converted
+  to a new metric data type or can be used to create new metrics.
+  - Although OTTL allows you to use the `set` function with `metric.data_type`,
    its implementation in the transform processor is a [no-op][].
    To modify a data type, you must use a specific function such as `convert_gauge_to_sum`.
- [Identity Conflict][]: Transformation of metrics can potentially affect a metric's identity,
-  leading to an Identity Crisis. Be especially cautious when transforming a metric name and when reducing or changing
+  leading to an Identity Crisis. Be especially cautious when transforming a metric name and when reducing or changing
  existing attributes. Adding new attributes is safe.
-- [Orphaned Telemetry][]: The processor allows you to modify `span_id`, `trace_id`, and `parent_span_id` for traces
-  and `span_id`, and `trace_id` logs. Modifying these fields could lead to orphaned spans or logs.
+- [Orphaned Telemetry][]: The processor allows you to modify `span_id`, `trace_id`, and `parent_span_id` for traces
+  and `span_id` and `trace_id` for logs. Modifying these fields could lead to orphaned spans or logs.

-[Unsound Transformations]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#unsound-transformations
+[Unsound Transformations]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#unsound-transformations
[Identity Conflict]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#identity-conflict
[Orphaned Telemetry]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#orphaned-telemetry
-[no-op]: https://en.wikipedia.org/wiki/NOP_(code)
+[no-op]: https://en.wikipedia.org/wiki/NOP_(code)
[metrics data model]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md
{{< /admonition >}}

@@ -107,112 +112,116 @@ otelcol.processor.transform "LABEL" {

`otelcol.processor.transform` supports the following arguments:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`error_mode` | `string` | How to react to errors if they occur while processing a statement. | `"propagate"` | no
+| Name         | Type     | Description                                                         | Default       | Required |
+| ------------ | -------- | -------------------------------------------------------------------- | ------------- | -------- |
+| `error_mode` | `string` | How to react to errors if they occur while processing a statement.    | `"propagate"` | no       |

The supported values for `error_mode` are:
-* `ignore`: Ignore errors returned by conditions, log them, and continue on to the next condition. This is the recommended mode.
-* `silent`: Ignore errors returned by conditions, do not log them, and continue on to the next condition.
-* `propagate`: Return the error up the pipeline. This will result in the payload being dropped from {{< param "PRODUCT_ROOT_NAME" >}}.
+
+- `ignore`: Ignore errors returned by conditions, log them, and continue on to the next condition. This is the recommended mode.
+- `silent`: Ignore errors returned by conditions, do not log them, and continue on to the next condition.
+- `propagate`: Return the error up the pipeline. This results in the payload being dropped from {{< param "PRODUCT_ROOT_NAME" >}}.
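For example, a minimal sketch that prefers skipping a failed statement over dropping the whole payload. The `trace_statements` and `output` blocks are documented below, and the `otelcol.exporter.otlp.default` target is an assumption:

```river
otelcol.processor.transform "default" {
  // Log and skip statements that fail to evaluate instead of dropping data.
  error_mode = "ignore"

  trace_statements {
    context    = "resource"
    statements = [
      // Hypothetical statement: stamp every resource with an environment attribute.
      `set(attributes["deployment.environment"], "production")`,
    ]
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```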

## Blocks

The following blocks are supported inside the definition of
`otelcol.processor.transform`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-trace_statements | [trace_statements][] | Statements which transform traces. | no
-metric_statements | [metric_statements][] | Statements which transform metrics. | no
-log_statements | [log_statements][] | Statements which transform logs. | no
-output | [output][] | Configures where to send received telemetry data. | yes
+| Hierarchy         | Block                 | Description                                        | Required |
+| ----------------- | --------------------- | --------------------------------------------------- | -------- |
+| trace_statements  | [trace_statements][]  | Statements which transform traces.                   | no       |
+| metric_statements | [metric_statements][] | Statements which transform metrics.                  | no       |
+| log_statements    | [log_statements][]    | Statements which transform logs.                     | no       |
+| output            | [output][]            | Configures where to send received telemetry data.    | yes      |

[trace_statements]: #trace_statements-block
[metric_statements]: #metric_statements-block
[log_statements]: #log_statements-block
[output]: #output-block
-
[OTTL Context]: #ottl-context

### trace_statements block

-The `trace_statements` block specifies statements which transform trace telemetry signals.
+The `trace_statements` block specifies statements which transform trace telemetry signals.
Multiple `trace_statements` blocks can be specified.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes
-`statements` | `list(string)` | A list of OTTL statements. | | yes
+| Name         | Type           | Description                                                        | Default | Required |
+| ------------ | -------------- | -------------------------------------------------------------------- | ------- | -------- |
+| `context`    | `string`       | OTTL Context to use when interpreting the associated statements.      |         | yes      |
+| `statements` | `list(string)` | A list of OTTL statements.                                            |         | yes      |

The supported values for `context` are:
-* `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
-* `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
-* `span`: Use when interacting only with OTLP spans.
-* `spanevent`: Use when interacting only with OTLP span events.
+
+- `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
+- `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
+- `span`: Use when interacting only with OTLP spans.
+- `spanevent`: Use when interacting only with OTLP span events.

See [OTTL Context][] for more information about how to use contexts.
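For instance, a sketch of a `trace_statements` block in the `span` context, reusing the boolean-expression example from earlier in this page; it would sit inside an `otelcol.processor.transform` component:

```river
trace_statements {
  context    = "span"
  statements = [
    // Mark spans whose HTTP status indicates a server-side failure.
    `set(attributes["whose_fault"], "ours") where attributes["http.status"] == 500`,
  ]
}
```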

### metric_statements block

-The `metric_statements` block specifies statements which transform metric telemetry signals.
+The `metric_statements` block specifies statements which transform metric telemetry signals.
Multiple `metric_statements` blocks can be specified.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes
-`statements` | `list(string)` | A list of OTTL statements. | | yes
+| Name         | Type           | Description                                                        | Default | Required |
+| ------------ | -------------- | -------------------------------------------------------------------- | ------- | -------- |
+| `context`    | `string`       | OTTL Context to use when interpreting the associated statements.      |         | yes      |
+| `statements` | `list(string)` | A list of OTTL statements.                                            |         | yes      |

The supported values for `context` are:
-* `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
-* `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
-* `metric`: Use when interacting only with individual OTLP metrics.
-* `datapoint`: Use when interacting only with individual OTLP metric data points.
+
+- `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
+- `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
+- `metric`: Use when interacting only with individual OTLP metrics.
+- `datapoint`: Use when interacting only with individual OTLP metric data points.

Refer to [OTTL Context][] for more information about how to use contexts.

### log_statements block

-The `log_statements` block specifies statements which transform log telemetry signals.
+The `log_statements` block specifies statements which transform log telemetry signals.
Multiple `log_statements` blocks can be specified.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes
-`statements` | `list(string)` | A list of OTTL statements. | | yes
+| Name         | Type           | Description                                                        | Default | Required |
+| ------------ | -------------- | -------------------------------------------------------------------- | ------- | -------- |
+| `context`    | `string`       | OTTL Context to use when interpreting the associated statements.      |         | yes      |
+| `statements` | `list(string)` | A list of OTTL statements.                                            |         | yes      |

The supported values for `context` are:
-* `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
-* `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
-* `log`: Use when interacting only with OTLP logs.
+
+- `resource`: Use when interacting only with OTLP resources (for example, resource attributes).
+- `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope).
+- `log`: Use when interacting only with OTLP logs.

See [OTTL Context][] for more information about how to use contexts.

### OTTL Context

Each context allows the transformation of its type of telemetry.
-For example, statements associated with a `resource` context will be able to transform the resource's
+For example, statements associated with a `resource` context will be able to transform the resource's
`attributes` and `dropped_attributes_count`.

Each type of `context` defines its own paths and enums specific to that context.
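As an illustrative sketch, a statement in the `resource` context can only reference resource-level paths such as `attributes`; the attribute name below is hypothetical:

```river
trace_statements {
  context    = "resource"
  statements = [
    // Add a default namespace only when the resource doesn't already set one.
    `set(attributes["service.namespace"], "default") where attributes["service.namespace"] == nil`,
  ]
}
```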
Refer to the OpenTelemetry documentation for a list of paths and enums for each context: -* [resource][OTTL resource context] -* [scope][OTTL scope context] -* [span][OTTL span context] -* [spanevent][OTTL spanevent context] -* [log][OTTL log context] -* [metric][OTTL metric context] -* [datapoint][OTTL datapoint context] +- [resource][OTTL resource context] +- [scope][OTTL scope context] +- [span][OTTL span context] +- [spanevent][OTTL spanevent context] +- [log][OTTL log context] +- [metric][OTTL metric context] +- [datapoint][OTTL datapoint context] + +Contexts **NEVER** supply access to individual items "lower" in the protobuf definition. -Contexts __NEVER__ supply access to individual items "lower" in the protobuf definition. -- This means statements associated to a `resource` __WILL NOT__ be able to access the underlying instrumentation scopes. -- This means statements associated to a `scope` __WILL NOT__ be able to access the underlying telemetry slices (spans, metrics, or logs). -- Similarly, statements associated to a `metric` __WILL NOT__ be able to access individual datapoints, but can access the entire datapoints slice. -- Similarly, statements associated to a `span` __WILL NOT__ be able to access individual SpanEvents, but can access the entire SpanEvents slice. +- This means statements associated to a `resource` **WILL NOT** be able to access the underlying instrumentation scopes. +- This means statements associated to a `scope` **WILL NOT** be able to access the underlying telemetry slices (spans, metrics, or logs). +- Similarly, statements associated to a `metric` **WILL NOT** be able to access individual datapoints, but can access the entire datapoints slice. +- Similarly, statements associated to a `span` **WILL NOT** be able to access individual SpanEvents, but can access the entire SpanEvents slice. For practical purposes, this means that a context cannot make decisions on its telemetry based on telemetry "lower" in the structure. -For example, __the following context statement is not possible__ because it attempts to use individual datapoint +For example, **the following context statement is not possible** because it attempts to use individual datapoint attributes in the condition of a statement associated to a `metric`: ```river @@ -224,13 +233,14 @@ metric_statements { } ``` -Context __ALWAYS__ supply access to the items "higher" in the protobuf definition that are associated to the telemetry being transformed. +Context **ALWAYS** supply access to the items "higher" in the protobuf definition that are associated to the telemetry being transformed. + - This means that statements associated to a `datapoint` have access to a datapoint's metric, instrumentation scope, and resource. - This means that statements associated to a `spanevent` have access to a spanevent's span, instrumentation scope, and resource. - This means that statements associated to a `span`/`metric`/`log` have access to the telemetry's instrumentation scope, and resource. - This means that statements associated to a `scope` have access to the scope's resource. -For example, __the following context statement is possible__ because `datapoint` statements can access the datapoint's metric. +For example, **the following context statement is possible** because `datapoint` statements can access the datapoint's metric. 
```river
metric_statements {
@@ -242,13 +252,14 @@ metric_statements {
```

The protobuf definitions for OTLP signals are maintained on GitHub:
-* [traces][traces protobuf]
-* [metrics][metrics protobuf]
-* [logs][logs protobuf]
+
+- [traces][traces protobuf]
+- [metrics][metrics protobuf]
+- [logs][logs protobuf]

Whenever possible, associate your statements with the context that the statement intends to transform.
The contexts are nested, and the higher-level contexts don't have to iterate through any of the
-contexts at a lower level. For example, although you can modify resource attributes associated to a
+contexts at a lower level. For example, although you can modify resource attributes associated to a
span using the `span` context, it is more efficient to use the `resource` context.

### output block

@@ -259,9 +270,9 @@ span using the `span` context, it is more efficient to use the `resource` contex

The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.
+| Name    | Type               | Description                                                       |
+| ------- | ------------------ | ----------------------------------------------------------------- |
+| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to.  |

`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces).

@@ -293,7 +304,7 @@ otelcol.processor.transform "default" {
  trace_statements {
    context    = "span"
    statements = [
-      // Accessing a map with a key that does not exist will return nil. 
+      // Accessing a map with a key that does not exist will return nil.
      `set(attributes["test"], "pass") where attributes["test"] == nil`,
    ]
  }
@@ -443,12 +454,12 @@ otelcol.processor.transform "default" {
      // Parse body as JSON and merge the resulting map with the cache map, ignoring non-json bodies.
      // cache is a field exposed by OTTL that is a temporary storage place for complex operations.
      `merge_maps(cache, ParseJSON(body), "upsert") where IsMatch(body, "^\\{")`,
-      
+
      // Set attributes using the values merged into cache.
      // If the attribute doesn't exist in cache then nothing happens.
      `set(attributes["attr1"], cache["attr1"])`,
      `set(attributes["attr2"], cache["attr2"])`,
-      
+
      // To access nested maps you can chain index ([]) operations.
      // If nested or attr3 do not exist in cache then nothing happens.
      `set(attributes["nested.attr3"], cache["nested"]["attr3"])`,
@@ -469,8 +480,8 @@ each `"` with a `\"`, and each `\` with a `\\` inside a [normal][river-strings] 

### Various transformations of attributes and status codes

-The example takes advantage of context efficiency by grouping transformations
-with the context which it intends to transform.
+The example takes advantage of context efficiency by grouping transformations
+with the context they intend to transform. 
```river otelcol.receiver.otlp "default" { @@ -575,7 +586,6 @@ each `"` with a `\"`, and each `\` with a `\\` inside a [normal][river-strings] [metrics protobuf]: https://github.com/open-telemetry/opentelemetry-proto/blob/v1.0.0/opentelemetry/proto/metrics/v1/metrics.proto [logs protobuf]: https://github.com/open-telemetry/opentelemetry-proto/blob/v1.0.0/opentelemetry/proto/logs/v1/logs.proto - [OTTL]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/README.md [OTTL functions]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/ottlfuncs/README.md [convert_sum_to_gauge]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/{{< param "OTEL_VERSION" >}}/processor/transformprocessor#convert_sum_to_gauge @@ -593,6 +603,7 @@ each `"` with a `\"`, and each `\` with a `\\` inside a [normal][river-strings] [OTTL metric context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/contexts/ottlmetric/README.md [OTTL datapoint context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/contexts/ottldatapoint/README.md [OTTL log context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/{{< param "OTEL_VERSION" >}}/pkg/ottl/contexts/ottllog/README.md + ## Compatible components @@ -610,4 +621,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md b/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md index b4a6e0f1058a..fd48fba098ca 100644 --- a/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md +++ b/docs/sources/flow/reference/components/otelcol.receiver.jaeger.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.jaeger/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.jaeger/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.jaeger/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.jaeger/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.jaeger/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.jaeger/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.jaeger/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.jaeger/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.jaeger/ description: Learn about otelcol.receiver.jaeger title: otelcol.receiver.jaeger @@ -50,21 +50,21 @@ through inner blocks. The following blocks are supported inside the definition of `otelcol.receiver.jaeger`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -protocols | [protocols][] | Configures the protocols the component can accept traffic over. | yes -protocols > grpc | [grpc][] | Configures a Jaeger gRPC server to receive traces. | no -protocols > grpc > tls | [tls][] | Configures TLS for the gRPC server. 
| no -protocols > grpc > keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no -protocols > grpc > keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no -protocols > grpc > keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no -protocols > thrift_http | [thrift_http][] | Configures a Thrift HTTP server to receive traces. | no -protocols > thrift_http > tls | [tls][] | Configures TLS for the Thrift HTTP server. | no -protocols > thrift_http > cors | [cors][] | Configures CORS for the Thrift HTTP server. | no -protocols > thrift_binary | [thrift_binary][] | Configures a Thrift binary UDP server to receive traces. | no -protocols > thrift_compact | [thrift_compact][] | Configures a Thrift compact UDP server to receive traces. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no -output | [output][] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| ------------------------------------------------- | ---------------------- | -------------------------------------------------------------------------- | -------- | +| protocols | [protocols][] | Configures the protocols the component can accept traffic over. | yes | +| protocols > grpc | [grpc][] | Configures a Jaeger gRPC server to receive traces. | no | +| protocols > grpc > tls | [tls][] | Configures TLS for the gRPC server. | no | +| protocols > grpc > keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no | +| protocols > grpc > keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no | +| protocols > grpc > keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no | +| protocols > thrift_http | [thrift_http][] | Configures a Thrift HTTP server to receive traces. | no | +| protocols > thrift_http > tls | [tls][] | Configures TLS for the Thrift HTTP server. | no | +| protocols > thrift_http > cors | [cors][] | Configures CORS for the Thrift HTTP server. | no | +| protocols > thrift_binary | [thrift_binary][] | Configures a Thrift binary UDP server to receive traces. | no | +| protocols > thrift_compact | [thrift_compact][] | Configures a Thrift compact UDP server to receive traces. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | +| output | [output][] | Configures where to send received telemetry data. | yes | The `>` symbol indicates deeper levels of nesting. For example, `protocols > grpc` refers to a `grpc` block defined inside a `protocols` block. @@ -100,15 +100,15 @@ the `grpc` block isn't provided, a gRPC server isn't started. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14250"` | no -`transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no -`max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no -`max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no -`read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. 
| `"512KiB"` | no -`write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | --------- | -------------------------------------------------------------------------- | ----------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14250"` | no | +| `transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no | +| `max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no | +| `max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no | +| `read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no | +| `write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | ### tls block @@ -132,13 +132,13 @@ servers. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no -`max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no -`max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no -`time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no -`timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no +| Name | Type | Description | Default | Required | +| -------------------------- | ---------- | ------------------------------------------------------------------------------------ | ------------ | -------- | +| `max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no | +| `max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no | +| `max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no | +| `time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no | +| `timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no | ### enforcement_policy block @@ -148,10 +148,10 @@ configured policy. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no -`permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no +| Name | Type | Description | Default | Required | +| ----------------------- | ---------- | ----------------------------------------------------------------------- | ------- | -------- | +| `min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no | +| `permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. 
| `false` | no | ### thrift_http block @@ -161,11 +161,11 @@ server isn't started. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14268"` | no -`max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ----------------------- | --------- | --------------------------------------------------------------- | ----------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:14268"` | no | +| `max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | ### cors block @@ -173,19 +173,19 @@ The `cors` block configures CORS settings for an HTTP server. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no -`allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no -`max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | -------------------------------------------------------- | ---------------------- | -------- | +| `allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no | +| `allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no | +| `max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no | The `allowed_headers` specifies which headers are acceptable from a CORS request. The following headers are always implicitly allowed: -* `Accept` -* `Accept-Language` -* `Content-Type` -* `Content-Language` +- `Accept` +- `Accept-Language` +- `Content-Type` +- `Content-Language` If `allowed_headers` includes `"*"`, all headers will be permitted. @@ -197,13 +197,13 @@ provided, a UDP server isn't started. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:6832"` | no -`queue_size` | `number` | Maximum number of UDP messages that can be queued at once. | `1000` | no -`max_packet_size` | `string` | Maximum UDP message size. | `"65KiB"` | no -`workers` | `number` | Number of workers to concurrently read from the message queue. | `10` | no -`socket_buffer_size` | `string` | Buffer to allocate for the UDP socket. | | no +| Name | Type | Description | Default | Required | +| -------------------- | -------- | -------------------------------------------------------------- | ---------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:6832"` | no | +| `queue_size` | `number` | Maximum number of UDP messages that can be queued at once. | `1000` | no | +| `max_packet_size` | `string` | Maximum UDP message size. | `"65KiB"` | no | +| `workers` | `number` | Number of workers to concurrently read from the message queue. 
| `10` | no | +| `socket_buffer_size` | `string` | Buffer to allocate for the UDP socket. | | no | ### thrift_compact block @@ -213,13 +213,13 @@ provided, a UDP server isn't started. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:6831"` | no -`queue_size` | `number` | Maximum number of UDP messages that can be queued at once. | `1000` | no -`max_packet_size` | `string` | Maximum UDP message size. | `"65KiB"` | no -`workers` | `number` | Number of workers to concurrently read from the message queue. | `10` | no -`socket_buffer_size` | `string` | Buffer to allocate for the UDP socket. | | no +| Name | Type | Description | Default | Required | +| -------------------- | -------- | -------------------------------------------------------------- | ---------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:6831"` | no | +| `queue_size` | `number` | Maximum number of UDP messages that can be queued at once. | `1000` | no | +| `max_packet_size` | `string` | Maximum UDP message size. | `"65KiB"` | no | +| `workers` | `number` | Number of workers to concurrently read from the message queue. | `10` | no | +| `socket_buffer_size` | `string` | Buffer to allocate for the UDP socket. | | no | ### debug_metrics block @@ -278,6 +278,7 @@ otelcol.exporter.otlp "default" { ## Technical details `otelcol.receiver.jaeger` supports [gzip](https://en.wikipedia.org/wiki/Gzip) for compression. + ## Compatible components @@ -286,10 +287,9 @@ otelcol.exporter.otlp "default" { - Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
{{< /admonition >}}
-
\ No newline at end of file
+

diff --git a/docs/sources/flow/reference/components/otelcol.receiver.kafka.md b/docs/sources/flow/reference/components/otelcol.receiver.kafka.md
index 47d8a6305a20..f5996fa226ac 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.kafka.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.kafka.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.kafka/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.kafka/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.kafka/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.kafka/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.kafka/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.kafka/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.kafka/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.kafka/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.kafka/
description: Learn about otelcol.receiver.kafka
title: otelcol.receiver.kafka
@@ -41,41 +41,41 @@ otelcol.receiver.kafka "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
-`brokers` | `array(string)` | Kafka brokers to connect to. | | yes
-`protocol_version` | `string` | Kafka protocol version to use. | | yes
-`topic` | `string` | Kafka topic to read from. | | no
-`encoding` | `string` | Encoding of payload read from Kafka. | `"otlp_proto"` | no
-`group_id` | `string` | Consumer group to consume messages from. | `"otel-collector"` | no
-`client_id` | `string` | Consumer client ID to use. | `"otel-collector"` | no
-`initial_offset` | `string` | Initial offset to use if no offset was previously committed. | `"latest"` | no
-`resolve_canonical_bootstrap_servers_only` | `bool` | Whether to resolve then reverse-lookup broker IPs during startup. | `false` | no
+| Name | Type | Description | Default | Required |
+| ------------------------------------------ | --------------- | ----------------------------------------------------------------- | ------------------ | -------- |
+| `brokers` | `array(string)` | Kafka brokers to connect to. | | yes |
+| `protocol_version` | `string` | Kafka protocol version to use. | | yes |
+| `topic` | `string` | Kafka topic to read from. | | no |
+| `encoding` | `string` | Encoding of payload read from Kafka. | `"otlp_proto"` | no |
+| `group_id` | `string` | Consumer group to consume messages from. | `"otel-collector"` | no |
+| `client_id` | `string` | Consumer client ID to use. | `"otel-collector"` | no |
+| `initial_offset` | `string` | Initial offset to use if no offset was previously committed. | `"latest"` | no |
+| `resolve_canonical_bootstrap_servers_only` | `bool` | Whether to resolve then reverse-lookup broker IPs during startup. | `false` | no |

If `topic` is not set, different topics will be used for different telemetry signals:

-* Metrics will be received from an `otlp_metrics` topic.
-* Traces will be received from an `otlp_spans` topic.
-* Logs will be received from an `otlp_logs` topic.
+- Metrics will be received from an `otlp_metrics` topic. 
+- Traces will be received from an `otlp_spans` topic.
+- Logs will be received from an `otlp_logs` topic.

If `topic` is set to a specific value, the output block must only be configured for the signal type that corresponds to the data stored in the topic.
-For example, if `topic` is set to `"my_telemetry"`, then the `"my_telemetry"` topic can only contain either metrics, logs, or traces.
+For example, if `topic` is set to `"my_telemetry"`, then the `"my_telemetry"` topic can only contain either metrics, logs, or traces. 
If it contains only metrics, then `otelcol.receiver.kafka` should be configured to output only metrics.

The `encoding` argument determines how to decode messages read from Kafka.
`encoding` must be one of the following strings:
-* `"otlp_proto"`: Decode messages as OTLP protobuf.
-* `"jaeger_proto"`: Decode messages as a single Jaeger protobuf span.
-* `"jaeger_json"`: Decode messages as a single Jaeger JSON span.
-* `"zipkin_proto"`: Decode messages as a list of Zipkin protobuf spans.
-* `"zipkin_json"`: Decode messages as a list of Zipkin JSON spans.
-* `"zipkin_thrift"`: Decode messages as a list of Zipkin Thrift spans.
-* `"raw"`: Copy the log message bytes into the body of a log record.
-* `"text"`: Decode the log message as text and insert it into the body of a log record.
+- `"otlp_proto"`: Decode messages as OTLP protobuf.
+- `"jaeger_proto"`: Decode messages as a single Jaeger protobuf span.
+- `"jaeger_json"`: Decode messages as a single Jaeger JSON span.
+- `"zipkin_proto"`: Decode messages as a list of Zipkin protobuf spans.
+- `"zipkin_json"`: Decode messages as a list of Zipkin JSON spans.
+- `"zipkin_thrift"`: Decode messages as a list of Zipkin Thrift spans.
+- `"raw"`: Copy the log message bytes into the body of a log record.
+- `"text"`: Decode the log message as text and insert it into the body of a log record.
  By default, UTF-8 is used to decode. A different encoding can be chosen by using `text_<encoding>`. For example, `text_utf-8` or `text_shift_jis`.
-* `"json"`: Decode the JSON payload and insert it into the body of a log record.
-* `"azure_resource_logs"`: The payload is converted from Azure Resource Logs format to an OTLP log.
+- `"json"`: Decode the JSON payload and insert it into the body of a log record.
+- `"azure_resource_logs"`: The payload is converted from Azure Resource Logs format to an OTLP log.

`"otlp_proto"` must be used to read all telemetry types from Kafka; other
encodings are signal-specific.

@@ -87,21 +87,21 @@ ## Blocks

The following blocks are supported inside the definition of
`otelcol.receiver.kafka`:

-Hierarchy | Block | Description | Required
--------- | ----- | ----------- | --------
-authentication | [authentication][] | Configures authentication for connecting to Kafka brokers. | no
-authentication > plaintext | [plaintext][] | Authenticates against Kafka brokers with plaintext. | no
-authentication > sasl | [sasl][] | Authenticates against Kafka brokers with SASL. | no
-authentication > sasl > aws_msk | [aws_msk][] | Additional SASL parameters when using AWS_MSK_IAM. | no
-authentication > tls | [tls][] | Configures TLS for connecting to the Kafka brokers. | no
-authentication > kerberos | [kerberos][] | Authenticates against Kafka brokers with Kerberos. | no
-metadata | [metadata][] | Configures how to retrieve metadata from Kafka brokers. | no
-metadata > retry | [retry][] | Configures how to retry metadata retrieval. 
| no
-autocommit | [autocommit][] | Configures how to automatically commit updated topic offsets back to the Kafka brokers. | no
-message_marking | [message_marking][] | Configures when Kafka messages are marked as read. | no
-header_extraction | [header_extraction][] | Extract headers from Kafka records. | no
-debug_metrics | [debug_metrics][] | Configures the metrics which this component generates to monitor its state. | no
-output | [output][] | Configures where to send received telemetry data. | yes
+| Hierarchy | Block | Description | Required |
+| ------------------------------- | --------------------- | ---------------------------------------------------------------------------------------- | -------- |
+| authentication | [authentication][] | Configures authentication for connecting to Kafka brokers. | no |
+| authentication > plaintext | [plaintext][] | Authenticates against Kafka brokers with plaintext. | no |
+| authentication > sasl | [sasl][] | Authenticates against Kafka brokers with SASL. | no |
+| authentication > sasl > aws_msk | [aws_msk][] | Additional SASL parameters when using AWS_MSK_IAM. | no |
+| authentication > tls | [tls][] | Configures TLS for connecting to the Kafka brokers. | no |
+| authentication > kerberos | [kerberos][] | Authenticates against Kafka brokers with Kerberos. | no |
+| metadata | [metadata][] | Configures how to retrieve metadata from Kafka brokers. | no |
+| metadata > retry | [retry][] | Configures how to retry metadata retrieval. | no |
+| autocommit | [autocommit][] | Configures how to automatically commit updated topic offsets back to the Kafka brokers. | no |
+| message_marking | [message_marking][] | Configures when Kafka messages are marked as read. | no |
+| header_extraction | [header_extraction][] | Extract headers from Kafka records. | no |
+| debug_metrics | [debug_metrics][] | Configures the metrics which this component generates to monitor its state. | no |
+| output | [output][] | Configures where to send received telemetry data. | yes |

The `>` symbol indicates deeper levels of nesting. For example,
`authentication > tls` refers to a `tls` block defined inside an
@@ -133,10 +133,10 @@ The `plaintext` block configures `PLAIN` authentication against Kafka brokers.

The following arguments are supported:

-Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
-`username` | `string` | Username to use for `PLAIN` authentication. | | yes
-`password` | `secret` | Password to use for `PLAIN` authentication. | | yes
+| Name | Type | Description | Default | Required |
+| ---------- | -------- | ------------------------------------------- | ------- | -------- |
+| `username` | `string` | Username to use for `PLAIN` authentication. | | yes |
+| `password` | `secret` | Password to use for `PLAIN` authentication. | | yes |

### sasl block

The `sasl` block configures SASL authentication against Kafka brokers.

The following arguments are supported:

-Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
-`username` | `string` | Username to use for SASL authentication. | | yes
-`password` | `secret` | Password to use for SASL authentication. | | yes
-`mechanism` | `string` | SASL mechanism to use when authenticating. | | yes
-`version` | `number` | Version of the SASL protocol to use when authenticating. 
| `0` | no +| Name | Type | Description | Default | Required | +| ----------- | -------- | -------------------------------------------------------- | ------- | -------- | +| `username` | `string` | Username to use for SASL authentication. | | yes | +| `password` | `secret` | Password to use for SASL authentication. | | yes | +| `mechanism` | `string` | SASL mechanism to use when authenticating. | | yes | +| `version` | `number` | Version of the SASL Protocol to use when authenticating. | `0` | no | The `mechanism` argument can be set to one of the following strings: -* `"PLAIN"` -* `"AWS_MSK_IAM"` -* `"SCRAM-SHA-256"` -* `"SCRAM-SHA-512"` +- `"PLAIN"` +- `"AWS_MSK_IAM"` +- `"SCRAM-SHA-256"` +- `"SCRAM-SHA-512"` When `mechanism` is set to `"AWS_MSK_IAM"`, the [`aws_msk` child block][aws_msk] must also be provided. @@ -169,10 +169,10 @@ using the `AWS_MSK_IAM` mechanism. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`region` | `string` | AWS region the MSK cluster is based in. | | yes -`broker_addr` | `string` | MSK address to connect to for authentication. | | yes +| Name | Type | Description | Default | Required | +| ------------- | -------- | --------------------------------------------- | ------- | -------- | +| `region` | `string` | AWS region the MSK cluster is based in. | | yes | +| `broker_addr` | `string` | MSK address to connect to for authentication. | | yes | ### tls block @@ -189,15 +189,15 @@ broker. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`service_name` | `string` | Kerberos service name. | | no -`realm` | `string` | Kerberos realm. | | no -`use_keytab` | `string` | Enables using keytab instead of password. | | no -`username` | `string` | Kerberos username to authenticate as. | | yes -`password` | `secret` | Kerberos password to authenticate with. | | no -`config_file` | `string` | Path to Kerberos location (for example, `/etc/krb5.conf`). | | no -`keytab_file` | `string` | Path to keytab file (for example, `/etc/security/kafka.keytab`). | | no +| Name | Type | Description | Default | Required | +| -------------- | -------- | ---------------------------------------------------------------- | ------- | -------- | +| `service_name` | `string` | Kerberos service name. | | no | +| `realm` | `string` | Kerberos realm. | | no | +| `use_keytab` | `string` | Enables using keytab instead of password. | | no | +| `username` | `string` | Kerberos username to authenticate as. | | yes | +| `password` | `secret` | Kerberos password to authenticate with. | | no | +| `config_file` | `string` | Path to Kerberos location (for example, `/etc/krb5.conf`). | | no | +| `keytab_file` | `string` | Path to keytab file (for example, `/etc/security/kafka.keytab`). | | no | When `use_keytab` is `false`, the `password` argument is required. When `use_keytab` is `true`, the file pointed to by the `keytab_file` argument is @@ -211,9 +211,9 @@ Kafka broker. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`include_all_topics` | `bool` | When true, maintains metadata for all topics. | `true` | no +| Name | Type | Description | Default | Required | +| -------------------- | ------ | --------------------------------------------- | ------- | -------- | +| `include_all_topics` | `bool` | When true, maintains metadata for all topics. 
| `true` | no | If the `include_all_topics` argument is `true`, `otelcol.receiver.kafka` maintains a full set of metadata for all topics rather than the minimal set @@ -232,10 +232,10 @@ fails. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`max_retries` | `number` | How many times to reattempt retrieving metadata. | `3` | no -`backoff` | `duration` | Time to wait between retries. | `"250ms"` | no +| Name | Type | Description | Default | Required | +| ------------- | ---------- | ------------------------------------------------ | --------- | -------- | +| `max_retries` | `number` | How many times to reattempt retrieving metadata. | `3` | no | +| `backoff` | `duration` | Time to wait between retries. | `"250ms"` | no | ### autocommit block @@ -244,10 +244,10 @@ offsets back to the Kafka brokers. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enable` | `bool` | Enable autocommitting updated topic offsets. | `true` | no -`interval` | `duration` | How frequently to autocommit. | `"1s"` | no +| Name | Type | Description | Default | Required | +| ---------- | ---------- | -------------------------------------------- | ------- | -------- | +| `enable` | `bool` | Enable autocommitting updated topic offsets. | `true` | no | +| `interval` | `duration` | How frequently to autocommit. | `"1s"` | no | ### message_marking block @@ -255,10 +255,10 @@ The `message_marking` block configures when Kafka messages are marked as read. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`after_execution` | `bool` | Mark messages after forwarding telemetry data to other components. | `false` | no -`include_unsuccessful` | `bool` | Whether failed forwards should be marked as read. | `false` | no +| Name | Type | Description | Default | Required | +| ---------------------- | ------ | ------------------------------------------------------------------ | ------- | -------- | +| `after_execution` | `bool` | Mark messages after forwarding telemetry data to other components. | `false` | no | +| `include_unsuccessful` | `bool` | Whether failed forwards should be marked as read. | `false` | no | By default, a Kafka message is marked as read immediately after it is retrieved from the Kafka broker. If the `after_execution` argument is true, messages are @@ -281,10 +281,10 @@ The `header_extraction` block configures how to extract headers from Kafka recor The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`extract_headers` | `bool` | Enables attaching header fields to resource attributes. | `false` | no -`headers` | `list(string)` | A list of headers to extract from the Kafka record. | `[]` | no +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | ------------------------------------------------------- | ------- | -------- | +| `extract_headers` | `bool` | Enables attaching header fields to resource attributes. | `false` | no | +| `headers` | `list(string)` | A list of headers to extract from the Kafka record. | `[]` | no | Regular expressions are not allowed in the `headers` argument. Only exact matching will be performed. 
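+
+For illustration, a minimal sketch of enabling header extraction follows. The broker
+address, topic, and header names here are hypothetical placeholders, and the
+`otelcol.exporter.otlp.default` exporter is assumed to be defined elsewhere in the
+configuration:
+
+```river
+otelcol.receiver.kafka "example" {
+  brokers          = ["localhost:9092"]
+  protocol_version = "2.0.0"
+  topic            = "otlp_logs"
+
+  // Attach the named Kafka record headers as resource attributes.
+  header_extraction {
+    extract_headers = true
+    headers         = ["tenant", "environment"]
+  }
+
+  output {
+    logs = [otelcol.exporter.otlp.default.input]
+  }
+}
+```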
@@ -341,6 +341,7 @@ otelcol.exporter.otlp "default" { } } ``` + ## Compatible components @@ -349,10 +350,9 @@ otelcol.exporter.otlp "default" { - Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.receiver.loki.md b/docs/sources/flow/reference/components/otelcol.receiver.loki.md index a658f35a7fee..d79100625674 100644 --- a/docs/sources/flow/reference/components/otelcol.receiver.loki.md +++ b/docs/sources/flow/reference/components/otelcol.receiver.loki.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.loki/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.loki/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.loki/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.loki/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.loki/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.loki/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.loki/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.loki/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.loki/ description: Learn about otelcol.receiver.loki labels: @@ -41,9 +41,9 @@ through inner blocks. The following blocks are supported inside the definition of `otelcol.receiver.loki`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -output | [output][] | Configures where to send converted telemetry data. | yes +| Hierarchy | Block | Description | Required | +| --------- | ---------- | -------------------------------------------------- | -------- | +| output | [output][] | Configures where to send converted telemetry data. | yes | [output]: #output-block @@ -55,9 +55,9 @@ output | [output][] | Configures where to send converted telemetry data. | yes The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`receiver` | `LogsReceiver` | A value that other components can use to send Loki logs to. +| Name | Type | Description | +| ---------- | -------------- | ----------------------------------------------------------- | +| `receiver` | `LogsReceiver` | A value that other components can use to send Loki logs to. 
| ## Component health diff --git a/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md b/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md index 04242aa602cb..c57fbf87b69e 100644 --- a/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md +++ b/docs/sources/flow/reference/components/otelcol.receiver.opencensus.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.opencensus/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.opencensus/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.opencensus/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.opencensus/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.opencensus/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.opencensus/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.opencensus/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.opencensus/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.opencensus/ description: Learn about otelcol.receiver.opencensus title: otelcol.receiver.opencensus @@ -39,20 +39,19 @@ otelcol.receiver.opencensus "LABEL" { `otelcol.receiver.opencensus` supports the following arguments: - -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`cors_allowed_origins` | `list(string)` | A list of allowed Cross-Origin Resource Sharing (CORS) origins. | | no -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:55678"` | no -`transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no -`max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no -`max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no -`read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no -`write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | -------------- | -------------------------------------------------------------------------- | ----------------- | -------- | +| `cors_allowed_origins` | `list(string)` | A list of allowed Cross-Origin Resource Sharing (CORS) origins. | | no | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:55678"` | no | +| `transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no | +| `max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no | +| `max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no | +| `read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no | +| `write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. 
| | no | `cors_allowed_origins` are the allowed [CORS](https://github.com/rs/cors) origins for HTTP/JSON requests. -An empty list means that CORS is not enabled at all. A wildcard (*) can be +An empty list means that CORS is not enabled at all. A wildcard (\*) can be used to match any origin or one or more characters of an origin. The "endpoint" parameter is the same for both gRPC and HTTP/JSON, as the protocol is recognized and processed accordingly. @@ -67,14 +66,14 @@ in the string, such as "512KiB" or "1024KB". The following blocks are supported inside the definition of `otelcol.receiver.opencensus`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -tls | [tls][] | Configures TLS for the gRPC server. | no -keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no -keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no -keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no -output | [output][] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| ------------------------------ | ---------------------- | -------------------------------------------------------------------------- | -------- | +| tls | [tls][] | Configures TLS for the gRPC server. | no | +| keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no | +| keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no | +| keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | +| output | [output][] | Configures where to send received telemetry data. | yes | The `>` symbol indicates deeper levels of nesting. For example, `grpc > tls` refers to a `tls` block defined inside a `grpc` block. @@ -108,13 +107,13 @@ servers. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no -`max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no -`max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no -`time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no -`timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no +| Name | Type | Description | Default | Required | +| -------------------------- | ---------- | ------------------------------------------------------------------------------------ | ------------ | -------- | +| `max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no | +| `max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no | +| `max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no | +| `time` | `duration` | How often to ping inactive clients to check for liveness. 
| `"2h"` | no | +| `timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no | ### enforcement_policy block @@ -124,10 +123,10 @@ configured policy. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no -`permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no +| Name | Type | Description | Default | Required | +| ----------------------- | ---------- | ----------------------------------------------------------------------- | ------- | -------- | +| `min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no | +| `permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no | ### debug_metrics block @@ -210,6 +209,7 @@ otelcol.exporter.otlp "default" { } } ``` + ## Compatible components @@ -218,7 +218,6 @@ otelcol.exporter.otlp "default" { - Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. diff --git a/docs/sources/flow/reference/components/otelcol.receiver.otlp.md b/docs/sources/flow/reference/components/otelcol.receiver.otlp.md index 116591fae318..c85a678d2761 100644 --- a/docs/sources/flow/reference/components/otelcol.receiver.otlp.md +++ b/docs/sources/flow/reference/components/otelcol.receiver.otlp.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.otlp/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.otlp/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.otlp/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.otlp/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.otlp/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.otlp/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.otlp/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.otlp/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.otlp/ description: Learn about otelcol.receiver.otlp title: otelcol.receiver.otlp @@ -46,18 +46,18 @@ through inner blocks. The following blocks are supported inside the definition of `otelcol.receiver.otlp`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -grpc | [grpc][] | Configures the gRPC server to receive telemetry data. | no -grpc > tls | [tls][] | Configures TLS for the gRPC server. | no -grpc > keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no -grpc > keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no -grpc > keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. 
| no -http | [http][] | Configures the HTTP server to receive telemetry data. | no -http > tls | [tls][] | Configures TLS for the HTTP server. | no -http > cors | [cors][] | Configures CORS for the HTTP server. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no -output | [output][] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| ------------------------------------- | ---------------------- | -------------------------------------------------------------------------- | -------- | +| grpc | [grpc][] | Configures the gRPC server to receive telemetry data. | no | +| grpc > tls | [tls][] | Configures TLS for the gRPC server. | no | +| grpc > keepalive | [keepalive][] | Configures keepalive settings for the configured server. | no | +| grpc > keepalive > server_parameters | [server_parameters][] | Server parameters used to configure keepalive settings. | no | +| grpc > keepalive > enforcement_policy | [enforcement_policy][] | Enforcement policy for keepalive settings. | no | +| http | [http][] | Configures the HTTP server to receive telemetry data. | no | +| http > tls | [tls][] | Configures TLS for the HTTP server. | no | +| http > cors | [cors][] | Configures CORS for the HTTP server. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | +| output | [output][] | Configures where to send received telemetry data. | yes | The `>` symbol indicates deeper levels of nesting. For example, `grpc > tls` refers to a `tls` block defined inside a `grpc` block. @@ -79,15 +79,15 @@ The `grpc` block configures the gRPC server used by the component. If the The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:4317"` | no -`transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no -`max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no -`max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no -`read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no -`write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. | | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ------------------------ | --------- | -------------------------------------------------------------------------- | ---------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:4317"` | no | +| `transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no | +| `max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no | +| `max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no | +| `read_buffer_size` | `string` | Size of the read buffer the gRPC server will use for reading from clients. | `"512KiB"` | no | +| `write_buffer_size` | `string` | Size of the write buffer the gRPC server will use for writing to clients. 
| | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | ### tls block @@ -111,13 +111,13 @@ servers. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no -`max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no -`max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no -`time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no -`timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no +| Name | Type | Description | Default | Required | +| -------------------------- | ---------- | ------------------------------------------------------------------------------------ | ------------ | -------- | +| `max_connection_idle` | `duration` | Maximum age for idle connections. | `"infinity"` | no | +| `max_connection_age` | `duration` | Maximum age for non-idle connections. | `"infinity"` | no | +| `max_connection_age_grace` | `duration` | Time to wait before forcibly closing connections. | `"infinity"` | no | +| `time` | `duration` | How often to ping inactive clients to check for liveness. | `"2h"` | no | +| `timeout` | `duration` | Time to wait before closing inactive clients that do not respond to liveness checks. | `"20s"` | no | ### enforcement_policy block @@ -127,10 +127,10 @@ configured policy. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no -`permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no +| Name | Type | Description | Default | Required | +| ----------------------- | ---------- | ----------------------------------------------------------------------- | ------- | -------- | +| `min_time` | `duration` | Minimum time clients should wait before sending a keepalive ping. | `"5m"` | no | +| `permit_without_stream` | `boolean` | Allow clients to send keepalive pings when there are no active streams. | `false` | no | ### http block @@ -139,19 +139,20 @@ The `http` block configures the HTTP server used by the component. If the The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:4318"` | no -`max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no -`traces_url_path` | `string` | The URL path to receive traces on. | `"/v1/traces"`| no -`metrics_url_path` | `string` | The URL path to receive metrics on. | `"/v1/metrics"` | no -`logs_url_path` | `string` | The URL path to receive logs on. | `"/v1/logs"` | no +| Name | Type | Description | Default | Required | +| ----------------------- | --------- | --------------------------------------------------------------- | ---------------- | -------- | +| `endpoint` | `string` | `host:port` to listen for traffic on. 
| `"0.0.0.0:4318"` | no | +| `max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | +| `traces_url_path` | `string` | The URL path to receive traces on. | `"/v1/traces"` | no | +| `metrics_url_path` | `string` | The URL path to receive metrics on. | `"/v1/metrics"` | no | +| `logs_url_path` | `string` | The URL path to receive logs on. | `"/v1/logs"` | no | To send telemetry signals to `otelcol.receiver.otlp` with HTTP/JSON, POST to: -* `[endpoint][traces_url_path]` for traces. -* `[endpoint][metrics_url_path]` for metrics. -* `[endpoint][logs_url_path]` for logs. + +- `[endpoint][traces_url_path]` for traces. +- `[endpoint][metrics_url_path]` for metrics. +- `[endpoint][logs_url_path]` for logs. ### cors block @@ -159,19 +160,19 @@ The `cors` block configures CORS settings for an HTTP server. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no -`allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no -`max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | -------------------------------------------------------- | ---------------------- | -------- | +| `allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no | +| `allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no | +| `max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no | The `allowed_headers` argument specifies which headers are acceptable from a CORS request. The following headers are always implicitly allowed: -* `Accept` -* `Accept-Language` -* `Content-Type` -* `Content-Language` +- `Accept` +- `Accept-Language` +- `Content-Type` +- `Content-Language` If `allowed_headers` includes `"*"`, all headers are permitted. @@ -199,13 +200,13 @@ information. ## Debug metrics -* `receiver_accepted_spans_ratio_total` (counter): Number of spans successfully pushed into the pipeline. -* `receiver_refused_spans_ratio_total` (counter): Number of spans that could not be pushed into the pipeline. -* `rpc_server_duration_milliseconds` (histogram): Duration of RPC requests from a gRPC server. -* `rpc_server_request_size_bytes` (histogram): Measures size of RPC request messages (uncompressed). -* `rpc_server_requests_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs. -* `rpc_server_response_size_bytes` (histogram): Measures size of RPC response messages (uncompressed). -* `rpc_server_responses_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs. +- `receiver_accepted_spans_ratio_total` (counter): Number of spans successfully pushed into the pipeline. +- `receiver_refused_spans_ratio_total` (counter): Number of spans that could not be pushed into the pipeline. +- `rpc_server_duration_milliseconds` (histogram): Duration of RPC requests from a gRPC server. +- `rpc_server_request_size_bytes` (histogram): Measures size of RPC request messages (uncompressed). 
+- `rpc_server_requests_per_rpc` (histogram): Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs.
+- `rpc_server_response_size_bytes` (histogram): Measures size of RPC response messages (uncompressed).
+- `rpc_server_responses_per_rpc` (histogram): Measures the number of messages sent per RPC. Should be 1 for all non-streaming RPCs.

## Example

@@ -242,6 +243,7 @@ otelcol.exporter.otlp "default" {

## Technical details

`otelcol.receiver.otlp` supports [gzip](https://en.wikipedia.org/wiki/Gzip) for compression.
+

## Compatible components

@@ -250,10 +252,9 @@ otelcol.exporter.otlp "default" {

- Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters)

-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details.
{{< /admonition >}}
-
\ No newline at end of file
+

diff --git a/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md b/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
index ce9e9b9f897b..b272ffe58a6d 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.prometheus.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.prometheus/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.prometheus/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.prometheus/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.prometheus/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.prometheus/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.prometheus/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.prometheus/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.prometheus/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.prometheus/
description: Learn about otelcol.receiver.prometheus
labels:
@@ -42,9 +42,9 @@ through inner blocks.

The following blocks are supported inside the definition of
`otelcol.receiver.prometheus`:

-Hierarchy | Block | Description | Required
--------- | ----- | ----------- | --------
-output | [output][] | Configures where to send received telemetry data. | yes
+| Hierarchy | Block | Description | Required |
+| --------- | ---------- | ------------------------------------------------- | -------- |
+| output | [output][] | Configures where to send received telemetry data. | yes |

[output]: #output-block

@@ -56,9 +56,9 @@

The following fields are exported and can be referenced by other components:

-Name | Type | Description
---- | ---- | -----------
-`receiver` | `MetricsReceiver` | A value that other components can use to send Prometheus metrics to.
+| Name | Type | Description |
+| ---------- | ----------------- | ---------------------------------------------------------------------- |
+| `receiver` | `MetricsReceiver` | A value that other components can use to send Prometheus metrics to. 
|

## Component health

@@ -99,6 +99,7 @@ otelcol.exporter.otlp "default" {
}
}
```
+

## Compatible components

@@ -116,4 +117,4 @@ Connecting some components may not be sensible or components may require further
Refer to the linked documentation for more details.
{{< /admonition >}}
-
\ No newline at end of file
+

diff --git a/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md b/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md
index d24741a59b9b..d23f1a27ccc2 100644
--- a/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md
+++ b/docs/sources/flow/reference/components/otelcol.receiver.vcenter.md
@@ -1,8 +1,8 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.vcenter/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.vcenter/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.vcenter/
+  - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.vcenter/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.vcenter/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.vcenter/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.vcenter/
title: otelcol.receiver.vcenter
description: Learn about otelcol.receiver.vcenter
@@ -14,7 +14,7 @@ labels:

{{< docs/shared lookup="flow/stability/experimental.md" source="agent" version="" >}}

-`otelcol.receiver.vcenter` accepts metrics from a
+`otelcol.receiver.vcenter` accepts metrics from a 
vCenter or ESXi host running VMware vSphere APIs and forwards them to other
`otelcol.*` components.

@@ -58,15 +58,14 @@ otelcol.receiver.vcenter "LABEL" {

`otelcol.receiver.vcenter` supports the following arguments:

-
-Name | Type | Description | Default | Required
---- | ---- | ----------- | ------- | --------
-`endpoint` | `string` | Endpoint to a vCenter Server or ESXi host which has the SDK path enabled. | | yes
-`username` | `string` | Username to use for authentication. | | yes
-`password` | `string` | Password to use for authentication. | | yes
-`collection_interval` | `duration` | Defines how often to collect metrics. | `"1m"` | no
-`initial_delay` | `duration` | Defines how long this receiver waits before starting. | `"1s"` | no
-`timeout` | `duration` | Defines the timeout for the underlying HTTP client. | `"0s"` | no
+| Name | Type | Description | Default | Required |
+| --------------------- | ---------- | ------------------------------------------------------------------------- | ------- | -------- |
+| `endpoint` | `string` | Endpoint to a vCenter Server or ESXi host which has the SDK path enabled. | | yes |
+| `username` | `string` | Username to use for authentication. | | yes |
+| `password` | `string` | Password to use for authentication. | | yes |
+| `collection_interval` | `duration` | Defines how often to collect metrics. | `"1m"` | no |
+| `initial_delay` | `duration` | Defines how long this receiver waits before starting. | `"1s"` | no |
+| `timeout` | `duration` | Defines the timeout for the underlying HTTP client. | `"0s"` | no |

`endpoint` has the format `<protocol>://<hostname>`. For example, `https://vcsa.hostname.localnet`. 
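+
+As a sketch only, with a placeholder hostname and placeholder credentials, a
+minimal configuration that collects vSphere metrics every two minutes might
+look like this:
+
+```river
+otelcol.receiver.vcenter "example" {
+  endpoint = "https://vcsa.hostname.localnet"
+  username = "otelu"
+  password = "password123"
+
+  // Override the default one-minute collection interval.
+  collection_interval = "2m"
+
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+  }
+}
+```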
@@ -75,13 +74,13 @@ Name | Type | Description | Default | Required The following blocks are supported inside the definition of `otelcol.receiver.vcenter`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -tls | [tls][] | Configures TLS for the HTTP client. | no -metrics | [metrics][] | Configures which metrics will be sent to downstream components. | no -resource_attributes | [resource_attributes][] | Configures resource attributes for metrics sent to downstream components. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no -output | [output][] | Configures where to send received telemetry data. | yes +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------------- | -------------------------------------------------------------------------- | -------- | +| tls | [tls][] | Configures TLS for the HTTP client. | no | +| metrics | [metrics][] | Configures which metrics will be sent to downstream components. | no | +| resource_attributes | [resource_attributes][] | Configures resource attributes for metrics sent to downstream components. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | +| output | [output][] | Configures where to send received telemetry data. | yes | [tls]: #tls-block [debug_metrics]: #debug_metrics-block @@ -98,77 +97,75 @@ isn't provided, TLS won't be used for connections to the server. ### metrics block -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`vcenter.cluster.cpu.effective` | [metric][] | Enables the `vcenter.cluster.cpu.effective` metric. | `true` | no -`vcenter.cluster.cpu.usage` | [metric][] | Enables the `vcenter.cluster.cpu.usage` metric. | `true` | no -`vcenter.cluster.host.count` | [metric][] | Enables the `vcenter.cluster.host.count` metric. | `true` | no -`vcenter.cluster.memory.effective` | [metric][] | Enables the `vcenter.cluster.memory.effective` metric. | `true` | no -`vcenter.cluster.memory.limit` | [metric][] | Enables the `vcenter.cluster.memory.limit` metric. | `true` | no -`vcenter.cluster.memory.used` | [metric][] | Enables the `vcenter.cluster.memory.used` metric. | `true` | no -`vcenter.cluster.vm.count` | [metric][] | Enables the `vcenter.cluster.vm.count` metric. | `true` | no -`vcenter.datastore.disk.usage` | [metric][] | Enables the `vcenter.datastore.disk.usage` metric. | `true` | no -`vcenter.datastore.disk.utilization` | [metric][] | Enables the `vcenter.datastore.disk.utilization` metric. | `true` | no -`vcenter.host.cpu.usage` | [metric][] | Enables the `vcenter.host.cpu.usage` metric. | `true` | no -`vcenter.host.cpu.utilization` | [metric][] | Enables the `vcenter.host.cpu.utilization` metric. | `true` | no -`vcenter.host.disk.latency.avg` | [metric][] | Enables the `vcenter.host.disk.latency.avg` metric. | `true` | no -`vcenter.host.disk.latency.max` | [metric][] | Enables the `vcenter.host.disk.latency.max` metric. | `true` | no -`vcenter.host.disk.throughput` | [metric][] | Enables the `vcenter.host.disk.throughput` metric. | `true` | no -`vcenter.host.memory.usage` | [metric][] | Enables the `vcenter.host.memory.usage` metric. | `true` | no -`vcenter.host.memory.utilization` | [metric][] | Enables the `vcenter.host.memory.utilization` metric. | `true` | no -`vcenter.host.network.packet.count` | [metric][] | Enables the `vcenter.host.network.packet.count` metric. 
| `true` | no -`vcenter.host.network.packet.errors` | [metric][] | Enables the `vcenter.host.network.packet.errors` metric. | `true` | no -`vcenter.host.network.throughput` | [metric][] | Enables the `vcenter.host.network.throughput` metric. | `true` | no -`vcenter.host.network.usage` | [metric][] | Enables the `vcenter.host.network.usage` metric. | `true` | no -`vcenter.resource_pool.cpu.shares` | [metric][] | Enables the `vcenter.resource_pool.cpu.shares` metric. | `true` | no -`vcenter.resource_pool.cpu.usage` | [metric][] | Enables the `vcenter.resource_pool.cpu.usage` metric. | `true` | no -`vcenter.resource_pool.memory.shares` | [metric][] | Enables the `vcenter.resource_pool.memory.shares` metric. | `true` | no -`vcenter.resource_pool.memory.usage` | [metric][] | Enables the `vcenter.resource_pool.memory.usage` metric. | `true` | no -`vcenter.vm.cpu.usage` | [metric][] | Enables the `vcenter.vm.cpu.usage` metric. | `true` | no -`vcenter.vm.cpu.utilization` | [metric][] | Enables the `vcenter.vm.cpu.utilization` metric. | `true` | no -`vcenter.vm.disk.latency.avg` | [metric][] | Enables the `vcenter.vm.disk.latency.avg` metric. | `true` | no -`vcenter.vm.disk.latency.max` | [metric][] | Enables the `vcenter.vm.disk.latency.max` metric. | `true` | no -`vcenter.vm.disk.throughput` | [metric][] | Enables the `vcenter.vm.disk.throughput` metric. | `true` | no -`vcenter.vm.disk.usage` | [metric][] | Enables the `vcenter.vm.disk.usage` metric. | `true` | no -`vcenter.vm.disk.utilization` | [metric][] | Enables the `vcenter.vm.disk.utilization` metric. | `true` | no -`vcenter.vm.memory.ballooned` | [metric][] | Enables the `vcenter.vm.memory.ballooned` metric. | `true` | no -`vcenter.vm.memory.swapped` | [metric][] | Enables the `vcenter.vm.memory.swapped` metric. | `true` | no -`vcenter.vm.memory.swapped_ssd` | [metric][] | Enables the `vcenter.vm.memory.swapped_ssd` metric. | `true` | no -`vcenter.vm.memory.usage` | [metric][] | Enables the `vcenter.vm.memory.usage` metric. | `true` | no -`vcenter.vm.memory.utilization` | [metric][] | Enables the `vcenter.vm.memory.utilization` metric. | `false` | no -`vcenter.vm.network.packet.count` | [metric][] | Enables the `vcenter.vm.network.packet.count` metric. | `true` | no -`vcenter.vm.network.throughput` | [metric][] | Enables the `vcenter.vm.network.throughput` metric. | `true` | no -`vcenter.vm.network.usage` | [metric][] | Enables the `vcenter.vm.network.usage` metric. | `true` | no +| Name | Type | Description | Default | Required | +| ------------------------------------- | ---------- | --------------------------------------------------------- | ------- | -------- | +| `vcenter.cluster.cpu.effective` | [metric][] | Enables the `vcenter.cluster.cpu.effective` metric. | `true` | no | +| `vcenter.cluster.cpu.usage` | [metric][] | Enables the `vcenter.cluster.cpu.usage` metric. | `true` | no | +| `vcenter.cluster.host.count` | [metric][] | Enables the `vcenter.cluster.host.count` metric. | `true` | no | +| `vcenter.cluster.memory.effective` | [metric][] | Enables the `vcenter.cluster.memory.effective` metric. | `true` | no | +| `vcenter.cluster.memory.limit` | [metric][] | Enables the `vcenter.cluster.memory.limit` metric. | `true` | no | +| `vcenter.cluster.memory.used` | [metric][] | Enables the `vcenter.cluster.memory.used` metric. | `true` | no | +| `vcenter.cluster.vm.count` | [metric][] | Enables the `vcenter.cluster.vm.count` metric. 
| `true` | no | +| `vcenter.datastore.disk.usage` | [metric][] | Enables the `vcenter.datastore.disk.usage` metric. | `true` | no | +| `vcenter.datastore.disk.utilization` | [metric][] | Enables the `vcenter.datastore.disk.utilization` metric. | `true` | no | +| `vcenter.host.cpu.usage` | [metric][] | Enables the `vcenter.host.cpu.usage` metric. | `true` | no | +| `vcenter.host.cpu.utilization` | [metric][] | Enables the `vcenter.host.cpu.utilization` metric. | `true` | no | +| `vcenter.host.disk.latency.avg` | [metric][] | Enables the `vcenter.host.disk.latency.avg` metric. | `true` | no | +| `vcenter.host.disk.latency.max` | [metric][] | Enables the `vcenter.host.disk.latency.max` metric. | `true` | no | +| `vcenter.host.disk.throughput` | [metric][] | Enables the `vcenter.host.disk.throughput` metric. | `true` | no | +| `vcenter.host.memory.usage` | [metric][] | Enables the `vcenter.host.memory.usage` metric. | `true` | no | +| `vcenter.host.memory.utilization` | [metric][] | Enables the `vcenter.host.memory.utilization` metric. | `true` | no | +| `vcenter.host.network.packet.count` | [metric][] | Enables the `vcenter.host.network.packet.count` metric. | `true` | no | +| `vcenter.host.network.packet.errors` | [metric][] | Enables the `vcenter.host.network.packet.errors` metric. | `true` | no | +| `vcenter.host.network.throughput` | [metric][] | Enables the `vcenter.host.network.throughput` metric. | `true` | no | +| `vcenter.host.network.usage` | [metric][] | Enables the `vcenter.host.network.usage` metric. | `true` | no | +| `vcenter.resource_pool.cpu.shares` | [metric][] | Enables the `vcenter.resource_pool.cpu.shares` metric. | `true` | no | +| `vcenter.resource_pool.cpu.usage` | [metric][] | Enables the `vcenter.resource_pool.cpu.usage` metric. | `true` | no | +| `vcenter.resource_pool.memory.shares` | [metric][] | Enables the `vcenter.resource_pool.memory.shares` metric. | `true` | no | +| `vcenter.resource_pool.memory.usage` | [metric][] | Enables the `vcenter.resource_pool.memory.usage` metric. | `true` | no | +| `vcenter.vm.cpu.usage` | [metric][] | Enables the `vcenter.vm.cpu.usage` metric. | `true` | no | +| `vcenter.vm.cpu.utilization` | [metric][] | Enables the `vcenter.vm.cpu.utilization` metric. | `true` | no | +| `vcenter.vm.disk.latency.avg` | [metric][] | Enables the `vcenter.vm.disk.latency.avg` metric. | `true` | no | +| `vcenter.vm.disk.latency.max` | [metric][] | Enables the `vcenter.vm.disk.latency.max` metric. | `true` | no | +| `vcenter.vm.disk.throughput` | [metric][] | Enables the `vcenter.vm.disk.throughput` metric. | `true` | no | +| `vcenter.vm.disk.usage` | [metric][] | Enables the `vcenter.vm.disk.usage` metric. | `true` | no | +| `vcenter.vm.disk.utilization` | [metric][] | Enables the `vcenter.vm.disk.utilization` metric. | `true` | no | +| `vcenter.vm.memory.ballooned` | [metric][] | Enables the `vcenter.vm.memory.ballooned` metric. | `true` | no | +| `vcenter.vm.memory.swapped` | [metric][] | Enables the `vcenter.vm.memory.swapped` metric. | `true` | no | +| `vcenter.vm.memory.swapped_ssd` | [metric][] | Enables the `vcenter.vm.memory.swapped_ssd` metric. | `true` | no | +| `vcenter.vm.memory.usage` | [metric][] | Enables the `vcenter.vm.memory.usage` metric. | `true` | no | +| `vcenter.vm.memory.utilization` | [metric][] | Enables the `vcenter.vm.memory.utilization` metric. | `false` | no | +| `vcenter.vm.network.packet.count` | [metric][] | Enables the `vcenter.vm.network.packet.count` metric. 
| `true` | no |
+| `vcenter.vm.network.throughput` | [metric][] | Enables the `vcenter.vm.network.throughput` metric. | `true` | no |
+| `vcenter.vm.network.usage` | [metric][] | Enables the `vcenter.vm.network.usage` metric. | `true` | no |

[metric]: #metric-block

#### metric block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`enabled` | `boolean` | Whether to enable the metric. | `true` | no
-
+| Name | Type | Description | Default | Required |
+| --------- | --------- | ----------------------------- | ------- | -------- |
+| `enabled` | `boolean` | Whether to enable the metric. | `true` | no |

### resource_attributes block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`vcenter.cluster.name` | [resource_attribute][] | Enables the `vcenter.cluster.name` resource attribute. | `true` | no
-`vcenter.datastore.name` | [resource_attribute][] | Enables the `vcenter.cluster.resource_pool` resource attribute. | `true` | no
-`vcenter.host.name` | [resource_attribute][] | Enables the `vcenter.host.name` resource attribute. | `true` | no
-`vcenter.resource_pool.inventory_path` | [resource_attribute][] | Enables the `vcenter.resource_pool.inventory_path` resource attribute. | `true` | no
-`vcenter.resource_pool.name` | [resource_attribute][] | Enables the `vcenter.resource_pool.name` resource attribute. | `true` | no
-`vcenter.vm.id` | [resource_attribute][] | Enables the `vcenter.vm.id` resource attribute. | `true` | no
-`vcenter.vm.name` | [resource_attribute][] | Enables the `vcenter.vm.name` resource attribute. | `true` | no
+| Name | Type | Description | Default | Required |
+| -------------------------------------- | ---------------------- | ----------------------------------------------------------------------- | ------- | -------- |
+| `vcenter.cluster.name` | [resource_attribute][] | Enables the `vcenter.cluster.name` resource attribute. | `true` | no |
+| `vcenter.datastore.name` | [resource_attribute][] | Enables the `vcenter.datastore.name` resource attribute. | `true` | no |
+| `vcenter.host.name` | [resource_attribute][] | Enables the `vcenter.host.name` resource attribute. | `true` | no |
+| `vcenter.resource_pool.inventory_path` | [resource_attribute][] | Enables the `vcenter.resource_pool.inventory_path` resource attribute. | `true` | no |
+| `vcenter.resource_pool.name` | [resource_attribute][] | Enables the `vcenter.resource_pool.name` resource attribute. | `true` | no |
+| `vcenter.vm.id` | [resource_attribute][] | Enables the `vcenter.vm.id` resource attribute. | `true` | no |
+| `vcenter.vm.name` | [resource_attribute][] | Enables the `vcenter.vm.name` resource attribute. | `true` | no |

[resource_attribute]: #resource_attribute-block

#### resource_attribute block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`enabled` | `boolean` | Whether to enable the resource attribute. | `true` | no
-
+| Name | Type | Description | Default | Required |
+| --------- | --------- | ----------------------------------------- | ------- | -------- |
+| `enabled` | `boolean` | Whether to enable the resource attribute. 
| `true` | no | ### debug_metrics block @@ -229,10 +226,9 @@ otelcol.exporter.otlp "default" { - Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md b/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md index 077aae622d14..669e3da30b12 100644 --- a/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md +++ b/docs/sources/flow/reference/components/otelcol.receiver.zipkin.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.zipkin/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.zipkin/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.zipkin/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.zipkin/ + - /docs/grafana-cloud/agent/flow/reference/components/otelcol.receiver.zipkin/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.receiver.zipkin/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.receiver.zipkin/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.receiver.zipkin/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/otelcol.receiver.zipkin/ description: Learn about otelcol.receiver.zipkin title: otelcol.receiver.zipkin @@ -35,12 +35,12 @@ otelcol.receiver.zipkin "LABEL" { `otelcol.receiver.zipkin` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`parse_string_tags` | `bool` | Parse string tags and binary annotations into non-string types. | `false` | no -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:9411"` | no -`max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no -`include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no +| Name | Type | Description | Default | Required | +| ----------------------- | --------- | --------------------------------------------------------------- | ---------------- | -------- | +| `parse_string_tags` | `bool` | Parse string tags and binary annotations into non-string types. | `false` | no | +| `endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:9411"` | no | +| `max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | If `parse_string_tags` is `true`, string tags and binary annotations are converted to `int`, `bool`, and `float` if possible. String tags and binary @@ -51,12 +51,12 @@ annotations that cannot be converted remain unchanged. The following blocks are supported inside the definition of `otelcol.receiver.zipkin`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -tls | [tls][] | Configures TLS for the HTTP server. | no -cors | [cors][] | Configures CORS for the HTTP server. 
| no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no -output | [output][] | Configures where to send received traces. | yes +| Hierarchy | Block | Description | Required | +| ------------- | ----------------- | -------------------------------------------------------------------------- | -------- | +| tls | [tls][] | Configures TLS for the HTTP server. | no | +| cors | [cors][] | Configures CORS for the HTTP server. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no | +| output | [output][] | Configures where to send received traces. | yes | The `>` symbol indicates deeper levels of nesting. For example, `grpc > tls` refers to a `tls` block defined inside a `grpc` block. @@ -79,19 +79,19 @@ The `cors` block configures CORS settings for an HTTP server. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no -`allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no -`max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | -------------------------------------------------------- | ---------------------- | -------- | +| `allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no | +| `allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no | +| `max_age` | `number` | Configures the `Access-Control-Max-Age` response header. | | no | The `allowed_headers` argument specifies which headers are acceptable from a CORS request. The following headers are always implicitly allowed: -* `Accept` -* `Accept-Language` -* `Content-Type` -* `Content-Language` +- `Accept` +- `Accept-Language` +- `Content-Type` +- `Content-Language` If `allowed_headers` includes `"*"`, all headers are permitted. @@ -143,6 +143,7 @@ otelcol.exporter.otlp "default" { } } ``` + ## Compatible components @@ -151,10 +152,9 @@ otelcol.exporter.otlp "default" { - Components that export [OpenTelemetry `otelcol.Consumer`](../../compatibility/#opentelemetry-otelcolconsumer-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
{{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/prometheus.exporter.apache.md b/docs/sources/flow/reference/components/prometheus.exporter.apache.md index 5bbccf271d13..64cc0ef23f74 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.apache.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.apache.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.apache/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.apache/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.apache/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.apache/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.apache/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.apache/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.apache/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.apache/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.apache/ description: Learn about prometheus.exporter.apache title: prometheus.exporter.apache diff --git a/docs/sources/flow/reference/components/prometheus.exporter.azure.md b/docs/sources/flow/reference/components/prometheus.exporter.azure.md index 3c014f6919c4..88a4b0780d41 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.azure.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.azure.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.azure/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.azure/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.azure/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.azure/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.azure/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.azure/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.azure/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.azure/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.azure/ description: Learn about prometheus.exporter.azure title: prometheus.exporter.azure @@ -11,7 +11,7 @@ title: prometheus.exporter.azure # prometheus.exporter.azure -The `prometheus.exporter.azure` component embeds [`azure-metrics-exporter`](https://github.com/webdevops/azure-metrics-exporter) to collect metrics from [Azure Monitor](https://azure.microsoft.com/en-us/products/monitor). +The `prometheus.exporter.azure` component embeds [`azure-metrics-exporter`](https://github.com/webdevops/azure-metrics-exporter) to collect metrics from [Azure Monitor](https://azure.microsoft.com/en-us/products/monitor). The exporter supports all metrics defined by Azure Monitor. You can find the complete list of available metrics in the [Azure Monitor documentation](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported). 
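For orientation, here is a minimal configuration sketch. The subscription ID, resource type, and metric names below are illustrative placeholders only; the full set of arguments is documented in the table that follows.

```
prometheus.exporter.azure "example" {
  // Replace these placeholder values with your own subscription ID,
  // Azure resource type, and the Azure Monitor metrics to collect.
  subscriptions = ["00000000-0000-0000-0000-000000000000"]
  resource_type = "Microsoft.Storage/storageAccounts"
  metrics       = ["Availability", "Egress", "Ingress"]
}
```

Only `subscriptions`, `resource_type`, and `metrics` are required; every other argument falls back to the default listed below.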
Metrics for this integration are exposed with the template `azure_{type}_{metric}_{aggregation}_{unit}` by default. As an example,
@@ -62,22 +62,22 @@ prometheus.exporter.azure LABEL {
You can use the following arguments to configure the exporter's behavior.
Omitted fields take their default values.

-| Name | Type | Description | Default | Required |
-|-------------------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|----------|
-| `subscriptions` | `list(string)` | List of subscriptions to scrape metrics from. | | yes |
-| `resource_type` | `string` | The Azure Resource Type to scrape metrics for. | | yes |
-| `metrics` | `list(string)` | The metrics to scrape from resources. | | yes |
+| Name | Type | Description | Default | Required |
+| ----------------------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | -------- |
+| `subscriptions` | `list(string)` | List of subscriptions to scrape metrics from. | | yes |
+| `resource_type` | `string` | The Azure Resource Type to scrape metrics for. | | yes |
+| `metrics` | `list(string)` | The metrics to scrape from resources. | | yes |
| `resource_graph_query_filter` | `string` | The [Kusto query][] filter to apply when searching for resources. Can't be used if `regions` is set. | | no |
| `regions` | `list(string)` | The list of regions for gathering metrics and enables gathering metrics for all resources in the subscription. Can't be used if `resource_graph_query_filter` is set. | | no |
-| `metric_aggregations` | `list(string)` | Aggregations to apply for the metrics produced. | | no |
-| `timespan` | `string` | [ISO8601 Duration][] over which the metrics are being queried. | `"PT1M"` (1 minute) | no |
-| `included_dimensions` | `list(string)` | List of dimensions to include on the final metrics. | | no |
-| `included_resource_tags` | `list(string)` | List of resource tags to include on the final metrics. | `["owner"]` | no |
-| `metric_namespace` | `string` | Namespace for `resource_type` which have multiple levels of metrics. | | no |
-| `azure_cloud_environment` | `string` | Name of the cloud environment to connect to. | `"azurecloud"` | no |
-| `metric_name_template` | `string` | Metric template used to expose the metrics. | `"azure_{type}_{metric}_{aggregation}_{unit}"` | no |
-| `metric_help_template` | `string` | Description of the metric. | `"Azure metric {metric} for {type} with aggregation {aggregation} as {unit}"` | no |
-| `validate_dimensions` | `bool` | Enable dimension validation in the azure sdk | `false` | no |
+| `metric_aggregations` | `list(string)` | Aggregations to apply for the metrics produced. | | no |
+| `timespan` | `string` | [ISO8601 Duration][] over which the metrics are being queried. | `"PT1M"` (1 minute) | no |
+| `included_dimensions` | `list(string)` | List of dimensions to include on the final metrics. | | no |
+| `included_resource_tags` | `list(string)` | List of resource tags to include on the final metrics. | `["owner"]` | no |
+| `metric_namespace` | `string` | Namespace for `resource_type` values that have multiple levels of metrics. 
| | no |
+| `azure_cloud_environment` | `string` | Name of the cloud environment to connect to. | `"azurecloud"` | no |
+| `metric_name_template` | `string` | Metric template used to expose the metrics. | `"azure_{type}_{metric}_{aggregation}_{unit}"` | no |
+| `metric_help_template` | `string` | Description of the metric. | `"Azure metric {metric} for {type} with aggregation {aggregation} as {unit}"` | no |
+| `validate_dimensions` | `bool` | Enables dimension validation in the Azure SDK. | `false` | no |

The list of available `resource_type` values and their corresponding `metrics` can be found in [Azure Monitor essentials][].

@@ -93,7 +93,7 @@ Tags in `included_resource_tags` will be added as labels with the name `tag_}}

diff --git a/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md b/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md
index c40f951d9e6c..7c721cb05930 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.cadvisor.md
@@ -1,15 +1,16 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.cadvisor/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.cadvisor/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.cadvisor/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.cadvisor/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.cadvisor/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.cadvisor/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.cadvisor/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.cadvisor/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.cadvisor/
description: Learn about the prometheus.exporter.cadvisor
title: prometheus.exporter.cadvisor
---

# prometheus.exporter.cadvisor
+
The `prometheus.exporter.cadvisor` component exposes container metrics using
[cAdvisor](https://github.com/google/cadvisor).

@@ -21,29 +22,30 @@ prometheus.exporter.cadvisor "LABEL" {
```

## Arguments
+
The following arguments can be used to configure the exporter's behavior.
All arguments are optional. Omitted fields take their default values.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`store_container_labels` | `bool` | Whether to convert container labels and environment variables into labels on Prometheus metrics for each container. | `true` | no
-`allowlisted_container_labels` | `list(string)` | Allowlist of container labels to convert to Prometheus labels. | `[]` | no
-`env_metadata_allowlist` | `list(string)` | Allowlist of environment variable keys matched with a specified prefix that needs to be collected for containers. | `[]` | no
-`raw_cgroup_prefix_allowlist` | `list(string)` | List of cgroup path prefixes that need to be collected, even when docker_only is specified. | `[]` | no
-`perf_events_config` | `string` | Path to a JSON file containing the configuration of perf events to measure. | `""` | no
-`resctrl_interval` | `duration` | Interval to update resctrl mon groups. 
| `0` | no -`disabled_metrics` | `list(string)` | List of metrics to be disabled which, if set, overrides the default disabled metrics. | (see below) | no -`enabled_metrics` | `list(string)` | List of metrics to be enabled which, if set, overrides disabled_metrics. | `[]` | no -`storage_duration` | `duration` | Length of time to keep data stored in memory. | `2m` | no -`containerd_host` | `string` | Containerd endpoint. | `/run/containerd/containerd.sock` | no -`containerd_namespace` | `string` | Containerd namespace. | `k8s.io` | no -`docker_host` | `string` | Docker endpoint. | `unix:///var/run/docker.sock` | no -`use_docker_tls` | `bool` | Use TLS to connect to docker. | `false` | no -`docker_tls_cert` | `string` | Path to client certificate for TLS connection to docker. | `cert.pem` | no -`docker_tls_key` | `string` | Path to private key for TLS connection to docker. | `key.pem` | no -`docker_tls_ca` | `string` | Path to a trusted CA for TLS connection to docker. | `ca.pem` | no -`docker_only` | `bool` | Only report docker containers in addition to root stats. | `false` | no -`disable_root_cgroup_stats` | `bool` | Disable collecting root Cgroup stats. | `false` | no +| Name | Type | Description | Default | Required | +| ------------------------------ | -------------- | ------------------------------------------------------------------------------------------------------------------- | --------------------------------- | -------- | +| `store_container_labels` | `bool` | Whether to convert container labels and environment variables into labels on Prometheus metrics for each container. | `true` | no | +| `allowlisted_container_labels` | `list(string)` | Allowlist of container labels to convert to Prometheus labels. | `[]` | no | +| `env_metadata_allowlist` | `list(string)` | Allowlist of environment variable keys matched with a specified prefix that needs to be collected for containers. | `[]` | no | +| `raw_cgroup_prefix_allowlist` | `list(string)` | List of cgroup path prefixes that need to be collected, even when docker_only is specified. | `[]` | no | +| `perf_events_config` | `string` | Path to a JSON file containing the configuration of perf events to measure. | `""` | no | +| `resctrl_interval` | `duration` | Interval to update resctrl mon groups. | `0` | no | +| `disabled_metrics` | `list(string)` | List of metrics to be disabled which, if set, overrides the default disabled metrics. | (see below) | no | +| `enabled_metrics` | `list(string)` | List of metrics to be enabled which, if set, overrides disabled_metrics. | `[]` | no | +| `storage_duration` | `duration` | Length of time to keep data stored in memory. | `2m` | no | +| `containerd_host` | `string` | Containerd endpoint. | `/run/containerd/containerd.sock` | no | +| `containerd_namespace` | `string` | Containerd namespace. | `k8s.io` | no | +| `docker_host` | `string` | Docker endpoint. | `unix:///var/run/docker.sock` | no | +| `use_docker_tls` | `bool` | Use TLS to connect to docker. | `false` | no | +| `docker_tls_cert` | `string` | Path to client certificate for TLS connection to docker. | `cert.pem` | no | +| `docker_tls_key` | `string` | Path to private key for TLS connection to docker. | `key.pem` | no | +| `docker_tls_ca` | `string` | Path to a trusted CA for TLS connection to docker. | `ca.pem` | no | +| `docker_only` | `bool` | Only report docker containers in addition to root stats. | `false` | no | +| `disable_root_cgroup_stats` | `bool` | Disable collecting root Cgroup stats. 
| `false` | no | For `allowlisted_container_labels` to take effect, `store_container_labels` must be set to `false`. @@ -55,7 +57,8 @@ A `resctrl_interval` of `0` disables updating mon groups. The values for `enabled_metrics` and `disabled_metrics` do not correspond to Prometheus metrics, but to kinds of metrics that should (or shouldn't) be -exposed. The full list of values that can be used is +exposed. The full list of values that can be used is + ``` "cpu", "sched", "percpu", "memory", "memory_numa", "cpuLoad", "diskIO", "disk", "network", "tcp", "advtcp", "udp", "app", "process", "hugetlb", "perf_event", diff --git a/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md b/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md index 4caae767f321..392a693fde3d 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.cloudwatch.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.cloudwatch/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.cloudwatch/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.cloudwatch/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.cloudwatch/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.cloudwatch/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.cloudwatch/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.cloudwatch/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.cloudwatch/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.cloudwatch/ description: Learn about prometheus.exporter.cloudwatch title: prometheus.exporter.cloudwatch @@ -138,7 +138,7 @@ Omitted fields take their default values. You can use the following blocks in`prometheus.exporter.cloudwatch` to configure collector-specific options: | Hierarchy | Name | Description | Required | -|--------------------|------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|----------| +| ------------------ | ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | | discovery | [discovery][] | Configures a discovery job. Multiple jobs can be configured. | no\* | | discovery > role | [role][] | Configures the IAM roles the job should assume to scrape metrics. Defaults to the role configured in the environment {{< param "PRODUCT_NAME" >}} runs on. | no | | discovery > metric | [metric][] | Configures the list of metrics the job should scrape. Multiple metrics can be defined inside one job. 
| yes | diff --git a/docs/sources/flow/reference/components/prometheus.exporter.consul.md b/docs/sources/flow/reference/components/prometheus.exporter.consul.md index a8480208ed4d..003445a9e47c 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.consul.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.consul.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.consul/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.consul/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.consul/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.consul/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.consul/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.consul/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.consul/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.consul/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.consul/ description: Learn about prometheus.exporter.consul title: prometheus.exporter.consul @@ -26,21 +26,21 @@ prometheus.exporter.consul "LABEL" { The following arguments can be used to configure the exporter's behavior. All arguments are optional. Omitted fields take their default values. -| Name | Type | Description | Default | Required | -| -------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- | -------- | +| Name | Type | Description | Default | Required | +| -------------------------- | ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- | -------- | | `server` | `string` | Address (host and port) of the Consul instance we should connect to. This could be a local {{< param "PRODUCT_ROOT_NAME" >}} (localhost:8500, for instance), or the address of a Consul server. | `http://localhost:8500` | no | -| `ca_file` | `string` | File path to a PEM-encoded certificate authority used to validate the authenticity of a server certificate. | | no | -| `cert_file` | `string` | File path to a PEM-encoded certificate used with the private key to verify the exporter's authenticity. | | no | -| `key_file` | `string` | File path to a PEM-encoded private key used with the certificate to verify the exporter's authenticity. | | no | -| `server_name` | `string` | When provided, this overrides the hostname for the TLS certificate. It can be used to ensure that the certificate name matches the hostname we declare. | | no | -| `timeout` | `duration` | Timeout on HTTP requests to consul. | 500ms | no | -| `insecure_skip_verify` | `bool` | Disable TLS host verification. | false | no | -| `concurrent_request_limit` | `string` | Limit the maximum number of concurrent requests to consul, 0 means no limit. | | no | -| `allow_stale` | `bool` | Allows any Consul server (non-leader) to service a read. | `true` | no | -| `require_consistent` | `bool` | Forces the read to be fully consistent. 
| | no |
-| `kv_prefix` | `string` | Prefix under which to look for KV pairs. | | no |
-| `kv_filter` | `string` | Only store keys that match this regex pattern. | `.*` | no |
-| `generate_health_summary` | `bool` | Collects information about each registered service and exports `consul_catalog_service_node_healthy`. | `true` | no |
+| `ca_file` | `string` | File path to a PEM-encoded certificate authority used to validate the authenticity of a server certificate. | | no |
+| `cert_file` | `string` | File path to a PEM-encoded certificate used with the private key to verify the exporter's authenticity. | | no |
+| `key_file` | `string` | File path to a PEM-encoded private key used with the certificate to verify the exporter's authenticity. | | no |
+| `server_name` | `string` | When provided, this overrides the hostname for the TLS certificate. It can be used to ensure that the certificate name matches the hostname you declare. | | no |
+| `timeout` | `duration` | Timeout on HTTP requests to Consul. | `500ms` | no |
+| `insecure_skip_verify` | `bool` | Disable TLS host verification. | `false` | no |
+| `concurrent_request_limit` | `string` | Limit the maximum number of concurrent requests to Consul. A value of 0 means no limit. | | no |
+| `allow_stale` | `bool` | Allows any Consul server (non-leader) to service a read. | `true` | no |
+| `require_consistent` | `bool` | Forces the read to be fully consistent. | | no |
+| `kv_prefix` | `string` | Prefix under which to look for KV pairs. | | no |
+| `kv_filter` | `string` | Only store keys that match this regex pattern. | `.*` | no |
+| `generate_health_summary` | `bool` | Collects information about each registered service and exports `consul_catalog_service_node_healthy`. | `true` | no |

## Exported fields

diff --git a/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md b/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md
index 80fdd881ae66..5ba4ba84c01b 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.dnsmasq.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.dnsmasq/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.dnsmasq/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.dnsmasq/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.dnsmasq/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.dnsmasq/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.dnsmasq/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.dnsmasq/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.dnsmasq/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.dnsmasq/
description: Learn about prometheus.exporter.dnsmasq
title: prometheus.exporter.dnsmasq
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md b/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md
index 487ce82eabf0..7ccbc1055074 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.elasticsearch.md
@@ -1,9 +1,9 @@
---
aliases:
-- 
/docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.elasticsearch/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.elasticsearch/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.elasticsearch/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.elasticsearch/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.elasticsearch/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.elasticsearch/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.elasticsearch/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.elasticsearch/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.elasticsearch/ description: Learn about prometheus.exporter.elasticsearch title: prometheus.exporter.elasticsearch @@ -61,9 +61,9 @@ Omitted fields take their default values. The following blocks are supported inside the definition of `prometheus.exporter.elasticsearch`: -| Hierarchy | Block | Description | Required | -| ------------------- | ----------------- | -------------------------------------------------------- | -------- | -| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| Hierarchy | Block | Description | Required | +| ---------- | -------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | [basic_auth]: #basic_auth-block diff --git a/docs/sources/flow/reference/components/prometheus.exporter.gcp.md b/docs/sources/flow/reference/components/prometheus.exporter.gcp.md index 017542a0a864..2937f5391786 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.gcp.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.gcp.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.gcp/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.gcp/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.gcp/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.gcp/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.gcp/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.gcp/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.gcp/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.gcp/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.gcp/ description: Learn about prometheus.exporter.gcp title: prometheus.exporter.gcp diff --git a/docs/sources/flow/reference/components/prometheus.exporter.github.md b/docs/sources/flow/reference/components/prometheus.exporter.github.md index 10b641a6e612..7ae30ffc90b3 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.github.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.github.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.github/ -- 
/docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.github/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.github/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.github/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.github/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.github/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.github/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.github/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.github/ description: Learn about prometheus.exporter.github title: prometheus.exporter.github diff --git a/docs/sources/flow/reference/components/prometheus.exporter.kafka.md b/docs/sources/flow/reference/components/prometheus.exporter.kafka.md index 23fe16550947..dc145a20404e 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.kafka.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.kafka.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.kafka/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.kafka/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.kafka/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.kafka/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.kafka/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.kafka/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.kafka/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.kafka/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.kafka/ description: Learn about prometheus.exporter.kafka title: prometheus.exporter.kafka diff --git a/docs/sources/flow/reference/components/prometheus.exporter.memcached.md b/docs/sources/flow/reference/components/prometheus.exporter.memcached.md index 8bf7d6e54fdc..ed8d5eb2ddd0 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.memcached.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.memcached.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.memcached/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.memcached/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.memcached/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.memcached/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.memcached/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.memcached/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.memcached/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.memcached/ canonical: 
https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.memcached/ description: Learn about prometheus.exporter.memcached title: prometheus.exporter.memcached diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md b/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md index e6231dad9dbe..553cb82a52bf 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.mongodb.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.mongodb/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.mongodb/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.mongodb/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.mongodb/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.mongodb/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.mongodb/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.mongodb/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.mongodb/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.mongodb/ description: Learn about prometheus.exporter.mongodb title: prometheus.exporter.mongodb diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mssql.md b/docs/sources/flow/reference/components/prometheus.exporter.mssql.md index ef7e70859100..f83855182ae3 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.mssql.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.mssql.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.mssql/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.mssql/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.mssql/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.mssql/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.mssql/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.mssql/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.mssql/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.mssql/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.mssql/ description: Learn about prometheus.exporter.mssql title: prometheus.exporter.mssql @@ -52,6 +52,7 @@ If specified, the `query_config` argument must be a YAML document as string defi See [sql_exporter](https://github.com/burningalchemist/sql_exporter#collectors) for details on how to create a configuration. ### Authentication + By default, the `USERNAME` and `PASSWORD` used within the `connection_string` argument corresponds to a SQL Server username and password. If {{< param "PRODUCT_ROOT_NAME" >}} is running in the same Windows domain as the SQL Server, then you can use the parameter `authenticator=winsspi` within the `connection_string` to authenticate without any additional credentials. 
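As a minimal sketch of that setup, a component using integrated Windows authentication could look like the following. The host name and port are hypothetical placeholders; `winsspi` authenticates as the Windows account running the process, so the connection string carries no credentials.

```
prometheus.exporter.mssql "example" {
  // "sqlhost.example.com" and port 1433 are placeholders; winsspi uses
  // the process's Windows identity, so no username or password is set.
  connection_string = "sqlserver://@sqlhost.example.com:1433?authenticator=winsspi"
}
```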
@@ -60,7 +61,7 @@ If {{< param "PRODUCT_ROOT_NAME" >}} is running in the same Windows domain as th sqlserver://@:?authenticator=winsspi ``` -If you want to use Windows credentials to authenticate, instead of SQL Server credentials, you can use the parameter `authenticator=ntlm` within the `connection_string`. +If you want to use Windows credentials to authenticate, instead of SQL Server credentials, you can use the parameter `authenticator=ntlm` within the `connection_string`. The `USERNAME` and `PASSWORD` then corresponds to a Windows username and password. The Windows domain may need to be prefixed to the username with a trailing `\`. @@ -130,12 +131,14 @@ Replace the following: [scrape]: {{< relref "./prometheus.scrape.md" >}} ## Custom metrics + You can use the optional `query_config` parameter to retrieve custom Prometheus metrics for a MSSQL instance. If this is defined, the new configuration will be used to query your MSSQL instance and create whatever Prometheus metrics are defined. If you want additional metrics on top of the default metrics, the default configuration must be used as a base. The default configuration used by this integration is as follows: + ``` collector_name: mssql_standard @@ -216,8 +219,8 @@ metrics: query: | SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 as BufferCacheHitRatio FROM sys.dm_os_performance_counters a - JOIN (SELECT cntr_value, OBJECT_NAME - FROM sys.dm_os_performance_counters + JOIN (SELECT cntr_value, OBJECT_NAME + FROM sys.dm_os_performance_counters WHERE counter_name = 'Buffer cache hit ratio base' AND OBJECT_NAME = 'SQLServer:Buffer Manager') b ON a.OBJECT_NAME = b.OBJECT_NAME WHERE a.counter_name = 'Buffer cache hit ratio' diff --git a/docs/sources/flow/reference/components/prometheus.exporter.mysql.md b/docs/sources/flow/reference/components/prometheus.exporter.mysql.md index 14df71386abc..6ab8f186380d 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.mysql.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.mysql.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.mysql/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.mysql/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.mysql/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.mysql/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.mysql/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.mysql/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.mysql/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.mysql/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.mysql/ description: Learn about prometheus.exporter.mysql title: prometheus.exporter.mysql @@ -98,9 +98,9 @@ View more detailed documentation on the tables used in `perf_schema_file_instanc ### perf_schema.memory_events block -| Name | Type | Description | Default | Required | -| --------------- | -------- | ----------------------------------------------------------------------------------- | ------------------ | -------- | -| `remove_prefix` | `string` | Prefix to trim away from `performance_schema.memory_summary_global_by_event_name`. 
| `"memory/"` | no | +| Name | Type | Description | Default | Required | +| --------------- | -------- | ---------------------------------------------------------------------------------- | ----------- | -------- | +| `remove_prefix` | `string` | Prefix to trim away from `performance_schema.memory_summary_global_by_event_name`. | `"memory/"` | no | ### heartbeat block diff --git a/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md b/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md index a259a5bfae75..b8cd92133419 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.oracledb.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.oracledb/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.oracledb/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.oracledb/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.oracledb/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.oracledb/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.oracledb/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.oracledb/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.oracledb/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.oracledb/ description: Learn about prometheus.exporter.oracledb title: prometheus.exporter.oracledb diff --git a/docs/sources/flow/reference/components/prometheus.exporter.postgres.md b/docs/sources/flow/reference/components/prometheus.exporter.postgres.md index 5778217cfa95..c8ecc0d38b13 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.postgres.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.postgres.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.postgres/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.postgres/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.postgres/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.postgres/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.postgres/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.postgres/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.postgres/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.postgres/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.postgres/ description: Learn about prometheus.exporter.postgres labels: @@ -32,7 +32,7 @@ prometheus.exporter.postgres "LABEL" { The following arguments are supported: | Name | Type | Description | Default | Required | -|------------------------------|----------------|-------------------------------------------------------------------------------|---------|----------| +| ---------------------------- | -------------- | 
----------------------------------------------------------------------------- | ------- | -------- | | `data_source_names` | `list(secret)` | Specifies the Postgres server(s) to connect to. | | yes | | `disable_settings_metrics` | `bool` | Disables collection of metrics from pg_settings. | `false` | no | | `disable_default_metrics` | `bool` | When `true`, only exposes metrics supplied from `custom_queries_config_path`. | `false` | no | diff --git a/docs/sources/flow/reference/components/prometheus.exporter.process.md b/docs/sources/flow/reference/components/prometheus.exporter.process.md index 2ece4bfb9652..6db59e9b3cf8 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.process.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.process.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.process/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.process/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.process/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.process/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.process/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.process/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.process/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.process/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.process/ description: Learn about prometheus.exporter.process title: prometheus.exporter.process diff --git a/docs/sources/flow/reference/components/prometheus.exporter.redis.md b/docs/sources/flow/reference/components/prometheus.exporter.redis.md index 93cc839aeb6c..789657f67b3c 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.redis.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.redis.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.redis/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.redis/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.redis/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.redis/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.redis/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.redis/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.redis/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.redis/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.redis/ description: Learn about prometheus.exporter.redis title: prometheus.exporter.redis diff --git a/docs/sources/flow/reference/components/prometheus.exporter.self.md b/docs/sources/flow/reference/components/prometheus.exporter.self.md index 42970e3214f1..be0a00351fd6 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.self.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.self.md @@ -1,8 +1,8 @@ --- aliases: -- 
/docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.agent/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.agent/ -- ./prometheus.exporter.agent/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.agent/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.agent/ + - ./prometheus.exporter.agent/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.self/ description: Learn about prometheus.exporter.self title: prometheus.exporter.self @@ -67,13 +67,14 @@ prometheus.remote_write "demo" { } } ``` + Replace the following: - - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. - - `USERNAME`: The username to use for authentication to the remote_write API. - - `PASSWORD`: The password to use for authentication to the remote_write API. -[scrape]: {{< relref "./prometheus.scrape.md" >}} +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the remote_write API. +- `PASSWORD`: The password to use for authentication to the remote_write API. +[scrape]: {{< relref "./prometheus.scrape.md" >}} diff --git a/docs/sources/flow/reference/components/prometheus.exporter.snmp.md b/docs/sources/flow/reference/components/prometheus.exporter.snmp.md index 033bfbd26139..6d4c6ce73ba7 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.snmp.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.snmp.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.snmp/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.snmp/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.snmp/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.snmp/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.snmp/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.snmp/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.snmp/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.snmp/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.snmp/ description: Learn about prometheus.exporter.snmp title: prometheus.exporter.snmp diff --git a/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md b/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md index c0b075826066..d5c17b82c3de 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.snowflake.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.snowflake/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.snowflake/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.snowflake/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.snowflake/ + - 
/docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.snowflake/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.snowflake/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.snowflake/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.snowflake/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.snowflake/ description: Learn about prometheus.exporter.snowflake title: prometheus.exporter.snowflake diff --git a/docs/sources/flow/reference/components/prometheus.exporter.squid.md b/docs/sources/flow/reference/components/prometheus.exporter.squid.md index 44df6488631a..e90c92970026 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.squid.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.squid.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.squid/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.squid/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.squid/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.squid/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.squid/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.squid/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.squid/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.squid/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.squid/ description: Learn about prometheus.exporter.squid title: prometheus.exporter.squid diff --git a/docs/sources/flow/reference/components/prometheus.exporter.statsd.md b/docs/sources/flow/reference/components/prometheus.exporter.statsd.md index 40eb9e4edabc..da7b26c58ea2 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.statsd.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.statsd.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.statsd/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.statsd/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.statsd/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.statsd/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.statsd/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.statsd/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.statsd/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.statsd/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.statsd/ description: Learn about prometheus.exporter.statsd title: prometheus.exporter.statsd diff --git a/docs/sources/flow/reference/components/prometheus.exporter.unix.md b/docs/sources/flow/reference/components/prometheus.exporter.unix.md index 1917b510d7cc..3a5cb9da77db 100644 --- 
a/docs/sources/flow/reference/components/prometheus.exporter.unix.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.unix.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.unix/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.unix/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.unix/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.unix/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.unix/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.unix/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.unix/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.unix/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.unix/ description: Learn about prometheus.exporter.unix title: prometheus.exporter.unix @@ -19,7 +19,6 @@ The `node_exporter` itself is comprised of various _collectors_, which can be enabled and disabled at will. For more information on collectors, refer to the [`collectors-list`](#collectors-list) section. - Multiple `prometheus.exporter.unix` components can be specified by giving them different labels. ## Usage diff --git a/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md b/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md index 6cb16c8ec5a7..92d189a215f8 100644 --- a/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md +++ b/docs/sources/flow/reference/components/prometheus.exporter.vsphere.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.vsphere/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.vsphere/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.vsphere/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.vsphere/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.vsphere/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.vsphere/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.vsphere/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.vsphere/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.vsphere/ title: prometheus.exporter.vsphere description: Learn about prometheus.exporter.vsphere @@ -34,18 +34,17 @@ prometheus.exporter.vsphere "LABEL" { You can use the following arguments to configure the exporter's behavior. Omitted fields take their default values. -| Name | Type | Description | Default | Required | -| ---------------------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | -| `vsphere_url` | `string` | The url of the vCenter endpoint SDK | | no | -| `vsphere_user` | `string` | vCenter username. | | no | -| `vsphere_password` | `secret` | vCenter password. 
| | no |
-| `request_chunk_size` | `int` | Number of managed objects to include in each request to vsphere when fetching performance counters. | `256` | no |
-| `collect_concurrency` | `int` | Number of concurrent requests to vSphere when fetching performance counters. | `8` | no |
-| `discovery_interval` | `duration` | Interval on which to run vSphere managed object discovery. | `0` | no |
-| `enable_exporter_metrics` | `boolean` | Enable the exporter metrics. | `true` | no |
-
-- Setting `discovery_interval` to a non-zero value will result in object discovery running in the background. Each scrape will use object data gathered during the last discovery. When this value is 0, object discovery occurs per scrape.
-
+| Name | Type | Description | Default | Required |
+| ------------------------- | ---------- | ---------------------------------------------------------------------------------------------------- | ------- | -------- |
+| `vsphere_url` | `string` | The URL of the vCenter SDK endpoint. | | no |
+| `vsphere_user` | `string` | vCenter username. | | no |
+| `vsphere_password` | `secret` | vCenter password. | | no |
+| `request_chunk_size` | `int` | Number of managed objects to include in each request to vSphere when fetching performance counters. | `256` | no |
+| `collect_concurrency` | `int` | Number of concurrent requests to vSphere when fetching performance counters. | `8` | no |
+| `discovery_interval` | `duration` | Interval on which to run vSphere managed object discovery. | `0` | no |
+| `enable_exporter_metrics` | `bool` | Enable the exporter metrics. | `true` | no |
+
+Setting `discovery_interval` to a non-zero value runs object discovery in the background. Each scrape then uses the object data gathered during the most recent discovery. When this value is `0`, object discovery runs on every scrape.
 
 ## Exported fields
 
diff --git a/docs/sources/flow/reference/components/prometheus.exporter.windows.md b/docs/sources/flow/reference/components/prometheus.exporter.windows.md
index a38befddde21..7ec51d6b1505 100644
--- a/docs/sources/flow/reference/components/prometheus.exporter.windows.md
+++ b/docs/sources/flow/reference/components/prometheus.exporter.windows.md
@@ -1,15 +1,16 @@
 ---
 aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.windows/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.windows/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.windows/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.windows/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.exporter.windows/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.exporter.windows/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.exporter.windows/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.exporter.windows/
 canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.exporter.windows/
 description: Learn about prometheus.exporter.windows
 title: prometheus.exporter.windows
 ---
 
 # prometheus.exporter.windows
+
 The `prometheus.exporter.windows` component embeds
 [windows_exporter](https://github.com/prometheus-community/windows_exporter) which exposes a
 wide variety of hardware and OS metrics for Windows-based systems.
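As quick orientation before the argument and block reference that follows, here is a minimal sketch of instantiating the component. The label `"example"` is an arbitrary placeholder, and leaving all arguments unset keeps the defaults documented below.

```river
prometheus.exporter.windows "example" {
  // No arguments set: the default collector set is used
  // (cpu, cs, logical_disk, net, os, service, system).
}
```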
@@ -35,7 +36,7 @@ The following arguments can be used to configure the exporter's behavior. All arguments are optional. Omitted fields take their default values. | Name | Type | Description | Default | Required | -|----------------------|----------------|-------------------------------------------|-------------------------------------------------------------|----------| +| -------------------- | -------------- | ----------------------------------------- | ----------------------------------------------------------- | -------- | | `enabled_collectors` | `list(string)` | List of collectors to enable. | `["cpu","cs","logical_disk","net","os","service","system"]` | no | | `timeout` | `duration` | Configure timeout for collecting metrics. | `4m` | no | @@ -48,20 +49,20 @@ Refer to the [Collectors list](#collectors-list) for the default set. The following blocks are supported inside the definition of `prometheus.exporter.windows` to configure collector-specific options: -Hierarchy | Name | Description | Required ----------------|--------------------|------------------------------------------|--------- -dfsr | [dfsr][] | Configures the dfsr collector. | no -exchange | [exchange][] | Configures the exchange collector. | no -iis | [iis][] | Configures the iis collector. | no -logical_disk | [logical_disk][] | Configures the logical_disk collector. | no -msmq | [msmq][] | Configures the msmq collector. | no -mssql | [mssql][] | Configures the mssql collector. | no -network | [network][] | Configures the network collector. | no -process | [process][] | Configures the process collector. | no -scheduled_task | [scheduled_task][] | Configures the scheduled_task collector. | no -service | [service][] | Configures the service collector. | no -smtp | [smtp][] | Configures the smtp collector. | no -text_file | [text_file][] | Configures the text_file collector. | no +| Hierarchy | Name | Description | Required | +| -------------- | ------------------ | ---------------------------------------- | -------- | +| dfsr | [dfsr][] | Configures the dfsr collector. | no | +| exchange | [exchange][] | Configures the exchange collector. | no | +| iis | [iis][] | Configures the iis collector. | no | +| logical_disk | [logical_disk][] | Configures the logical_disk collector. | no | +| msmq | [msmq][] | Configures the msmq collector. | no | +| mssql | [mssql][] | Configures the mssql collector. | no | +| network | [network][] | Configures the network collector. | no | +| process | [process][] | Configures the process collector. | no | +| scheduled_task | [scheduled_task][] | Configures the scheduled_task collector. | no | +| service | [service][] | Configures the service collector. | no | +| smtp | [smtp][] | Configures the smtp collector. | no | +| text_file | [text_file][] | Configures the text_file collector. | no | [dfsr]: #dfsr-block [exchange]: #exchange-block @@ -78,16 +79,15 @@ text_file | [text_file][] | Configures the text_file collector. | ### dfsr block -Name | Type | Description | Default | Required ------------------|----------------|------------------------------------------------------|------------------------------------|--------- -`source_enabled` | `list(string)` | Comma-separated list of DFSR Perflib sources to use. 
| `["connection","folder","volume"]` | no
-
+| Name | Type | Description | Default | Required |
+| ---------------- | -------------- | ---------------------------------------------------- | ---------------------------------- | -------- |
+| `source_enabled` | `list(string)` | Comma-separated list of DFSR Perflib sources to use. | `["connection","folder","volume"]` | no |
 
 ### exchange block
 
-Name | Type | Description | Default | Required
----------------|----------|--------------------------------------------|---------|---------
-`enabled_list` | `string` | Comma-separated list of collectors to use. | `""` | no
+| Name | Type | Description | Default | Required |
+| -------------- | -------- | ------------------------------------------ | ------- | -------- |
+| `enabled_list` | `string` | Comma-separated list of collectors to use. | `""` | no |
 
 The collectors specified by `enabled_list` can include the following:
 
@@ -103,108 +103,92 @@ The collectors specified by `enabled_list` can include the following:
 
 For example, `enabled_list` may be set to `"AvailabilityService,OutlookWebAccess"`.
 
-
 ### iis block
 
-Name | Type | Description | Default | Required
----------------|----------|--------------------------------------------------|---------|---------
-`app_exclude` | `string` | Regular expression of applications to ignore. | `""` | no
-`app_include` | `string` | Regular expression of applications to report on. | `".*"` | no
-`site_exclude` | `string` | Regular expression of sites to ignore. | `""` | no
-`site_include` | `string` | Regular expression of sites to report on. | `".*"` | no
-
+| Name | Type | Description | Default | Required |
+| -------------- | -------- | ------------------------------------------------ | ------- | -------- |
+| `app_exclude` | `string` | Regular expression of applications to ignore. | `""` | no |
+| `app_include` | `string` | Regular expression of applications to report on. | `".*"` | no |
+| `site_exclude` | `string` | Regular expression of sites to ignore. | `""` | no |
+| `site_include` | `string` | Regular expression of sites to report on. | `".*"` | no |
 
 ### logical_disk block
 
-Name | Type | Description | Default | Required
-----------|----------|-------------------------------------------|---------|---------
-`exclude` | `string` | Regular expression of volumes to exclude. | `""` | no
-`include` | `string` | Regular expression of volumes to include. | `".+"` | no
+| Name | Type | Description | Default | Required |
+| --------- | -------- | ----------------------------------------- | ------- | -------- |
+| `exclude` | `string` | Regular expression of volumes to exclude. | `""` | no |
+| `include` | `string` | Regular expression of volumes to include. | `".+"` | no |
 
 Volume names must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude` to be included.
 
-
 ### msmq block
 
-Name | Type | Description | Default | Required
----------------|----------|-------------------------------------------------|---------|---------
-`where_clause` | `string` | WQL 'where' clause to use in WMI metrics query. | `""` | no
+| Name | Type | Description | Default | Required |
+| -------------- | -------- | ----------------------------------------------- | ------- | -------- |
+| `where_clause` | `string` | WQL 'where' clause to use in WMI metrics query. | `""` | no |
 
-Specifying `enabled_classes` is useful to limit the response to the MSMQs you specify, reducing the size of the response.
+The `where_clause` argument can be used to limit the response to the MSMQs you specify, reducing the size of the response.
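To show how these collector-specific blocks compose with `enabled_collectors`, here is a hedged sketch that runs only the `msmq` collector and narrows its WMI query. The component label and the WQL filter value are hypothetical, not taken from the reference above.

```river
prometheus.exporter.windows "msmq_only" {
  enabled_collectors = ["msmq"]

  msmq {
    // Hypothetical filter: only export queues whose WMI Name begins with "prod".
    where_clause = "Name LIKE 'prod%'"
  }
}
```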
-
 ### mssql block
 
-Name | Type | Description | Default | Required
---- |----------| ----------- | ------- | --------
-`enabled_classes` | `list(string)` | Comma-separated list of MSSQL WMI classes to use. | `["accessmethods", "availreplica", "bufman", "databases", "dbreplica", "genstats", "locks", "memmgr", "sqlstats", "sqlerrorstransactions"]` | no
-
+| Name | Type | Description | Default | Required |
+| ----------------- | -------------- | ------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ | -------- |
+| `enabled_classes` | `list(string)` | Comma-separated list of MSSQL WMI classes to use. | `["accessmethods", "availreplica", "bufman", "databases", "dbreplica", "genstats", "locks", "memmgr", "sqlstats", "sqlerrorstransactions"]` | no |
 
 ### network block
 
-Name | Type | Description | Default | Required
-----------|----------|-----------------------------------------|---------|---------
-`exclude` | `string` | Regular expression of NIC:s to exclude. | `""` | no
-`include` | `string` | Regular expression of NIC:s to include. | `".*"` | no
+| Name | Type | Description | Default | Required |
+| --------- | -------- | --------------------------------------- | ------- | -------- |
+| `exclude` | `string` | Regular expression of NICs to exclude.  | `""` | no |
+| `include` | `string` | Regular expression of NICs to include.  | `".*"` | no |
 
 NIC names must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude` to be included.
 
-
 ### process block
 
-Name | Type | Description | Default | Required
-----------|----------|---------------------------------------------|---------|---------
-`exclude` | `string` | Regular expression of processes to exclude. | `""` | no
-`include` | `string` | Regular expression of processes to include. | `".*"` | no
+| Name | Type | Description | Default | Required |
+| --------- | -------- | ------------------------------------------- | ------- | -------- |
+| `exclude` | `string` | Regular expression of processes to exclude. | `""` | no |
+| `include` | `string` | Regular expression of processes to include. | `".*"` | no |
 
 Processes must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude` to be included.
 
-
 ### scheduled_task block
 
-Name | Type | Description | Default | Required
-----------|----------|-----------------------------|---------|---------
-`exclude` | `string` | Regexp of tasks to exclude. | `""` | no
-`include` | `string` | Regexp of tasks to include. | `".+"` | no
+| Name | Type | Description | Default | Required |
+| --------- | -------- | --------------------------- | ------- | -------- |
+| `exclude` | `string` | Regexp of tasks to exclude. | `""` | no |
+| `include` | `string` | Regexp of tasks to include. | `".+"` | no |
 
-For a server name to be included, it must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude`.
+For a task name to be included, it must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude`.
 
-
 ### service block
 
-Name | Type | Description | Default | Required
----------------|----------|-------------------------------------------------------------|---------|---------
-`use_api` | `string` | Use the Windows API to collect service data instead of WMI. | `false` | no
-`where_clause` | `string` | WQL 'where' clause to use in WMI metrics query.
| `""` | no
+| Name | Type | Description | Default | Required |
+| -------------- | -------- | ----------------------------------------------------- | ------- | -------- |
+| `use_api` | `string` | Use API calls to collect service data instead of WMI. | `false` | no |
+| `where_clause` | `string` | WQL 'where' clause to use in WMI metrics query.        | `""` | no |
 
 The `where_clause` argument can be used to limit the response to the services you specify, reducing the size of the response.
-If `use_api` is enabled, 'where_clause' won't be effective.
+If `use_api` is enabled, `where_clause` has no effect.
 
-The Windows API is more performant than WMI. Set `use_api` to `true` in situations when the WMI takes too long to get the service information.
-Setting `use_api` to `true` does have a few disadvantages compared to using WMI:
-* WMI queries in `where_clause` won't work.
-* The `status` field of the service is not available. You can use the `state` property to retrieve status information. This property provides the same information, but it is formatted differently.
-
-
 ### smtp block
 
-Name | Type | Description | Default | Required
-----------|----------|---------------------------------------|---------|---------
-`exclude` | `string` | Regexp of virtual servers to ignore. | | no
-`include` | `string` | Regexp of virtual servers to include. | `".+"` | no
+| Name | Type | Description | Default | Required |
+| --------- | -------- | ------------------------------------- | ------- | -------- |
+| `exclude` | `string` | Regexp of virtual servers to ignore.  | | no |
+| `include` | `string` | Regexp of virtual servers to include. | `".+"` | no |
 
 For a server name to be included, it must match the regular expression specified by `include` and must _not_ match the regular expression specified by `exclude`.
 
-
 ### text_file block
 
-Name | Type | Description | Default | Required
-----------------------|----------|----------------------------------------------------|-------------------------------------------------------|---------
-`text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\Grafana Agent Flow\textfile_inputs` | no
+| Name | Type | Description | Default | Required |
+| --------------------- | -------- | -------------------------------------------------- | ----------------------------------------------------- | -------- |
+| `text_file_directory` | `string` | The directory containing the files to be ingested. | `C:\Program Files\Grafana Agent Flow\textfile_inputs` | no |
 
-When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. Each `.prom` file found must end with an empty line feed to work properly.
+When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. Each `.prom` file found must end with a trailing line feed to be read correctly.
 
-
 ## Exported fields
 
 {{< docs/shared lookup="flow/reference/components/exporter-component-exports.md" source="agent" version="" >}}
@@ -226,6 +210,7 @@ debug information.
 debug metrics.
 
 ## Collectors list
+
 The following table lists the available collectors that `windows_exporter`
 brings bundled in. Some collectors only work on specific operating systems;
 enabling a collector that is not supported by the host OS where Flow is running
@@ -235,65 +220,64 @@ Users can choose to enable a subset of collectors to limit the amount of
 metrics exposed by the `prometheus.exporter.windows` component,
 or disable collectors that are expensive to run.
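As a concrete illustration, a minimal sketch of such a trimmed-down configuration follows; the label and the exact subset of collectors are illustrative assumptions, not a recommendation.

```river
prometheus.exporter.windows "minimal" {
  // Collect only CPU, memory, and logical disk metrics to keep
  // the number of exposed series small.
  enabled_collectors = ["cpu", "memory", "logical_disk"]
}
```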
- -Name | Description | Enabled by default ----------|-------------|-------------------- -[ad](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.ad.md) | Active Directory Domain Services | -[adcs](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.adcs.md) | Active Directory Certificate Services | -[adfs](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.adfs.md) | Active Directory Federation Services | -[cache](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cache.md) | Cache metrics | -[cpu](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cpu.md) | CPU usage | ✓ -[cpu_info](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cpu_info.md) | CPU Information | -[cs](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cs.md) | "Computer System" metrics (system properties, num cpus/total memory) | ✓ -[container](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.container.md) | Container metrics | -[dfsr](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.dfsr.md) | DFSR metrics | -[dhcp](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.dhcp.md) | DHCP Server | -[dns](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.dns.md) | DNS Server | -[exchange](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.exchange.md) | Exchange metrics | -[fsrmquota](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.fsrmquota.md) | Microsoft File Server Resource Manager (FSRM) Quotas collector | -[hyperv](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.hyperv.md) | Hyper-V hosts | -[iis](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.iis.md) | IIS sites and applications | -[logical_disk](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.logical_disk.md) | Logical disks, disk I/O | ✓ -[logon](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.logon.md) | User logon sessions | -[memory](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.memory.md) | Memory usage metrics | -[mscluster_cluster](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_cluster.md) | MSCluster cluster metrics | -[mscluster_network](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_network.md) | MSCluster network metrics | -[mscluster_node](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_node.md) | MSCluster Node metrics | -[mscluster_resource](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_resource.md) | MSCluster Resource metrics | -[mscluster_resourcegroup](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_resourcegroup.md) | MSCluster ResourceGroup metrics | -[msmq](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.msmq.md) | MSMQ queues | -[mssql](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mssql.md) | [SQL Server Performance 
Objects](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/use-sql-server-objects#SQLServerPOs) metrics | -[netframework_clrexceptions](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrexceptions.md) | .NET Framework CLR Exceptions | -[netframework_clrinterop](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrinterop.md) | .NET Framework Interop Metrics | -[netframework_clrjit](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrjit.md) | .NET Framework JIT metrics | -[netframework_clrloading](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrloading.md) | .NET Framework CLR Loading metrics | -[netframework_clrlocksandthreads](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrlocksandthreads.md) | .NET Framework locks and metrics threads | -[netframework_clrmemory](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrmemory.md) | .NET Framework Memory metrics | -[netframework_clrremoting](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrremoting.md) | .NET Framework Remoting metrics | -[netframework_clrsecurity](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrsecurity.md) | .NET Framework Security Check metrics | -[net](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.net.md) | Network interface I/O | ✓ -[os](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.os.md) | OS metrics (memory, processes, users) | ✓ -[physical_disk](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.physical_disk.md) | Physical disks | ✓ -[process](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.process.md) | Per-process metrics | -[remote_fx](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.remote_fx.md) | RemoteFX protocol (RDP) metrics | -[scheduled_task](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.scheduled_task.md) | Scheduled Tasks metrics | -[service](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.service.md) | Service state metrics | ✓ -[smtp](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.smtp.md) | IIS SMTP Server | -[system](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.system.md) | System calls | ✓ -[tcp](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.tcp.md) | TCP connections | -[teradici_pcoip](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.teradici_pcoip.md) | [Teradici PCoIP](https://www.teradici.com/web-help/pcoip_wmi_specs/) session metrics | -[time](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.time.md) | Windows Time Service | -[thermalzone](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.thermalzone.md) | Thermal information -[terminal_services](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.terminal_services.md) | Terminal services (RDS) 
-[textfile](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.textfile.md) | Read prometheus metrics from a text file | -[vmware_blast](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.vmware_blast.md) | VMware Blast session metrics | -[vmware](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.vmware.md) | Performance counters installed by the Vmware Guest agent | +| Name | Description | Enabled by default | +| --------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | +| [ad](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.ad.md) | Active Directory Domain Services | +| [adcs](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.adcs.md) | Active Directory Certificate Services | +| [adfs](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.adfs.md) | Active Directory Federation Services | +| [cache](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cache.md) | Cache metrics | +| [cpu](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cpu.md) | CPU usage | ✓ | +| [cpu_info](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cpu_info.md) | CPU Information | +| [cs](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.cs.md) | "Computer System" metrics (system properties, num cpus/total memory) | ✓ | +| [container](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.container.md) | Container metrics | +| [dfsr](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.dfsr.md) | DFSR metrics | +| [dhcp](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.dhcp.md) | DHCP Server | +| [dns](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.dns.md) | DNS Server | +| [exchange](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.exchange.md) | Exchange metrics | +| [fsrmquota](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.fsrmquota.md) | Microsoft File Server Resource Manager (FSRM) Quotas collector | +| [hyperv](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.hyperv.md) | Hyper-V hosts | +| [iis](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.iis.md) | IIS sites and applications | +| [logical_disk](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.logical_disk.md) | Logical disks, disk I/O | ✓ | +| [logon](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.logon.md) | User logon sessions | +| [memory](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.memory.md) | Memory usage metrics | +| [mscluster_cluster](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_cluster.md) | MSCluster cluster metrics | +| 
[mscluster_network](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_network.md) | MSCluster network metrics |
+| [mscluster_node](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_node.md) | MSCluster Node metrics |
+| [mscluster_resource](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_resource.md) | MSCluster Resource metrics |
+| [mscluster_resourcegroup](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mscluster_resourcegroup.md) | MSCluster ResourceGroup metrics |
+| [msmq](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.msmq.md) | MSMQ queues |
+| [mssql](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.mssql.md) | [SQL Server Performance Objects](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/use-sql-server-objects#SQLServerPOs) metrics |
+| [netframework_clrexceptions](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrexceptions.md) | .NET Framework CLR Exceptions |
+| [netframework_clrinterop](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrinterop.md) | .NET Framework Interop Metrics |
+| [netframework_clrjit](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrjit.md) | .NET Framework JIT metrics |
+| [netframework_clrloading](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrloading.md) | .NET Framework CLR Loading metrics |
+| [netframework_clrlocksandthreads](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrlocksandthreads.md) | .NET Framework locks and threads metrics |
+| [netframework_clrmemory](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrmemory.md) | .NET Framework Memory metrics |
+| [netframework_clrremoting](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrremoting.md) | .NET Framework Remoting metrics |
+| [netframework_clrsecurity](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.netframework_clrsecurity.md) | .NET Framework Security Check metrics |
+| [net](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.net.md) | Network interface I/O | ✓ |
+| [os](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.os.md) | OS metrics (memory, processes, users) | ✓ |
+| [physical_disk](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.physical_disk.md) | Physical disks | ✓ |
+| [process](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.process.md) | Per-process metrics |
+| [remote_fx](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.remote_fx.md) | RemoteFX protocol (RDP) metrics |
+| [scheduled_task](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.scheduled_task.md) | Scheduled Tasks metrics |
+| [service](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.service.md) | Service state metrics | ✓ |
+| 
[smtp](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.smtp.md) | IIS SMTP Server |
+| [system](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.system.md) | System calls | ✓ |
+| [tcp](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.tcp.md) | TCP connections |
+| [teradici_pcoip](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.teradici_pcoip.md) | [Teradici PCoIP](https://www.teradici.com/web-help/pcoip_wmi_specs/) session metrics |
+| [time](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.time.md) | Windows Time Service |
+| [thermalzone](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.thermalzone.md) | Thermal information |
+| [terminal_services](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.terminal_services.md) | Terminal services (RDS) |
+| [textfile](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.textfile.md) | Read Prometheus metrics from a text file |
+| [vmware_blast](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.vmware_blast.md) | VMware Blast session metrics |
+| [vmware](https://github.com/prometheus-community/windows_exporter/blob/master/docs/collector.vmware.md) | Performance counters installed by the VMware Guest agent |
 
 Refer to the linked documentation on each collector for more information on
 reported metrics, configuration settings and usage examples.
 
 {{< admonition type="caution" >}}
 Certain collectors will cause {{< param "PRODUCT_ROOT_NAME" >}} to crash if those collectors are used and the required infrastructure isn't installed.
-These include but aren't limited to mscluster_*, vmware, nps, dns, msmq, teradici_pcoip, ad, hyperv, and scheduled_task.
+These include but aren't limited to mscluster\_\*, vmware, nps, dns, msmq, teradici_pcoip, ad, hyperv, and scheduled_task.
 {{< /admonition >}}
 
 ## Example
 
@@ -321,10 +305,12 @@ prometheus.remote_write "demo" {
   }
 }
 ```
+
 Replace the following:
-  - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-  - `USERNAME`: The username to use for authentication to the remote_write API.
-  - `PASSWORD`: The password to use for authentication to the remote_write API.
+
+- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
+- `USERNAME`: The username to use for authentication to the remote_write API.
+- `PASSWORD`: The password to use for authentication to the remote_write API.
[scrape]: {{< relref "./prometheus.scrape.md" >}} diff --git a/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md b/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md index 34d73ae78477..df8ec37c5a01 100644 --- a/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md +++ b/docs/sources/flow/reference/components/prometheus.operator.podmonitors.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.operator.podmonitors/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.operator.podmonitors/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.operator.podmonitors/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.operator.podmonitors/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.operator.podmonitors/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.operator.podmonitors/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.operator.podmonitors/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.operator.podmonitors/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.operator.podmonitors/ description: Learn about prometheus.operator.podmonitors labels: @@ -37,28 +37,28 @@ prometheus.operator.podmonitors "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes -`namespaces` | `list(string)` | List of namespaces to search for PodMonitor resources. If not specified, all namespaces will be searched. || no +| Name | Type | Description | Default | Required | +| ------------ | ----------------------- | --------------------------------------------------------------------------------------------------------- | ------- | -------- | +| `forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes | +| `namespaces` | `list(string)` | List of namespaces to search for PodMonitor resources. If not specified, all namespaces will be searched. | | no | ## Blocks The following blocks are supported inside the definition of `prometheus.operator.podmonitors`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to find PodMonitors. | no -client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no -client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no -rule | [rule][] | Relabeling rules to apply to discovered targets. | no -scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no -selector | [selector][] | Label selector for which PodMonitors to discover. | no -selector > match_expression | [match_expression][] | Label selector expression for which PodMonitors to discover. 
| no -clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_ROOT_NAME" >}} is running in clustered mode. | no +| Hierarchy | Block | Description | Required | +| ---------------------------- | -------------------- | ------------------------------------------------------------------------------------------------ | -------- | +| client | [client][] | Configures Kubernetes client used to find PodMonitors. | no | +| client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no | +| client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no | +| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no | +| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no | +| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no | +| rule | [rule][] | Relabeling rules to apply to discovered targets. | no | +| scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no | +| selector | [selector][] | Label selector for which PodMonitors to discover. | no | +| selector > match_expression | [match_expression][] | Label selector expression for which PodMonitors to discover. | no | +| clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_ROOT_NAME" >}} is running in clustered mode. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > basic_auth` refers to a `basic_auth` block defined @@ -83,25 +83,26 @@ used. The following arguments are supported: -Name | Type | Description | Default | Required --------------------------|---------------------|---------------------------------------------------------------|---------|--------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `api_server` | `string` | URL of the Kubernetes API server. | | no | +| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. 
| | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][client]. +- [`bearer_token_file` argument][client]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -135,9 +136,9 @@ The `selector` block describes a Kubernetes label selector for PodMonitors. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no +| Name | Type | Description | Default | Required | +| -------------- | ------------- | ------------------------------------------------- | ------- | -------- | +| `match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no | When the `match_labels` argument is empty, all PodMonitor resources will be matched. @@ -148,26 +149,26 @@ PodMonitors discovery. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`key` | `string` | The label name to match against. | | yes -`operator` | `string` | The operator to use when matching. | | yes -`values`| `list(string)` | The values used when matching. | | no +| Name | Type | Description | Default | Required | +| ---------- | -------------- | ---------------------------------- | ------- | -------- | +| `key` | `string` | The label name to match against. | | yes | +| `operator` | `string` | The operator to use when matching. | | yes | +| `values` | `list(string)` | The values used when matching. | | no | The `operator` argument must be one of the following strings: -* `"In"` -* `"NotIn"` -* `"Exists"` -* `"DoesNotExist"` +- `"In"` +- `"NotIn"` +- `"Exists"` +- `"DoesNotExist"` If there are multiple `match_expressions` blocks inside of a `selector` block, they are combined together with AND clauses. ### clustering (beta) -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes +| Name | Type | Description | Default | Required | +| --------- | ------ | ------------------------------------------------- | ------- | -------- | +| `enabled` | `bool` | Enables sharing targets with other cluster nodes. 
| `false` | yes | When {{< param "PRODUCT_ROOT_NAME" >}} is [using clustering][], and `enabled` is set to true, then this component instance opts-in to participating in @@ -262,6 +263,7 @@ prometheus.operator.podmonitors "pods" { } } ``` + ## Compatible components @@ -270,10 +272,9 @@ prometheus.operator.podmonitors "pods" { - Components that export [Prometheus `MetricsReceiver`](../../compatibility/#prometheus-metricsreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/prometheus.operator.probes.md b/docs/sources/flow/reference/components/prometheus.operator.probes.md index 01ca2fd73017..4a06554ea7d4 100644 --- a/docs/sources/flow/reference/components/prometheus.operator.probes.md +++ b/docs/sources/flow/reference/components/prometheus.operator.probes.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/prometheus.operator.probes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.operator.probes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.operator.probes/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.operator.probes/ + - /docs/grafana-cloud/agent/flow/reference/components/prometheus.operator.probes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.operator.probes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.operator.probes/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.operator.probes/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.operator.probes/ description: Learn about prometheus.operator.probes labels: @@ -16,7 +16,7 @@ title: prometheus.operator.probes {{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}} `prometheus.operator.probes` discovers [Probe](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.Probe) resources in your Kubernetes cluster and scrapes the targets they reference. - This component performs three main functions: +This component performs three main functions: 1. Discover Probe resources from your Kubernetes cluster. 1. Discover targets or ingresses that match those Probes. @@ -40,28 +40,28 @@ prometheus.operator.probes "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes -`namespaces` | `list(string)` | List of namespaces to search for Probe resources. If not specified, all namespaces will be searched. || no +| Name | Type | Description | Default | Required | +| ------------ | ----------------------- | ---------------------------------------------------------------------------------------------------- | ------- | -------- | +| `forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes | +| `namespaces` | `list(string)` | List of namespaces to search for Probe resources. If not specified, all namespaces will be searched. 
| | no | ## Blocks The following blocks are supported inside the definition of `prometheus.operator.probes`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to find Probes. | no -client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no -client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no -rule | [rule][] | Relabeling rules to apply to discovered targets. | no -scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no -selector | [selector][] | Label selector for which Probes to discover. | no -selector > match_expression | [match_expression][] | Label selector expression for which Probes to discover. | no -clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no +| Hierarchy | Block | Description | Required | +| ---------------------------- | -------------------- | ------------------------------------------------------------------------ | -------- | +| client | [client][] | Configures Kubernetes client used to find Probes. | no | +| client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no | +| client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no | +| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no | +| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no | +| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no | +| rule | [rule][] | Relabeling rules to apply to discovered targets. | no | +| scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no | +| selector | [selector][] | Label selector for which Probes to discover. | no | +| selector > match_expression | [match_expression][] | Label selector expression for which Probes to discover. | no | +| clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no | The `>` symbol indicates deeper levels of nesting. For example, `client > basic_auth` refers to a `basic_auth` block defined @@ -85,25 +85,26 @@ configuration with the service account of the running {{< param "PRODUCT_ROOT_NA The following arguments are supported: -Name | Type | Description | Default | Required --------------------------|---------------------|---------------------------------------------------------------|---------|--------- -`api_server` | `string` | URL of the Kubernetes API server. | | no -`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. 
{{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}}
@@ -137,9 +138,9 @@ The `selector` block describes a Kubernetes label selector for Probes.

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no
+| Name | Type | Description | Default | Required |
+| -------------- | ------------- | -------------------------------------------------- | ------- | -------- |
+| `match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no |

When the `match_labels` argument is empty, all Probe resources will be matched.
@@ -150,26 +151,26 @@ Probes discovery.

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`key` | `string` | The label name to match against. | | yes
-`operator` | `string` | The operator to use when matching. | | yes
-`values`| `list(string)` | The values used when matching. | | no
+| Name | Type | Description | Default | Required |
+| ---------- | -------------- | ---------------------------------- | ------- | -------- |
+| `key` | `string` | The label name to match against. | | yes |
+| `operator` | `string` | The operator to use when matching. | | yes |
+| `values` | `list(string)` | The values used when matching. | | no |

The `operator` argument must be one of the following strings:

-* `"In"`
-* `"NotIn"`
-* `"Exists"`
-* `"DoesNotExist"`
+- `"In"`
+- `"NotIn"`
+- `"Exists"`
+- `"DoesNotExist"`

If there are multiple `match_expressions` blocks inside of a `selector` block,
they are combined together with AND clauses.
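+
+For illustration, a hedged sketch of a `selector` that combines `match_labels`
+with a `match_expression`; the label names and values are hypothetical:
+
+```river
+prometheus.operator.probes "selected" {
+  forward_to = [prometheus.remote_write.local.receiver]
+
+  selector {
+    // Only discover Probes labeled team=ops (hypothetical label).
+    match_labels = {team = "ops"}
+
+    // ...and whose "env" label is one of the listed values.
+    match_expression {
+      key      = "env"
+      operator = "In"
+      values   = ["prod", "staging"]
+    }
+  }
+}
+```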
### clustering (experimental)

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
+| Name | Type | Description | Default | Required |
+| --------- | ------ | -------------------------------------------------- | ------- | -------- |
+| `enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes |

When {{< param "PRODUCT_NAME" >}} is running in [clustered mode][], and `enabled`
is set to true, then this component instance opts-in to participating in
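+
+As a minimal, hedged sketch of opting a component into clustering (the
+component label and downstream receiver are hypothetical):
+
+```river
+prometheus.operator.probes "clustered" {
+  forward_to = [prometheus.remote_write.local.receiver]
+
+  // Share discovered targets with the other cluster nodes.
+  clustering {
+    enabled = true
+  }
+}
+```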
@@ -264,6 +265,7 @@ prometheus.operator.probes "probes" {
  }
}
```
+
## Compatible components
@@ -272,7 +274,6 @@ prometheus.operator.probes "probes" {

- Components that export [Prometheus `MetricsReceiver`](../../compatibility/#prometheus-metricsreceiver-exporters)
-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details.
diff --git a/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md b/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
index 24a1b886aa30..3bb6faec68a2 100644
--- a/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
+++ b/docs/sources/flow/reference/components/prometheus.operator.servicemonitors.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.operator.servicemonitors/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.operator.servicemonitors/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.operator.servicemonitors/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.operator.servicemonitors/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.operator.servicemonitors/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.operator.servicemonitors/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.operator.servicemonitors/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.operator.servicemonitors/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.operator.servicemonitors/
description: Learn about prometheus.operator.servicemonitors
labels:
@@ -39,28 +39,28 @@ prometheus.operator.servicemonitors "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes
-`namespaces` | `list(string)` | List of namespaces to search for ServiceMonitor resources. If not specified, all namespaces will be searched. || no
+| Name | Type | Description | Default | Required |
+| ------------ | ----------------------- | ----------------------------------------------------------------------------------------------------------------- | ------- | -------- |
+| `forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes |
+| `namespaces` | `list(string)` | List of namespaces to search for ServiceMonitor resources. If not specified, all namespaces will be searched. | | no |

## Blocks

The following blocks are supported inside the definition of `prometheus.operator.servicemonitors`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-client | [client][] | Configures Kubernetes client used to find ServiceMonitors. | no
-client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no
-client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no
-client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no
-client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no
-client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no
-rule | [rule][] | Relabeling rules to apply to discovered targets. | no
-scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no
-selector | [selector][] | Label selector for which ServiceMonitors to discover. | no
-selector > match_expression | [match_expression][] | Label selector expression for which ServiceMonitors to discover. | no
-clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no
+| Hierarchy | Block | Description | Required |
+| ---------------------------- | -------------------- | --------------------------------------------------------------------------------------------- | -------- |
+| client | [client][] | Configures Kubernetes client used to find ServiceMonitors. | no |
+| client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no |
+| client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no |
+| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no |
+| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no |
+| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no |
+| rule | [rule][] | Relabeling rules to apply to discovered targets. | no |
+| scrape | [scrape][] | Default scrape configuration to apply to discovered targets. | no |
+| selector | [selector][] | Label selector for which ServiceMonitors to discover. | no |
+| selector > match_expression | [match_expression][] | Label selector expression for which ServiceMonitors to discover. | no |
+| clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no |

The `>` symbol indicates deeper levels of nesting.
For example, `client > basic_auth` refers to a `basic_auth` block defined
@@ -84,25 +84,26 @@ If the `client` block isn't provided, the default in-cluster configuration with

The following arguments are supported:

-Name | Type | Description | Default | Required
--------------------------|---------------------|---------------------------------------------------------------|---------|---------
-`api_server` | `string` | URL of the Kubernetes API server. | | no
-`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
-
- At most, one of the following can be provided:
- - [`bearer_token` argument][client].
- - [`bearer_token_file` argument][client].
- - [`basic_auth` block][basic_auth].
- - [`authorization` block][authorization].
- - [`oauth2` block][oauth2].
+| Name | Type | Description | Default | Required |
+| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- |
+| `api_server` | `string` | URL of the Kubernetes API server. | | no |
+| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no |
+| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no |
+| `bearer_token` | `secret` | Bearer token to authenticate with. | | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no |
+| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no |
+
+At most, one of the following can be provided:
+
+- [`bearer_token` argument][client].
+- [`bearer_token_file` argument][client].
+- [`basic_auth` block][basic_auth].
+- [`authorization` block][authorization].
+- [`oauth2` block][oauth2].

{{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}}
@@ -136,9 +137,9 @@ The `selector` block describes a Kubernetes label selector for ServiceMonitors.

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no
+| Name | Type | Description | Default | Required |
+| -------------- | ------------- | -------------------------------------------------- | ------- | -------- |
+| `match_labels` | `map(string)` | Label keys and values used to discover resources. | `{}` | no |

When the `match_labels` argument is empty, all ServiceMonitor resources will be matched.
@@ -149,26 +150,26 @@ ServiceMonitors discovery.

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`key` | `string` | The label name to match against. | | yes
-`operator` | `string` | The operator to use when matching. | | yes
-`values`| `list(string)` | The values used when matching. | | no
+| Name | Type | Description | Default | Required |
+| ---------- | -------------- | ---------------------------------- | ------- | -------- |
+| `key` | `string` | The label name to match against. | | yes |
+| `operator` | `string` | The operator to use when matching. | | yes |
+| `values` | `list(string)` | The values used when matching. | | no |

The `operator` argument must be one of the following strings:

-* `"In"`
-* `"NotIn"`
-* `"Exists"`
-* `"DoesNotExist"`
+- `"In"`
+- `"NotIn"`
+- `"Exists"`
+- `"DoesNotExist"`

If there are multiple `match_expressions` blocks inside of a `selector` block,
they are combined together with AND clauses.

### clustering block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes
+| Name | Type | Description | Default | Required |
+| --------- | ------ | -------------------------------------------------- | ------- | -------- |
+| `enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes |

When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled`
is set to true, then this component instance opts-in to participating in
@@ -264,6 +265,7 @@ prometheus.operator.servicemonitors "services" {
  }
}
```
+
## Compatible components
@@ -272,10 +274,9 @@ prometheus.operator.servicemonitors "services" {

- Components that export [Prometheus `MetricsReceiver`](../../compatibility/#prometheus-metricsreceiver-exporters)
-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details.
{{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/prometheus.receive_http.md b/docs/sources/flow/reference/components/prometheus.receive_http.md
index dd78e88ad107..e2e1d93f55d6 100644
--- a/docs/sources/flow/reference/components/prometheus.receive_http.md
+++ b/docs/sources/flow/reference/components/prometheus.receive_http.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.receive_http/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.receive_http/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.receive_http/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.receive_http/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.receive_http/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.receive_http/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.receive_http/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.receive_http/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.receive_http/
description: Learn about prometheus.receive_http
title: prometheus.receive_http
@@ -38,17 +38,17 @@ The component will start an HTTP server supporting the following endpoint:

`prometheus.receive_http` supports the following arguments:

-Name | Type | Description | Default | Required
--------------|------------------|---------------------------------------|---------|---------
-`forward_to` | `list(MetricsReceiver)` | List of receivers to send metrics to. | | yes
+| Name | Type | Description | Default | Required |
+| ------------ | ----------------------- | -------------------------------------- | ------- | -------- |
+| `forward_to` | `list(MetricsReceiver)` | List of receivers to send metrics to. | | yes |

## Blocks

The following blocks are supported inside the definition of `prometheus.receive_http`:

-Hierarchy | Name | Description | Required
-----------|----------|----------------------------------------------------|---------
-`http` | [http][] | Configures the HTTP server that receives requests. | no
+| Hierarchy | Name | Description | Required |
+| --------- | -------- | --------------------------------------------------- | -------- |
+| `http` | [http][] | Configures the HTTP server that receives requests. | no |

[http]: #http
@@ -68,12 +68,12 @@ Hierarchy | Name | Description | Requ

The following are some of the metrics that are exposed when this component is used. Note that the metrics include labels such as `status_code` where relevant, which can be used to measure request success rates.

-* `prometheus_receive_http_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests.
-* `prometheus_receive_http_request_message_bytes` (histogram): Size (in bytes) of messages received in the request.
-* `prometheus_receive_http_response_message_bytes` (histogram): Size (in bytes) of messages sent in response.
-* `prometheus_receive_http_tcp_connections` (gauge): Current number of accepted TCP connections.
-* `agent_prometheus_fanout_latency` (histogram): Write latency for sending metrics to other components.
-* `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
+- `prometheus_receive_http_request_duration_seconds` (histogram): Time (in seconds) spent serving HTTP requests.
+- `prometheus_receive_http_request_message_bytes` (histogram): Size (in bytes) of messages received in the request.
+- `prometheus_receive_http_response_message_bytes` (histogram): Size (in bytes) of messages sent in response.
+- `prometheus_receive_http_tcp_connections` (gauge): Current number of accepted TCP connections.
+- `agent_prometheus_fanout_latency` (histogram): Write latency for sending metrics to other components.
+- `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.

## Example
@@ -86,7 +86,7 @@ This example creates a `prometheus.receive_http` component which starts an HTTP
prometheus.receive_http "api" {
  http {
    listen_address = "0.0.0.0"
-    listen_port = 9999
+    listen_port    = 9999
  }
  forward_to = [prometheus.remote_write.local.receiver]
}
@@ -95,7 +95,7 @@ prometheus.receive_http "api" {
prometheus.remote_write "local" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
-
+
    basic_auth {
      username = "example-user"
      password = "example-password"
@@ -128,7 +128,8 @@ prometheus.remote_write "local" {

## Technical details

-`prometheus.receive_http` uses [snappy](https://en.wikipedia.org/wiki/Snappy_(compression)) for compression.
+`prometheus.receive_http` uses [snappy](<https://en.wikipedia.org/wiki/Snappy_(compression)>) for compression.
+
## Compatible components
@@ -137,10 +138,9 @@ prometheus.remote_write "local" {

- Components that export [Prometheus `MetricsReceiver`](../../compatibility/#prometheus-metricsreceiver-exporters)
-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details.
{{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/flow/reference/components/prometheus.relabel.md b/docs/sources/flow/reference/components/prometheus.relabel.md
index 6ff90a88f034..8cd0a90ced52 100644
--- a/docs/sources/flow/reference/components/prometheus.relabel.md
+++ b/docs/sources/flow/reference/components/prometheus.relabel.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.relabel/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.relabel/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.relabel/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.relabel/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.relabel/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.relabel/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.relabel/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.relabel/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.relabel/
description: Learn about prometheus.relabel
title: prometheus.relabel
@@ -15,6 +15,7 @@
Prometheus metrics follow the [OpenMetrics](https://openmetrics.io/) format.
Each time series is uniquely identified by its metric name, plus optional
key-value pairs called labels. Each sample represents a datapoint in the
time series and contains a value and an optional timestamp.
+
```
<metric name>{<label name>=<label value>, <label name>=<label value> ...} <value> [timestamp]
```
@@ -53,18 +54,18 @@ prometheus.relabel "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`forward_to` | `list(MetricsReceiver)` | Where the metrics should be forwarded to, after relabeling takes place. | | yes
-`max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache. | 100,000 | no
+| Name | Type | Description | Default | Required |
+| ---------------- | ----------------------- | ------------------------------------------------------------------------ | ------- | -------- |
+| `forward_to` | `list(MetricsReceiver)` | Where the metrics should be forwarded to, after relabeling takes place. | | yes |
+| `max_cache_size` | `int` | The maximum number of elements to hold in the relabeling cache. | 100,000 | no |

## Blocks

The following blocks are supported inside the definition of `prometheus.relabel`:

-Hierarchy | Name | Description | Required
---------- | ---- | ----------- | --------
-rule | [rule][] | Relabeling rules to apply to received metrics. | no
+| Hierarchy | Name | Description | Required |
+| --------- | -------- | ----------------------------------------------- | -------- |
+| rule | [rule][] | Relabeling rules to apply to received metrics. | no |

[rule]: #rule-block
@@ -76,10 +77,10 @@ rule | [rule][] | Relabeling rules to apply to received metrics. | no

The following fields are exported and can be referenced by other components:

-Name | Type | Description
---- | ---- | -----------
-`receiver` | `MetricsReceiver` | The input receiver where samples are sent to be relabeled.
-`rules` | `RelabelRules` | The currently configured relabeling rules.
+| Name | Type | Description |
+| ---------- | ----------------- | ------------------------------------------------------------ |
+| `receiver` | `MetricsReceiver` | The input receiver where samples are sent to be relabeled. |
+| `rules` | `RelabelRules` | The currently configured relabeling rules. |

## Component health
@@ -93,14 +94,13 @@ values.

## Debug metrics

-
-* `agent_prometheus_relabel_metrics_processed` (counter): Total number of metrics processed.
-* `agent_prometheus_relabel_metrics_written` (counter): Total number of metrics written.
-* `agent_prometheus_relabel_cache_misses` (counter): Total number of cache misses.
-* `agent_prometheus_relabel_cache_hits` (counter): Total number of cache hits.
-* `agent_prometheus_relabel_cache_size` (gauge): Total size of relabel cache.
-* `agent_prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
-* `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
+- `agent_prometheus_relabel_metrics_processed` (counter): Total number of metrics processed.
+- `agent_prometheus_relabel_metrics_written` (counter): Total number of metrics written.
+- `agent_prometheus_relabel_cache_misses` (counter): Total number of cache misses.
+- `agent_prometheus_relabel_cache_hits` (counter): Total number of cache hits.
+- `agent_prometheus_relabel_cache_size` (gauge): Total size of relabel cache.
+- `agent_prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
+- `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
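+
+As a rough sketch of how the exported `receiver` is wired up (the component
+labels, target address, and downstream remote_write are hypothetical):
+
+```river
+prometheus.scrape "demo" {
+  targets    = [{"__address__" = "localhost:12345"}]
+  // Route scraped samples through the relabel component below.
+  forward_to = [prometheus.relabel.keep_prod.receiver]
+}
+
+prometheus.relabel "keep_prod" {
+  // Only keep series whose "env" label is "prod" (hypothetical label).
+  rule {
+    source_labels = ["env"]
+    regex         = "prod"
+    action        = "keep"
+  }
+
+  forward_to = [prometheus.remote_write.local.receiver]
+}
+```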
## Example
@@ -162,6 +162,7 @@ The third and final relabeling rule which uses the `labeldrop` action removes
the `instance` label from the set of labels.

So in this case, the initial set of metrics passed to the exported receiver is:
+
```
metric_a{host = "localhost/development", __address__ = "localhost", app = "backend"} 2
metric_a{host = "cluster_a/production", __address__ = "cluster_a", app = "backend"} 9
@@ -169,6 +170,7 @@ metric_a{host = "cluster_a/production", __address__ = "cluster_a", app = "backe

The two resulting metrics are then propagated to each receiver defined in the
`forward_to` argument.
+
## Compatible components
diff --git a/docs/sources/flow/reference/components/prometheus.remote_write.md b/docs/sources/flow/reference/components/prometheus.remote_write.md
index 12882a498e8a..0dc691d8a331 100644
--- a/docs/sources/flow/reference/components/prometheus.remote_write.md
+++ b/docs/sources/flow/reference/components/prometheus.remote_write.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.remote_write/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.remote_write/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.remote_write/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.remote_write/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.remote_write/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.remote_write/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.remote_write/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.remote_write/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.remote_write/
description: Learn about prometheus.remote_write
title: prometheus.remote_write
@@ -39,30 +39,30 @@ prometheus.remote_write "LABEL" {

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`external_labels` | `map(string)` | Labels to add to metrics sent over the network. | | no
+| Name | Type | Description | Default | Required |
+| ----------------- | ------------- | ------------------------------------------------ | ------- | -------- |
+| `external_labels` | `map(string)` | Labels to add to metrics sent over the network. | | no |
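+
+For example, a hedged sketch that stamps every outgoing series with a cluster
+label (the label value and endpoint URL are hypothetical):
+
+```river
+prometheus.remote_write "example" {
+  // Added to every metric sent over the network by this component.
+  external_labels = {cluster = "primary"}
+
+  endpoint {
+    url = "http://mimir:9009/api/v1/push"
+  }
+}
+```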
## Blocks

The following blocks are supported inside the definition of `prometheus.remote_write`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-endpoint | [endpoint][] | Location to send metrics to. | no
-endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-endpoint > sigv4 | [sigv4][] | Configure AWS Signature Verification 4 for authenticating to the endpoint. | no
-endpoint > azuread | [azuread][] | Configure AzureAD for authenticating to the endpoint. | no
-endpoint > azuread > managed_identity | [managed_identity][] | Configure Azure user-assigned managed identity. | yes
-endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-endpoint > queue_config | [queue_config][] | Configuration for how metrics are batched before sending. | no
-endpoint > metadata_config | [metadata_config][] | Configuration for how metric metadata is sent. | no
-endpoint > write_relabel_config | [write_relabel_config][] | Configuration for write_relabel_config. | no
-wal | [wal][] | Configuration for the component's WAL. | no
+| Hierarchy | Block | Description | Required |
+| ------------------------------------- | ------------------------ | ---------------------------------------------------------------------------- | -------- |
+| endpoint | [endpoint][] | Location to send metrics to. | no |
+| endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no |
+| endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no |
+| endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no |
+| endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |
+| endpoint > sigv4 | [sigv4][] | Configure AWS Signature Verification 4 for authenticating to the endpoint. | no |
+| endpoint > azuread | [azuread][] | Configure AzureAD for authenticating to the endpoint. | no |
+| endpoint > azuread > managed_identity | [managed_identity][] | Configure Azure user-assigned managed identity. | yes |
+| endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |
+| endpoint > queue_config | [queue_config][] | Configuration for how metrics are batched before sending. | no |
+| endpoint > metadata_config | [metadata_config][] | Configuration for how metric metadata is sent. | no |
+| endpoint > write_relabel_config | [write_relabel_config][] | Configuration for write_relabel_config. | no |
+| wal | [wal][] | Configuration for the component's WAL. | no |

The `>` symbol indicates deeper levels of nesting. For example, `endpoint >
basic_auth` refers to a `basic_auth` block defined inside an
@@ -88,31 +88,32 @@ The `endpoint` block describes a single location to send metrics to. Multiple

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`url` | `string` | Full URL to send metrics to. | | yes
-`name` | `string` | Optional name to identify the endpoint in metrics. | | no
-`remote_timeout` | `duration` | Timeout for requests made to the URL. | `"30s"` | no
-`headers` | `map(string)` | Extra headers to deliver with the request. | | no
-`send_exemplars` | `bool` | Whether exemplars should be sent. | `true` | no
-`send_native_histograms` | `bool` | Whether native histograms should be sent. | `false` | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
-
- At most, one of the following can be provided:
- - [`bearer_token` argument](#endpoint-block).
- - [`bearer_token_file` argument](#endpoint-block).
- - [`basic_auth` block][basic_auth].
- - [`authorization` block][authorization].
- - [`oauth2` block][oauth2].
- - [`sigv4` block][sigv4].
- - [`azuread` block][azuread].
+| Name | Type | Description | Default | Required |
+| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- |
+| `url` | `string` | Full URL to send metrics to. | | yes |
+| `name` | `string` | Optional name to identify the endpoint in metrics. | | no |
+| `remote_timeout` | `duration` | Timeout for requests made to the URL. | `"30s"` | no |
+| `headers` | `map(string)` | Extra headers to deliver with the request. | | no |
+| `send_exemplars` | `bool` | Whether exemplars should be sent. | `true` | no |
+| `send_native_histograms` | `bool` | Whether native histograms should be sent. | `false` | no |
+| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no |
+| `bearer_token` | `secret` | Bearer token to authenticate with. | | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no |
+| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no |
+
+At most, one of the following can be provided:
+
+- [`bearer_token` argument](#endpoint-block).
+- [`bearer_token_file` argument](#endpoint-block).
+- [`basic_auth` block][basic_auth].
+- [`authorization` block][authorization].
+- [`oauth2` block][oauth2].
+- [`sigv4` block][sigv4].
+- [`azuread` block][azuread].

When multiple `endpoint` blocks are provided, metrics are concurrently sent to
all configured locations. Each endpoint has a _queue_ which is used to read metrics
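+
+A hedged sketch of fanning out to two endpoints at once (both URLs are
+hypothetical); each endpoint gets its own queue:
+
+```river
+prometheus.remote_write "fanout" {
+  // Metrics are sent to both endpoints concurrently.
+  endpoint {
+    url = "http://mimir-a:9009/api/v1/push"
+  }
+
+  endpoint {
+    url = "http://mimir-b:9009/api/v1/push"
+  }
+}
+```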
@@ -160,17 +161,17 @@ metrics fails.

### queue_config block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`capacity` | `number` | Number of samples to buffer per shard. | `10000` | no
-`min_shards` | `number` | Minimum amount of concurrent shards sending samples to the endpoint. | `1` | no
-`max_shards` | `number` | Maximum number of concurrent shards sending samples to the endpoint. | `50` | no
-`max_samples_per_send` | `number` | Maximum number of samples per send. | `2000` | no
-`batch_send_deadline` | `duration` | Maximum time samples will wait in the buffer before sending. | `"5s"` | no
-`min_backoff` | `duration` | Initial retry delay. The backoff time gets doubled for each retry. | `"30ms"` | no
-`max_backoff` | `duration` | Maximum retry delay. | `"5s"` | no
-`retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no
-`sample_age_limit` | `duration` | Maximum age of samples to send. | `"0s"` | no
+| Name | Type | Description | Default | Required |
+| ---------------------- | ---------- | ------------------------------------------------------------------------ | -------- | -------- |
+| `capacity` | `number` | Number of samples to buffer per shard. | `10000` | no |
+| `min_shards` | `number` | Minimum amount of concurrent shards sending samples to the endpoint. | `1` | no |
+| `max_shards` | `number` | Maximum number of concurrent shards sending samples to the endpoint. | `50` | no |
+| `max_samples_per_send` | `number` | Maximum number of samples per send. | `2000` | no |
+| `batch_send_deadline` | `duration` | Maximum time samples will wait in the buffer before sending. | `"5s"` | no |
+| `min_backoff` | `duration` | Initial retry delay. The backoff time gets doubled for each retry. | `"30ms"` | no |
+| `max_backoff` | `duration` | Maximum retry delay. | `"5s"` | no |
+| `retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no |
+| `sample_age_limit` | `duration` | Maximum age of samples to send. | `"0s"` | no |
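+
+A rough sketch of tuning the queue for higher throughput (the values shown are
+hypothetical, not recommendations):
+
+```river
+prometheus.remote_write "tuned" {
+  endpoint {
+    url = "http://mimir:9009/api/v1/push"
+
+    queue_config {
+      capacity             = 20000
+      max_shards           = 100
+      max_samples_per_send = 5000
+    }
+  }
+}
+```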
| `"0s"` | no +| Name | Type | Description | Default | Required | +| ---------------------- | ---------- | -------------------------------------------------------------------- | -------- | -------- | +| `capacity` | `number` | Number of samples to buffer per shard. | `10000` | no | +| `min_shards` | `number` | Minimum amount of concurrent shards sending samples to the endpoint. | `1` | no | +| `max_shards` | `number` | Maximum number of concurrent shards sending samples to the endpoint. | `50` | no | +| `max_samples_per_send` | `number` | Maximum number of samples per send. | `2000` | no | +| `batch_send_deadline` | `duration` | Maximum time samples will wait in the buffer before sending. | `"5s"` | no | +| `min_backoff` | `duration` | Initial retry delay. The backoff time gets doubled for each retry. | `"30ms"` | no | +| `max_backoff` | `duration` | Maximum retry delay. | `"5s"` | no | +| `retry_on_http_429` | `bool` | Retry when an HTTP 429 status code is received. | `true` | no | +| `sample_age_limit` | `duration` | Maximum age of samples to send. | `"0s"` | no | Each queue then manages a number of concurrent _shards_ which is responsible for sending a fraction of data to their respective endpoints. The number of @@ -203,11 +204,11 @@ The default value is `0s`, which means that all samples are sent (feature is dis ### metadata_config block -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`send` | `bool` | Controls whether metric metadata is sent to the endpoint. | `true` | no -`send_interval` | `duration` | How frequently metric metadata is sent to the endpoint. | `"1m"` | no -`max_samples_per_send` | `number` | Maximum number of metadata samples to send to the endpoint at once. | `2000` | no +| Name | Type | Description | Default | Required | +| ---------------------- | ---------- | ------------------------------------------------------------------- | ------- | -------- | +| `send` | `bool` | Controls whether metric metadata is sent to the endpoint. | `true` | no | +| `send_interval` | `duration` | How frequently metric metadata is sent to the endpoint. | `"1m"` | no | +| `max_samples_per_send` | `number` | Maximum number of metadata samples to send to the endpoint at once. | `2000` | no | ### write_relabel_config block @@ -218,16 +219,16 @@ Name | Type | Description | Default | Required The `wal` block customizes the Write-Ahead Log (WAL) used to temporarily store metrics before they are sent to the configured set of endpoints. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`truncate_frequency` | `duration` | How frequently to clean up the WAL. | `"2h"` | no -`min_keepalive_time` | `duration` | Minimum time to keep data in the WAL before it can be removed. | `"5m"` | no -`max_keepalive_time` | `duration` | Maximum time to keep data in the WAL before removing it. | `"8h"` | no +| Name | Type | Description | Default | Required | +| -------------------- | ---------- | -------------------------------------------------------------- | ------- | -------- | +| `truncate_frequency` | `duration` | How frequently to clean up the WAL. | `"2h"` | no | +| `min_keepalive_time` | `duration` | Minimum time to keep data in the WAL before it can be removed. | `"5m"` | no | +| `max_keepalive_time` | `duration` | Maximum time to keep data in the WAL before removing it. | `"8h"` | no | The WAL serves two primary purposes: -* Buffer unsent metrics in case of intermittent network issues. 
The WAL is located inside a component-specific directory relative to the
storage path {{< param "PRODUCT_NAME" >}} is configured to use. See the
@@ -250,9 +251,9 @@ of data in the WAL; samples aren't removed until they are at least as old as

The following fields are exported and can be referenced by other components:

-Name | Type | Description
---- | ---- | -----------
-`receiver` | `MetricsReceiver` | A value which other components can use to send metrics to.
+| Name | Type | Description |
+| ---------- | ----------------- | ------------------------------------------------------------ |
+| `receiver` | `MetricsReceiver` | A value which other components can use to send metrics to. |

## Component health
@@ -267,77 +268,77 @@ information.

## Debug metrics

-* `agent_wal_storage_active_series` (gauge): Current number of active series
+- `agent_wal_storage_active_series` (gauge): Current number of active series
  being tracked by the WAL.
-* `agent_wal_storage_deleted_series` (gauge): Current number of series marked
+- `agent_wal_storage_deleted_series` (gauge): Current number of series marked
  for deletion from memory.
-* `agent_wal_out_of_order_samples_total` (counter): Total number of out of
+- `agent_wal_out_of_order_samples_total` (counter): Total number of out of
  order samples ingestion failed attempts.
-* `agent_wal_storage_created_series_total` (counter): Total number of created
+- `agent_wal_storage_created_series_total` (counter): Total number of created
  series appended to the WAL.
-* `agent_wal_storage_removed_series_total` (counter): Total number of series
+- `agent_wal_storage_removed_series_total` (counter): Total number of series
  removed from the WAL.
-* `agent_wal_samples_appended_total` (counter): Total number of samples
+- `agent_wal_samples_appended_total` (counter): Total number of samples
  appended to the WAL.
-* `agent_wal_exemplars_appended_total` (counter): Total number of exemplars
+- `agent_wal_exemplars_appended_total` (counter): Total number of exemplars
  appended to the WAL.
-* `prometheus_remote_storage_samples_total` (counter): Total number of samples
+- `prometheus_remote_storage_samples_total` (counter): Total number of samples
  sent to remote storage.
-* `prometheus_remote_storage_exemplars_total` (counter): Total number of
+- `prometheus_remote_storage_exemplars_total` (counter): Total number of
  exemplars sent to remote storage.
-* `prometheus_remote_storage_metadata_total` (counter): Total number of
+- `prometheus_remote_storage_metadata_total` (counter): Total number of
  metadata entries sent to remote storage.
-* `prometheus_remote_storage_samples_failed_total` (counter): Total number of
+- `prometheus_remote_storage_samples_failed_total` (counter): Total number of
  samples that failed to send to remote storage due to non-recoverable errors.
-* `prometheus_remote_storage_exemplars_failed_total` (counter): Total number of
+- `prometheus_remote_storage_exemplars_failed_total` (counter): Total number of
  exemplars that failed to send to remote storage due to non-recoverable errors.
-* `prometheus_remote_storage_metadata_failed_total` (counter): Total number of
+- `prometheus_remote_storage_metadata_failed_total` (counter): Total number of
  metadata entries that failed to send to remote storage due to
  non-recoverable errors.
-* `prometheus_remote_storage_samples_retries_total` (counter): Total number of
+- `prometheus_remote_storage_samples_retries_total` (counter): Total number of
  samples that failed to send to remote storage but were retried due to
  recoverable errors.
-* `prometheus_remote_storage_exemplars_retried_total` (counter): Total number of
+- `prometheus_remote_storage_exemplars_retried_total` (counter): Total number of
  exemplars that failed to send to remote storage but were retried due to
  recoverable errors.
-* `prometheus_remote_storage_metadata_retried_total` (counter): Total number of
+- `prometheus_remote_storage_metadata_retried_total` (counter): Total number of
  metadata entries that failed to send to remote storage but were retried due
  to recoverable errors.
-* `prometheus_remote_storage_samples_dropped_total` (counter): Total number of
+- `prometheus_remote_storage_samples_dropped_total` (counter): Total number of
  samples which were dropped after being read from the WAL before being sent to
  remote_write because of an unknown reference ID.
-* `prometheus_remote_storage_exemplars_dropped_total` (counter): Total number
+- `prometheus_remote_storage_exemplars_dropped_total` (counter): Total number
  of exemplars which were dropped after being read from the WAL before being
  sent to remote_write because of an unknown reference ID.
-* `prometheus_remote_storage_enqueue_retries_total` (counter): Total number of
+- `prometheus_remote_storage_enqueue_retries_total` (counter): Total number of
  times enqueue has failed because a shard's queue was full.
-* `prometheus_remote_storage_sent_batch_duration_seconds` (histogram): Duration
+- `prometheus_remote_storage_sent_batch_duration_seconds` (histogram): Duration
  of send calls to remote storage.
-* `prometheus_remote_storage_queue_highest_sent_timestamp_seconds` (gauge):
+- `prometheus_remote_storage_queue_highest_sent_timestamp_seconds` (gauge):
  Unix timestamp of the latest WAL sample successfully sent by a queue.
-* `prometheus_remote_storage_samples_pending` (gauge): The number of samples
+- `prometheus_remote_storage_samples_pending` (gauge): The number of samples
  pending in shards to be sent to remote storage.
-* `prometheus_remote_storage_exemplars_pending` (gauge): The number of
+- `prometheus_remote_storage_exemplars_pending` (gauge): The number of
  exemplars pending in shards to be sent to remote storage.
-* `prometheus_remote_storage_shard_capacity` (gauge): The capacity of shards
+- `prometheus_remote_storage_shard_capacity` (gauge): The capacity of shards
  within a given queue.
-* `prometheus_remote_storage_shards` (gauge): The number of shards used for
+- `prometheus_remote_storage_shards` (gauge): The number of shards used for
  concurrent delivery of metrics to an endpoint.
-* `prometheus_remote_storage_shards_min` (gauge): The minimum number of shards
+- `prometheus_remote_storage_shards_min` (gauge): The minimum number of shards
  a queue is allowed to run.
-* `prometheus_remote_storage_shards_max` (gauge): The maximum number of a
+- `prometheus_remote_storage_shards_max` (gauge): The maximum number of
  shards a queue is allowed to run.
-* `prometheus_remote_storage_shards_desired` (gauge): The number of shards a
+- `prometheus_remote_storage_shards_desired` (gauge): The number of shards a
  queue wants to run to be able to keep up with the amount of incoming metrics.
-* `prometheus_remote_storage_bytes_total` (counter): Total number of bytes of
+- `prometheus_remote_storage_bytes_total` (counter): Total number of bytes of
  data sent by queues after compression.
-* `prometheus_remote_storage_metadata_bytes_total` (counter): Total number of
+- `prometheus_remote_storage_metadata_bytes_total` (counter): Total number of
  bytes of metadata sent by queues after compression.
-* `prometheus_remote_storage_max_samples_per_send` (gauge): The maximum number
+- `prometheus_remote_storage_max_samples_per_send` (gauge): The maximum number
  of samples each shard is allowed to send in a single request.
-* `prometheus_remote_storage_samples_in_total` (counter): Samples read into
+- `prometheus_remote_storage_samples_in_total` (counter): Samples read into
  remote storage.
-* `prometheus_remote_storage_exemplars_in_total` (counter): Exemplars read into
+- `prometheus_remote_storage_exemplars_in_total` (counter): Exemplars read into
  remote storage.

## Examples
@@ -372,7 +373,6 @@ prometheus.scrape "demo" {
}
```

-
### Send metrics to a Mimir instance with a tenant specified

You can create a `prometheus.remote_write` component that sends your metrics to a specific tenant within the Mimir instance. This is useful when your Mimir instance is using more than one tenant:
@@ -405,9 +405,10 @@ prometheus.remote_write "default" {
    }
  }
}
```
+
## Technical details

-`prometheus.remote_write` uses [snappy](https://en.wikipedia.org/wiki/Snappy_(compression)) for compression.
+`prometheus.remote_write` uses [snappy](<https://en.wikipedia.org/wiki/Snappy_(compression)>) for compression.

Any labels that start with `__` will be removed before sending to the endpoint.
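+
+Relatedly, a hedged sketch of dropping a noisy series with
+`write_relabel_config` before it leaves the process (the metric name pattern
+and URL are hypothetical):
+
+```river
+prometheus.remote_write "filtered" {
+  endpoint {
+    url = "http://mimir:9009/api/v1/push"
+
+    // Drop any series whose metric name matches the regex.
+    write_relabel_config {
+      source_labels = ["__name__"]
+      regex         = "debug_.*"
+      action        = "drop"
+    }
+  }
+}
+```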
diff --git a/docs/sources/flow/reference/components/prometheus.scrape.md b/docs/sources/flow/reference/components/prometheus.scrape.md
index 6cd15ddb2553..62afb0c4b878 100644
--- a/docs/sources/flow/reference/components/prometheus.scrape.md
+++ b/docs/sources/flow/reference/components/prometheus.scrape.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/prometheus.scrape/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.scrape/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.scrape/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.scrape/
+  - /docs/grafana-cloud/agent/flow/reference/components/prometheus.scrape/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/prometheus.scrape/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/prometheus.scrape/
+  - /docs/grafana-cloud/send-data/agent/flow/reference/components/prometheus.scrape/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/prometheus.scrape/
description: Learn about prometheus.scrape
title: prometheus.scrape
@@ -42,55 +42,57 @@ time), the component reports an error.

The following arguments are supported:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`targets` | `list(map(string))` | List of targets to scrape. | | yes
-`forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes
-`job_name` | `string` | The value to use for the job label if not already set. | component name | no
-`extra_metrics` | `bool` | Whether extra metrics should be generated for scrape targets. | `false` | no
-`enable_protobuf_negotiation` | `bool` | Whether to enable protobuf negotiation with the client. | `false` | no
-`honor_labels` | `bool` | Indicator whether the scraped metrics should remain unmodified. | `false` | no
-`honor_timestamps` | `bool` | Indicator whether the scraped timestamps should be respected. | `true` | no
-`track_timestamps_staleness` | `bool` | Indicator whether to track the staleness of the scraped timestamps. | `false` | no
-`params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no
-`scrape_classic_histograms` | `bool` | Whether to scrape a classic histogram that is also exposed as a native histogram. | `false` | no
-`scrape_interval` | `duration` | How frequently to scrape the targets of this scrape configuration. | `"60s"` | no
-`scrape_timeout` | `duration` | The timeout for scraping targets of this configuration. | `"10s"` | no
-`metrics_path` | `string` | The HTTP resource path on which to fetch metrics from targets. | `/metrics` | no
-`scheme` | `string` | The URL scheme with which to fetch metrics from targets. | | no
-`body_size_limit` | `int` | An uncompressed response body larger than this many bytes causes the scrape to fail. 0 means no limit. | | no
-`sample_limit` | `uint` | More than this many samples post metric-relabeling causes the scrape to fail | | no
-`target_limit` | `uint` | More than this many targets after the target relabeling causes the scrapes to fail. | | no
-`label_limit` | `uint` | More than this many labels post metric-relabeling causes the scrape to fail. | | no
-`label_name_length_limit` | `uint` | More than this label name length post metric-relabeling causes the scrape to fail. | | no
-`label_value_length_limit` | `uint` | More than this label value length post metric-relabeling causes the scrape to fail. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
-
- At most, one of the following can be provided:
- - [`bearer_token` argument](#arguments).
- - [`bearer_token_file` argument](#arguments).
- - [`basic_auth` block][basic_auth].
- - [`authorization` block][authorization].
- - [`oauth2` block][oauth2].
+| Name | Type | Description | Default | Required |
+| ----------------------------- | ----------------------- | ---------------------------------------------------------------------------------------------------------- | -------------- | -------- |
+| `targets` | `list(map(string))` | List of targets to scrape. | | yes |
+| `forward_to` | `list(MetricsReceiver)` | List of receivers to send scraped metrics to. | | yes |
+| `job_name` | `string` | The value to use for the job label if not already set. | component name | no |
+| `extra_metrics` | `bool` | Whether extra metrics should be generated for scrape targets. | `false` | no |
+| `enable_protobuf_negotiation` | `bool` | Whether to enable protobuf negotiation with the client. | `false` | no |
+| `honor_labels` | `bool` | Indicator whether the scraped metrics should remain unmodified. | `false` | no |
+| `honor_timestamps` | `bool` | Indicator whether the scraped timestamps should be respected. | `true` | no |
+| `track_timestamps_staleness` | `bool` | Indicator whether to track the staleness of the scraped timestamps. | `false` | no |
+| `params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no |
+| `scrape_classic_histograms` | `bool` | Whether to scrape a classic histogram that is also exposed as a native histogram. | `false` | no |
+| `scrape_interval` | `duration` | How frequently to scrape the targets of this scrape configuration. | `"60s"` | no |
+| `scrape_timeout` | `duration` | The timeout for scraping targets of this configuration. | `"10s"` | no |
+| `metrics_path` | `string` | The HTTP resource path on which to fetch metrics from targets. | `/metrics` | no |
+| `scheme` | `string` | The URL scheme with which to fetch metrics from targets. | | no |
+| `body_size_limit` | `int` | An uncompressed response body larger than this many bytes causes the scrape to fail. 0 means no limit. | | no |
+| `sample_limit` | `uint` | More than this many samples post metric-relabeling causes the scrape to fail | | no |
+| `target_limit` | `uint` | More than this many targets after the target relabeling causes the scrapes to fail. | | no |
+| `label_limit` | `uint` | More than this many labels post metric-relabeling causes the scrape to fail. | | no |
+| `label_name_length_limit` | `uint` | More than this label name length post metric-relabeling causes the scrape to fail. | | no |
+| `label_value_length_limit` | `uint` | More than this label value length post metric-relabeling causes the scrape to fail. | | no |
+| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no |
+| `bearer_token` | `secret` | Bearer token to authenticate with. | | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no |
+| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no |
+
+At most, one of the following can be provided:
+
+- [`bearer_token` argument](#arguments).
+- [`bearer_token_file` argument](#arguments).
+- [`basic_auth` block][basic_auth].
+- [`authorization` block][authorization].
+- [`oauth2` block][oauth2].

{{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}}

`track_timestamps_staleness` controls whether Prometheus tracks [staleness][prom-staleness] of metrics which have an explicit timestamp present in scraped data.
+
-* An "explicit timestamp" is an optional timestamp in the [Prometheus metrics exposition format][prom-text-exposition-format]. For example, this sample has a timestamp of `1395066363000`:
+- An "explicit timestamp" is an optional timestamp in the [Prometheus metrics exposition format][prom-text-exposition-format].
+  For example, this sample has a timestamp of `1395066363000`:

```
http_requests_total{method="post",code="200"} 1027 1395066363000
```
-* If `track_timestamps_staleness` is set to `true`, a staleness marker will be inserted when a metric is no longer present or the target is down.
-* A "staleness marker" is just a {{< term "sample" >}}sample{{< /term >}} with a specific NaN value which is reserved for internal use by Prometheus.
-* It is recommended to set `track_timestamps_staleness` to `true` if the database where metrics are written to has enabled [out of order ingestion][mimir-ooo].
-* If `track_timestamps_staleness` is set to `false`, samples with explicit timestamps will only be labeled as stale after a certain time period, which in Prometheus is 5 minutes by default.
+- If `track_timestamps_staleness` is set to `true`, a staleness marker will be inserted when a metric is no longer present or the target is down.
+- A "staleness marker" is just a {{< term "sample" >}}sample{{< /term >}} with a specific NaN value which is reserved for internal use by Prometheus.
+- It is recommended to set `track_timestamps_staleness` to `true` if the database where metrics are written to has enabled [out of order ingestion][mimir-ooo].
+- If `track_timestamps_staleness` is set to `false`, samples with explicit timestamps will only be labeled as stale after a certain time period, which in Prometheus is 5 minutes by default.

[prom-text-exposition-format]: https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format
[prom-staleness]: https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness
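+
+A hedged sketch that enables staleness tracking for explicitly timestamped
+samples (the target address and downstream receiver are hypothetical):
+
+```river
+prometheus.scrape "timestamped" {
+  targets    = [{"__address__" = "pushgateway:9091"}]
+  forward_to = [prometheus.remote_write.local.receiver]
+
+  // Insert staleness markers even for samples with explicit timestamps.
+  track_timestamps_staleness = true
+}
+```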
For example, this sample has a timestamp of `1395066363000`:
```
http_requests_total{method="post",code="200"} 1027 1395066363000
```
-* If `track_timestamps_staleness` is set to `true`, a staleness marker will be inserted when a metric is no longer present or the target is down.
-* A "staleness marker" is just a {{< term "sample" >}}sample{{< /term >}} with a specific NaN value which is reserved for internal use by Prometheus.
-* It is recommended to set `track_timestamps_staleness` to `true` if the database where metrics are written to has enabled [out of order ingestion][mimir-ooo].
-* If `track_timestamps_staleness` is set to `false`, samples with explicit timestamps will only be labeled as stale after a certain time period, which in Prometheus is 5 minutes by default.
+- If `track_timestamps_staleness` is set to `true`, a staleness marker will be inserted when a metric is no longer present or the target is down.
+- A "staleness marker" is just a {{< term "sample" >}}sample{{< /term >}} with a specific NaN value which is reserved for internal use by Prometheus.
+- It is recommended to set `track_timestamps_staleness` to `true` if the database that metrics are written to has [out of order ingestion][mimir-ooo] enabled.
+- If `track_timestamps_staleness` is set to `false`, samples with explicit timestamps will only be marked as stale after a certain time period, which in Prometheus is 5 minutes by default.

[prom-text-exposition-format]: https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format
[prom-staleness]: https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness

@@ -100,14 +102,14 @@ Name | Type | Description | Default | Required

The following blocks are supported inside the definition of
`prometheus.scrape`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-basic_auth | [basic_auth][] | Configure basic_auth for authenticating to targets. | no
-authorization | [authorization][] | Configure generic authorization to targets. | no
-oauth2 | [oauth2][] | Configure OAuth2 for authenticating to targets. | no
-oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to targets via OAuth2. | no
-tls_config | [tls_config][] | Configure TLS settings for connecting to targets. | no
-clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no
+| Hierarchy | Block | Description | Required |
+| ------------------- | ----------------- | ------------------------------------------------------------------------ | -------- |
+| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to targets. | no |
+| authorization | [authorization][] | Configure generic authorization to targets. | no |
+| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to targets. | no |
+| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to targets via OAuth2. | no |
+| tls_config | [tls_config][] | Configure TLS settings for connecting to targets. | no |
+| clustering | [clustering][] | Configure the component for when the Agent is running in clustered mode. | no |

The `>` symbol indicates deeper levels of nesting. For example,
`oauth2 > tls_config` refers to a `tls_config` block defined inside
@@ -138,9 +140,9 @@ an `oauth2` block.

### clustering block

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`enabled` | `bool` | Enables sharing targets with other cluster nodes. 
| `false` | yes
+| Name | Type | Description | Default | Required |
+| --------- | ------ | ------------------------------------------------- | ------- | -------- |
+| `enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes |

When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled` is set to
true, then this `prometheus.scrape` component instance opts-in to participating in
@@ -184,9 +186,9 @@ scrape job on the component's debug endpoint.

## Debug metrics

-* `agent_prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
-* `agent_prometheus_scrape_targets_gauge` (gauge): Number of targets this component is configured to scrape.
-* `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
+- `agent_prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
+- `agent_prometheus_scrape_targets_gauge` (gauge): Number of targets this component is configured to scrape.
+- `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.

## Scraping behavior

@@ -227,24 +229,23 @@ the labels last used for scraping.
The following labels are automatically injected to the scraped time series and
can help pin down a scrape target.

-Label | Description
---------------------- | ----------
-job | The configured job name that the target belongs to. Defaults to the fully formed component name.
-instance | The `__address__` or `<host>:<port>` of the scrape target's URL.
-
+| Label | Description |
+| -------- | ------------------------------------------------------------------------------------------------ |
+| job | The configured job name that the target belongs to. Defaults to the fully formed component name. |
+| instance | The `__address__` or `<host>:<port>` of the scrape target's URL. |

Similarly, these metrics that record the behavior of the scrape targets are also
automatically available.

-Metric Name | Description
+Metric Name | Description
-------------------------- | -----------
-`up` | 1 if the instance is healthy and reachable, or 0 if the scrape failed.
-`scrape_duration_seconds` | Duration of the scrape in seconds.
-`scrape_samples_scraped` | The number of samples the target exposed.
+`up` | 1 if the instance is healthy and reachable, or 0 if the scrape failed.
+`scrape_duration_seconds` | Duration of the scrape in seconds.
+`scrape_samples_scraped` | The number of samples the target exposed.
`scrape_samples_post_metric_relabeling` | The number of samples remaining after metric relabeling was applied.
-`scrape_series_added` | The approximate number of new series in this scrape.
-`scrape_timeout_seconds` | The configured scrape timeout for a target. Useful for measuring how close a target was to timing out using `scrape_duration_seconds / scrape_timeout_seconds`
-`scrape_sample_limit` | The configured sample limit for a target. Useful for measuring how close a target was to reaching the sample limit using `scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)`
-`scrape_body_size_bytes` | The uncompressed size of the most recent scrape response, if successful. Scrapes failing because the `body_size_limit` is exceeded report -1, other scrape failures report 0.
+`scrape_series_added` | The approximate number of new series in this scrape.
+`scrape_timeout_seconds` | The configured scrape timeout for a target. 
Useful for measuring how close a target was to timing out using `scrape_duration_seconds / scrape_timeout_seconds`.
+`scrape_sample_limit` | The configured sample limit for a target. Useful for measuring how close a target was to reaching the sample limit using `scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)`.
+`scrape_body_size_bytes` | The uncompressed size of the most recent scrape response, if successful. Scrapes failing because the `body_size_limit` is exceeded report -1, other scrape failures report 0.

The `up` metric is particularly useful for monitoring and alerting on the
health of a scrape job. It is set to `0` in case anything goes wrong with the
@@ -286,6 +287,7 @@ prometheus.scrape "blackbox_scraper" {
```

Here are the endpoints that are being scraped every 10 seconds:
+
```
http://blackbox-exporter:9115/probe?target=grafana.com&module=http_2xx
http://blackbox-exporter:9116/probe?target=grafana.com&module=http_2xx
@@ -296,17 +298,19 @@ http://blackbox-exporter:9116/probe?target=grafana.com&module=http_2xx
`prometheus.scrape` supports [gzip](https://en.wikipedia.org/wiki/Gzip) compression.

The following special labels can change the behavior of prometheus.scrape:
-* `__address__` is the name of the label that holds the `<host>:<port>` address of a scrape target.
-* `__metrics_path__` is the name of the label that holds the path on which to scrape a target.
-* `__scheme__` is the name of the label that holds the scheme (http,https) on which to scrape a target.
-* `__scrape_interval__` is the name of the label that holds the scrape interval used to scrape a target.
-* `__scrape_timeout__` is the name of the label that holds the scrape timeout used to scrape a target.
-* `__param_<name>` is a prefix for labels that provide URL parameters `<name>` used to scrape a target.
+
+- `__address__` is the name of the label that holds the `<host>:<port>` address of a scrape target.
+- `__metrics_path__` is the name of the label that holds the path on which to scrape a target.
+- `__scheme__` is the name of the label that holds the scheme (http, https) on which to scrape a target.
+- `__scrape_interval__` is the name of the label that holds the scrape interval used to scrape a target.
+- `__scrape_timeout__` is the name of the label that holds the scrape timeout used to scrape a target.
+- `__param_<name>` is a prefix for labels that provide URL parameters `<name>` used to scrape a target.

Special labels added after a scrape

-* `__name__` is the label name indicating the metric name of a timeseries.
-* `job` is the label name indicating the job from which a timeseries was scraped.
-* `instance` is the label name used for the instance label.
+
+- `__name__` is the label name indicating the metric name of a timeseries.
+- `job` is the label name indicating the job from which a timeseries was scraped.
+- `instance` is the label name used for the instance label.

@@ -317,7 +321,6 @@ Special labels added after a scrape

- Components that export [Targets](../../compatibility/#targets-exporters)
- Components that export [Prometheus `MetricsReceiver`](../../compatibility/#prometheus-metricsreceiver-exporters)

-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details. 
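+
+As an illustration of the special labels described above, the following sketch
+overrides the scrape path, scheme, and a URL parameter for a single target.
+The target address, the `module` parameter, and the `prometheus.remote_write`
+component it forwards to are assumptions for the example, not defaults:
+
+```river
+prometheus.scrape "custom_endpoint" {
+  targets = [{
+    "__address__"      = "app.example.com:8443", // assumed <host>:<port>
+    "__metrics_path__" = "/internal/metrics",    // overrides the default /metrics
+    "__scheme__"       = "https",
+    "__param_module"   = "http_2xx"              // sent as ?module=http_2xx
+  }]
+  forward_to = [prometheus.remote_write.default.receiver]
+}
+```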
diff --git a/docs/sources/flow/reference/components/pyroscope.ebpf.md b/docs/sources/flow/reference/components/pyroscope.ebpf.md
index 04e257cac338..d27ed7990ba5 100644
--- a/docs/sources/flow/reference/components/pyroscope.ebpf.md
+++ b/docs/sources/flow/reference/components/pyroscope.ebpf.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/flow/reference/components/pyroscope.ebpf/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/pyroscope.ebpf/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/pyroscope.ebpf/
-- /docs/grafana-cloud/send-data/agent/flow/reference/components/pyroscope.ebpf/
+ - /docs/grafana-cloud/agent/flow/reference/components/pyroscope.ebpf/
+ - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/pyroscope.ebpf/
+ - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/pyroscope.ebpf/
+ - /docs/grafana-cloud/send-data/agent/flow/reference/components/pyroscope.ebpf/
canonical: https://grafana.com/docs/agent/latest/flow/reference/components/pyroscope.ebpf/
description: Learn about pyroscope.ebpf
labels:
@@ -19,7 +19,7 @@ title: pyroscope.ebpf
to the list of receivers passed in `forward_to`.

{{< admonition type="note" >}}
-To use the `pyroscope.ebpf` component you must run {{< param "PRODUCT_NAME" >}} as root and inside host pid namespace.
+To use the `pyroscope.ebpf` component you must run {{< param "PRODUCT_NAME" >}} as root and inside the host PID namespace.
{{< /admonition >}}

You can specify multiple `pyroscope.ebpf` components by giving them different labels, however it is not recommended as
@@ -43,7 +43,7 @@ You can use the following arguments to configure a `pyroscope.ebpf`. Only the
values.

| Name | Type | Description | Default | Required |
-|---------------------------|--------------------------|-------------------------------------------------------------------------------------|---------|----------|
+| ------------------------- | ------------------------ | ----------------------------------------------------------------------------------- | ------- | -------- |
| `targets` | `list(map(string))` | List of targets to group profiles by container id | | yes |
| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send collected profiles to. | | yes |
| `collect_interval` | `duration` | How frequently to collect profiles | `15s` | no |
@@ -71,17 +71,17 @@ configuration.

## Debug information

-* `targets` currently tracked active targets.
-* `pid_cache` per process elf symbol tables and their sizes in symbols count.
-* `elf_cache` per build id and per same file symbol tables and their sizes in symbols count.
+- `targets`: the currently tracked active targets.
+- `pid_cache`: per-process ELF symbol tables and their sizes, in number of symbols.
+- `elf_cache`: per-build-ID and per-file symbol tables and their sizes, in number of symbols.

## Debug metrics

-* `pyroscope_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
-* `pyroscope_ebpf_active_targets` (gauge): Number of active targets the component tracks.
-* `pyroscope_ebpf_profiling_sessions_total` (counter): Number of profiling sessions completed.
-* `pyroscope_ebpf_profiling_sessions_failing_total` (counter): Number of profiling sessions failed.
-* `pyroscope_ebpf_pprofs_total` (counter): Number of pprof profiles collected by the ebpf component.
+- `pyroscope_fanout_latency` (histogram): Write latency for sending to direct and indirect components. 
+- `pyroscope_ebpf_active_targets` (gauge): Number of active targets the component tracks.
+- `pyroscope_ebpf_profiling_sessions_total` (counter): Number of profiling sessions completed.
+- `pyroscope_ebpf_profiling_sessions_failing_total` (counter): Number of profiling sessions failed.
+- `pyroscope_ebpf_pprofs_total` (counter): Number of pprof profiles collected by the ebpf component.

## Profile collecting behavior

@@ -92,21 +92,21 @@ The following labels are automatically injected into the collected profiles if y
can help you pin down a profiling target.

| Label | Description |
-|--------------------|----------------------------------------------------------------------------------------------------------------------------------|
+| ------------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| `service_name` | Pyroscope service name. It's automatically selected from discovery meta labels if possible. Otherwise defaults to `unspecified`. |
| `__name__` | pyroscope metric name. Defaults to `process_cpu`. |
| `__container_id__` | The container ID derived from target. |

-### Targets
+### Targets

One of the following special labels _must_ be included in each target of
`targets` and the label must correspond to the container or process that is
profiled:

-* `__container_id__`: The container ID.
-* `__meta_docker_container_id`: The ID of the Docker container.
-* `__meta_kubernetes_pod_container_id`: The ID of the Kubernetes pod container.
-* `__process_pid__` : The process ID.
+- `__container_id__`: The container ID.
+- `__meta_docker_container_id`: The ID of the Docker container.
+- `__meta_kubernetes_pod_container_id`: The ID of the Kubernetes pod container.
+- `__process_pid__`: The process ID.

-Each process is then associated with a specified target from the targets list, determined by a container ID or process PID.
+Each process is then associated with a specified target from the targets list, determined by a container ID or process PID.
If a process's container ID matches a target's container ID label, the stack traces are aggregated per target based on the container ID.
If a process's PID matches a target's process PID label, the stack traces are aggregated per target based on the process PID.

@@ -289,11 +289,12 @@ pyroscope.write "staging" {
}
}

-pyroscope.ebpf "default" {
+pyroscope.ebpf "default" {
forward_to = [ pyroscope.write.staging.receiver ]
targets = discovery.relabel.local_containers.output
}
```
+

## Compatible components

@@ -303,7 +304,6 @@ pyroscope.ebpf "default" {

- Components that export [Targets](../../compatibility/#targets-exporters)
- Components that export [Pyroscope `ProfilesReceiver`](../../compatibility/#pyroscope-profilesreceiver-exporters)

-
{{< admonition type="note" >}}
Connecting some components may not be sensible or components may require further configuration to make the connection work correctly.
Refer to the linked documentation for more details.
diff --git a/docs/sources/flow/reference/components/pyroscope.java.md b/docs/sources/flow/reference/components/pyroscope.java.md
index 3fdc8105291e..9a8692434a54 100644
--- a/docs/sources/flow/reference/components/pyroscope.java.md
+++ b/docs/sources/flow/reference/components/pyroscope.java.md
@@ -17,7 +17,7 @@ title: pyroscope.java
using [async-profiler](https://github.com/async-profiler/async-profiler). 
{{< admonition type="note" >}}
-To use the `pyroscope.java` component you must run {{< param "PRODUCT_NAME" >}} as root and inside host PID namespace.
+To use the `pyroscope.java` component you must run {{< param "PRODUCT_NAME" >}} as root and inside the host PID namespace.
{{< /admonition >}}

## Usage

@@ -34,10 +34,10 @@ pyroscope.java "LABEL" {
The following arguments are supported:

| Name | Type | Description | Default | Required |
-|--------------|--------------------------|--------------------------------------------------|---------|----------|
+| ------------ | ------------------------ | ------------------------------------------------ | ------- | -------- |
| `targets` | `list(map(string))` | List of java process targets to profile. | | yes |
| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send collected profiles to. | | yes |
-| `tmp_dir` | `string` | Temporary directory to store async-profiler. | `/tmp` | no |
+| `tmp_dir` | `string` | Temporary directory to store async-profiler. | `/tmp` | no |

## Profiling behavior

@@ -73,6 +73,7 @@ Labels starting with a double underscore (`__`) are treated as _internal_, and a
The special label `service_name` is required and must always be present.
If it is not specified, `pyroscope.scrape` will attempt to infer it from
either of the following sources, in this order:
+
1. `__meta_kubernetes_pod_annotation_pyroscope_io_service_name` which is a `pyroscope.io/service_name` pod annotation.
2. `__meta_kubernetes_namespace` and `__meta_kubernetes_pod_container_name`
3. `__meta_docker_container_name`
@@ -85,8 +86,8 @@ If `service_name` is not specified and could not be inferred, then it is set to

The following blocks are supported inside the definition of `pyroscope.java`:

-| Hierarchy | Block | Description | Required |
-|------------------|----------------------|----------------------------------------|----------|
+| Hierarchy | Block | Description | Required |
+| ---------------- | -------------------- | --------------------------------------- | -------- |
| profiling_config | [profiling_config][] | Describes java profiling configuration. | no |

[profiling_config]: #profiling_config-block

@@ -97,12 +98,12 @@ The `profiling_config` block describes how async-profiler is invoked.

The following arguments are supported:

-| Name | Type | Description | Default | Required |
-|---------------|------------|---------------------------------------------------------------------------------------------------------|---------|----------|
-| `interval` | `duration` | How frequently to collect profiles from the targets. | "60s" | no |
-| `cpu` | `bool` | A flag to enable cpu profiling, using `itimer` async-profiler event. | true | no |
+| Name | Type | Description | Default | Required |
+| ------------- | ---------- | -------------------------------------------------------------------------------------------------------- | ------- | -------- |
+| `interval` | `duration` | How frequently to collect profiles from the targets. | "60s" | no |
+| `cpu` | `bool` | A flag to enable CPU profiling, using the `itimer` async-profiler event. | true | no |
| `sample_rate` | `int` | CPU profiling sample rate. It is converted from Hz to interval and passed as `-i` arg to async-profiler. | 100 | no |
-| `alloc` | `string` | Allocation profiling sampling configuration It is passed as `--alloc` arg to async-profiler. | "512k" | no |
+| `alloc` | `string` | Allocation profiling sampling configuration. It is passed as `--alloc` arg to async-profiler. 
| "512k" | no | | `lock` | `string` | Lock profiling sampling configuration. It is passed as `--lock` arg to async-profiler. | "10ms" | no | For more information on async-profiler configuration, see [profiler-options](https://github.com/async-profiler/async-profiler?tab=readme-ov-file#profiler-options) @@ -180,10 +181,9 @@ pyroscope.java "java" { - Components that export [Targets](../../compatibility/#targets-exporters) - Components that export [Pyroscope `ProfilesReceiver`](../../compatibility/#pyroscope-profilesreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/pyroscope.scrape.md b/docs/sources/flow/reference/components/pyroscope.scrape.md index 9d00df3a8c3b..750578fcf345 100644 --- a/docs/sources/flow/reference/components/pyroscope.scrape.md +++ b/docs/sources/flow/reference/components/pyroscope.scrape.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/pyroscope.scrape/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/pyroscope.scrape/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/pyroscope.scrape/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/pyroscope.scrape/ + - /docs/grafana-cloud/agent/flow/reference/components/pyroscope.scrape/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/pyroscope.scrape/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/pyroscope.scrape/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/pyroscope.scrape/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/pyroscope.scrape/ description: Learn about pyroscope.scrape labels: @@ -15,15 +15,15 @@ title: pyroscope.scrape {{< docs/shared lookup="flow/stability/beta.md" source="agent" version="" >}} -`pyroscope.scrape` collects [pprof] performance profiles for a given set of HTTP `targets`. +`pyroscope.scrape` collects [pprof] performance profiles for a given set of HTTP `targets`. `pyroscope.scrape` mimcks the scraping behavior of `prometheus.scrape`. Similarly to how Prometheus scrapes metrics via HTTP, `pyroscope.scrape` collects profiles via HTTP requests. -Unlike Prometheus, which usually only scrapes one `/metrics` endpoint per target, +Unlike Prometheus, which usually only scrapes one `/metrics` endpoint per target, `pyroscope.scrape` may need to scrape multiple endpoints for the same target. -This is because different types of profiles are scraped on different endpoints. -For example, "mutex" profiles may be scraped on a `/debug/pprof/delta_mutex` HTTP endpoint, whereas +This is because different types of profiles are scraped on different endpoints. +For example, "mutex" profiles may be scraped on a `/debug/pprof/delta_mutex` HTTP endpoint, whereas memory consumption may be scraped on a `/debug/pprof/allocs` HTTP endpoint. The profile paths, protocol scheme, scrape interval, scrape timeout, @@ -33,11 +33,12 @@ The `pyroscope.scrape` component regards a scrape as successful if it responded with an HTTP `200 OK` status code and returned the body of a valid [pprof] profile. 
If a scrape request fails, the [debug UI][] for `pyroscope.scrape` will show: -* Detailed information about the failure. -* The time of the last successful scrape. -* The labels last used for scraping. -The scraped performance profiles can be forwarded to components such as +- Detailed information about the failure. +- The time of the last successful scrape. +- The labels last used for scraping. + +The scraped performance profiles can be forwarded to components such as `pyroscope.write` via the `forward_to` argument. Multiple `pyroscope.scrape` components can be specified by giving them different labels. @@ -55,7 +56,7 @@ pyroscope.scrape "LABEL" { ## Arguments -`pyroscope.scrape` starts a new scrape job to scrape all of the input targets. +`pyroscope.scrape` starts a new scrape job to scrape all of the input targets. Multiple scrape jobs can be started for a single input target when scraping multiple profile types. @@ -63,35 +64,36 @@ The list of arguments that can be used to configure the block is presented below. Any omitted arguments take on their default values. If conflicting -arguments are being passed (for example, configuring both `bearer_token` +arguments are being passed (for example, configuring both `bearer_token` and `bearer_token_file`), then `pyroscope.scrape` will fail to start and will report an error. The following arguments are supported: -Name | Type | Description | Default | Required -------------------- | ------------------------ | ------------------------------------------------------------------ | -------------- | -------- -`targets` | `list(map(string))` | List of targets to scrape. | | yes -`forward_to` | `list(ProfilesReceiver)` | List of receivers to send scraped profiles to. | | yes -`job_name` | `string` | The job name to override the job label with. | component name | no -`params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no -`scrape_interval` | `duration` | How frequently to scrape the targets of this scrape configuration. | `"15s"` | no -`scrape_timeout` | `duration` | The timeout for scraping targets of this configuration. Must be larger than `scrape_interval`. | `"18s"` | no -`scheme` | `string` | The URL scheme with which to fetch metrics from targets. | `"http"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - - At most, one of the following can be provided: - - [`bearer_token` argument](#arguments). - - [`bearer_token_file` argument](#arguments). - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. 
+| Name | Type | Description | Default | Required |
+| ------------------------ | ------------------------ | ------------------------------------------------------------------------------------------------ | -------------- | -------- |
+| `targets` | `list(map(string))` | List of targets to scrape. | | yes |
+| `forward_to` | `list(ProfilesReceiver)` | List of receivers to send scraped profiles to. | | yes |
+| `job_name` | `string` | The job name to override the job label with. | component name | no |
+| `params` | `map(list(string))` | A set of query parameters with which the target is scraped. | | no |
+| `scrape_interval` | `duration` | How frequently to scrape the targets of this scrape configuration. | `"15s"` | no |
+| `scrape_timeout` | `duration` | The timeout for scraping targets of this configuration. Must be larger than `scrape_interval`. | `"18s"` | no |
+| `scheme` | `string` | The URL scheme with which to fetch metrics from targets. | `"http"` | no |
+| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no |
+| `bearer_token` | `secret` | Bearer token to authenticate with. | | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no |
+| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no |
+
+At most, one of the following can be provided:
+
+- [`bearer_token` argument](#arguments).
+- [`bearer_token_file` argument](#arguments).
+- [`basic_auth` block][basic_auth].
+- [`authorization` block][authorization].
+- [`oauth2` block][oauth2].

[arguments]: #arguments

@@ -105,7 +107,7 @@ For example, the `job_name` of `pyroscope.scrape "local" { ... }` will be `"pyro

#### `targets` argument

-The list of `targets` can be provided [statically][example_static_targets], [dynamically][example_dynamic_targets],
+The list of `targets` can be provided [statically][example_static_targets], [dynamically][example_dynamic_targets],
or a [combination of both][example_static_and_dynamic_targets].

The special `__address__` label _must always_ be present and corresponds to the
@@ -113,9 +115,10 @@ The special `__address__` label _must always_ be present and corresponds to the
Labels starting with a double underscore (`__`) are treated as _internal_, and
are removed prior to scraping.

-The special label `service_name` is required and must always be present.
-If it is not specified, `pyroscope.scrape` will attempt to infer it from
-either of the following sources, in this order:
+The special label `service_name` is required and must always be present.
+If it is not specified, `pyroscope.scrape` will attempt to infer it from
+any of the following sources, in this order:
+
1. `__meta_kubernetes_pod_annotation_pyroscope_io_service_name` which is a `pyroscope.io/service_name` pod annotation.
2. `__meta_kubernetes_namespace` and `__meta_kubernetes_pod_container_name`
3. 
`__meta_docker_container_name`
@@ -123,75 +126,78 @@ either of the following sources, in this order:

If `service_name` is not specified and could not be inferred, then it is set to
`unspecified`.

-The following labels are automatically injected to the scraped profiles
+The following labels are automatically injected to the scraped profiles
so that they can be linked to a scrape target:

| Label | Description |
-|------------------|----------------------------------------------------------------- |
+| ---------------- | ---------------------------------------------------------------- |
| `"job"` | The `job_name` that the target belongs to. |
| `"instance"` | The `__address__` or `<host>:<port>` of the scrape target's URL. |
| `"service_name"` | The inferred Pyroscope service name. |

#### `scrape_interval` argument

-The `scrape_interval` typically refers to the frequency with which {{< param "PRODUCT_NAME" >}} collects performance profiles from the monitored targets.
-It represents the time interval between consecutive scrapes or data collection events.
+The `scrape_interval` is the frequency with which {{< param "PRODUCT_NAME" >}} collects performance profiles from the monitored targets.
+It represents the time interval between consecutive scrapes or data collection events.
This parameter is important for controlling the trade-off between resource
usage and the freshness of the collected data.

If `scrape_interval` is short:
-* Advantages:
- * Fewer profiles may be lost if the application being scraped crashes.
-* Disadvantages:
- * Greater consumption of CPU, memory, and network resources during scrapes and remote writes.
- * The backend database (Pyroscope) will consume more storage space.
+
+- Advantages:
+ - Fewer profiles may be lost if the application being scraped crashes.
+- Disadvantages:
+ - Greater consumption of CPU, memory, and network resources during scrapes and remote writes.
+ - The backend database (Pyroscope) will consume more storage space.

If `scrape_interval` is long:
-* Advantages:
- * Lower resource consumption.
-* Disadvantages:
- * More profiles may be lost if the application being scraped crashes.
- * If the [delta argument][] is set to `true`, the batch size of
+
+- Advantages:
+ - Lower resource consumption.
+- Disadvantages:
+ - More profiles may be lost if the application being scraped crashes.
+ - If the [delta argument][] is set to `true`, the batch size of
each remote write to Pyroscope may be bigger. The Pyroscope database may need to be tuned
with higher limits.
- * If the [delta argument][] is set to `true`, there is a larger risk of
+ - If the [delta argument][] is set to `true`, there is a larger risk of
reaching the HTTP server timeouts of the application being scraped.

For example, consider this situation:
-* `pyroscope.scrape` is configured with a `scrape_interval` of `"60s"`.
-* The application being scraped is running an HTTP server with a timeout of 30 seconds.
-* Any scrape HTTP requests where the [delta argument][] is set to `true` will fail,
+
+- `pyroscope.scrape` is configured with a `scrape_interval` of `"60s"`.
+- The application being scraped is running an HTTP server with a timeout of 30 seconds.
+- Any scrape HTTP requests where the [delta argument][] is set to `true` will fail,
because they will attempt to run for 59 seconds.
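+
+As a minimal sketch of these trade-offs, the following configuration trades
+profile freshness for lower resource usage by using a longer interval. The
+target address, the `service_name` value, and the `pyroscope.write` component
+label are assumptions for the example:
+
+```river
+pyroscope.scrape "infrequent" {
+  targets = [{
+    "__address__"  = "localhost:4100", // assumed target address
+    "service_name" = "example-app"     // required label
+  }]
+  forward_to = [pyroscope.write.example.receiver]
+
+  scrape_interval = "60s" // fewer scrapes, but larger delta batches
+  scrape_timeout  = "70s" // must be larger than scrape_interval
+}
+```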
## Blocks The following blocks are supported inside the definition of `pyroscope.scrape`: -| Hierarchy | Block | Description | Required | -|-----------------------------------------------|--------------------------------|--------------------------------------------------------------------------|----------| -| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to targets. | no | -| authorization | [authorization][] | Configure generic authorization to targets. | no | -| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to targets. | no | -| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to targets via OAuth2. | no | -| tls_config | [tls_config][] | Configure TLS settings for connecting to targets. | no | -| profiling_config | [profiling_config][] | Configure profiling settings for the scrape job. | no | -| profiling_config > profile.memory | [profile.memory][] | Collect memory profiles. | no | -| profiling_config > profile.block | [profile.block][] | Collect profiles on blocks. | no | -| profiling_config > profile.goroutine | [profile.goroutine][] | Collect goroutine profiles. | no | -| profiling_config > profile.mutex | [profile.mutex][] | Collect mutex profiles. | no | -| profiling_config > profile.process_cpu | [profile.process_cpu][] | Collect CPU profiles. | no | -| profiling_config > profile.fgprof | [profile.fgprof][] | Collect [fgprof][] profiles. | no | -| profiling_config > profile.godeltaprof_memory | [profile.godeltaprof_memory][] | Collect [godeltaprof][] memory profiles. | no | -| profiling_config > profile.godeltaprof_mutex | [profile.godeltaprof_mutex][] | Collect [godeltaprof][] mutex profiles. | no | -| profiling_config > profile.godeltaprof_block | [profile.godeltaprof_block][] | Collect [godeltaprof][] block profiles. | no | -| profiling_config > profile.custom | [profile.custom][] | Collect custom profiles. | no | +| Hierarchy | Block | Description | Required | +| --------------------------------------------- | ------------------------------ | ------------------------------------------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to targets. | no | +| authorization | [authorization][] | Configure generic authorization to targets. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to targets. | no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to targets via OAuth2. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to targets. | no | +| profiling_config | [profiling_config][] | Configure profiling settings for the scrape job. | no | +| profiling_config > profile.memory | [profile.memory][] | Collect memory profiles. | no | +| profiling_config > profile.block | [profile.block][] | Collect profiles on blocks. | no | +| profiling_config > profile.goroutine | [profile.goroutine][] | Collect goroutine profiles. | no | +| profiling_config > profile.mutex | [profile.mutex][] | Collect mutex profiles. | no | +| profiling_config > profile.process_cpu | [profile.process_cpu][] | Collect CPU profiles. | no | +| profiling_config > profile.fgprof | [profile.fgprof][] | Collect [fgprof][] profiles. | no | +| profiling_config > profile.godeltaprof_memory | [profile.godeltaprof_memory][] | Collect [godeltaprof][] memory profiles. | no | +| profiling_config > profile.godeltaprof_mutex | [profile.godeltaprof_mutex][] | Collect [godeltaprof][] mutex profiles. 
| no | +| profiling_config > profile.godeltaprof_block | [profile.godeltaprof_block][] | Collect [godeltaprof][] block profiles. | no | +| profiling_config > profile.custom | [profile.custom][] | Collect custom profiles. | no | | clustering | [clustering][] | Configure the component for when {{< param "PRODUCT_NAME" >}} is running in clustered mode. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. -Any omitted blocks take on their default values. For example, -if `profile.mutex` is not specified in the config, +Any omitted blocks take on their default values. For example, +if `profile.mutex` is not specified in the config, the defaults documented in [profile.mutex][] will be used. [basic_auth]: #basic_auth-block @@ -211,10 +217,8 @@ the defaults documented in [profile.mutex][] will be used. [profile.custom]: #profilecustom-block [pprof]: https://github.com/google/pprof/blob/main/doc/README.md [clustering]: #clustering-block - [fgprof]: https://github.com/felixge/fgprof [godeltaprof]: https://github.com/grafana/pyroscope-go/tree/main/godeltaprof - [delta argument]: #delta-argument ### basic_auth block @@ -240,9 +244,9 @@ targets. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`path_prefix` | `string` | The path prefix to use when scraping targets. | | no +| Name | Type | Description | Default | Required | +| ------------- | -------- | --------------------------------------------- | ------- | -------- | +| `path_prefix` | `string` | The path prefix to use when scraping targets. | | no | ### profile.memory block @@ -250,11 +254,11 @@ The `profile.memory` block collects profiles on memory consumption. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no -`path` | `string` | The path to the profile type on the target. | `"/debug/pprof/allocs"` | no -`delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | ----------------------- | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no | +| `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/allocs"` | no | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no | For more information about the `delta` argument, see the [delta argument][] section. @@ -264,11 +268,11 @@ The `profile.block` block collects profiles on process blocking. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no -`path` | `string` | The path to the profile type on the target. | `"/debug/pprof/block"` | no -`delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | ---------------------- | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no | +| `path` | `string` | The path to the profile type on the target. 
| `"/debug/pprof/block"` | no | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no | For more information about the `delta` argument, see the [delta argument][] section. @@ -278,11 +282,11 @@ The `profile.goroutine` block collects profiles on the number of goroutines. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no -`path` | `string` | The path to the profile type on the target. | `"/debug/pprof/goroutine"` | no -`delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | -------------------------- | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no | +| `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/goroutine"` | no | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no | For more information about the `delta` argument, see the [delta argument][] section. @@ -292,11 +296,11 @@ The `profile.mutex` block collects profiles on mutexes. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no -`path` | `string` | The path to the profile type on the target. | `"/debug/pprof/mutex"` | no -`delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | ---------------------- | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no | +| `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/mutex"` | no | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no | For more information about the `delta` argument, see the [delta argument][] section. @@ -307,11 +311,11 @@ process. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no -`path` | `string` | The path to the profile type on the target. | `"/debug/pprof/profile"` | no -`delta` | `boolean` | Whether to scrape the profile as a delta. | `true` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | ------------------------ | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | `true` | no | +| `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/profile"` | no | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `true` | no | For more information about the `delta` argument, see the [delta argument][] section. @@ -321,11 +325,11 @@ The `profile.fgprof` block collects profiles from an [fgprof][] endpoint. The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | `false` | no -`path` | `string` | The path to the profile type on the target. 
| `"/debug/fgprof"` | no -`delta` | `boolean` | Whether to scrape the profile as a delta. | `true` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | ----------------- | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | `false` | no | +| `path` | `string` | The path to the profile type on the target. | `"/debug/fgprof"` | no | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `true` | no | For more information about the `delta` argument, see the [delta argument][] section. @@ -336,7 +340,7 @@ The `profile.godeltaprof_memory` block collects profiles from [godeltaprof][] me The following arguments are supported: | Name | Type | Description | Default | Required | -|-----------|-----------|---------------------------------------------|-----------------------------|----------| +| --------- | --------- | ------------------------------------------- | --------------------------- | -------- | | `enabled` | `boolean` | Enable this profile type to be scraped. | `false` | no | | `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/delta_heap"` | no | @@ -347,7 +351,7 @@ The `profile.godeltaprof_mutex` block collects profiles from [godeltaprof][] mut The following arguments are supported: | Name | Type | Description | Default | Required | -|-----------|-----------|---------------------------------------------|------------------------------|----------| +| --------- | --------- | ------------------------------------------- | ---------------------------- | -------- | | `enabled` | `boolean` | Enable this profile type to be scraped. | `false` | no | | `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/delta_mutex"` | no | @@ -358,11 +362,10 @@ The `profile.godeltaprof_block` block collects profiles from [godeltaprof][] blo The following arguments are supported: | Name | Type | Description | Default | Required | -|-----------|-----------|---------------------------------------------|------------------------------|----------| +| --------- | --------- | ------------------------------------------- | ---------------------------- | -------- | | `enabled` | `boolean` | Enable this profile type to be scraped. | `false` | no | | `path` | `string` | The path to the profile type on the target. | `"/debug/pprof/delta_block"` | no | - ### profile.custom block The `profile.custom` block allows for collecting profiles from custom @@ -380,20 +383,20 @@ Multiple `profile.custom` blocks can be specified. Labels assigned to The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `boolean` | Enable this profile type to be scraped. | | yes -`path` | `string` | The path to the profile type on the target. | | yes -`delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no +| Name | Type | Description | Default | Required | +| --------- | --------- | ------------------------------------------- | ------- | -------- | +| `enabled` | `boolean` | Enable this profile type to be scraped. | | yes | +| `path` | `string` | The path to the profile type on the target. | | yes | +| `delta` | `boolean` | Whether to scrape the profile as a delta. | `false` | no | When the `delta` argument is `true`, a `seconds` query parameter is automatically added to requests. The `seconds` used will be equal to `scrape_interval - 1`. 
### clustering block -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes +| Name | Type | Description | Default | Required | +| --------- | ------ | ------------------------------------------------- | ------- | -------- | +| `enabled` | `bool` | Enables sharing targets with other cluster nodes. | `false` | yes | When {{< param "PRODUCT_NAME" >}} is [using clustering][], and `enabled` is set to true, then this `pyroscope.scrape` component instance opts-in to participating in the @@ -419,9 +422,10 @@ If {{< param "PRODUCT_NAME" >}} is _not_ running in clustered mode, this block i When the `delta` argument is `false`, the [pprof][] HTTP query will be instantaneous. When the `delta` argument is `true`: -* The [pprof][] HTTP query will run for a certain amount of time. -* A `seconds` parameter is automatically added to the HTTP request. -* The `seconds` used will be equal to `scrape_interval - 1`. + +- The [pprof][] HTTP query will run for a certain amount of time. +- A `seconds` parameter is automatically added to the HTTP request. +- The `seconds` used will be equal to `scrape_interval - 1`. For example, if `scrape_interval` is `"15s"`, `seconds` will be 14 seconds. If the HTTP endpoint is `/debug/pprof/profile`, then the HTTP query will become `/debug/pprof/profile?seconds=14` @@ -442,7 +446,7 @@ scrape job on the component's debug endpoint. ## Debug metrics -* `pyroscope_fanout_latency` (histogram): Write latency for sending to direct and indirect components. +- `pyroscope_fanout_latency` (histogram): Write latency for sending to direct and indirect components. ## Examples @@ -450,7 +454,7 @@ scrape job on the component's debug endpoint. ### Default endpoints of static targets -The following example sets up a scrape job of a statically configured +The following example sets up a scrape job of a statically configured list of targets - {{< param "PRODUCT_ROOT_NAME" >}} itself and Pyroscope. The scraped profiles are sent to `pyroscope.write` which remote writes them to a Pyroscope database. @@ -488,8 +492,9 @@ http://localhost:12345/debug/pprof/profile?seconds=14 ``` Note that `seconds=14` is added to the `/debug/pprof/profile` endpoint, because: -* The `delta` argument of the `profile.process_cpu` block is `true` by default. -* `scrape_interval` is `"15s"` by default. + +- The `delta` argument of the `profile.process_cpu` block is `true` by default. +- `scrape_interval` is `"15s"` by default. Also note that the `/debug/fgprof` endpoint will not be scraped, because the `enabled` argument of the `profile.fgprof` block is `false` by default. @@ -543,7 +548,6 @@ pyroscope.write "local" { } ``` - ### Enabling and disabling profiles ```river @@ -593,7 +597,6 @@ http://localhost:12345/debug/pprof/mutex - Components that export [Targets](../../compatibility/#targets-exporters) - Components that export [Pyroscope `ProfilesReceiver`](../../compatibility/#pyroscope-profilesreceiver-exporters) - {{< admonition type="note" >}} Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details. 
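+
+As a closing sketch, the following example opts a `pyroscope.scrape` component
+into clustering so that targets are distributed across cluster nodes. The
+`discovery.kubernetes` and `pyroscope.write` component references are
+assumptions for the example:
+
+```river
+pyroscope.scrape "clustered" {
+  targets    = discovery.kubernetes.pods.targets // assumed discovery component
+  forward_to = [pyroscope.write.example.receiver]
+
+  clustering {
+    enabled = true // share targets with other cluster nodes
+  }
+}
+```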
diff --git a/docs/sources/flow/reference/components/pyroscope.write.md b/docs/sources/flow/reference/components/pyroscope.write.md index 403aef0719e0..6a6aa11713ed 100644 --- a/docs/sources/flow/reference/components/pyroscope.write.md +++ b/docs/sources/flow/reference/components/pyroscope.write.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/pyroscope.write/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/pyroscope.write/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/pyroscope.write/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/pyroscope.write/ + - /docs/grafana-cloud/agent/flow/reference/components/pyroscope.write/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/pyroscope.write/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/pyroscope.write/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/pyroscope.write/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/pyroscope.write/ description: Learn about pyroscope.write labels: @@ -39,23 +39,23 @@ pyroscope.write "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`external_labels` | `map(string)` | Labels to add to profiles sent over the network. | | no +| Name | Type | Description | Default | Required | +| ----------------- | ------------- | ------------------------------------------------ | ------- | -------- | +| `external_labels` | `map(string)` | Labels to add to profiles sent over the network. | | no | ## Blocks The following blocks are supported inside the definition of `pyroscope.write`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -endpoint | [endpoint][] | Location to send profiles to. | no -endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------------------ | ----------------- | -------------------------------------------------------- | -------- | +| endpoint | [endpoint][] | Location to send profiles to. | no | +| endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no | +| endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `endpoint > basic_auth` refers to a `basic_auth` block defined inside an @@ -74,30 +74,31 @@ The `endpoint` block describes a single location to send profiles to. 
Multiple

The following arguments are supported:

-Name | Type | Description | Default | Required
--------------------------|---------------------|---------------------------------------------------------------|-----------|---------
-`url` | `string` | Full URL to send metrics to. | | yes
-`name` | `string` | Optional name to identify the endpoint in metrics. | | no
-`remote_timeout` | `duration` | Timeout for requests made to the URL. | `"10s"` | no
-`headers` | `map(string)` | Extra headers to deliver with the request. | | no
-`min_backoff_period` | `duration` | Initial backoff time between retries. | `"500ms"` | no
-`max_backoff_period` | `duration` | Maximum backoff time between retries. | `"5m"` | no
-`max_backoff_retries` | `int` | Maximum number of retries. 0 to retry infinitely. | 10 | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
-
- At most, one of the following can be provided:
- - [`bearer_token` argument][endpoint].
- - [`bearer_token_file` argument][endpoint].
- - [`basic_auth` block][basic_auth].
- - [`authorization` block][authorization].
- - [`oauth2` block][oauth2].
+| Name | Type | Description | Default | Required |
+| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | --------- | -------- |
+| `url` | `string` | Full URL to send profiles to. | | yes |
+| `name` | `string` | Optional name to identify the endpoint in metrics. | | no |
+| `remote_timeout` | `duration` | Timeout for requests made to the URL. | `"10s"` | no |
+| `headers` | `map(string)` | Extra headers to deliver with the request. | | no |
+| `min_backoff_period` | `duration` | Initial backoff time between retries. | `"500ms"` | no |
+| `max_backoff_period` | `duration` | Maximum backoff time between retries. | `"5m"` | no |
+| `max_backoff_retries` | `int` | Maximum number of retries. 0 to retry infinitely. | 10 | no |
+| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no |
+| `bearer_token` | `secret` | Bearer token to authenticate with. | | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no |
+| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. 
| | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][endpoint]. +- [`bearer_token_file` argument][endpoint]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -124,9 +125,9 @@ configured locations. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`receiver` | `receiver` | A value that other components can use to send profiles to. +| Name | Type | Description | +| ---------- | ---------- | ---------------------------------------------------------- | +| `receiver` | `receiver` | A value that other components can use to send profiles to. | ## Component health @@ -164,6 +165,7 @@ pyroscope.scrape "default" { forward_to = [pyroscope.write.staging.receiver] } ``` + ## Compatible components @@ -177,4 +179,4 @@ Connecting some components may not be sensible or components may require further Refer to the linked documentation for more details. {{< /admonition >}} - \ No newline at end of file + diff --git a/docs/sources/flow/reference/components/remote.http.md b/docs/sources/flow/reference/components/remote.http.md index e91fc6c409a0..54a48386b2db 100644 --- a/docs/sources/flow/reference/components/remote.http.md +++ b/docs/sources/flow/reference/components/remote.http.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/remote.http/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.http/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/remote.http/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.http/ + - /docs/grafana-cloud/agent/flow/reference/components/remote.http/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.http/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/remote.http/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.http/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/remote.http/ description: Learn about remote.http title: remote.http @@ -32,23 +32,23 @@ remote.http "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`url` | `string` | URL to poll. | | yes -`method` | `string` | Define HTTP method for the request | `"GET"` | no -`headers` | `map(string)` | Custom headers for the request. | `{}` | no -`body` | `string` | The request body. | `""` | no -`poll_frequency` | `duration` | Frequency to poll the URL. | `"1m"` | no -`poll_timeout` | `duration` | Timeout when polling the URL. | `"10s"` | no -`is_secret` | `bool` | Whether the response body should be treated as a secret. | false | no +| Name | Type | Description | Default | Required | +| ---------------- | ------------- | -------------------------------------------------------- | ------- | -------- | +| `url` | `string` | URL to poll. | | yes | +| `method` | `string` | Define HTTP method for the request | `"GET"` | no | +| `headers` | `map(string)` | Custom headers for the request. | `{}` | no | +| `body` | `string` | The request body. | `""` | no | +| `poll_frequency` | `duration` | Frequency to poll the URL. 
| `"1m"` | no |
+| `poll_timeout` | `duration` | Timeout when polling the URL. | `"10s"` | no |
+| `is_secret` | `bool` | Whether the response body should be treated as a secret. | `false` | no |

 When `remote.http` performs a poll operation, an HTTP `GET` request is made
 against the URL specified by the `url` argument. A poll is triggered by the
 following:

-* When the component first loads.
-* Every time the component's arguments get re-evaluated.
-* At the frequency specified by the `poll_frequency` argument.
+- When the component first loads.
+- Every time the component's arguments get re-evaluated.
+- At the frequency specified by the `poll_frequency` argument.

 The poll is successful if the URL returns a `200 OK` response code. All other
 response codes are treated as errors and mark the component as unhealthy. After
@@ -60,14 +60,14 @@ a successful poll, the response body from the URL is exported.

 The following blocks are supported inside the definition of `remote.http`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-client | [client][] | HTTP client settings when connecting to the endpoint. | no
-client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no
-client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no
-client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no
-client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
-client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no
+| Hierarchy | Block | Description | Required |
+| ---------------------------- | ----------------- | --------------------------------------------------------- | -------- |
+| client | [client][] | HTTP client settings when connecting to the endpoint. | no |
+| client > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no |
+| client > authorization | [authorization][] | Configure generic authorization to the endpoint. | no |
+| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no |
+| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |
+| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no |

 The `>` symbol indicates deeper levels of nesting. For example, `client >
 basic_auth` refers to a `basic_auth` block defined inside a `client` block.
@@ -116,9 +116,9 @@ The `tls_config` block configures TLS settings for connecting to HTTPS servers.

 The following field is exported and can be referenced by other components:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`content` | `string` or `secret` | The contents of the file. | | no
+| Name | Type | Description | Default | Required |
+| --------- | -------------------- | ------------------------- | ------- | -------- |
+| `content` | `string` or `secret` | The contents of the file. | | no |

 If the `is_secret` argument was `true`, `content` is a secret type.
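To make the polling arguments above concrete, here is a minimal sketch of a `remote.http` pipeline. The URL and polling interval are placeholder values, not taken from the documentation:

```river
// Poll a remote endpoint every five minutes and treat the body as a secret.
remote.http "example" {
  url            = "https://example.com/api-key.txt"
  poll_frequency = "5m"
  is_secret      = true
}
```

Other components could then reference the polled body as `remote.http.example.content`, which behaves as a secret here because `is_secret` is `true`.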
diff --git a/docs/sources/flow/reference/components/remote.kubernetes.configmap.md b/docs/sources/flow/reference/components/remote.kubernetes.configmap.md index adbaf214d2c2..d81d883a09ca 100644 --- a/docs/sources/flow/reference/components/remote.kubernetes.configmap.md +++ b/docs/sources/flow/reference/components/remote.kubernetes.configmap.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.kubernetes.configmap/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.kubernetes.configmap/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.kubernetes.configmap/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.kubernetes.configmap/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/remote.kubernetes.configmap/ description: Learn about remote.kubernetes.configmap title: remote.kubernetes.configmap @@ -26,19 +26,19 @@ remote.kubernetes.configmap "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`namespace` | `string` | Kubernetes namespace containing the desired ConfigMap. | | yes -`name` | `string` | Name of the Kubernetes ConfigMap | | yes -`poll_frequency` | `duration` | Frequency to poll the Kubernetes API. | `"1m"` | no -`poll_timeout` | `duration` | Timeout when polling the Kubernetes API. | `"15s"` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | ------------------------------------------------------ | ------- | -------- | +| `namespace` | `string` | Kubernetes namespace containing the desired ConfigMap. | | yes | +| `name` | `string` | Name of the Kubernetes ConfigMap | | yes | +| `poll_frequency` | `duration` | Frequency to poll the Kubernetes API. | `"1m"` | no | +| `poll_timeout` | `duration` | Timeout when polling the Kubernetes API. | `"15s"` | no | When this component performs a poll operation, it requests the ConfigMap data from the Kubernetes API. A poll is triggered by the following: -* When the component first loads. -* Every time the component's arguments get re-evaluated. -* At the frequency specified by the `poll_frequency` argument. +- When the component first loads. +- Every time the component's arguments get re-evaluated. +- At the frequency specified by the `poll_frequency` argument. Any error while polling will mark the component as unhealthy. After a successful poll, all data is exported with the same field names as the source ConfigMap. @@ -47,14 +47,14 @@ a successful poll, all data is exported with the same field names as the source The following blocks are supported inside the definition of `remote.kubernetes.configmap`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures Kubernetes client used to find Probes. | no -client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no -client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no -client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no -client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no -client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. 
| no
+| Hierarchy | Block | Description | Required |
+| ---------------------------- | ----------------- | ----------------------------------------------------------------- | -------- |
+| client | [client][] | Configures the Kubernetes client used to retrieve the ConfigMap. | no |
+| client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no |
+| client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no |
+| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no |
+| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no |
+| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no |

 The `>` symbol indicates deeper levels of nesting. For example, `client >
 basic_auth` refers to a `basic_auth` block defined inside a `client` block.
@@ -73,25 +73,26 @@ used.

 The following arguments are supported:

-Name | Type | Description | Default | Required
--------------------------|---------------------|---------------------------------------------------------------|---------|---------
-`api_server` | `string` | URL of the Kubernetes API server. | | no
-`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
-
- At most, one of the following can be provided:
- - [`bearer_token` argument][client].
- - [`bearer_token_file` argument][client].
- - [`basic_auth` block][basic_auth].
- - [`authorization` block][authorization].
- - [`oauth2` block][oauth2].
+| Name | Type | Description | Default | Required |
+| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- |
+| `api_server` | `string` | URL of the Kubernetes API server. | | no |
+| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no |
+| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no |
+| `bearer_token` | `secret` | Bearer token to authenticate with. | | no |
+| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no |
+| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no |
+| `proxy_url` | `string` | HTTP proxy to send requests through. | | no |
+| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no |
+| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables.
| `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][client]. +- [`bearer_token_file` argument][client]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -111,14 +112,13 @@ Name | Type | Description {{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} - ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`data` | `map(string)` | Data from the ConfigMap obtained from Kubernetes. +| Name | Type | Description | +| ------ | ------------- | ------------------------------------------------- | +| `data` | `map(string)` | Data from the ConfigMap obtained from Kubernetes. | The `data` field contains a mapping from field names to values. @@ -162,4 +162,3 @@ prometheus.remote_write "default" { This example assumes that the Secret and ConfigMap have already been created, and that the appropriate field names exist in their data. - diff --git a/docs/sources/flow/reference/components/remote.kubernetes.secret.md b/docs/sources/flow/reference/components/remote.kubernetes.secret.md index 8e5a7cd966ec..e11532578fd0 100644 --- a/docs/sources/flow/reference/components/remote.kubernetes.secret.md +++ b/docs/sources/flow/reference/components/remote.kubernetes.secret.md @@ -1,7 +1,7 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.kubernetes.secret/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.kubernetes.secret/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.kubernetes.secret/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.kubernetes.secret/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/remote.kubernetes.secret/ description: Learn about remote.kubernetes.secret title: remote.kubernetes.secret @@ -26,19 +26,19 @@ remote.kubernetes.secret "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`namespace` | `string` | Kubernetes namespace containing the desired Secret. | | yes -`name` | `string` | Name of the Kubernetes Secret | | yes -`poll_frequency` | `duration` | Frequency to poll the Kubernetes API. | `"1m"` | no -`poll_timeout` | `duration` | Timeout when polling the Kubernetes API. | `"15s"` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | --------------------------------------------------- | ------- | -------- | +| `namespace` | `string` | Kubernetes namespace containing the desired Secret. | | yes | +| `name` | `string` | Name of the Kubernetes Secret | | yes | +| `poll_frequency` | `duration` | Frequency to poll the Kubernetes API. | `"1m"` | no | +| `poll_timeout` | `duration` | Timeout when polling the Kubernetes API. | `"15s"` | no | When this component performs a poll operation, it requests the Secret data from the Kubernetes API. A poll is triggered by the following: -* When the component first loads. -* Every time the component's arguments get re-evaluated. 
-* At the frequency specified by the `poll_frequency` argument.
+- When the component first loads.
+- Every time the component's arguments get re-evaluated.
+- At the frequency specified by the `poll_frequency` argument.

 Any error while polling will mark the component as unhealthy. After
 a successful poll, all data is exported with the same field names as the source Secret.
@@ -47,14 +47,14 @@ a successful poll, all data is exported with the same field names as the source

 The following blocks are supported inside the definition of `remote.kubernetes.secret`:

-Hierarchy | Block | Description | Required
---------- | ----- | ----------- | --------
-client | [client][] | Configures Kubernetes client used to find Probes. | no
-client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no
-client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no
-client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no
-client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no
-client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no
+| Hierarchy | Block | Description | Required |
+| ---------------------------- | ----------------- | -------------------------------------------------------------- | -------- |
+| client | [client][] | Configures the Kubernetes client used to retrieve the Secret. | no |
+| client > basic_auth | [basic_auth][] | Configure basic authentication to the Kubernetes API. | no |
+| client > authorization | [authorization][] | Configure generic authorization to the Kubernetes API. | no |
+| client > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the Kubernetes API. | no |
+| client > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no |
+| client > tls_config | [tls_config][] | Configure TLS settings for connecting to the Kubernetes API. | no |

 The `>` symbol indicates deeper levels of nesting. For example, `client >
 basic_auth` refers to a `basic_auth` block defined inside a `client` block.
@@ -72,25 +72,26 @@ configuration with the service account of the running {{< param "PRODUCT_ROOT_NA

 The following arguments are supported:

-Name | Type | Description | Default | Required
--------------------------|---------------------|---------------------------------------------------------------|---------|---------
-`api_server` | `string` | URL of the Kubernetes API server. | | no
-`kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no
-`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no
-`bearer_token` | `secret` | Bearer token to authenticate with. | | no
-`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no
-`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
-`proxy_url` | `string` | HTTP proxy to send requests through. | | no
-`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
-`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
-`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests.
| | no - - At most, one of the following can be provided: - - [`bearer_token` argument][client]. - - [`bearer_token_file` argument][client]. - - [`basic_auth` block][basic_auth]. - - [`authorization` block][authorization]. - - [`oauth2` block][oauth2]. +| Name | Type | Description | Default | Required | +| ------------------------ | ------------------- | ------------------------------------------------------------------------------------------------ | ------- | -------- | +| `api_server` | `string` | URL of the Kubernetes API server. | | no | +| `kubeconfig_file` | `string` | Path of the `kubeconfig` file to use for connecting to Kubernetes. | | no | +| `bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no | +| `bearer_token` | `secret` | Bearer token to authenticate with. | | no | +| `enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no | +| `follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no | +| `proxy_url` | `string` | HTTP proxy to send requests through. | | no | +| `no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no | +| `proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no | +| `proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no | + +At most, one of the following can be provided: + +- [`bearer_token` argument][client]. +- [`bearer_token_file` argument][client]. +- [`basic_auth` block][basic_auth]. +- [`authorization` block][authorization]. +- [`oauth2` block][oauth2]. {{< docs/shared lookup="flow/reference/components/http-client-proxy-config-description.md" source="agent" version="" >}} @@ -110,14 +111,13 @@ Name | Type | Description {{< docs/shared lookup="flow/reference/components/tls-config-block.md" source="agent" version="" >}} - ## Exported fields The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- -`data` | `map(secret)` | Data from the secret obtained from Kubernetes. +| Name | Type | Description | +| ------ | ------------- | ---------------------------------------------- | +| `data` | `map(secret)` | Data from the secret obtained from Kubernetes. | The `data` field contains a mapping from field names to values. 
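As an illustrative sketch, the exported `data` map can feed credentials into another component. The namespace, Secret name, remote-write URL, and `password` key below are hypothetical:

```river
remote.kubernetes.secret "credentials" {
  namespace = "monitoring"
  name      = "metrics-secret"
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    basic_auth {
      username = "admin"
      // Values from the `data` map are secrets, so they stay masked in the UI.
      password = remote.kubernetes.secret.credentials.data["password"]
    }
  }
}
```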
diff --git a/docs/sources/flow/reference/components/remote.s3.md b/docs/sources/flow/reference/components/remote.s3.md index c4ec8e195e86..e88f2b591c4f 100644 --- a/docs/sources/flow/reference/components/remote.s3.md +++ b/docs/sources/flow/reference/components/remote.s3.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/components/remote.s3/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.s3/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/remote.s3/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.s3/ + - /docs/grafana-cloud/agent/flow/reference/components/remote.s3/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.s3/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/remote.s3/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.s3/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/remote.s3/ description: Learn about remote.s3 title: remote.s3 @@ -20,8 +20,8 @@ The most common use of `remote.s3` is to load secrets from files. Multiple `remote.s3` components can be specified using different name labels. By default, [AWS environment variables](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html) are used to authenticate against S3. The `key` and `secret` arguments inside `client` blocks can be used to provide custom authentication. -> **NOTE**: Other S3-compatible systems can be read with `remote.s3` but may require specific -> authentication environment variables. There is no guarantee that `remote.s3` will work with non-AWS S3 +> **NOTE**: Other S3-compatible systems can be read with `remote.s3` but may require specific +> authentication environment variables. There is no guarantee that `remote.s3` will work with non-AWS S3 > systems. ## Usage @@ -36,11 +36,11 @@ remote.s3 "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`path` | `string` | Path in the format of `"s3://bucket/file"`. | | yes -`poll_frequency` | `duration` | How often to poll the file for changes. Must be greater than 30 seconds. | `"10m"` | no -`is_secret` | `bool` | Marks the file as containing a [secret][]. | `false` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | ------------------------------------------------------------------------ | ------- | -------- | +| `path` | `string` | Path in the format of `"s3://bucket/file"`. | | yes | +| `poll_frequency` | `duration` | How often to poll the file for changes. Must be greater than 30 seconds. | `"10m"` | no | +| `is_secret` | `bool` | Marks the file as containing a [secret][]. | `false` | no | > **NOTE**: `path` must include a full path to a file. This does not support reading of directories. @@ -48,9 +48,9 @@ Name | Type | Description | Default | Required ## Blocks -Hierarchy | Name | Description | Required ---------- |------------| ----------- | -------- -client | [client][] | Additional options for configuring the S3 client. | no +| Hierarchy | Name | Description | Required | +| --------- | ---------- | ------------------------------------------------- | -------- | +| client | [client][] | Additional options for configuring the S3 client. 
| no |

 [client]: #client-block

 ### client block

 The `client` block customizes options to connect to the S3 server.

-Name | Type | Description | Default | Required
----- | ---- |-----------------------------------------------------------------------------------------| ------- | --------
-`key` | `string` | Used to override default access key. | | no
-`secret` | `secret` | Used to override default secret value. | | no
-`endpoint` | `string` | Specifies a custom url to access, used generally for S3-compatible systems. | | no
-`disable_ssl` | `bool` | Used to disable SSL, generally used for testing. | | no
-`use_path_style` | `string` | Path style is a deprecated setting that is generally enabled for S3 compatible systems. | `false` | no
-`region` | `string` | Used to override default region. | | no
-`signing_region` | `string` | Used to override the signing region when using a custom endpoint. | | no
-
+| Name | Type | Description | Default | Required |
+| ---------------- | -------- | ----------------------------------------------------------------------------------------- | ------- | -------- |
+| `key` | `string` | Used to override the default access key. | | no |
+| `secret` | `secret` | Used to override the default secret value. | | no |
+| `endpoint` | `string` | Specifies a custom URL to access, generally used for S3-compatible systems. | | no |
+| `disable_ssl` | `bool` | Used to disable SSL, generally used for testing. | | no |
+| `use_path_style` | `string` | Path style is a deprecated setting that is generally enabled for S3-compatible systems. | `false` | no |
+| `region` | `string` | Used to override the default region. | | no |
+| `signing_region` | `string` | Used to override the signing region when using a custom endpoint. | | no |

 ## Exported fields

 The following fields are exported and can be referenced by other components:

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`content` | `string` or `secret` | The contents of the file. | | no
+| Name | Type | Description | Default | Required |
+| --------- | -------------------- | ------------------------- | ------- | -------- |
+| `content` | `string` or `secret` | The contents of the file. | | no |

 The `content` field will be secret if `is_secret` was set to `true`.
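A minimal sketch of the component in use. The bucket and object name are placeholders, and credentials are assumed to come from the standard AWS environment variables because no `client` block is given:

```river
// Re-read the object at the default 10 minute poll frequency.
remote.s3 "data" {
  path      = "s3://my-config-bucket/agent-token.txt"
  is_secret = true
}
```

The file contents would then be available to other components as `remote.s3.data.content`.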
diff --git a/docs/sources/flow/reference/components/remote.vault.md b/docs/sources/flow/reference/components/remote.vault.md index a4491bd25c66..e74b048f802f 100644 --- a/docs/sources/flow/reference/components/remote.vault.md +++ b/docs/sources/flow/reference/components/remote.vault.md @@ -1,10 +1,10 @@ --- aliases: -- /docs/agent/latest/flow/reference/components/remote.vault/ -- /docs/grafana-cloud/agent/flow/reference/components/remote.vault/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.vault/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/remote.vault/ -- /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.vault/ + - /docs/agent/latest/flow/reference/components/remote.vault/ + - /docs/grafana-cloud/agent/flow/reference/components/remote.vault/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/remote.vault/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/remote.vault/ + - /docs/grafana-cloud/send-data/agent/flow/reference/components/remote.vault/ canonical: https://grafana.com/docs/agent/latest/flow/reference/components/remote.vault/ description: Learn about remote.vault title: remote.vault @@ -39,12 +39,12 @@ remote.vault "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`server` | `string` | The Vault server to connect to. | | yes -`namespace` | `string` | The Vault namespace to connect to (Vault Enterprise only). | | no -`path` | `string` | The path to retrieve a secret from. | | yes -`reread_frequency` | `duration` | Rate to re-read keys. | `"0s"` | no +| Name | Type | Description | Default | Required | +| ------------------ | ---------- | ---------------------------------------------------------- | ------- | -------- | +| `server` | `string` | The Vault server to connect to. | | yes | +| `namespace` | `string` | The Vault namespace to connect to (Vault Enterprise only). | | no | +| `path` | `string` | The path to retrieve a secret from. | | yes | +| `reread_frequency` | `duration` | Rate to re-read keys. | `"0s"` | no | Tokens with a lease will be automatically renewed roughly two-thirds through their lease duration. If the leased token isn't renewable, or renewing the @@ -58,18 +58,18 @@ at a frequency specified by the `reread_frequency` argument. Setting The following blocks are supported inside the definition of `remote.vault`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client_options | [client_options][] | Options for the Vault client. | no -auth.token | [auth.token][] | Authenticate to Vault with a token. | no -auth.approle | [auth.approle][] | Authenticate to Vault using AppRole. | no -auth.aws | [auth.aws][] | Authenticate to Vault using AWS. | no -auth.azure | [auth.azure][] | Authenticate to Vault using Azure. | no -auth.gcp | [auth.gcp][] | Authenticate to Vault using GCP. | no -auth.kubernetes | [auth.kubernetes][] | Authenticate to Vault using Kubernetes. | no -auth.ldap | [auth.ldap][] | Authenticate to Vault using LDAP. | no -auth.userpass | [auth.userpass][] | Authenticate to Vault using a username and password. | no -auth.custom | [auth.custom][] | Authenticate to Vault with custom authentication. 
| no
+| Hierarchy | Block | Description | Required |
+| --------------- | ------------------- | ---------------------------------------------------- | -------- |
+| client_options | [client_options][] | Options for the Vault client. | no |
+| auth.token | [auth.token][] | Authenticate to Vault with a token. | no |
+| auth.approle | [auth.approle][] | Authenticate to Vault using AppRole. | no |
+| auth.aws | [auth.aws][] | Authenticate to Vault using AWS. | no |
+| auth.azure | [auth.azure][] | Authenticate to Vault using Azure. | no |
+| auth.gcp | [auth.gcp][] | Authenticate to Vault using GCP. | no |
+| auth.kubernetes | [auth.kubernetes][] | Authenticate to Vault using Kubernetes. | no |
+| auth.ldap | [auth.ldap][] | Authenticate to Vault using LDAP. | no |
+| auth.userpass | [auth.userpass][] | Authenticate to Vault using a username and password. | no |
+| auth.custom | [auth.custom][] | Authenticate to Vault with custom authentication. | no |

 Exactly one `auth.*` block **must** be provided, otherwise the component will
 fail to load.
@@ -89,12 +89,12 @@ fail to load.

 The `client_options` block customizes the connection to Vault.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`min_retry_wait` | `duration` | Minimum time to wait before retrying failed requests. | `"1000ms"` | no
-`max_retry_wait` | `duration` | Maximum time to wait before retrying failed requests. | `"1500ms"` | no
-`max_retries` | `int` | Maximum number of times to retry after a 5xx error. | `2` | no
-`timeout` | `duration` | Maximum time to wait before a request times out. | `"60s"` | no
+| Name | Type | Description | Default | Required |
+| ---------------- | ---------- | ------------------------------------------------------ | ---------- | -------- |
+| `min_retry_wait` | `duration` | Minimum time to wait before retrying failed requests. | `"1000ms"` | no |
+| `max_retry_wait` | `duration` | Maximum time to wait before retrying failed requests. | `"1500ms"` | no |
+| `max_retries` | `int` | Maximum number of times to retry after a 5xx error. | `2` | no |
+| `timeout` | `duration` | Maximum time to wait before a request times out. | `"60s"` | no |

 Requests which fail due to server errors (HTTP 5xx error codes) can be retried.
 The `max_retries` argument specifies how many times to retry failed requests.
@@ -112,21 +112,21 @@ If the `max_retries` argument is set to `0`, failed requests are not retried.

 The `auth.token` block authenticates each request to Vault using a token.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`token` | `secret` | Authentication token to use. | | yes
+| Name | Type | Description | Default | Required |
+| ------- | -------- | ---------------------------- | ------- | -------- |
+| `token` | `secret` | Authentication token to use. | | yes |

 ### auth.approle block

 The `auth.approle` block authenticates to Vault using the [AppRole auth
 method][AppRole].

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`role_id` | `string` | Role ID to authenticate as. | | yes
-`secret` | `secret` | Secret to authenticate with. | | yes
-`wrapping_token` | `bool` | Whether to [unwrap][] the token. | `false` | no
-`mount_path` | `string` | Mount path for the login.
| `"approle"` | no
+| Name | Type | Description | Default | Required |
+| ---------------- | -------- | -------------------------------- | ----------- | -------- |
+| `role_id` | `string` | Role ID to authenticate as. | | yes |
+| `secret` | `secret` | Secret to authenticate with. | | yes |
+| `wrapping_token` | `bool` | Whether to [unwrap][] the token. | `false` | no |
+| `mount_path` | `string` | Mount path for the login. | `"approle"` | no |

 [AppRole]: https://www.vaultproject.io/docs/auth/approle
 [unwrap]: https://www.vaultproject.io/docs/concepts/response-wrapping
@@ -140,14 +140,14 @@ Credentials used to connect to AWS are specified by the environment variables
 environment variable `AWS_SHARED_CREDENTIALS_FILE` may be specified to use a
 credentials file instead.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`type` | `string` | Mechanism to authenticate against AWS with. | | yes
-`region` | `string` | AWS region to connect to. | `"us-east-1"` | no
-`role` | `string` | Overrides the inferred role name inferred. | `""` | no
-`iam_server_id_header` | `string` | Configures a `X-Vault-AWS-IAM-Server-ID` header. | `""` | no
-`ec2_signature_type` | `string` | Signature to use when authenticating against EC2. | `"pkcs7"` | no
-`mount_path` | `string` | Mount path for the login. | `"aws"` | no
+| Name | Type | Description | Default | Required |
+| ---------------------- | -------- | -------------------------------------------------- | ------------- | -------- |
+| `type` | `string` | Mechanism to authenticate against AWS with. | | yes |
+| `region` | `string` | AWS region to connect to. | `"us-east-1"` | no |
+| `role` | `string` | Overrides the inferred role name. | `""` | no |
+| `iam_server_id_header` | `string` | Configures an `X-Vault-AWS-IAM-Server-ID` header. | `""` | no |
+| `ec2_signature_type` | `string` | Signature to use when authenticating against EC2. | `"pkcs7"` | no |
+| `mount_path` | `string` | Mount path for the login. | `"aws"` | no |

 The `type` argument must be set to one of `"ec2"` or `"iam"`.
@@ -171,11 +171,11 @@ method][Azure]. Credentials are retrieved for the running Azure VM using
 Managed Identities for Azure Resources.

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`role` | `string` | Role name to authenticate as. | | yes
-`resource_url` | `string` | Resource URL to include with authentication request. | | no
-`mount_path` | `string` | Mount path for the login. | `"azure"` | no
+| Name | Type | Description | Default | Required |
+| -------------- | -------- | ----------------------------------------------------- | --------- | -------- |
+| `role` | `string` | Role name to authenticate as. | | yes |
+| `resource_url` | `string` | Resource URL to include with authentication request. | | no |
+| `mount_path` | `string` | Mount path for the login. | `"azure"` | no |

 [Azure]: https://www.vaultproject.io/docs/auth/azure

 ### auth.gcp block

 The `auth.gcp` block authenticates to Vault using the [GCP auth method][GCP].

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`role` | `string` | Role name to authenticate as. | | yes
-`type` | `string` | Mechanism to authenticate against GCP with | | yes
-`iam_service_account` | `string` | IAM service account name to use. | | no
-`mount_path` | `string` | Mount path for the login.
| `"gcp"` | no
+| Name | Type | Description | Default | Required |
+| --------------------- | -------- | -------------------------------------------- | ------- | -------- |
+| `role` | `string` | Role name to authenticate as. | | yes |
+| `type` | `string` | Mechanism to authenticate against GCP with. | | yes |
+| `iam_service_account` | `string` | IAM service account name to use. | | no |
+| `mount_path` | `string` | Mount path for the login. | `"gcp"` | no |

 The `type` argument must be set to `"gce"` or `"iam"`. When `type` is `"gce"`,
 credentials are retrieved using the metadata service on GCE VMs. When `type` is
@@ -205,11 +205,11 @@ service account name to use.

 The `auth.kubernetes` block authenticates to Vault using the [Kubernetes auth
 method][Kubernetes].

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`role` | `string` | Role name to authenticate as. | | yes
-`service_account_file` | `string` | Override service account token file to use. | | no
-`mount_path` | `string` | Mount path for the login. | `"kubernetes"` | no
+| Name | Type | Description | Default | Required |
+| ---------------------- | -------- | -------------------------------------------- | -------------- | -------- |
+| `role` | `string` | Role name to authenticate as. | | yes |
+| `service_account_file` | `string` | Override service account token file to use. | | no |
+| `mount_path` | `string` | Mount path for the login. | `"kubernetes"` | no |

 When `service_account_file` is not specified, the JWT token to authenticate
 with is retrieved from `/var/run/secrets/kubernetes.io/serviceaccount/token`.
@@ -221,11 +221,11 @@ with is retrieved from `/var/run/secrets/kubernetes.io/serviceaccount/token`.

 The `auth.ldap` block authenticates to Vault using the [LDAP auth method][LDAP].

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`username` | `string` | LDAP username to authenticate as. | | yes
-`password` | `secret` | LDAP passsword for the user. | | yes
-`mount_path` | `string` | Mount path for the login. | `"ldap"` | no
+| Name | Type | Description | Default | Required |
+| ------------ | -------- | ---------------------------------- | -------- | -------- |
+| `username` | `string` | LDAP username to authenticate as. | | yes |
+| `password` | `secret` | LDAP password for the user. | | yes |
+| `mount_path` | `string` | Mount path for the login. | `"ldap"` | no |

 [LDAP]: https://www.vaultproject.io/docs/auth/ldap

@@ -234,11 +234,11 @@

 The `auth.userpass` block authenticates to Vault using the [UserPass auth
 method][UserPass].

-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`username` | `string` | Username to authenticate as. | | yes
-`password` | `secret` | Passsword for the user. | | yes
-`mount_path` | `string` | Mount path for the login. | `"userpass"` | no
+| Name | Type | Description | Default | Required |
+| ------------ | -------- | ---------------------------- | ------------ | -------- |
+| `username` | `string` | Username to authenticate as. | | yes |
+| `password` | `secret` | Password for the user. | | yes |
+| `mount_path` | `string` | Mount path for the login. | `"userpass"` | no |

 [UserPass]: https://www.vaultproject.io/docs/auth/userpass

@@ -250,10 +250,10 @@ authentication path like `auth/customservice/login`. Using `auth.custom` is
 equivalent to calling `vault write PATH DATA` on the command line.
-Name | Type | Description | Default | Required
----- | ---- | ----------- | ------- | --------
-`path` | `string` | Path to write to for creating an authentication token. | yes
-`data` | `map(secret)` | Authentication data. | yes
+| Name | Type | Description | Default | Required |
+| ------ | ------------- | ------------------------------------------------------- | ------- | -------- |
+| `path` | `string` | Path to write to for creating an authentication token. | | yes |
+| `data` | `map(secret)` | Authentication data. | | yes |

 All values in the `data` attribute are considered secret, even if they contain
 nonsensitive information like usernames.
@@ -262,9 +262,9 @@ nonsensitive information like usernames.

 The following fields are exported and can be referenced by other components:

-Name | Type | Description
----- | ---- | -----------
-`data` | `map(secret)` | Data from the secret obtained from Vault.
+| Name | Type | Description |
+| ------ | ------------- | ----------------------------------------- |
+| `data` | `map(secret)` | Data from the secret obtained from Vault. |

 The `data` field contains a mapping from data field names to values. There
 will be one mapping for each string-like field stored in the Vault secret.
@@ -296,23 +296,23 @@ secrets was unsuccessful.

 `remote.vault` exposes debug information for the authentication token and
 secret around:

-* The latest request ID used for retrieving or renewing the token.
-* The most recent time when the token was retrieved or renewed.
-* The expiration time for the token (if applicable).
-* Whether the token is renewable.
-* Warnings from Vault from when the token was retrieved.
+- The latest request ID used for retrieving or renewing the token.
+- The most recent time when the token was retrieved or renewed.
+- The expiration time for the token (if applicable).
+- Whether the token is renewable.
+- Warnings from Vault when the token was retrieved.

 ## Debug metrics

 `remote.vault` exposes the following metrics:

-* `remote_vault_auth_total` (counter): Total number of times the component
+- `remote_vault_auth_total` (counter): Total number of times the component
   authenticated to Vault.
-* `remote_vault_secret_reads_total` (counter): Total number of times the secret
+- `remote_vault_secret_reads_total` (counter): Total number of times the secret
   was read from Vault.
-* `remote_vault_auth_lease_renewal_total` (counter): Total number of times the
+- `remote_vault_auth_lease_renewal_total` (counter): Total number of times the
   component renewed its authentication token lease.
-* `remote_vault_secret_lease_renewal_total` (counter): Total number of times
+- `remote_vault_secret_lease_renewal_total` (counter): Total number of times
   the component renewed its secret token lease.
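Pulling the blocks above together, a rough sketch might pair one `auth.*` block with the exported `data` map. The server URL, Vault path, role, and field name below are placeholders:

```river
remote.vault "kv" {
  server = "https://vault.example.com:8200"
  path   = "secret/data/agent"

  // Exactly one auth.* block is required; Kubernetes auth is one option.
  auth.kubernetes {
    role = "grafana-agent"
  }
}

// Elsewhere, a field of the secret could be referenced as:
//   remote.vault.kv.data["password"]
```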
## Example diff --git a/docs/sources/flow/reference/config-blocks/_index.md b/docs/sources/flow/reference/config-blocks/_index.md index bf528e3a16e5..68284c7754ba 100644 --- a/docs/sources/flow/reference/config-blocks/_index.md +++ b/docs/sources/flow/reference/config-blocks/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/ description: Learn about configuration blocks title: Configuration blocks diff --git a/docs/sources/flow/reference/config-blocks/argument.md b/docs/sources/flow/reference/config-blocks/argument.md index f1a1617daaca..7ba59584f85e 100644 --- a/docs/sources/flow/reference/config-blocks/argument.md +++ b/docs/sources/flow/reference/config-blocks/argument.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/argument/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/argument/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/argument/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/argument/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/argument/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/argument/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/argument/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/argument/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/argument/ description: Learn about the argument configuration block menuTitle: argument @@ -33,6 +33,7 @@ In [classic modules][], the `argument` block is valid as a top-level block in a Classic modules are deprecated and scheduled to be removed in the release after v0.40. [classic modules]: https://grafana.com/docs/agent//flow/concepts/modules/#classic-modules-deprecated + {{< /admonition >}} ## Example @@ -50,11 +51,11 @@ For clarity, "argument" in this section refers to arguments which can be given t The following arguments are supported: -Name | Type | Description | Default | Required ------------|----------|--------------------------------------|---------|--------- -`comment` | `string` | Description for the argument. | `false` | no -`default` | `any` | Default value for the argument. | `null` | no -`optional` | `bool` | Whether the argument may be omitted. | `false` | no +| Name | Type | Description | Default | Required | +| ---------- | -------- | ------------------------------------ | ------- | -------- | +| `comment` | `string` | Description for the argument. | `false` | no | +| `default` | `any` | Default value for the argument. | `null` | no | +| `optional` | `bool` | Whether the argument may be omitted. | `false` | no | By default, all module arguments are required. The `optional` argument can be used to mark the module argument as optional. 
@@ -64,9 +65,9 @@ When `optional` is `true`, the initial value for the module argument is specifie The following fields are exported and can be referenced by other components: -Name | Type | Description ---------|-------|----------------------------------- -`value` | `any` | The current value of the argument. +| Name | Type | Description | +| ------- | ----- | ---------------------------------- | +| `value` | `any` | The current value of the argument. | If you use a custom component, you are responsible for determining the values for arguments. Other expressions within a custom component may use `argument.ARGUMENT_NAME.value` to retrieve the value you provide. @@ -91,4 +92,3 @@ declare "self_collect" { } } ``` - diff --git a/docs/sources/flow/reference/config-blocks/declare.md b/docs/sources/flow/reference/config-blocks/declare.md index 2a8c579f4a40..d2ad8cbf34d2 100644 --- a/docs/sources/flow/reference/config-blocks/declare.md +++ b/docs/sources/flow/reference/config-blocks/declare.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/declare/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/declare/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/declare/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/declare/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/declare/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/declare/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/declare/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/declare/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/declare/ description: Learn about the declare configuration block menuTitle: declare @@ -44,11 +44,11 @@ The `declare` block has no predefined schema for its arguments. The body of the `declare` block is used as the component definition. The body can contain the following: -* [argument](ref:argument) blocks -* [export](ref:export) blocks -* [declare](ref:declare) blocks -* [import](ref:import) blocks -* Component definitions (either built-in or custom components) +- [argument](ref:argument) blocks +- [export](ref:export) blocks +- [declare](ref:declare) blocks +- [import](ref:import) blocks +- Component definitions (either built-in or custom components) The `declare` block may not contain any configuration blocks that aren't listed above. 
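For instance, a hypothetical custom component combining these pieces might look like the following sketch. The component name, argument, and drop rule are invented for illustration:

```river
declare "drop_debug" {
  argument "write_to" {
    comment = "Receivers to forward processed logs to."
  }

  export "receiver" {
    value = loki.process.filter.receiver
  }

  loki.process "filter" {
    forward_to = argument.write_to.value

    // Drop any log line matching this expression.
    stage.drop {
      expression = "level=debug"
    }
  }
}
```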
@@ -87,4 +87,3 @@ prometheus.remote_write "example" { } } ``` - diff --git a/docs/sources/flow/reference/config-blocks/export.md b/docs/sources/flow/reference/config-blocks/export.md index 06d049716901..0d7a454648ff 100644 --- a/docs/sources/flow/reference/config-blocks/export.md +++ b/docs/sources/flow/reference/config-blocks/export.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/export/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/export/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/export/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/export/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/export/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/export/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/export/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/export/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/export/ description: Learn about the export configuration block menuTitle: export @@ -32,6 +32,7 @@ The `export` block may only be specified inside the definition of [a `declare` b In [classic modules][], the `export` block is valid as a top-level block in a classic module. Classic modules are deprecated and scheduled to be removed in the release after v0.40. [classic modules]: https://grafana.com/docs/agent//flow/concepts/modules/#classic-modules-deprecated + {{< /admonition >}} ## Example @@ -46,9 +47,9 @@ export "ARGUMENT_NAME" { The following arguments are supported: -Name | Type | Description | Default | Required ---------|-------|------------------|---------|--------- -`value` | `any` | Value to export. | | yes +| Name | Type | Description | Default | Required | +| ------- | ----- | ---------------- | ------- | -------- | +| `value` | `any` | Value to export. | | yes | The `value` argument determines what the value of the export is. To expose an exported field of another component, set `value` to an expression that references that exported value. 
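For instance, assuming a `discovery.kubernetes` component labeled `pods` inside the same `declare` block, an export that surfaces another component's field might look like:

```river
export "targets" {
  // Expose the discovered pod targets to whatever imports this component.
  value = discovery.kubernetes.pods.targets
}
```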
@@ -79,4 +80,3 @@ declare "pods_and_nodes" { } } ``` - diff --git a/docs/sources/flow/reference/config-blocks/http.md b/docs/sources/flow/reference/config-blocks/http.md index 3b023deb6b52..fa2e4c138bbb 100644 --- a/docs/sources/flow/reference/config-blocks/http.md +++ b/docs/sources/flow/reference/config-blocks/http.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/http/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/http/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/http/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/http/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/http/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/http/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/http/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/http/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/http/ description: Learn about the http configuration block menuTitle: http @@ -34,12 +34,12 @@ The `http` block supports no arguments and is configured completely through inne The following blocks are supported inside the definition of `http`: -Hierarchy | Block | Description | Required -------------------------------------------|--------------------------------|---------------------------------------------------------------|--------- -tls | [tls][] | Define TLS settings for the HTTP server. | no -tls > windows_certificate_filter | [windows_certificate_filter][] | Configure Windows certificate store for all certificates. | no -tls > windows_certificate_filter > client | [client][] | Configure client certificates for Windows certificate filter. | no -tls > windows_certificate_filter > server | [server][] | Configure server certificates for Windows certificate filter. | no +| Hierarchy | Block | Description | Required | +| ----------------------------------------- | ------------------------------ | ------------------------------------------------------------- | -------- | +| tls | [tls][] | Define TLS settings for the HTTP server. | no | +| tls > windows_certificate_filter | [windows_certificate_filter][] | Configure Windows certificate store for all certificates. | no | +| tls > windows_certificate_filter > client | [client][] | Configure client certificates for Windows certificate filter. | no | +| tls > windows_certificate_filter > server | [server][] | Configure server certificates for Windows certificate filter. | no | [tls]: #tls-block [windows_certificate_filter]: #windows-certificate-filter-block @@ -57,19 +57,19 @@ Similarly, if you remove the `tls` block and reload the configuration when {{< p To ensure all connections use TLS, configure the `tls` block before you start {{< param "PRODUCT_NAME" >}}. {{< /admonition >}} -Name | Type | Description | Default | Required ---------------------|----------------|------------------------------------------------------------------|------------------|-------------- -`cert_pem` | `string` | PEM data of the server TLS certificate. | `""` | conditionally -`cert_file` | `string` | Path to the server TLS certificate on disk. | `""` | conditionally -`key_pem` | `string` | PEM data of the server TLS key. | `""` | conditionally -`key_file` | `string` | Path to the server TLS key on disk. 
| `""` | conditionally -`client_ca_pem` | `string` | PEM data of the client CA to validate requests against. | `""` | no -`client_ca_file` | `string` | Path to the client CA file on disk to validate requests against. | `""` | no -`client_auth_type` | `string` | Client authentication to use. | `"NoClientCert"` | no -`cipher_suites` | `list(string)` | Set of cipher suites to use. | `[]` | no -`curve_preferences` | `list(string)` | Set of elliptic curves to use in a handshake. | `[]` | no -`min_version` | `string` | Oldest TLS version to accept from clients. | `""` | no -`max_version` | `string` | Newest TLS version to accept from clients. | `""` | no +| Name | Type | Description | Default | Required | +| ------------------- | -------------- | ---------------------------------------------------------------- | ---------------- | ------------- | +| `cert_pem` | `string` | PEM data of the server TLS certificate. | `""` | conditionally | +| `cert_file` | `string` | Path to the server TLS certificate on disk. | `""` | conditionally | +| `key_pem` | `string` | PEM data of the server TLS key. | `""` | conditionally | +| `key_file` | `string` | Path to the server TLS key on disk. | `""` | conditionally | +| `client_ca_pem` | `string` | PEM data of the client CA to validate requests against. | `""` | no | +| `client_ca_file` | `string` | Path to the client CA file on disk to validate requests against. | `""` | no | +| `client_auth_type` | `string` | Client authentication to use. | `"NoClientCert"` | no | +| `cipher_suites` | `list(string)` | Set of cipher suites to use. | `[]` | no | +| `curve_preferences` | `list(string)` | Set of elliptic curves to use in a handshake. | `[]` | no | +| `min_version` | `string` | Oldest TLS version to accept from clients. | `""` | no | +| `max_version` | `string` | Newest TLS version to accept from clients. | `""` | no | When the `tls` block is specified, arguments for the TLS certificate (using `cert_pem` or `cert_file`) and for the TLS key (using `key_pem` or `key_file`) @@ -77,9 +77,9 @@ are required. The following pairs of arguments are mutually exclusive, and only one may be configured at a time: -* `cert_pem` and `cert_file` -* `key_pem` and `key_file` -* `client_ca_pem` and `client_ca_file` +- `cert_pem` and `cert_file` +- `key_pem` and `key_file` +- `client_ca_pem` and `client_ca_file` The `client_auth_type` argument determines whether to validate client certificates. The default value, `NoClientCert`, indicates that the client certificate is not validated. @@ -87,11 +87,11 @@ The `client_ca_pem` and `client_ca_file` arguments may only be configured when ` The following values are accepted for `client_auth_type`: -* `NoClientCert`: client certificates are neither requested nor validated. -* `RequestClientCert`: requests clients to send an optional certificate. Certificates provided by clients are not validated. -* `RequireAnyClientCert`: requires at least one certificate from clients. Certificates provided by clients are not validated. -* `VerifyClientCertIfGiven`: requests clients to send an optional certificate. If a certificate is sent, it must be valid. -* `RequireAndVerifyClientCert`: requires clients to send a valid certificate. +- `NoClientCert`: client certificates are neither requested nor validated. +- `RequestClientCert`: requests clients to send an optional certificate. Certificates provided by clients are not validated. +- `RequireAnyClientCert`: requires at least one certificate from clients. Certificates provided by clients are not validated. 
+- `VerifyClientCertIfGiven`: requests clients to send an optional certificate. If a certificate is sent, it must be valid. +- `RequireAndVerifyClientCert`: requires clients to send a valid certificate. The `client_ca_pem` or `client_ca_file` arguments may be used to perform client certificate validation. These arguments may only be provided when `client_auth_type` is not set to `NoClientCert`. @@ -136,23 +136,22 @@ If you don't provide the min and max TLS version, a default value is used. The following versions are recognized: -* `TLS13` for TLS 1.3 -* `TLS12` for TLS 1.2 -* `TLS11` for TLS 1.1 -* `TLS10` for TLS 1.0 - +- `TLS13` for TLS 1.3 +- `TLS12` for TLS 1.2 +- `TLS11` for TLS 1.1 +- `TLS10` for TLS 1.0 ### windows certificate filter block The `windows_certificate_filter` block is used to configure retrieving certificates from the built-in Windows certificate store. When you use the `windows_certificate_filter` block the following TLS settings are overridden and cause an error if defined. -* `cert_pem` -* `cert_file` -* `key_pem` -* `key_file` -* `client_ca` -* `client_ca_file` +- `cert_pem` +- `cert_file` +- `key_pem` +- `key_file` +- `client_ca` +- `client_ca_file` {{< admonition type="warning" >}} This feature is only available on Windows. @@ -161,28 +160,25 @@ TLS min and max may not be compatible with the certificate stored in the Windows The `windows_certificate_filter` serves the certificate even if it isn't compatible with the specified TLS version. {{< /admonition >}} - ### server block The `server` block is used to find the certificate to check the signer. If multiple certificates are found, the `windows_certificate_filter` chooses the certificate with the expiration farthest in the future. -Name | Type | Description | Default | Required -----------------------|----------------|------------------------------------------------------------------------------------------------------|---------|--------- -`store` | `string` | Name of the system store to look for the server Certificate, for example, LocalMachine, CurrentUser. | `""` | yes -`system_store` | `string` | Name of the store to look for the server Certificate, for example, My, CA. | `""` | yes -`issuer_common_names` | `list(string)` | Issuer common names to check against. | | no -`template_id` | `string` | Server Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no -`refresh_interval` | `string` | How often to check for a new server certificate. | `"5m"` | no - - +| Name | Type | Description | Default | Required | +| --------------------- | -------------- | ---------------------------------------------------------------------------------------------------- | ------- | -------- | +| `store` | `string` | Name of the system store to look for the server Certificate, for example, LocalMachine, CurrentUser. | `""` | yes | +| `system_store` | `string` | Name of the store to look for the server Certificate, for example, My, CA. | `""` | yes | +| `issuer_common_names` | `list(string)` | Issuer common names to check against. | | no | +| `template_id` | `string` | Server Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no | +| `refresh_interval` | `string` | How often to check for a new server certificate. | `"5m"` | no | ### client block The `client` block is used to check the certificate presented to the server. 
-Name | Type | Description | Default | Required -----------------------|----------------|-------------------------------------------------------------------|---------|--------- -`issuer_common_names` | `list(string)` | Issuer common names to check against. | | no -`subject_regex` | `string` | Regular expression to match Subject name. | `""` | no -`template_id` | `string` | Client Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no +| Name | Type | Description | Default | Required | +| --------------------- | -------------- | ----------------------------------------------------------------- | ------- | -------- | +| `issuer_common_names` | `list(string)` | Issuer common names to check against. | | no | +| `subject_regex` | `string` | Regular expression to match Subject name. | `""` | no | +| `template_id` | `string` | Client Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no | diff --git a/docs/sources/flow/reference/config-blocks/import.file.md b/docs/sources/flow/reference/config-blocks/import.file.md index 63befc6e2007..4bff4a05c49d 100644 --- a/docs/sources/flow/reference/config-blocks/import.file.md +++ b/docs/sources/flow/reference/config-blocks/import.file.md @@ -79,4 +79,3 @@ math.add "default" { ``` {{< /collapse >}} - diff --git a/docs/sources/flow/reference/config-blocks/import.git.md b/docs/sources/flow/reference/config-blocks/import.git.md index 508a0aaec1eb..fe8cfa8997e5 100644 --- a/docs/sources/flow/reference/config-blocks/import.git.md +++ b/docs/sources/flow/reference/config-blocks/import.git.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/import.git/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/import.git/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/import.git/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/import.git/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/import.git/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/import.git/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/import.git/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/import.git/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/import.git/ description: Learn about the import.git configuration block title: import.git @@ -31,12 +31,12 @@ import.git "NAMESPACE" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------|------------|---------------------------------------------------------|----------|--------- -`repository` | `string` | The Git repository address to retrieve the module from. | | yes -`revision` | `string` | The Git revision to retrieve the module from. | `"HEAD"` | no -`path` | `string` | The path in the repository where the module is stored. | | yes -`pull_frequency` | `duration` | The frequency to pull the repository for updates. | `"60s"` | no +| Name | Type | Description | Default | Required | +| ---------------- | ---------- | ------------------------------------------------------- | -------- | -------- | +| `repository` | `string` | The Git repository address to retrieve the module from. | | yes | +| `revision` | `string` | The Git revision to retrieve the module from. | `"HEAD"` | no | +| `path` | `string` | The path in the repository where the module is stored. 
| | yes | +| `pull_frequency` | `duration` | The frequency to pull the repository for updates. | `"60s"` | no | The `repository` attribute must be set to a repository address that would be recognized by Git with a `git clone REPOSITORY_ADDRESS` command, such as @@ -64,10 +64,10 @@ Pulling hosted Git repositories too often can result in throttling. The following blocks are supported inside the definition of `import.git`: -Hierarchy | Block | Description | Required ------------|----------------|------------------------------------------------------------|--------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the repository. | no -ssh_key | [ssh_key][] | Configure an SSH Key for authenticating to the repository. | no +| Hierarchy | Block | Description | Required | +| ---------- | -------------- | ---------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the repository. | no | +| ssh_key | [ssh_key][] | Configure an SSH Key for authenticating to the repository. | no | ### basic_auth block @@ -75,12 +75,12 @@ ssh_key | [ssh_key][] | Configure an SSH Key for authenticating to the rep ### ssh_key block -Name | Type | Description | Default | Required --------------|----------|-----------------------------------|---------|--------- -`username` | `string` | SSH username. | | yes -`key` | `secret` | SSH private key. | | no -`key_file` | `string` | SSH private key path. | | no -`passphrase` | `secret` | Passphrase for SSH key if needed. | | no +| Name | Type | Description | Default | Required | +| ------------ | -------- | --------------------------------- | ------- | -------- | +| `username` | `string` | SSH username. | | yes | +| `key` | `secret` | SSH private key. | | no | +| `key_file` | `string` | SSH private key path. | | no | +| `passphrase` | `secret` | Passphrase for SSH key if needed. | | no | ## Examples @@ -116,4 +116,3 @@ math.add "default" { [basic_auth]: #basic_auth-block [ssh_key]: #ssh_key-block - diff --git a/docs/sources/flow/reference/config-blocks/import.http.md b/docs/sources/flow/reference/config-blocks/import.http.md index f5a677b8631e..9b2110086d7a 100644 --- a/docs/sources/flow/reference/config-blocks/import.http.md +++ b/docs/sources/flow/reference/config-blocks/import.http.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/import.http/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/import.http/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/import.http/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/import.http/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/import.http/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/import.http/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/import.http/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/import.http/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/import.http/ description: Learn about the import.http configuration block title: import.http @@ -25,19 +25,20 @@ import.http "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required ------------------|---------------|-----------------------------------------|---------|--------- -`url` | `string` | URL to poll. 
| | yes -`method` | `string` | Define the HTTP method for the request. | `"GET"` | no -`headers` | `map(string)` | Custom headers for the request. | `{}` | no -`poll_frequency` | `duration` | Frequency to poll the URL. | `"1m"` | no -`poll_timeout` | `duration` | Timeout when polling the URL. | `"10s"` | no +| Name | Type | Description | Default | Required | +| ---------------- | ------------- | --------------------------------------- | ------- | -------- | +| `url` | `string` | URL to poll. | | yes | +| `method` | `string` | Define the HTTP method for the request. | `"GET"` | no | +| `headers` | `map(string)` | Custom headers for the request. | `{}` | no | +| `poll_frequency` | `duration` | Frequency to poll the URL. | `"1m"` | no | +| `poll_timeout` | `duration` | Timeout when polling the URL. | `"10s"` | no | ## Example This example imports custom components from an HTTP response and instantiates a custom component for adding two numbers: {{< collapse title="HTTP response" >}} + ```river declare "add" { argument "a" {} @@ -48,9 +49,11 @@ declare "add" { } } ``` + {{< /collapse >}} {{< collapse title="importer.river" >}} + ```river import.http "math" { url = SERVER_URL @@ -61,4 +64,5 @@ math.add "default" { b = 45 } ``` + {{< /collapse >}} diff --git a/docs/sources/flow/reference/config-blocks/import.string.md b/docs/sources/flow/reference/config-blocks/import.string.md index f0aae61e9bc8..a6e489e7a791 100644 --- a/docs/sources/flow/reference/config-blocks/import.string.md +++ b/docs/sources/flow/reference/config-blocks/import.string.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/import.string/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/import.string/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/import.string/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/import.string/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/import.string/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/import.string/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/import.string/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/import.string/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/import.string/ description: Learn about the import.string configuration block title: import.string @@ -30,9 +30,9 @@ import.string "NAMESPACE" { The following arguments are supported: -Name | Type | Description | Default | Required -----------|----------------------|-------------------------------------------------------------|---------|--------- -`content` | `secret` or `string` | The contents of the module to import as a secret or string. | | yes +| Name | Type | Description | Default | Required | +| --------- | -------------------- | ----------------------------------------------------------- | ------- | -------- | +| `content` | `secret` or `string` | The contents of the module to import as a secret or string. | | yes | `content` is a string that contains the configuration of the module to import. `content` is typically loaded by using the exports of another component. 
For example, @@ -59,4 +59,3 @@ math.add "default" { b = 45 } ``` - diff --git a/docs/sources/flow/reference/config-blocks/logging.md b/docs/sources/flow/reference/config-blocks/logging.md index 23f3e84e90e8..331c02c5ff26 100644 --- a/docs/sources/flow/reference/config-blocks/logging.md +++ b/docs/sources/flow/reference/config-blocks/logging.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/logging/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/logging/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/logging/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/logging/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/logging/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/logging/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/logging/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/logging/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/logging/ description: Learn about the logging configuration block menuTitle: logging @@ -28,27 +28,27 @@ logging { The following arguments are supported: -Name | Type | Description | Default | Required ------------|----------------------|--------------------------------------------|------------|--------- -`level` | `string` | Level at which log lines should be written | `"info"` | no -`format` | `string` | Format to use for writing log lines | `"logfmt"` | no -`write_to` | `list(LogsReceiver)` | List of receivers to send log entries to | | no +| Name | Type | Description | Default | Required | +| ---------- | -------------------- | ------------------------------------------ | ---------- | -------- | +| `level` | `string` | Level at which log lines should be written | `"info"` | no | +| `format` | `string` | Format to use for writing log lines | `"logfmt"` | no | +| `write_to` | `list(LogsReceiver)` | List of receivers to send log entries to | | no | ### Log level The following strings are recognized as valid log levels: -* `"error"`: Only write logs at the _error_ level. -* `"warn"`: Only write logs at the _warn_ level or above. -* `"info"`: Only write logs at _info_ level or above. -* `"debug"`: Write all logs, including _debug_ level logs. +- `"error"`: Only write logs at the _error_ level. +- `"warn"`: Only write logs at the _warn_ level or above. +- `"info"`: Only write logs at _info_ level or above. +- `"debug"`: Write all logs, including _debug_ level logs. ### Log format The following strings are recognized as valid log line formats: -* `"logfmt"`: Write logs as [logfmt][] lines. -* `"json"`: Write logs as JSON objects. +- `"logfmt"`: Write logs as [logfmt][] lines. +- `"json"`: Write logs as JSON objects. 
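As a quick sketch of how these arguments combine, a `logging` block that writes debug-level logs as JSON might look like the following (`write_to` is omitted here, since it's optional):

```river
logging {
  // Write all logs, including debug-level entries.
  level  = "debug"
  // Emit each log line as a JSON object.
  format = "json"
}
```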
[logfmt]: https://brandur.org/logfmt diff --git a/docs/sources/flow/reference/config-blocks/remotecfg.md b/docs/sources/flow/reference/config-blocks/remotecfg.md index a175c9e1694f..7459a06ff958 100644 --- a/docs/sources/flow/reference/config-blocks/remotecfg.md +++ b/docs/sources/flow/reference/config-blocks/remotecfg.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/remotecfg/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/remotecfg/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/remotecfg/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/remotecfg/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/remotecfg/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/remotecfg/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/remotecfg/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/remotecfg/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/remotecfg/ description: Learn about the remotecfg configuration block menuTitle: remotecfg @@ -41,12 +41,12 @@ remotecfg { The following arguments are supported: -Name | Type | Description | Default | Required ------------------|----------------------|---------------------------------------------------|-------------|--------- -`url` | `string` | The address of the API to poll for configuration. | `""` | no -`id` | `string` | A self-reported ID. | `see below` | no -`metadata` | `map(string)` | A set of self-reported metadata. | `{}` | no -`poll_frequency` | `duration` | How often to poll the API for new configuration. | `"1m"` | no +| Name | Type | Description | Default | Required | +| ---------------- | ------------- | ------------------------------------------------- | ----------- | -------- | +| `url` | `string` | The address of the API to poll for configuration. | `""` | no | +| `id` | `string` | A self-reported ID. | `see below` | no | +| `metadata` | `map(string)` | A set of self-reported metadata. | `{}` | no | +| `poll_frequency` | `duration` | How often to poll the API for new configuration. | `"1m"` | no | If the `url` is not set, then the service block is a no-op. @@ -61,13 +61,13 @@ remote endpoint so that the API can decide what configuration to serve. The following blocks are supported inside the definition of `remotecfg`: -Hierarchy | Block | Description | Required ---------------------|-------------------|----------------------------------------------------------|--------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. | no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +| Hierarchy | Block | Description | Required | +| ------------------- | ----------------- | -------------------------------------------------------- | -------- | +| basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no | +| authorization | [authorization][] | Configure generic authorization to the endpoint. | no | +| oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint.
| no | +| oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | +| tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no | The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. diff --git a/docs/sources/flow/reference/config-blocks/tracing.md b/docs/sources/flow/reference/config-blocks/tracing.md index 6d4a0a2cf314..53fdab1d8ea0 100644 --- a/docs/sources/flow/reference/config-blocks/tracing.md +++ b/docs/sources/flow/reference/config-blocks/tracing.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/config-blocks/tracing/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/tracing/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/tracing/ -- /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/tracing/ + - /docs/grafana-cloud/agent/flow/reference/config-blocks/tracing/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/config-blocks/tracing/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/config-blocks/tracing/ + - /docs/grafana-cloud/send-data/agent/flow/reference/config-blocks/tracing/ canonical: https://grafana.com/docs/agent/latest/flow/reference/config-blocks/tracing/ description: Learn about the tracing configuration block menuTitle: tracing @@ -40,10 +40,10 @@ otelcol.exporter.otlp "tempo" { The following arguments are supported: -Name | Type | Description | Default | Required ---------------------|--------------------------|-----------------------------------------------------|---------|--------- -`sampling_fraction` | `number` | Fraction of traces to keep. | `0.1` | no -`write_to` | `list(otelcol.Consumer)` | Inputs from `otelcol` components to send traces to. | `[]` | no +| Name | Type | Description | Default | Required | +| ------------------- | ------------------------ | --------------------------------------------------- | ------- | -------- | +| `sampling_fraction` | `number` | Fraction of traces to keep. | `0.1` | no | +| `write_to` | `list(otelcol.Consumer)` | Inputs from `otelcol` components to send traces to. | `[]` | no | The `write_to` argument controls which components to send traces to for processing. The elements in the array can be any `otelcol` component that @@ -62,10 +62,10 @@ kept. The following blocks are supported inside the definition of `tracing`: -Hierarchy | Block | Description | Required -------------------------|-------------------|--------------------------------------------------------------|--------- -sampler | [sampler][] | Define custom sampling on top of the base sampling fraction. | no -sampler > jaeger_remote | [jaeger_remote][] | Retrieve sampling information via a Jaeger remote sampler. | no +| Hierarchy | Block | Description | Required | +| ----------------------- | ----------------- | ------------------------------------------------------------ | -------- | +| sampler | [sampler][] | Define custom sampling on top of the base sampling fraction. | no | +| sampler > jaeger_remote | [jaeger_remote][] | Retrieve sampling information via a Jaeger remote sampler. | no | The `>` symbol indicates deeper levels of nesting. For example, `sampler > jaeger_remote` refers to a `jaeger_remote` block defined inside a `sampler` @@ -87,11 +87,11 @@ It is invalid to define more than one sampler to use in the `sampler` block.
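As an illustrative sketch, the following `tracing` block layers a `jaeger_remote` sampler (described next) on top of the base sampling fraction, reusing the `otelcol.exporter.otlp "tempo"` component from the example above; the argument values shown are the documented defaults:

```river
tracing {
  // Base fraction of traces to keep.
  sampling_fraction = 0.1
  write_to          = [otelcol.exporter.otlp.tempo.input]

  // Refine sampling decisions with a Jaeger remote sampler.
  sampler {
    jaeger_remote {
      url              = "http://127.0.0.1:5778/sampling"
      refresh_interval = "1m"
    }
  }
}
```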
The `jaeger_remote` block configures the retrieval of sampling information through a remote server that exposes Jaeger sampling strategies. -Name | Type | Description | Default | Required -------------------|------------|------------------------------------------------------------|------------------------------------|--------- -`url` | `string` | URL to retrieve sampling strategies from. | `"http://127.0.0.1:5778/sampling"` | no -`max_operations` | `number` | Limit number of operations which can have custom sampling. | `256` | no -`refresh_interval` | `duration` | Frequency to poll the URL for new sampling strategies. | `"1m"` | no +| Name | Type | Description | Default | Required | +| ------------------ | ---------- | ---------------------------------------------------------- | ---------------------------------- | -------- | +| `url` | `string` | URL to retrieve sampling strategies from. | `"http://127.0.0.1:5778/sampling"` | no | +| `max_operations` | `number` | Limit number of operations which can have custom sampling. | `256` | no | +| `refresh_interval` | `duration` | Frequency to poll the URL for new sampling strategies. | `"1m"` | no | The remote sampling strategies are retrieved from the URL specified by the `url` argument, and polled for updates on a timer. The frequency for how often @@ -101,8 +101,7 @@ Requests to the remote sampling strategies server are made through an HTTP `GET` request to the configured `url` argument. A `service=grafana-agent` query parameter is always added to the URL to allow the server to respond with service-specific strategies. The HTTP response body is read as JSON matching -the schema specified by Jaeger's [`strategies.json` file][Jaeger sampling -strategies]. +the schema specified by Jaeger's [`strategies.json` file][Jaeger sampling strategies]. The `max_operations` argument limits the number of custom span names that can have custom sampling rules.
If the remote sampling strategy exceeds the limit, diff --git a/docs/sources/flow/reference/stdlib/_index.md b/docs/sources/flow/reference/stdlib/_index.md index 8f42f4bc28d4..537db01869e0 100644 --- a/docs/sources/flow/reference/stdlib/_index.md +++ b/docs/sources/flow/reference/stdlib/_index.md @@ -1,12 +1,13 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/reference/stdlib/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/ -- standard-library/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/ + - standard-library/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/ -description: The standard library is a list of functions used in expressions when +description: + The standard library is a list of functions used in expressions when assigning values to attributes title: Standard library weight: 400 diff --git a/docs/sources/flow/reference/stdlib/coalesce.md b/docs/sources/flow/reference/stdlib/coalesce.md index 73f5cd444821..72a860ec38ca 100644 --- a/docs/sources/flow/reference/stdlib/coalesce.md +++ b/docs/sources/flow/reference/stdlib/coalesce.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/coalesce/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/coalesce/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/coalesce/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/coalesce/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/coalesce/ + - ../../configuration-language/standard-library/coalesce/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/coalesce/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/coalesce/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/coalesce/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/coalesce/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/coalesce/ description: Learn about coalesce title: coalesce diff --git a/docs/sources/flow/reference/stdlib/concat.md b/docs/sources/flow/reference/stdlib/concat.md index 36e7eba906a6..a84338036476 100644 --- a/docs/sources/flow/reference/stdlib/concat.md +++ b/docs/sources/flow/reference/stdlib/concat.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/concat/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/concat/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/concat/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/concat/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/concat/ + - ../../configuration-language/standard-library/concat/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/concat/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/concat/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/concat/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/concat/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/concat/ description: Learn about concat title: concat diff --git 
a/docs/sources/flow/reference/stdlib/constants.md b/docs/sources/flow/reference/stdlib/constants.md index 3caf5c336a7c..e6ee84eda330 100644 --- a/docs/sources/flow/reference/stdlib/constants.md +++ b/docs/sources/flow/reference/stdlib/constants.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/constants/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/constants/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/constants/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/constants/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/constants/ + - ../../configuration-language/standard-library/constants/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/constants/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/constants/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/constants/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/constants/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/constants/ description: Learn about constants title: constants @@ -15,10 +15,10 @@ title: constants The `constants` object exposes a list of constant values about the system {{< param "PRODUCT_NAME" >}} is running on: -* `constants.hostname`: The hostname of the machine {{< param "PRODUCT_NAME" >}} is running +- `constants.hostname`: The hostname of the machine {{< param "PRODUCT_NAME" >}} is running on. -* `constants.os`: The operating system {{< param "PRODUCT_NAME" >}} is running on. -* `constants.arch`: The architecture of the system {{< param "PRODUCT_NAME" >}} is running on. +- `constants.os`: The operating system {{< param "PRODUCT_NAME" >}} is running on. +- `constants.arch`: The architecture of the system {{< param "PRODUCT_NAME" >}} is running on. 
## Examples diff --git a/docs/sources/flow/reference/stdlib/env.md b/docs/sources/flow/reference/stdlib/env.md index 49a65d1a6a8b..f2726477918e 100644 --- a/docs/sources/flow/reference/stdlib/env.md +++ b/docs/sources/flow/reference/stdlib/env.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/env/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/env/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/env/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/env/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/env/ + - ../../configuration-language/standard-library/env/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/env/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/env/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/env/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/env/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/env/ description: Learn about env title: env diff --git a/docs/sources/flow/reference/stdlib/format.md b/docs/sources/flow/reference/stdlib/format.md index be5d9cd754c1..4b91e8877dc5 100644 --- a/docs/sources/flow/reference/stdlib/format.md +++ b/docs/sources/flow/reference/stdlib/format.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/format/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/format/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/format/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/format/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/format/ + - ../../configuration-language/standard-library/format/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/format/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/format/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/format/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/format/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/format/ description: Learn about format title: format @@ -54,7 +54,7 @@ for an unsupported format verb. The specification may contain the following verbs. | Verb | Result | -|------|-------------------------------------------------------------------------------------------| +| ---- | ----------------------------------------------------------------------------------------- | | `%%` | Literal percent sign, consuming no value. | | `%t` | Convert to boolean and produce `true` or `false`. | | `%b` | Convert to integer number and produce binary representation. 
| diff --git a/docs/sources/flow/reference/stdlib/join.md b/docs/sources/flow/reference/stdlib/join.md index 3203585c81c1..35f42aaa8706 100644 --- a/docs/sources/flow/reference/stdlib/join.md +++ b/docs/sources/flow/reference/stdlib/join.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/join/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/join/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/join/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/join/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/join/ + - ../../configuration-language/standard-library/join/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/join/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/join/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/join/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/join/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/join/ description: Learn about join title: join diff --git a/docs/sources/flow/reference/stdlib/json_decode.md b/docs/sources/flow/reference/stdlib/json_decode.md index d56fc45dabab..e7ebc874817a 100644 --- a/docs/sources/flow/reference/stdlib/json_decode.md +++ b/docs/sources/flow/reference/stdlib/json_decode.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/json_decode/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/json_decode/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/json_decode/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/json_decode/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/json_decode/ + - ../../configuration-language/standard-library/json_decode/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/json_decode/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/json_decode/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/json_decode/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/json_decode/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/json_decode/ description: Learn about json_decode title: json_decode diff --git a/docs/sources/flow/reference/stdlib/json_path.md b/docs/sources/flow/reference/stdlib/json_path.md index 91058e6e31fe..031e4919e0ac 100644 --- a/docs/sources/flow/reference/stdlib/json_path.md +++ b/docs/sources/flow/reference/stdlib/json_path.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/json_path/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/json_path/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/json_path/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/json_path/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/json_path/ + - ../../configuration-language/standard-library/json_path/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/json_path/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/json_path/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/json_path/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/json_path/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/json_path/ description: Learn about json_path title: json_path diff --git 
a/docs/sources/flow/reference/stdlib/nonsensitive.md b/docs/sources/flow/reference/stdlib/nonsensitive.md index a2bb0bd31d49..ac1ab944705e 100644 --- a/docs/sources/flow/reference/stdlib/nonsensitive.md +++ b/docs/sources/flow/reference/stdlib/nonsensitive.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/nonsensitive/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/nonsensitive/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/nonsensitive/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/nonsensitive/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/nonsensitive/ + - ../../configuration-language/standard-library/nonsensitive/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/nonsensitive/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/nonsensitive/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/nonsensitive/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/nonsensitive/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/nonsensitive/ description: Learn about nonsensitive title: nonsensitive diff --git a/docs/sources/flow/reference/stdlib/replace.md b/docs/sources/flow/reference/stdlib/replace.md index 2c1eb383f390..bf4bf6bc0222 100644 --- a/docs/sources/flow/reference/stdlib/replace.md +++ b/docs/sources/flow/reference/stdlib/replace.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/replace/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/replace/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/replace/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/replace/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/replace/ + - ../../configuration-language/standard-library/replace/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/replace/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/replace/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/replace/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/replace/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/replace/ description: Learn about replace title: replace diff --git a/docs/sources/flow/reference/stdlib/split.md b/docs/sources/flow/reference/stdlib/split.md index 3087ca153669..eb7a6a6e938d 100644 --- a/docs/sources/flow/reference/stdlib/split.md +++ b/docs/sources/flow/reference/stdlib/split.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/split/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/split/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/split/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/split/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/split/ + - ../../configuration-language/standard-library/split/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/split/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/split/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/split/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/split/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/split/ description: Learn about split title: split diff --git a/docs/sources/flow/reference/stdlib/to_lower.md 
b/docs/sources/flow/reference/stdlib/to_lower.md index 8c252fb354a8..8775660faa5c 100644 --- a/docs/sources/flow/reference/stdlib/to_lower.md +++ b/docs/sources/flow/reference/stdlib/to_lower.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/to_lower/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/to_lower/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/to_lower/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/to_lower/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/to_lower/ + - ../../configuration-language/standard-library/to_lower/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/to_lower/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/to_lower/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/to_lower/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/to_lower/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/to_lower/ description: Learn about to_lower title: to_lower diff --git a/docs/sources/flow/reference/stdlib/to_upper.md b/docs/sources/flow/reference/stdlib/to_upper.md index aef26d5ff669..3a7067c26f34 100644 --- a/docs/sources/flow/reference/stdlib/to_upper.md +++ b/docs/sources/flow/reference/stdlib/to_upper.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/to_upper/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/to_upper/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/to_upper/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/to_upper/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/to_upper/ + - ../../configuration-language/standard-library/to_upper/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/to_upper/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/to_upper/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/to_upper/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/to_upper/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/to_upper/ description: Learn about to_upper title: to_upper diff --git a/docs/sources/flow/reference/stdlib/trim.md b/docs/sources/flow/reference/stdlib/trim.md index 5023d1f21328..d1fde8bbed50 100644 --- a/docs/sources/flow/reference/stdlib/trim.md +++ b/docs/sources/flow/reference/stdlib/trim.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/trim/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/trim/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim/ + - ../../configuration-language/standard-library/trim/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/trim/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/trim/ description: Learn about trim title: trim diff --git a/docs/sources/flow/reference/stdlib/trim_prefix.md b/docs/sources/flow/reference/stdlib/trim_prefix.md index 33d716f133e4..44e21a9a4fc1 100644 --- 
a/docs/sources/flow/reference/stdlib/trim_prefix.md +++ b/docs/sources/flow/reference/stdlib/trim_prefix.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/trim_prefix/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/trim_prefix/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim_prefix/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim_prefix/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim_prefix/ + - ../../configuration-language/standard-library/trim_prefix/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/trim_prefix/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim_prefix/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim_prefix/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim_prefix/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/trim_prefix/ description: Learn about trim_prefix title: trim_prefix diff --git a/docs/sources/flow/reference/stdlib/trim_space.md b/docs/sources/flow/reference/stdlib/trim_space.md index 5e13e0ba0df3..74a9efea43d5 100644 --- a/docs/sources/flow/reference/stdlib/trim_space.md +++ b/docs/sources/flow/reference/stdlib/trim_space.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/trim_space/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/trim_space/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim_space/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim_space/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim_space/ + - ../../configuration-language/standard-library/trim_space/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/trim_space/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim_space/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim_space/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim_space/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/trim_space/ description: Learn about trim_space title: trim_space diff --git a/docs/sources/flow/reference/stdlib/trim_suffix.md b/docs/sources/flow/reference/stdlib/trim_suffix.md index 4741007ebe4b..0d4831c6a113 100644 --- a/docs/sources/flow/reference/stdlib/trim_suffix.md +++ b/docs/sources/flow/reference/stdlib/trim_suffix.md @@ -1,10 +1,10 @@ --- aliases: -- ../../configuration-language/standard-library/trim_suffix/ -- /docs/grafana-cloud/agent/flow/reference/stdlib/trim_suffix/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim_suffix/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim_suffix/ -- /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim_suffix/ + - ../../configuration-language/standard-library/trim_suffix/ + - /docs/grafana-cloud/agent/flow/reference/stdlib/trim_suffix/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/stdlib/trim_suffix/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/stdlib/trim_suffix/ + - /docs/grafana-cloud/send-data/agent/flow/reference/stdlib/trim_suffix/ canonical: https://grafana.com/docs/agent/latest/flow/reference/stdlib/trim_suffix/ description: Learn about trim_suffix title: trim_suffix diff --git a/docs/sources/flow/release-notes.md 
b/docs/sources/flow/release-notes.md index 984e76c68ddf..dd57d9b880a3 100644 --- a/docs/sources/flow/release-notes.md +++ b/docs/sources/flow/release-notes.md @@ -1,10 +1,10 @@ --- aliases: -- ./upgrade-guide/ -- /docs/grafana-cloud/agent/flow/release-notes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/release-notes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/release-notes/ -- /docs/grafana-cloud/send-data/agent/flow/release-notes/ + - ./upgrade-guide/ + - /docs/grafana-cloud/agent/flow/release-notes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/release-notes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/release-notes/ + - /docs/grafana-cloud/send-data/agent/flow/release-notes/ canonical: https://grafana.com/docs/agent/latest/flow/release-notes/ description: Release notes for Grafana Agent Flow menuTitle: Release notes @@ -22,8 +22,8 @@ For a complete list of changes to {{< param "PRODUCT_ROOT_NAME" >}}, with links These release notes are specific to {{< param "PRODUCT_NAME" >}}. Other release notes for the different {{< param "PRODUCT_ROOT_NAME" >}} variants are contained on separate pages: -* [Static mode release notes][release-notes-static] -* [Static mode Kubernetes operator release notes][release-notes-operator] +- [Static mode release notes][release-notes-static] +- [Static mode Kubernetes operator release notes][release-notes-operator] [release-notes-static]: {{< relref "../static/release-notes.md" >}} [release-notes-operator]: {{< relref "../operator/release-notes.md" >}} @@ -80,9 +80,9 @@ Support for `prometheus.exporter.vsphere` will be removed in a future release. ### Breaking change: `otelcol.receiver.prometheus` will drop all `otel_scope_info` metrics when converting them to OTLP -* If the `otel_scope_info` metric has the `otel_scope_name` and `otel_scope_version` labels, - their values are used to set the OTLP Instrumentation Scope name and version, respectively. -* Labels for `otel_scope_info` metrics other than `otel_scope_name` and `otel_scope_version` +- If the `otel_scope_info` metric has the `otel_scope_name` and `otel_scope_version` labels, + their values are used to set the OTLP Instrumentation Scope name and version, respectively. +- Labels for `otel_scope_info` metrics other than `otel_scope_name` and `otel_scope_version` are added as scope attributes with the matching name and version. ### Breaking change: label for `target` block in `prometheus.exporter.blackbox` is removed @@ -146,6 +146,7 @@ stage.non_indexed_labels { ``` New configuration example: + ```river stage.structured_metadata { values = {"app" = ""} @@ -156,13 +157,13 @@ stage.structured_metadata { There are 2 changes to the way scope labels work for this component. -* Previously, the `include_scope_info` argument would trigger including -`otel_scope_name` and `otel_scope_version` in metrics. This is now defaulted -to `true` and controlled via the `include_scope_labels` argument. +- Previously, the `include_scope_info` argument would trigger including + `otel_scope_name` and `otel_scope_version` in metrics. This is now defaulted + to `true` and controlled via the `include_scope_labels` argument. -* A bugfix was made to rename `otel_scope_info` metric labels from -`name` to `otel_scope_name` and `version` to `otel_scope_version`. This is -now correct with the OTLP Instrumentation Scope specification. 
+- A bugfix was made to rename `otel_scope_info` metric labels from + `name` to `otel_scope_name` and `version` to `otel_scope_version`. This is + now correct with the OTLP Instrumentation Scope specification. ### Breaking change: `prometheus.exporter.unix` now requires a label. @@ -188,8 +189,8 @@ prometheus.exporter.unix "example" { /* ... */ } The default value of `retry_on_http_429` is changed from `false` to `true` for the `queue_config` block in `prometheus.remote_write` so that {{< param "PRODUCT_ROOT_NAME" >}} can retry sending and avoid data being lost for metric pipelines by default. -* If you set the `retry_on_http_429` explicitly - no action is required. -* If you do not set `retry_on_http_429` explicitly and you do *not* want to retry on HTTP 429, make sure you set it to `false` as you upgrade to this new version. +- If you set the `retry_on_http_429` explicitly - no action is required. +- If you do not set `retry_on_http_429` explicitly and you do _not_ want to retry on HTTP 429, make sure you set it to `false` as you upgrade to this new version. ### Breaking change: `loki.source.file` no longer automatically extracts logs from compressed files @@ -202,28 +203,29 @@ format. By default, the decompression of files is entirely disabled. How to migrate: -* If {{< param "PRODUCT_NAME" >}} never reads logs from files with +- If {{< param "PRODUCT_NAME" >}} never reads logs from files with extensions `.gz`, `.tar.gz`, `.z` or `.bz2` then no action is required. + > You can check which file extensions {{< param "PRODUCT_NAME" >}} reads from by looking - at the `path` label on `loki_source_file_file_bytes_total` metric. + > at the `path` label on `loki_source_file_file_bytes_total` metric. -* If {{< param "PRODUCT_NAME" >}} extracts data from compressed files, please add the following +- If {{< param "PRODUCT_NAME" >}} extracts data from compressed files, please add the following configuration block to your `loki.source.file` component: - ```river - loki.source.file "example" { - ... - decompression { - enabled = true - format = "" - } + ```river + loki.source.file "example" { + ... + decompression { + enabled = true + format = "" } - ``` + } + ``` - where the `` is the appropriate compression format - - see [`loki.source.file` documentation][loki-source-file-docs] for details. + where the `` is the appropriate compression format - + see [`loki.source.file` documentation][loki-source-file-docs] for details. - [loki-source-file-docs]: {{< relref "./reference/components/loki.source.file.md" >}} + [loki-source-file-docs]: {{< relref "./reference/components/loki.source.file.md" >}} ## v0.35 @@ -353,8 +355,8 @@ HTTP-based discovery methods. However, the Prometheus discovery mechanisms have more functionality than `discovery_target_decode`: -* Prometheus' `file_sd_configs` can use many files based on pattern matching. -* Prometheus' `http_sd_configs` also support YAML files. +- Prometheus' `file_sd_configs` can use many files based on pattern matching. +- Prometheus' `http_sd_configs` also support YAML files. Additionally, it is no longer an accepted pattern to add component-specific functions to the River standard library. @@ -389,12 +391,14 @@ prometheus.scrape "example" { ``` ### Breaking change: The algorithm for the "hash" action of `otelcol.processor.attributes` has changed + The hash produced when using `action = "hash"` in the `otelcol.processor.attributes` flow component now uses the more secure SHA-256 algorithm.
The change was made in PR [#22831](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/22831) of opentelemetry-collector-contrib. ### Breaking change: `otelcol.exporter.loki` now includes instrumentation scope in its output Additional `instrumentation_scope` information will be added to the OTLP log signal, like this: + ``` { "body": "Example log", diff --git a/docs/sources/flow/tasks/_index.md b/docs/sources/flow/tasks/_index.md index 4ca62e8c1331..483e3eeb64a4 100644 --- a/docs/sources/flow/tasks/_index.md +++ b/docs/sources/flow/tasks/_index.md @@ -1,16 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/getting-started/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/ -- getting_started/ # /docs/agent/latest/flow/getting_started/ -- getting-started/ # /docs/agent/latest/flow/getting-started/ + - /docs/grafana-cloud/agent/flow/tasks/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/getting-started/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/ + - /docs/grafana-cloud/send-data/agent/flow/getting-started/ + - getting_started/ # /docs/agent/latest/flow/getting_started/ + - getting-started/ # /docs/agent/latest/flow/getting-started/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/ description: How to perform common tasks with Grafana Agent Flow menuTitle: Tasks diff --git a/docs/sources/flow/tasks/collect-opentelemetry-data.md b/docs/sources/flow/tasks/collect-opentelemetry-data.md index 17624679b17a..bca4f170958e 100644 --- a/docs/sources/flow/tasks/collect-opentelemetry-data.md +++ b/docs/sources/flow/tasks/collect-opentelemetry-data.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/collect-opentelemetry-data/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/collect-opentelemetry-data/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/collect-opentelemetry-data/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/collect-opentelemetry-data/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/getting-started/collect-opentelemetry-data/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/collect-opentelemetry-data/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/collect-opentelemetry-data/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/collect-opentelemetry-data/ -- ../getting-started/collect-opentelemetry-data/ # /docs/agent/latest/flow/getting-started/collect-opentelemetry-data/ + - /docs/grafana-cloud/agent/flow/tasks/collect-opentelemetry-data/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/collect-opentelemetry-data/ + - 
/docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/collect-opentelemetry-data/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/collect-opentelemetry-data/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/getting-started/collect-opentelemetry-data/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/collect-opentelemetry-data/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/collect-opentelemetry-data/ + - /docs/grafana-cloud/send-data/agent/flow/getting-started/collect-opentelemetry-data/ + - ../getting-started/collect-opentelemetry-data/ # /docs/agent/latest/flow/getting-started/collect-opentelemetry-data/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/collect-opentelemetry-data/ description: Learn how to collect OpenTelemetry data title: Collect OpenTelemetry data @@ -54,24 +54,24 @@ data and forward it to any OpenTelemetry-compatible endpoint. This topic describes how to: -* Configure OpenTelemetry data delivery. -* Configure batching. -* Receive OpenTelemetry data over OTLP. +- Configure OpenTelemetry data delivery. +- Configure batching. +- Receive OpenTelemetry data over OTLP. ## Components used in this topic -* [otelcol.auth.basic](ref:otelcol.auth.basic) -* [otelcol.exporter.otlp](ref:otelcol.exporter.otlp) -* [otelcol.exporter.otlphttp](ref:otelcol.exporter.otlphttp) -* [otelcol.processor.batch](ref:otelcol.processor.batch) -* [otelcol.receiver.otlp](ref:otelcol.receiver.otlp) +- [otelcol.auth.basic](ref:otelcol.auth.basic) +- [otelcol.exporter.otlp](ref:otelcol.exporter.otlp) +- [otelcol.exporter.otlphttp](ref:otelcol.exporter.otlphttp) +- [otelcol.processor.batch](ref:otelcol.processor.batch) +- [otelcol.receiver.otlp](ref:otelcol.receiver.otlp) ## Before you begin -* Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry. -* Have a set of OpenTelemetry applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. -* Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data. -* Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. +- Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry. +- Have a set of OpenTelemetry applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. +- Identify where {{< param "PRODUCT_NAME" >}} writes received telemetry data. +- Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. ## Configure an OpenTelemetry Protocol exporter @@ -100,38 +100,40 @@ To configure an `otelcol.exporter.otlp` component for exporting OpenTelemetry da - _``_: The label for the component, such as `default`. The label you use must be unique across all `otelcol.exporter.otlp` components in the same configuration file. + * _``_: The hostname or IP address of the server to send OTLP requests to. + - _``_: The port of the server to send OTLP requests to. 2. If your server requires basic authentication, complete the following: - 1. Add the following `otelcol.auth.basic` component to your configuration file: + 1. Add the following `otelcol.auth.basic` component to your configuration file: - ```river - otelcol.auth.basic "" { - username = "" - password = "" - } - ``` + ```river + otelcol.auth.basic "" { + username = "" + password = "" + } + ``` - Replace the following: + Replace the following: - - _``_: The label for the component, such as `default`. 
- The label you use must be unique across all `otelcol.auth.basic` components in the same configuration file. - - _``_: The basic authentication username. - - _``_: The basic authentication password or API key. + - _``_: The label for the component, such as `default`. + The label you use must be unique across all `otelcol.auth.basic` components in the same configuration file. + - _``_: The basic authentication username. + - _``_: The basic authentication password or API key. - 1. Add the following line inside of the `client` block of your `otelcol.exporter.otlp` component: + 1. Add the following line inside of the `client` block of your `otelcol.exporter.otlp` component: - ```river - auth = otelcol.auth.basic..handler - ``` + ```river + auth = otelcol.auth.basic..handler + ``` - Replace the following: + Replace the following: - - _``_: The label for the `otelcol.auth.basic` component. + - _``_: The label for the `otelcol.auth.basic` component. -1. If you have more than one server to export metrics to, create a new `otelcol.exporter.otlp` component for each additional server. +3. If you have more than one server to export metrics to, create a new `otelcol.exporter.otlp` component for each additional server. > `otelcol.exporter.otlp` sends data using OTLP over gRPC (HTTP/2). > To send to a server using HTTP/1.1, follow the preceding steps, @@ -349,4 +351,3 @@ For more information on receiving OpenTelemetry data using the OpenTelemetry Pro [OpenTelemetry]: https://opentelemetry.io [Configure an OpenTelemetry Protocol exporter]: #configure-an-opentelemetry-protocol-exporter [Configure batching]: #configure-batching - diff --git a/docs/sources/flow/tasks/collect-prometheus-metrics.md b/docs/sources/flow/tasks/collect-prometheus-metrics.md index 0fd225cf5ddc..3137d47ca784 100644 --- a/docs/sources/flow/tasks/collect-prometheus-metrics.md +++ b/docs/sources/flow/tasks/collect-prometheus-metrics.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/collect-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/collect-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/collect-prometheus-metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/collect-prometheus-metrics/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/getting-started/collect-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/collect-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/collect-prometheus-metrics/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/collect-prometheus-metrics/ -- ../getting-started/collect-prometheus-metrics/ # /docs/agent/latest/flow/getting-started/collect-prometheus-metrics/ + - /docs/grafana-cloud/agent/flow/tasks/collect-prometheus-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/collect-prometheus-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/collect-prometheus-metrics/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/collect-prometheus-metrics/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/getting-started/collect-prometheus-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/collect-prometheus-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/collect-prometheus-metrics/ + - 
/docs/grafana-cloud/send-data/agent/flow/getting-started/collect-prometheus-metrics/ + - ../getting-started/collect-prometheus-metrics/ # /docs/agent/latest/flow/getting-started/collect-prometheus-metrics/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/collect-prometheus-metrics/ description: Learn how to collect and forward Prometheus metrics title: Collect and forward Prometheus metrics @@ -48,22 +48,22 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect [Prometheus][] metrics This topic describes how to: -* Configure metrics delivery. -* Collect metrics from Kubernetes Pods. +- Configure metrics delivery. +- Collect metrics from Kubernetes Pods. ## Components used in this topic -* [discovery.kubernetes](ref:discovery.kubernetes) -* [prometheus.remote_write](ref:prometheus.remote_write) -* [prometheus.scrape](ref:prometheus.scrape) +- [discovery.kubernetes](ref:discovery.kubernetes) +- [prometheus.remote_write](ref:prometheus.remote_write) +- [prometheus.scrape](ref:prometheus.scrape) ## Before you begin -* Ensure that you have basic familiarity with instrumenting applications with Prometheus. -* Have a set of Prometheus exports or applications exposing Prometheus metrics that you want to collect metrics from. -* Identify where you will write collected metrics. +- Ensure that you have basic familiarity with instrumenting applications with Prometheus. +- Have a set of Prometheus exports or applications exposing Prometheus metrics that you want to collect metrics from. +- Identify where you will write collected metrics. Metrics can be written to Prometheus or Prometheus-compatible endpoints such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics. -* Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. +- Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. ## Configure metrics delivery @@ -150,86 +150,86 @@ To collect metrics from Kubernetes Pods, complete the following steps: 1. Discover Kubernetes Pods: - 1. Add the following `discovery.kubernetes` component to your configuration file to discover every Pod in the cluster across all Namespaces. + 1. Add the following `discovery.kubernetes` component to your configuration file to discover every Pod in the cluster across all Namespaces. - ```river - discovery.kubernetes "" { - role = "pod" - } - ``` + ```river + discovery.kubernetes "" { + role = "pod" + } + ``` - Replace the following + Replace the following - - _``_: The label for the component, such as `pods`. - The label you use must be unique across all `discovery.kubernetes` components in the same configuration file. + - _``_: The label for the component, such as `pods`. + The label you use must be unique across all `discovery.kubernetes` components in the same configuration file. - This generates one Prometheus target for every exposed port on every discovered Pod. + This generates one Prometheus target for every exposed port on every discovered Pod. - 1. To limit the Namespaces that Pods are discovered in, add the following block inside the `discovery.kubernetes` component. + 1. To limit the Namespaces that Pods are discovered in, add the following block inside the `discovery.kubernetes` component. - ```river - namespaces { - own_namespace = true - names = [] - } - ``` + ```river + namespaces { + own_namespace = true + names = [] + } + ``` - Replace the following: + Replace the following: - - _``_: A comma-delimited list of strings representing Namespaces to search. 
- Each string must be wrapped in double quotes. For example, `"default","kube-system"`. + - _``_: A comma-delimited list of strings representing Namespaces to search. + Each string must be wrapped in double quotes. For example, `"default","kube-system"`. - If you don't want to search for Pods in the Namespace {{< param "PRODUCT_NAME" >}} is running in, set `own_namespace` to `false`. + If you don't want to search for Pods in the Namespace {{< param "PRODUCT_NAME" >}} is running in, set `own_namespace` to `false`. - 1. To use a field selector to limit the number of discovered Pods, add the following block inside the `discovery.kubernetes` component. + 1. To use a field selector to limit the number of discovered Pods, add the following block inside the `discovery.kubernetes` component. - ```river - selectors { - role = "pod" - field = "" - } - ``` + ```river + selectors { + role = "pod" + field = "" + } + ``` - Replace the following: + Replace the following: - - _``_: The Kubernetes field selector to use, such as `metadata.name=my-service`. - For more information on field selectors, refer to the Kubernetes documentation on [Field Selectors][]. + - _``_: The Kubernetes field selector to use, such as `metadata.name=my-service`. + For more information on field selectors, refer to the Kubernetes documentation on [Field Selectors][]. - Create additional `selectors` blocks for each field selector you want to apply. + Create additional `selectors` blocks for each field selector you want to apply. - 1. To use a label selector to limit the number of discovered Pods, add the following block inside the `discovery.kubernetes` component. + 1. To use a label selector to limit the number of discovered Pods, add the following block inside the `discovery.kubernetes` component. - ```river - selectors { - role = "pod" - label = "LABEL_SELECTOR" - } - ``` + ```river + selectors { + role = "pod" + label = "LABEL_SELECTOR" + } + ``` - Replace the following: + Replace the following: - - _``_: The Kubernetes label selector, such as `environment in (production, qa)`. - For more information on label selectors, refer to the Kubernetes documentation on [Labels and Selectors][]. + - _``_: The Kubernetes label selector, such as `environment in (production, qa)`. + For more information on label selectors, refer to the Kubernetes documentation on [Labels and Selectors][]. - Create additional `selectors` blocks for each label selector you want to apply. + Create additional `selectors` blocks for each label selector you want to apply. 1. Collect metrics from discovered Pods: - 1. Add the following `prometheus.scrape` component to your configuration file. + 1. Add the following `prometheus.scrape` component to your configuration file. - ```river - prometheus.scrape "" { - targets = discovery.kubernetes..targets - forward_to = [prometheus.remote_write..receiver] - } - ``` + ```river + prometheus.scrape "" { + targets = discovery.kubernetes..targets + forward_to = [prometheus.remote_write..receiver] + } + ``` - Replace the following: + Replace the following: - - _``_: The label for the component, such as `pods`. - The label you use must be unique across all `prometheus.scrape` components in the same configuration file. - - _``_: The label for the `discovery.kubernetes` component. - - _``_: The label for your existing `prometheus.remote_write` component. + - _``_: The label for the component, such as `pods`. + The label you use must be unique across all `prometheus.scrape` components in the same configuration file. 
+ - _``_: The label for the `discovery.kubernetes` component. + - _``_: The label for your existing `prometheus.remote_write` component. The following example demonstrates configuring {{< param "PRODUCT_NAME" >}} to collect metrics from running production Kubernetes Pods in the `default` Namespace. @@ -276,86 +276,86 @@ To collect metrics from Kubernetes Services, complete the following steps. 1. Discover Kubernetes Services: - 1. Add the following `discovery.kubernetes` component to your configuration file to discover every Services in the cluster across all Namespaces. + 1. Add the following `discovery.kubernetes` component to your configuration file to discover every Services in the cluster across all Namespaces. - ```river - discovery.kubernetes "" { - role = "service" - } - ``` + ```river + discovery.kubernetes "" { + role = "service" + } + ``` - Replace the following: + Replace the following: - - _``_: A label for the component, such as `services`. - The label you use must be unique across all `discovery.kubernetes` components in the same configuration file. + - _``_: A label for the component, such as `services`. + The label you use must be unique across all `discovery.kubernetes` components in the same configuration file. - This will generate one Prometheus target for every exposed port on every discovered Service. + This will generate one Prometheus target for every exposed port on every discovered Service. - 1. To limit the Namespaces that Services are discovered in, add the following block inside the `discovery.kubernetes` component. + 1. To limit the Namespaces that Services are discovered in, add the following block inside the `discovery.kubernetes` component. - ```river - namespaces { - own_namespace = true - names = [] - } - ``` + ```river + namespaces { + own_namespace = true + names = [] + } + ``` - Replace the following: + Replace the following: - - _``_: A comma-delimited list of strings representing Namespaces to search. - Each string must be wrapped in double quotes. For example, `"default","kube-system"`. + - _``_: A comma-delimited list of strings representing Namespaces to search. + Each string must be wrapped in double quotes. For example, `"default","kube-system"`. - If you don't want to search for Services in the Namespace {{< param "PRODUCT_NAME" >}} is running in, set `own_namespace` to `false`. + If you don't want to search for Services in the Namespace {{< param "PRODUCT_NAME" >}} is running in, set `own_namespace` to `false`. - 1. To use a field selector to limit the number of discovered Services, add the following block inside the `discovery.kubernetes` component. + 1. To use a field selector to limit the number of discovered Services, add the following block inside the `discovery.kubernetes` component. - ```river - selectors { - role = "service" - field = "" - } - ``` + ```river + selectors { + role = "service" + field = "" + } + ``` - Replace the following: + Replace the following: - - _``_: The Kubernetes field selector, such as `metadata.name=my-service`. - For more information on field selectors, refer to the Kubernetes documentation on [Field Selectors][]. + - _``_: The Kubernetes field selector, such as `metadata.name=my-service`. + For more information on field selectors, refer to the Kubernetes documentation on [Field Selectors][]. - Create additional `selectors` blocks for each field selector you want to apply. + Create additional `selectors` blocks for each field selector you want to apply. - 1. 
To use a label selector to limit the number of discovered Services, add the following block inside the `discovery.kubernetes` component. + 1. To use a label selector to limit the number of discovered Services, add the following block inside the `discovery.kubernetes` component. - ```river - selectors { - role = "service" - label = "" - } - ``` + ```river + selectors { + role = "service" + label = "" + } + ``` - Replace the following: + Replace the following: - - _``_: The Kubernetes label selector, such as `environment in (production, qa)`. - For more information on label selectors, refer to the Kubernetes documentation on [Labels and Selectors][]. + - _``_: The Kubernetes label selector, such as `environment in (production, qa)`. + For more information on label selectors, refer to the Kubernetes documentation on [Labels and Selectors][]. - Create additional `selectors` blocks for each label selector you want to apply. + Create additional `selectors` blocks for each label selector you want to apply. 1. Collect metrics from discovered Services: - 1. Add the following `prometheus.scrape` component to your configuration file. + 1. Add the following `prometheus.scrape` component to your configuration file. - ```river - prometheus.scrape "" { - targets = discovery.kubernetes..targets - forward_to = [prometheus.remote_write..receiver] - } - ``` + ```river + prometheus.scrape "" { + targets = discovery.kubernetes..targets + forward_to = [prometheus.remote_write..receiver] + } + ``` - Replace the following: + Replace the following: - - _``_: The label for the component, such as `services`. - The label you use must be unique across all `prometeus.scrape` components in the same configuration file. - - _``_: The label for the `discovery.kubernetes` component. - - _``_: The label for your existing `prometheus.remote_write` component. + - _``_: The label for the component, such as `services`. + The label you use must be unique across all `prometheus.scrape` components in the same configuration file. + - _``_: The label for the `discovery.kubernetes` component. + - _``_: The label for your existing `prometheus.remote_write` component. The following example demonstrates configuring {{< param "PRODUCT_NAME" >}} to collect metrics from running production Kubernetes Services in the `default` Namespace. @@ -408,17 +408,17 @@ To collect metrics from a custom set of targets, complete the following steps. Replace the following: - - _``: The label for the component, such as `custom_targets`. + - \_``: The label for the component, such as `custom_targets`. The label you use must be unique across all `prometheus.scrape` components in the same configuration file. - _``_: A comma-delimited list of [Objects](ref:objects) denoting the Prometheus target. Each object must conform to the following rules: - * There must be an `__address__` key denoting the `HOST:PORT` of the target to collect metrics from. - * To explicitly specify which protocol to use, set the `__scheme__` key to `"http"` or `"https"`. + - There must be an `__address__` key denoting the `HOST:PORT` of the target to collect metrics from. + - To explicitly specify which protocol to use, set the `__scheme__` key to `"http"` or `"https"`. If the `__scheme__` key isn't provided, the protocol to use is inherited by the settings of the `prometheus.scrape` component. The default is `"http"`. - * To explicitly specify which HTTP path to collect metrics from, set the `__metrics_path__` key to the HTTP path to use.
+ - To explicitly specify which HTTP path to collect metrics from, set the `__metrics_path__` key to the HTTP path to use. If the `__metrics_path__` key isn't provided, the path to use is inherited by the settings of the `prometheus.scrape` component. The default is `"/metrics"`. - * Add additional keys as desired to inject extra labels to collected metrics. + - Add additional keys as desired to inject extra labels to collected metrics. Any label starting with two underscores (`__`) will be dropped prior to scraping. - _``_: The label for your existing `prometheus.remote_write` component. @@ -462,4 +462,3 @@ prometheus.remote_write "default" { [Field Selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ [Labels and Selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement [Configure metrics delivery]: #configure-metrics-delivery - diff --git a/docs/sources/flow/tasks/configure-agent-clustering.md b/docs/sources/flow/tasks/configure-agent-clustering.md index 89ab83d8329e..ff59932071f1 100644 --- a/docs/sources/flow/tasks/configure-agent-clustering.md +++ b/docs/sources/flow/tasks/configure-agent-clustering.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/configure-agent-clustering/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure-agent-clustering/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure-agent-clustering/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/configure-agent-clustering/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/getting-started/configure-agent-clustering/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/configure-agent-clustering/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/configure-agent-clustering/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/configure-agent-clustering/ -- ../getting-started/configure-agent-clustering/ # /docs/agent/latest/flow/getting-started/configure-agent-clustering/ + - /docs/grafana-cloud/agent/flow/tasks/configure-agent-clustering/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure-agent-clustering/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure-agent-clustering/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/configure-agent-clustering/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/getting-started/configure-agent-clustering/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/configure-agent-clustering/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/configure-agent-clustering/ + - /docs/grafana-cloud/send-data/agent/flow/getting-started/configure-agent-clustering/ + - ../getting-started/configure-agent-clustering/ # /docs/agent/latest/flow/getting-started/configure-agent-clustering/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/configure-agent-clustering/ description: Learn how to configure Grafana Agent clustering in an existing installation menuTitle: Configure Grafana Agent clustering @@ -74,4 +74,3 @@ To configure clustering: 1. Click **Clustering** in the navigation bar. 1. Ensure that all expected nodes appear in the resulting table. 
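Outside of Helm, clustering can also be enabled per node with command-line flags on the `run` command. A minimal sketch, assuming two nodes that can reach each other on the default HTTP port; the host names `agent-1.example.internal` and `agent-2.example.internal` are hypothetical:

```shell
grafana-agent-flow run /etc/grafana-agent-flow.river \
  --cluster.enabled=true \
  --cluster.join-addresses=agent-1.example.internal:12345,agent-2.example.internal:12345
```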
- diff --git a/docs/sources/flow/tasks/configure/_index.md b/docs/sources/flow/tasks/configure/_index.md index f0a353f75097..4a00aa481fc0 100644 --- a/docs/sources/flow/tasks/configure/_index.md +++ b/docs/sources/flow/tasks/configure/_index.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/configure/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/configure/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/configure/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/ -- /docs/grafana-cloud/send-data/agent/flow/setup/configure/ -- ../setup/configure/ # /docs/agent/latest/flow/setup/configure/ + - /docs/grafana-cloud/agent/flow/tasks/configure/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/configure/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/configure/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/ + - /docs/grafana-cloud/send-data/agent/flow/setup/configure/ + - ../setup/configure/ # /docs/agent/latest/flow/setup/configure/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/configure/ description: Configure Grafana Agent Flow after it is installed menuTitle: Configure @@ -25,14 +25,13 @@ refs: # Configure {{% param "PRODUCT_NAME" %}} -You can configure {{< param "PRODUCT_NAME" >}} after it is [installed](ref:install). +You can configure {{< param "PRODUCT_NAME" >}} after it is [installed](ref:install). The default River configuration file for {{< param "PRODUCT_NAME" >}} is located at: -* Linux: `/etc/grafana-agent-flow.river` -* macOS: `$(brew --prefix)/etc/grafana-agent-flow/config.river` -* Windows: `C:\Program Files\Grafana Agent Flow\config.river` +- Linux: `/etc/grafana-agent-flow.river` +- macOS: `$(brew --prefix)/etc/grafana-agent-flow/config.river` +- Windows: `C:\Program Files\Grafana Agent Flow\config.river` This section includes information that helps you configure {{< param "PRODUCT_NAME" >}}. 
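For a first sanity check after installation, a minimal River configuration is enough. The sketch below only configures logging and exposes the {{< param "PRODUCT_NAME" >}} process's own metrics; the `agent` label is an arbitrary example:

```river
logging {
  level  = "info"
  format = "logfmt"
}

// Expose this process's own metrics so a scrape pipeline can pick them up later.
prometheus.exporter.self "agent" { }
```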
{{< section >}} - diff --git a/docs/sources/flow/tasks/configure/configure-kubernetes.md b/docs/sources/flow/tasks/configure/configure-kubernetes.md index 8f05f8cdf8b6..11ef48a1d022 100644 --- a/docs/sources/flow/tasks/configure/configure-kubernetes.md +++ b/docs/sources/flow/tasks/configure/configure-kubernetes.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/configure/configure-kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-kubernetes/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-kubernetes/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/configure/configure-kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-kubernetes/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-kubernetes/ -- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-kubernetes/ -- ../../setup/configure/configure-kubernetes/ # /docs/agent/latest/flow/setup/configure/configure-kubernetes/ + - /docs/grafana-cloud/agent/flow/tasks/configure/configure-kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-kubernetes/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-kubernetes/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/configure/configure-kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-kubernetes/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-kubernetes/ + - /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-kubernetes/ + - ../../setup/configure/configure-kubernetes/ # /docs/agent/latest/flow/setup/configure/configure-kubernetes/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/configure/configure-kubernetes/ description: Learn how to configure Grafana Agent Flow on Kubernetes menuTitle: Kubernetes @@ -28,6 +28,7 @@ when running on Kubernetes with the Helm chart. It assumes that: If instead you're looking for help in configuring {{< param "PRODUCT_NAME" >}} to perform a specific task, consult the following guides instead: + - [Collect and forward Prometheus metrics][prometheus], - [Collect OpenTelemetry data][otel], - or the [tasks section][tasks] for all the remaining configuration guides. @@ -57,7 +58,9 @@ To modify {{< param "PRODUCT_NAME" >}}'s Helm chart configuration, perform the f ```shell helm upgrade --namespace grafana/grafana-agent -f ``` + Replace the following: + - _``_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation. - _``_: The name you used for your {{< param "PRODUCT_NAME" >}} installation. - _``_: The path to your copy of `values.yaml` to use. @@ -93,13 +96,13 @@ This section describes how to modify the {{< param "PRODUCT_NAME" >}} configurat There are two methods to perform this task. ### Method 1: Modify the configuration in the values.yaml file + Use this method if you prefer to embed your {{< param "PRODUCT_NAME" >}} configuration in the Helm chart's `values.yaml` file. 1. 
Modify the configuration file contents directly in the `values.yaml` file: ```yaml agent: - mode: "flow" configMap: content: |- // Write your Agent config here: @@ -114,12 +117,15 @@ Use this method if you prefer to embed your {{< param "PRODUCT_NAME" >}} configu ```shell helm upgrade --namespace grafana/grafana-agent -f ``` + Replace the following: + - _``_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation. - _``_: The name you used for your {{< param "PRODUCT_NAME" >}} installation. - _``_: The path to your copy of `values.yaml` to use. ### Method 2: Create a separate ConfigMap from a file + Use this method if you prefer to write your {{< param "PRODUCT_NAME" >}} configuration in a separate file. 1. Write your configuration to a file, for example, `config.river`. @@ -137,18 +143,19 @@ Use this method if you prefer to write your {{< param "PRODUCT_NAME" >}} configu ```shell kubectl create configmap --namespace agent-config "--from-file=config.river=./config.river" ``` + Replace the following: + - _``_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation. 1. Modify Helm Chart's configuration in your `values.yaml` to use the existing ConfigMap: ```yaml agent: - mode: "flow" - configMap: - create: false - name: agent-config - key: config.river + configMap: + create: false + name: agent-config + key: config.river ``` 1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation: @@ -156,7 +163,9 @@ Use this method if you prefer to write your {{< param "PRODUCT_NAME" >}} configu ```shell helm upgrade --namespace grafana/grafana-agent -f ``` + Replace the following: + - _``_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation. - _``_: The name you used for your {{< param "PRODUCT_NAME" >}} installation. - _``_: The path to your copy of `values.yaml` to use. 
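As a worked version of that command, assuming a hypothetical `monitoring` namespace, a release named `grafana-agent`, and a `values.yaml` in the current directory:

```shell
helm upgrade --namespace monitoring grafana-agent grafana/grafana-agent -f values.yaml
```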
diff --git a/docs/sources/flow/tasks/configure/configure-linux.md b/docs/sources/flow/tasks/configure/configure-linux.md index 67b870e3a62d..452fa7d858de 100644 --- a/docs/sources/flow/tasks/configure/configure-linux.md +++ b/docs/sources/flow/tasks/configure/configure-linux.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/configure/configure-linux/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-linux/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-linux/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-linux/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/configure/configure-linux/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-linux/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-linux/ -- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-linux/ -- ../../setup/configure/configure-linux/ # /docs/agent/latest/flow/setup/configure/configure-linux/ + - /docs/grafana-cloud/agent/flow/tasks/configure/configure-linux/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-linux/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-linux/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-linux/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/configure/configure-linux/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-linux/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-linux/ + - /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-linux/ + - ../../setup/configure/configure-linux/ # /docs/agent/latest/flow/setup/configure/configure-linux/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/configure/configure-linux/ description: Learn how to configure Grafana Agent Flow on Linux menuTitle: Linux @@ -44,8 +44,8 @@ To change the configuration file used by the service, perform the following step 1. Edit the environment file for the service: - * Debian or Ubuntu: edit `/etc/default/grafana-agent-flow` - * RHEL/Fedora or SUSE/openSUSE: edit `/etc/sysconfig/grafana-agent-flow` + - Debian or Ubuntu: edit `/etc/default/grafana-agent-flow` + - RHEL/Fedora or SUSE/openSUSE: edit `/etc/sysconfig/grafana-agent-flow` 1. Change the contents of the `CONFIG_FILE` environment variable to point to the new configuration file to use. @@ -61,15 +61,15 @@ To change the configuration file used by the service, perform the following step By default, the {{< param "PRODUCT_NAME" >}} service launches with the [run](ref:run) command, passing the following flags: -* `--storage.path=/var/lib/grafana-agent-flow` +- `--storage.path=/var/lib/grafana-agent-flow` To pass additional command-line flags to the {{< param "PRODUCT_NAME" >}} binary, perform the following steps: 1. Edit the environment file for the service: - * Debian-based systems: edit `/etc/default/grafana-agent-flow` - * RedHat or SUSE-based systems: edit `/etc/sysconfig/grafana-agent-flow` + - Debian-based systems: edit `/etc/default/grafana-agent-flow` + - RedHat or SUSE-based systems: edit `/etc/sysconfig/grafana-agent-flow` 1. Change the contents of the `CUSTOM_ARGS` environment variable to specify command-line flags to pass. 
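Putting both settings together, the environment file might look like the following sketch; the listen-address flag is only an example of a custom argument:

```shell
# /etc/default/grafana-agent-flow on Debian or Ubuntu,
# /etc/sysconfig/grafana-agent-flow on RHEL, Fedora, SUSE, or openSUSE.
CONFIG_FILE="/etc/grafana-agent-flow.river"
CUSTOM_ARGS="--server.http.listen-addr=0.0.0.0:12345"
```

After saving the file, restart the service, for example with `sudo systemctl restart grafana-agent-flow`, so the new flags take effect.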
@@ -95,15 +95,14 @@ To expose the UI to other machines, complete the following steps: to edit command line flags passed to {{< param "PRODUCT_NAME" >}}, including the following customizations: - 1. Add the following command line argument to `CUSTOM_ARGS`: + 1. Add the following command line argument to `CUSTOM_ARGS`: - ```shell - --server.http.listen-addr=LISTEN_ADDR:12345 - ``` + ```shell + --server.http.listen-addr=LISTEN_ADDR:12345 + ``` - Replace `LISTEN_ADDR` with an address which other machines on the - network have access to, like the network IP address of the machine - {{< param "PRODUCT_NAME" >}} is running on. - - To listen on all interfaces, replace `LISTEN_ADDR` with `0.0.0.0`. + Replace `LISTEN_ADDR` with an address which other machines on the + network have access to, like the network IP address of the machine + {{< param "PRODUCT_NAME" >}} is running on. + To listen on all interfaces, replace `LISTEN_ADDR` with `0.0.0.0`. diff --git a/docs/sources/flow/tasks/configure/configure-macos.md b/docs/sources/flow/tasks/configure/configure-macos.md index da71e2e49c45..745240d79260 100644 --- a/docs/sources/flow/tasks/configure/configure-macos.md +++ b/docs/sources/flow/tasks/configure/configure-macos.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/configure/configure-macos/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-macos/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-macos/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-macos/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/configure/configure-macos/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-macos/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-macos/ -- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos/ -- ../../setup/configure/configure-macos/ # /docs/agent/latest/flow/setup/configure/configure-macos/ + - /docs/grafana-cloud/agent/flow/tasks/configure/configure-macos/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-macos/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-macos/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-macos/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/configure/configure-macos/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-macos/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-macos/ + - /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-macos/ + - ../../setup/configure/configure-macos/ # /docs/agent/latest/flow/setup/configure/configure-macos/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/configure/configure-macos/ description: Learn how to configure Grafana Agent Flow on macOS menuTitle: macOS @@ -56,9 +56,9 @@ steps: 1. Modify the `service` section as desired to change things such as: - * The River configuration file used by {{< param "PRODUCT_NAME" >}}. - * Flags passed to the {{< param "PRODUCT_NAME" >}} binary. - * Location of log files. + - The River configuration file used by {{< param "PRODUCT_NAME" >}}. + - Flags passed to the {{< param "PRODUCT_NAME" >}} binary. + - Location of log files. When you are done, save the file. 
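After saving, the running service still uses the old definition. A sketch of applying the change, assuming the formula name from Grafana's Homebrew tap:

```shell
# Restart the Brew-managed service so launchd picks up the edited
# service definition.
brew services restart grafana-agent-flow
```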
@@ -86,10 +86,9 @@ To expose the UI to other machines, complete the following steps: to edit command line flags passed to {{< param "PRODUCT_NAME" >}}, including the following customizations: - 1. Modify the line inside the `service` block containing - `--server.http.listen-addr=127.0.0.1:12345`, replacing `127.0.0.1` with - the address which other machines on the network have access to, like the - network IP address of the machine {{< param "PRODUCT_NAME" >}} is running on. - - To listen on all interfaces, replace `127.0.0.1` with `0.0.0.0`. + 1. Modify the line inside the `service` block containing + `--server.http.listen-addr=127.0.0.1:12345`, replacing `127.0.0.1` with + the address which other machines on the network have access to, like the + network IP address of the machine {{< param "PRODUCT_NAME" >}} is running on. + To listen on all interfaces, replace `127.0.0.1` with `0.0.0.0`. diff --git a/docs/sources/flow/tasks/configure/configure-windows.md b/docs/sources/flow/tasks/configure/configure-windows.md index ab2136bed3c3..bc276a834fed 100644 --- a/docs/sources/flow/tasks/configure/configure-windows.md +++ b/docs/sources/flow/tasks/configure/configure-windows.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/configure/configure-windows/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-windows/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-windows/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-windows/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/setup/configure/configure-windows/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-windows/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-windows/ -- /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-windows/ -- ../../setup/configure/configure-windows/ # /docs/agent/latest/flow/setup/configure/configure-windows/ + - /docs/grafana-cloud/agent/flow/tasks/configure/configure-windows/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/configure/configure-windows/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/configure/configure-windows/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/configure/configure-windows/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/setup/configure/configure-windows/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/setup/configure/configure-windows/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/setup/configure/configure-windows/ + - /docs/grafana-cloud/send-data/agent/flow/setup/configure/configure-windows/ + - ../../setup/configure/configure-windows/ # /docs/agent/latest/flow/setup/configure/configure-windows/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/configure/configure-windows/ description: Learn how to configure Grafana Agent Flow on Windows menuTitle: Windows @@ -46,9 +46,9 @@ To configure {{< param "PRODUCT_NAME" >}} on Windows, perform the following step By default, the {{< param "PRODUCT_NAME" >}} service will launch and pass the following arguments to the {{< param "PRODUCT_NAME" >}} binary: -* `run` -* `C:\Program Files\Grafana Agent Flow\config.river` -* `--storage.path=C:\ProgramData\Grafana Agent Flow\data` +- `run` +- `C:\Program Files\Grafana Agent Flow\config.river` +- 
`--storage.path=C:\ProgramData\Grafana Agent Flow\data` To change the set of command-line arguments passed to the {{< param "PRODUCT_ROOT_NAME" >}} binary, perform the following steps: @@ -61,7 +61,7 @@ binary, perform the following steps: 1. Navigate to the key at the path `HKEY_LOCAL_MACHINE\SOFTWARE\Grafana\Grafana Agent Flow`. -1. Double-click on the value called **Arguments***. +1. Double-click on the value called **Arguments\***. 1. In the dialog box, enter the new set of arguments to pass to the {{< param "PRODUCT_ROOT_NAME" >}} binary. @@ -89,16 +89,14 @@ To expose the UI to other machines, complete the following steps: to edit command line flags passed to {{< param "PRODUCT_NAME" >}}, including the following customizations: - 1. Add the following command line argument: + 1. Add the following command line argument: - ```shell - --server.http.listen-addr=LISTEN_ADDR:12345 - ``` - - Replace `LISTEN_ADDR` with an address which other machines on the - network have access to, like the network IP address of the machine - {{< param "PRODUCT_NAME" >}} is running on. - - To listen on all interfaces, replace `LISTEN_ADDR` with `0.0.0.0`. + ```shell + --server.http.listen-addr=LISTEN_ADDR:12345 + ``` + Replace `LISTEN_ADDR` with an address which other machines on the + network have access to, like the network IP address of the machine + {{< param "PRODUCT_NAME" >}} is running on. + To listen on all interfaces, replace `LISTEN_ADDR` with `0.0.0.0`. diff --git a/docs/sources/flow/tasks/debug.md b/docs/sources/flow/tasks/debug.md index 69d4090f057b..00e5bd9d12ed 100644 --- a/docs/sources/flow/tasks/debug.md +++ b/docs/sources/flow/tasks/debug.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/debug/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/debug/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/debug/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/debug/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/monitoring/debugging/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/debugging/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/debugging/ -- /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging/ -- ../monitoring/debugging/ # /docs/agent/latest/flow/monitoring/debugging/ + - /docs/grafana-cloud/agent/flow/tasks/debug/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/debug/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/debug/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/debug/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/monitoring/debugging/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/debugging/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/debugging/ + - /docs/grafana-cloud/send-data/agent/flow/monitoring/debugging/ + - ../monitoring/debugging/ # /docs/agent/latest/flow/monitoring/debugging/ canonical: https://grafana.com/docs/agent/latest/flow/monitoring/debugging/ description: Learn about debugging issues with Grafana Agent Flow title: Debug issues with Grafana Agent Flow @@ -86,10 +86,10 @@ Clicking a component in the graph navigates to the [Component detail page](#comp The component detail page shows the following information for each component: -* The health of the component with a message explaining the health. 
-* The current evaluated arguments for the component. -* The current exports for the component. -* The current debug info for the component (if the component has debug info). +- The health of the component with a message explaining the health. +- The current evaluated arguments for the component. +- The current exports for the component. +- The current debug info for the component (if the component has debug info). > Values marked as a [secret](ref:secret) are obfuscated and display as the text `(secret)`. @@ -99,17 +99,17 @@ The component detail page shows the following information for each component: The Clustering page shows the following information for each cluster node: -* The node's name. -* The node's advertised address. -* The node's current state (Viewer/Participant/Terminating). -* The local node that serves the UI. +- The node's name. +- The node's advertised address. +- The node's current state (Viewer/Participant/Terminating). +- The local node that serves the UI. ## Debugging using the UI To debug using the UI: -* Ensure that no component is reported as unhealthy. -* Ensure that the arguments and exports for misbehaving components appear correct. +- Ensure that no component is reported as unhealthy. +- Ensure that the arguments and exports for misbehaving components appear correct. ## Examining logs diff --git a/docs/sources/flow/tasks/distribute-prometheus-scrape-load.md b/docs/sources/flow/tasks/distribute-prometheus-scrape-load.md index 95b9eaa262cd..72b020052801 100644 --- a/docs/sources/flow/tasks/distribute-prometheus-scrape-load.md +++ b/docs/sources/flow/tasks/distribute-prometheus-scrape-load.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/distribute-prometheus-scrape-load/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/distribute-prometheus-scrape-load/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/distribute-prometheus-scrape-load/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/distribute-prometheus-scrape-load/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/getting-started/distribute-prometheus-scrape-load/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/distribute-prometheus-scrape-load/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/distribute-prometheus-scrape-load/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/distribute-prometheus-scrape-load/ -- ../getting-started/distribute-prometheus-scrape-load/ # /docs/agent/latest/flow/getting-started/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/agent/flow/tasks/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/distribute-prometheus-scrape-load/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/getting-started/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/distribute-prometheus-scrape-load/ + - /docs/grafana-cloud/send-data/agent/flow/getting-started/distribute-prometheus-scrape-load/ + - ../getting-started/distribute-prometheus-scrape-load/ # 
/docs/agent/latest/flow/getting-started/distribute-prometheus-scrape-load/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/distribute-prometheus-scrape-load/ description: Learn how to distribute your Prometheus metrics scrape load menuTitle: Distribute Prometheus metrics scrape load @@ -74,4 +74,3 @@ To distribute Prometheus metrics scrape load with clustering: 1. Using the {{< param "PRODUCT_ROOT_NAME" >}} [UI](ref:ui) on each {{< param "PRODUCT_ROOT_NAME" >}}, navigate to the details page for one of the `prometheus.scrape` components you modified. 1. Compare the Debug Info sections between two different {{< param "PRODUCT_ROOT_NAME" >}} to ensure that they're not scraping the same sets of targets. - diff --git a/docs/sources/flow/tasks/estimate-resource-usage.md b/docs/sources/flow/tasks/estimate-resource-usage.md index f3ed1b7aed05..e18892af05a9 100644 --- a/docs/sources/flow/tasks/estimate-resource-usage.md +++ b/docs/sources/flow/tasks/estimate-resource-usage.md @@ -40,9 +40,9 @@ series that need to be scraped and the scrape interval. As a rule of thumb, **per each 1 million active series** and with the default scrape interval, you can expect to use approximately: -* 0.4 CPU cores -* 11 GiB of memory -* 1.5 MiB/s of total network bandwidth, send and receive +- 0.4 CPU cores +- 11 GiB of memory +- 1.5 MiB/s of total network bandwidth, send and receive These recommendations are based on deployments that use [clustering][], but they will broadly apply to other deployment modes. For more information on how to @@ -58,8 +58,8 @@ Loki logs resource usage depends mainly on the volume of logs ingested. As a rule of thumb, **per each 1 MiB/second of logs ingested**, you can expect to use approximately: -* 1 CPU core -* 120 MiB of memory +- 1 CPU core +- 120 MiB of memory These recommendations are based on Kubernetes DaemonSet deployments on clusters with relatively small number of nodes and high logs volume on each. The resource @@ -76,8 +76,8 @@ Pyroscope profiles resource usage depends mainly on the volume of profiles. As a rule of thumb, **per each 100 profiles/second**, you can expect to use approximately: -* 1 CPU core -* 10 GiB of memory +- 1 CPU core +- 10 GiB of memory Factors such as size of each profile and frequency of fetching them also play a role in the overall resource usage. diff --git a/docs/sources/flow/tasks/metamonitoring.md b/docs/sources/flow/tasks/metamonitoring.md index 13c4ca176eaa..305b3a9d4cfd 100644 --- a/docs/sources/flow/tasks/metamonitoring.md +++ b/docs/sources/flow/tasks/metamonitoring.md @@ -29,21 +29,21 @@ This topic describes how to collect and forward {{< param "PRODUCT_NAME" >}}'s m ## Components and configuration blocks used in this topic -* [prometheus.exporter.self](ref:prometheus.exporter.self) -* [prometheus.scrape](ref:prometheus.scrape) -* [logging](ref:logging) -* [tracing](ref:tracing) +- [prometheus.exporter.self](ref:prometheus.exporter.self) +- [prometheus.scrape](ref:prometheus.scrape) +- [logging](ref:logging) +- [tracing](ref:tracing) ## Before you begin -* Identify where to send {{< param "PRODUCT_NAME" >}}'s telemetry data. -* Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. +- Identify where to send {{< param "PRODUCT_NAME" >}}'s telemetry data. +- Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. ## Meta-monitoring metrics {{< param "PRODUCT_NAME" >}} exposes its internal metrics using the Prometheus exposition format. 
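You can inspect that output directly before building a pipeline; assuming the default HTTP listen address, something like:

```shell
curl http://127.0.0.1:12345/metrics
```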
-In this task, you will use the [prometheus.exporter.self](ref:prometheus.exporter.self) and [prometheus.scrape](ref:prometheus.scrape) components to scrape {{< param "PRODUCT_NAME" >}}'s internal metrics and forward it to compatible {{< param "PRODUCT_NAME" >}} components.
+In this task, you will use the [prometheus.exporter.self](ref:prometheus.exporter.self) and [prometheus.scrape](ref:prometheus.scrape) components to scrape {{< param "PRODUCT_NAME" >}}'s internal metrics and forward them to compatible {{< param "PRODUCT_NAME" >}} components.

 1. Add the following `prometheus.exporter.self` component to your configuration. The component accepts no arguments.

@@ -53,6 +53,7 @@ In this task, you will use the [prometheus.exporter.self](ref:prometheus.exporte
    ```

 1. Add the following `prometheus.scrape` component to your configuration file.
+
    ```river
    prometheus.scrape "" {
      targets    = prometheus.exporter..default.targets
@@ -61,6 +62,7 @@ In this task, you will use the [prometheus.exporter.self](ref:prometheus.exporte
    ```

    Replace the following:
+
    - _``_: The label for the component such as `default` or `metamonitoring`. The label must be unique across all `prometheus.exporter.self` components in the same configuration file.
    - _``_: The label for the scrape component such as `default`. The label must be unique across all `prometheus.scrape` components in the same configuration file.
    - _``_: A comma-delimited list of component receivers to forward metrics to.
@@ -104,6 +106,7 @@ The block is specified without a label and can only be provided once per configu
    ```

    Replace the following:
+
    - _``_: The log level to use for {{< param "PRODUCT_NAME" >}}'s logs. If the attribute isn't set, it defaults to `info`.
    - _``_: The log format to use for {{< param "PRODUCT_NAME" >}}'s logs. If the attribute isn't set, it defaults to `logfmt`.
    - _``_: A comma-delimited list of component receivers to forward logs to.
@@ -115,7 +118,7 @@ The following example demonstrates configuring the logging block and sending to

 ```river
 logging {
-  level    = "warn"
+  level  = "warn"
   format   = "json"
   write_to = [loki.write.default.receiver]
 }
@@ -144,6 +147,7 @@ In this task you will use the [tracing](ref:tracing) block to forward {{< param
    ```

    Replace the following:
+
    - _``_: The fraction of traces to keep. If the attribute isn't set, it defaults to `0.1`.
    - _``_: A comma-delimited list of component receivers to forward traces to.
      For example, to send to an existing OpenTelemetry exporter component use `otelcol.exporter.otlp.EXPORT_LABEL.input`.
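+   For reference, a filled-in tracing block could look like the following minimal sketch. The `0.1` sampling fraction and the `default` label on the OTLP exporter are illustrative assumptions; adjust both to match your own configuration.
+
+   ```river
+   tracing {
+     // Keep 10% of traces and forward them to an assumed otelcol.exporter.otlp
+     // component labeled "default".
+     sampling_fraction = 0.1
+     write_to          = [otelcol.exporter.otlp.default.input]
+   }
+   ```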
@@ -162,4 +166,3 @@ otelcol.exporter.otlp "default" { } } ``` - diff --git a/docs/sources/flow/tasks/migrate/_index.md b/docs/sources/flow/tasks/migrate/_index.md index b32237d14777..b074b81e6767 100644 --- a/docs/sources/flow/tasks/migrate/_index.md +++ b/docs/sources/flow/tasks/migrate/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/migrate/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/migrate/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/migrate/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/migrate/ + - /docs/grafana-cloud/agent/flow/tasks/migrate/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/migrate/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/migrate/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/migrate/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/migrate/ description: How to migrate to Grafana Agent Flow menuTitle: Migrate diff --git a/docs/sources/flow/tasks/migrate/from-operator.md b/docs/sources/flow/tasks/migrate/from-operator.md index b8f7c1053995..5ed61ebe0d13 100644 --- a/docs/sources/flow/tasks/migrate/from-operator.md +++ b/docs/sources/flow/tasks/migrate/from-operator.md @@ -1,11 +1,11 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/migrate/from-operator/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/migrate/from-operator/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/migrating-from-operator/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/migrating-from-operator/ -- ../../getting-started/migrating-from-operator/ # /docs/agent/latest/flow/getting-started/migrating-from-operator/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/migrate/from-operator/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/migrate/from-operator/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/migrating-from-operator/ + - /docs/grafana-cloud/send-data/agent/flow/getting-started/migrating-from-operator/ + - ../../getting-started/migrating-from-operator/ # /docs/agent/latest/flow/getting-started/migrating-from-operator/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/migrate/from-operator/ description: Migrate from Grafana Agent Operator to Grafana Agent Flow menuTitle: Migrate from Operator @@ -101,43 +101,43 @@ This guide provides some steps to get started with {{< param "PRODUCT_NAME" >}} 1. Create a `values.yaml` file, which contains options for deploying your {{< param "PRODUCT_ROOT_NAME" >}}. You can start with the [default values][] and customize as you see fit, or start with this snippet, which should be a good starting point for what the Operator does. - ```yaml - agent: - mode: 'flow' - configMap: - create: true - clustering: - enabled: true - controller: - type: 'statefulset' - replicas: 2 - crds: - create: false - ``` - - This configuration deploys {{< param "PRODUCT_NAME" >}} as a `StatefulSet` using the built-in [clustering](ref:clustering) functionality to allow distributing scrapes across all {{< param "PRODUCT_ROOT_NAME" >}} Pods. - - This is one of many deployment possible modes. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics. - See the {{< param "PRODUCT_NAME" >}} [deployment guide](ref:deployment-guide) for more details about different topologies. 
+   ```yaml
+   agent:
+     mode: "flow"
+     configMap:
+       create: true
+     clustering:
+       enabled: true
+     controller:
+       type: "statefulset"
+       replicas: 2
+     crds:
+       create: false
+   ```
+
+   This configuration deploys {{< param "PRODUCT_NAME" >}} as a `StatefulSet` using the built-in [clustering](ref:clustering) functionality to allow distributing scrapes across all {{< param "PRODUCT_ROOT_NAME" >}} Pods.
+
+   This is one of many possible deployment modes. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics.
+   See the {{< param "PRODUCT_NAME" >}} [deployment guide](ref:deployment-guide) for more details about different topologies.

 1. Create a {{< param "PRODUCT_ROOT_NAME" >}} configuration file, `agent.river`.

-   In the next step, you add to this configuration as you convert `MetricsInstances`. You can add any additional configuration to this file as you need.
+   In the next step, you add to this configuration as you convert `MetricsInstances`. You can add any additional configuration to this file as you need.

 1. Install the Grafana Helm repository:

-   ```
-   helm repo add grafana https://grafana.github.io/helm-charts
-   helm repo update
-   ```
+   ```
+   helm repo add grafana https://grafana.github.io/helm-charts
+   helm repo update
+   ```

 1. Create a Helm release. You can name the release anything you like. The following command installs a release called `grafana-agent-metrics` in the `monitoring` namespace.

-   ```shell
-   helm upgrade grafana-agent-metrics grafana/grafana-agent -i -n monitoring -f values.yaml --set-file agent.configMap.content=agent.river
-   ```
+   ```shell
+   helm upgrade grafana-agent-metrics grafana/grafana-agent -i -n monitoring -f values.yaml --set-file agent.configMap.content=agent.river
+   ```

-   This command uses the `--set-file` flag to pass the configuration file as a Helm value so that you can continue to edit it as a regular River file.
+   This command uses the `--set-file` flag to pass the configuration file as a Helm value so that you can continue to edit it as a regular River file.

 ## Convert `MetricsInstances` to {{% param "PRODUCT_NAME" %}} components

@@ -215,13 +215,13 @@ These values are close to what the Operator currently deploys for logs:

 ```yaml
 agent:
-  mode: 'flow'
+  mode: "flow"
   configMap:
     create: true
   clustering:
     enabled: false
   controller:
-    type: 'daemonset'
+    type: "daemonset"
   mounts:
     # -- Mount /var/log from the host into the container for log collection.
     varlog: true
@@ -357,4 +357,3 @@ However, all static mode integrations have an equivalent component in the [`prom
 The [reference documentation][component documentation] should help convert those integrations to their {{< param "PRODUCT_NAME" >}} equivalent.

 [default values]: https://github.com/grafana/agent/blob/main/operations/helm/charts/grafana-agent/values.yaml
-
diff --git a/docs/sources/flow/tasks/migrate/from-otelcol.md b/docs/sources/flow/tasks/migrate/from-otelcol.md
index 6d5c55bb11f3..0e3b72284415 100644
--- a/docs/sources/flow/tasks/migrate/from-otelcol.md
+++ b/docs/sources/flow/tasks/migrate/from-otelcol.md
@@ -62,20 +62,20 @@ The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your

 This topic describes how to:

-* Convert an OpenTelemetry Collector configuration to a {{< param "PRODUCT_NAME" >}} configuration.
-* Run an OpenTelemetry Collector configuration natively using {{< param "PRODUCT_NAME" >}}.
+- Convert an OpenTelemetry Collector configuration to a {{< param "PRODUCT_NAME" >}} configuration.
+- Run an OpenTelemetry Collector configuration natively using {{< param "PRODUCT_NAME" >}}. ## Components used in this topic -* [otelcol.receiver.otlp](ref:otelcol.receiver.otlp) -* [otelcol.processor.memory_limiter](ref:otelcol.processor.memory_limiter) -* [otelcol.exporter.otlp](ref:otelcol.exporter.otlp) +- [otelcol.receiver.otlp](ref:otelcol.receiver.otlp) +- [otelcol.processor.memory_limiter](ref:otelcol.processor.memory_limiter) +- [otelcol.exporter.otlp](ref:otelcol.exporter.otlp) ## Before you begin -* You must have an existing OpenTelemetry Collector configuration. -* You must have a set of OpenTelemetry Collector applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. -* You must be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. +- You must have an existing OpenTelemetry Collector configuration. +- You must have a set of OpenTelemetry Collector applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. +- You must be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. ## Convert an OpenTelemetry Collector configuration @@ -111,10 +111,10 @@ This conversion will enable you to take full advantage of the many additional fe 1. If the `convert` command can't convert an OpenTelemetry Collector configuration, diagnostic information is sent to `stderr`.\ You can bypass any non-critical issues and output the {{< param "PRODUCT_NAME" >}} configuration using a best-effort conversion by including the `--bypass-errors` flag. - {{< admonition type="caution" >}} - If you bypass the errors, the behavior of the converted configuration may not match the original OpenTelemetry Collector configuration. - Make sure you fully test the converted configuration before using it in a production environment. - {{< /admonition >}} + {{< admonition type="caution" >}} + If you bypass the errors, the behavior of the converted configuration may not match the original OpenTelemetry Collector configuration. + Make sure you fully test the converted configuration before using it in a production environment. + {{< /admonition >}} {{< code >}} @@ -153,15 +153,15 @@ This conversion will enable you to take full advantage of the many additional fe - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - _``_: The output path for the report. - Using the [example][] OpenTelemetry Collector configuration below, the diagnostic report provides the following information: + Using the [example][] OpenTelemetry Collector configuration below, the diagnostic report provides the following information: - ```plaintext - (Info) Converted receiver/otlp into otelcol.receiver.otlp.default - (Info) Converted processor/memory_limiter into otelcol.processor.memory_limiter.default - (Info) Converted exporter/otlp into otelcol.exporter.otlp.default + ```plaintext + (Info) Converted receiver/otlp into otelcol.receiver.otlp.default + (Info) Converted processor/memory_limiter into otelcol.processor.memory_limiter.default + (Info) Converted exporter/otlp into otelcol.exporter.otlp.default - A configuration file was generated successfully. - ``` + A configuration file was generated successfully. 
+   ```

 ## Run an OpenTelemetry Collector configuration

@@ -211,7 +211,6 @@ processors:
     limit_percentage: 90
     check_interval: 1s

-
 service:
   pipelines:
     metrics:
@@ -288,16 +287,15 @@ After the configuration is converted, review the {{< param "PRODUCT_NAME" >}} co

 The following list is specific to the convert command and not {{< param "PRODUCT_NAME" >}}:

-* Components are supported which directly embed upstream OpenTelemetry Collector features. You can get a general idea of which exist in
+- Components that directly embed upstream OpenTelemetry Collector features are supported. You can get a general idea of which exist in
   {{< param "PRODUCT_NAME" >}} for conversion by reviewing the `otelcol.*` components in the [Component Reference](ref:component-reference).
   Any additional unsupported features are returned as errors during conversion.
-* Check if you are using any extra command line arguments with OpenTelemetry Collector that aren't present in your configuration file.
-* Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match OpenTelemetry Collector metamonitoring metrics but will use a different name.
+- Check if you are using any extra command line arguments with OpenTelemetry Collector that aren't present in your configuration file.
+- Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match OpenTelemetry Collector metamonitoring metrics but will use a different name.
   Make sure that you use the new metric names, for example, in your alerts and dashboards queries.
-* The logs produced by {{< param "PRODUCT_NAME" >}} differ from those produced by OpenTelemetry Collector.
-* {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui).
+- The logs produced by {{< param "PRODUCT_NAME" >}} differ from those produced by OpenTelemetry Collector.
+- {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui).

 [OpenTelemetry Collector]: https://opentelemetry.io/docs/collector/configuration/
 [debugging]: #debugging
 [example]: #example
-
diff --git a/docs/sources/flow/tasks/migrate/from-prometheus.md b/docs/sources/flow/tasks/migrate/from-prometheus.md
index 65fa724d6783..1f19b33f0e5d 100644
--- a/docs/sources/flow/tasks/migrate/from-prometheus.md
+++ b/docs/sources/flow/tasks/migrate/from-prometheus.md
@@ -69,19 +69,19 @@ The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your

 This topic describes how to:

-* Convert a Prometheus configuration to a {{< param "PRODUCT_NAME" >}} configuration.
-* Run a Prometheus configuration natively using {{< param "PRODUCT_NAME" >}}.
+- Convert a Prometheus configuration to a {{< param "PRODUCT_NAME" >}} configuration.
+- Run a Prometheus configuration natively using {{< param "PRODUCT_NAME" >}}.

 ## Components used in this topic

-* [prometheus.scrape](ref:prometheus.scrape)
-* [prometheus.remote_write](ref:prometheus.remote_write)
+- [prometheus.scrape](ref:prometheus.scrape)
+- [prometheus.remote_write](ref:prometheus.remote_write)

 ## Before you begin

-* You must have an existing Prometheus configuration.
-* You must have a set of Prometheus applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}.
-* You must be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}.
+- You must have an existing Prometheus configuration.
+- You must have a set of Prometheus applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}.
+- You must be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. ## Convert a Prometheus configuration @@ -117,10 +117,10 @@ This conversion will enable you to take full advantage of the many additional fe 1. If the `convert` command can't convert a Prometheus configuration, diagnostic information is sent to `stderr`.\ You can bypass any non-critical issues and output the {{< param "PRODUCT_NAME" >}} configuration using a best-effort conversion by including the `--bypass-errors` flag. - {{< admonition type="caution" >}} - If you bypass the errors, the behavior of the converted configuration may not match the original Prometheus configuration. - Make sure you fully test the converted configuration before using it in a production environment. - {{< /admonition >}} + {{< admonition type="caution" >}} + If you bypass the errors, the behavior of the converted configuration may not match the original Prometheus configuration. + Make sure you fully test the converted configuration before using it in a production environment. + {{< /admonition >}} {{< code >}} @@ -159,14 +159,14 @@ This conversion will enable you to take full advantage of the many additional fe - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - _``_: The output path for the report. - Using the [example][] Prometheus configuration below, the diagnostic report provides the following information: + Using the [example][] Prometheus configuration below, the diagnostic report provides the following information: - ```plaintext - (Info) Converted scrape_configs job_name "prometheus" into... - A prometheus.scrape.prometheus component - (Info) Converted 1 remote_write[s] "grafana-cloud" into... - A prometheus.remote_write.default component - ``` + ```plaintext + (Info) Converted scrape_configs job_name "prometheus" into... + A prometheus.scrape.prometheus component + (Info) Converted 1 remote_write[s] "grafana-cloud" into... + A prometheus.remote_write.default component + ``` ## Run a Prometheus configuration @@ -202,7 +202,7 @@ The following Prometheus configuration file provides the input for the conversio ```yaml global: - scrape_timeout: 45s + scrape_timeout: 45s scrape_configs: - job_name: "prometheus" @@ -279,15 +279,14 @@ After the configuration is converted, review the {{< param "PRODUCT_NAME" >}} co The following list is specific to the convert command and not {{< param "PRODUCT_NAME" >}}: -* The following configurations aren't available for conversion to {{< param "PRODUCT_NAME" >}}: `rule_files`, `alerting`, `remote_read`, `storage`, and `tracing`. +- The following configurations aren't available for conversion to {{< param "PRODUCT_NAME" >}}: `rule_files`, `alerting`, `remote_read`, `storage`, and `tracing`. Any additional unsupported features are returned as errors during conversion. -* Check if you are using any extra command line arguments with Prometheus that aren't present in your configuration file. For example, `--web.listen-address`. -* Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Prometheus metamonitoring metrics but will use a different name. +- Check if you are using any extra command line arguments with Prometheus that aren't present in your configuration file. For example, `--web.listen-address`. +- Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Prometheus metamonitoring metrics but will use a different name. 
Make sure that you use the new metric names, for example, in your alerts and dashboards queries. -* The logs produced by {{< param "PRODUCT_NAME" >}} differ from those produced by Prometheus. -* {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui). +- The logs produced by {{< param "PRODUCT_NAME" >}} differ from those produced by Prometheus. +- {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui). [Prometheus]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/ [debugging]: #debugging [example]: #example - diff --git a/docs/sources/flow/tasks/migrate/from-promtail.md b/docs/sources/flow/tasks/migrate/from-promtail.md index efbfe2762d5f..1829a1d20e52 100644 --- a/docs/sources/flow/tasks/migrate/from-promtail.md +++ b/docs/sources/flow/tasks/migrate/from-promtail.md @@ -74,19 +74,19 @@ The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your This topic describes how to: -* Convert a Promtail configuration to a {{< param "PRODUCT_NAME" >}} configuration. -* Run a Promtail configuration natively using {{< param "PRODUCT_NAME" >}}. +- Convert a Promtail configuration to a {{< param "PRODUCT_NAME" >}} configuration. +- Run a Promtail configuration natively using {{< param "PRODUCT_NAME" >}}. ## Components used in this topic -* [local.file_match](ref:local.file_match) -* [loki.source.file](ref:loki.source.file) -* [loki.write](ref:loki.write) +- [local.file_match](ref:local.file_match) +- [loki.source.file](ref:loki.source.file) +- [loki.write](ref:loki.write) ## Before you begin -* You must have an existing Promtail configuration. -* You must be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. +- You must have an existing Promtail configuration. +- You must be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. ## Convert a Promtail configuration @@ -110,10 +110,10 @@ This conversion will enable you to take full advantage of the many additional fe {{< /code >}} - Replace the following: - * _``_: The full path to the Promtail configuration. - * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + + - _``_: The full path to the Promtail configuration. + - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. [Run](ref:run) {{< param "PRODUCT_NAME" >}} using the new configuration from _``_: @@ -140,8 +140,9 @@ This conversion will enable you to take full advantage of the many additional fe {{< /code >}} Replace the following: - * _``_: The full path to the Promtail configuration. - * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + + - _``_: The full path to the Promtail configuration. + - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. You can also output a diagnostic report by including the `--report` flag. @@ -159,16 +160,16 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - * _``_: The full path to the Promtail configuration. - * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - * _``_: The output path for the report. + - _``_: The full path to the Promtail configuration. + - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + - _``_: The output path for the report. 
If you use the [example](#example) Promtail configuration below, the diagnostic report provides the following information:

-   ```plaintext
-   (Warning) If you have a tracing set up for Promtail, it cannot be migrated to {{< param "PRODUCT_NAME" >}} automatically. Refer to the documentation on how to configure tracing in {{< param "PRODUCT_NAME" >}}.
-   (Warning) The metrics from {{< param "PRODUCT_NAME" >}} are different from the metrics emitted by Promtail. If you rely on Promtail's metrics, you must change your configuration, for example, your alerts and dashboards.
-   ```
+   ```plaintext
+   (Warning) If you have a tracing set up for Promtail, it cannot be migrated to {{< param "PRODUCT_NAME" >}} automatically. Refer to the documentation on how to configure tracing in {{< param "PRODUCT_NAME" >}}.
+   (Warning) The metrics from {{< param "PRODUCT_NAME" >}} are different from the metrics emitted by Promtail. If you rely on Promtail's metrics, you must change your configuration, for example, your alerts and dashboards.
+   ```

 ## Run a Promtail configuration

@@ -185,7 +186,7 @@ Your configuration file must be a valid Promtail configuration file rather than

 1. You can follow the convert CLI command [debugging][] instructions to generate a diagnostic report.

-1. Refer to the {{< param "PRODUCT_NAME" >}} [Debugging](ref:debuggingui) for more information about running {{< param "PRODUCT_NAME" >}}.
+1. Refer to the {{< param "PRODUCT_NAME" >}} [Debugging](ref:debuggingui) documentation for more information about running {{< param "PRODUCT_NAME" >}}.

 1. If your Promtail configuration can't be converted and loaded directly into {{< param "PRODUCT_ROOT_NAME" >}}, diagnostic information is sent to `stderr`. You can bypass any non-critical issues and start {{< param "PRODUCT_ROOT_NAME" >}} by including the `--config.bypass-conversion-errors` flag in addition to `--config.format=promtail`.

@@ -229,8 +230,8 @@ grafana-agent-flow convert --source-format=promtail --output=`_: The full path to the Promtail configuration.
-* _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.
+- _``_: The full path to the Promtail configuration.
+- _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.

 The new {{< param "PRODUCT_NAME" >}} configuration file looks like this:

@@ -263,17 +264,16 @@ After the configuration is converted, review the {{< param "PRODUCT_NAME" >}} co

 The following list is specific to the convert command and not {{< param "PRODUCT_NAME" >}}:

-* Check if you are using any extra command line arguments with Promtail that aren't present in your configuration file. For example, `-max-line-size`.
-* Check if you are setting any environment variables, whether [expanded in the configuration file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`.
-* In {{< param "PRODUCT_NAME" >}}, the positions file is saved at a different location.
+- Check if you are using any extra command line arguments with Promtail that aren't present in your configuration file. For example, `-max-line-size`.
+- Check if you are setting any environment variables, whether [expanded in the configuration file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`.
+- In {{< param "PRODUCT_NAME" >}}, the positions file is saved at a different location.
   Refer to the [loki.source.file](ref:loki.source.file) documentation for more details.
   Check if you have any existing setup, for example, a Kubernetes Persistent Volume, that you must update to use the new positions file path.
-* Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Promtail metamonitoring metrics but will use a different name. +- Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Promtail metamonitoring metrics but will use a different name. Make sure that you use the new metric names, for example, in your alerts and dashboards queries. -* The logs produced by {{< param "PRODUCT_NAME" >}} will differ from those produced by Promtail. -* {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui), which differs from Promtail's Web UI. +- The logs produced by {{< param "PRODUCT_NAME" >}} will differ from those produced by Promtail. +- {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui), which differs from Promtail's Web UI. [Promtail]: https://www.grafana.com/docs/loki//clients/promtail/ [debugging]: #debugging [expanded in the configuration file]: https://www.grafana.com/docs/loki//clients/promtail/configuration/#use-environment-variables-in-the-configuration - diff --git a/docs/sources/flow/tasks/migrate/from-static.md b/docs/sources/flow/tasks/migrate/from-static.md index 44f062c80e65..7336c4a66a99 100644 --- a/docs/sources/flow/tasks/migrate/from-static.md +++ b/docs/sources/flow/tasks/migrate/from-static.md @@ -132,22 +132,22 @@ The built-in {{< param "PRODUCT_ROOT_NAME" >}} convert command can migrate your This topic describes how to: -* Convert a Grafana Agent Static configuration to a {{< param "PRODUCT_NAME" >}} configuration. -* Run a Grafana Agent Static configuration natively using {{< param "PRODUCT_NAME" >}}. +- Convert a Grafana Agent Static configuration to a {{< param "PRODUCT_NAME" >}} configuration. +- Run a Grafana Agent Static configuration natively using {{< param "PRODUCT_NAME" >}}. ## Components used in this topic -* [prometheus.scrape](ref:prometheus.scrape) -* [prometheus.remote_write](ref:prometheus.remote_write) -* [local.file_match](ref:local.file_match) -* [loki.process](ref:loki.process) -* [loki.source.file](ref:loki.source.file) -* [loki.write](ref:loki.write) +- [prometheus.scrape](ref:prometheus.scrape) +- [prometheus.remote_write](ref:prometheus.remote_write) +- [local.file_match](ref:local.file_match) +- [loki.process](ref:loki.process) +- [loki.source.file](ref:loki.source.file) +- [loki.write](ref:loki.write) ## Before you begin -* You must have an existing Grafana Agent Static configuration. -* You must be familiar with the [Components](ref:components) concept in {{< param "PRODUCT_NAME" >}}. +- You must have an existing Grafana Agent Static configuration. +- You must be familiar with the [Components](ref:components) concept in {{< param "PRODUCT_NAME" >}}. ## Convert a Grafana Agent Static configuration @@ -173,8 +173,8 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - * _``_: The full path to the [Static](ref:static) configuration. - * _`_`: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + - _``_: The full path to the [Static](ref:static) configuration. + - _`_`: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. [Run](ref:run) {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration from _``_: @@ -202,8 +202,8 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - * _``_: The full path to the [Static](ref:static) configuration. 
- * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + - _``_: The full path to the [Static](ref:static) configuration. + - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. You can use the `--report` flag to output a diagnostic report. @@ -215,21 +215,21 @@ This conversion will enable you to take full advantage of the many additional fe ```flow-binary grafana-agent-flow convert --source-format=static --report= --output= - ``` + ``` {{< /code >}} Replace the following: - * _``_: The full path to the [Static](ref:static) configuration. - * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - * _``_: The output path for the report. + - _``_: The full path to the [Static](ref:static) configuration. + - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + - _``_: The output path for the report. Using the [example][] Grafana Agent Static configuration below, the diagnostic report provides the following information. - ```plaintext - (Warning) Please review your agent command line flags and ensure they are set in your {{< param "PRODUCT_NAME" >}} configuration file where necessary. - ``` + ```plaintext + (Warning) Please review your agent command line flags and ensure they are set in your {{< param "PRODUCT_NAME" >}} configuration file where necessary. + ``` ## Run a Static mode configuration @@ -280,9 +280,9 @@ metrics: scrape_configs: - job_name: local-agent static_configs: - - targets: ['127.0.0.1:12345'] + - targets: ["127.0.0.1:12345"] labels: - cluster: 'localhost' + cluster: "localhost" logs: global: @@ -296,7 +296,7 @@ logs: - job_name: varlogs static_configs: - targets: - - localhost + - localhost labels: job: varlogs host: mylocalhost @@ -305,13 +305,13 @@ logs: - match: selector: '{filename="/var/log/*.log"}' stages: - - drop: - expression: '^[^0-9]{4}' - - regex: - expression: '^(?P\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?P[[:alpha:]]+)\] (?:\d+)\#(?:\d+): \*(?:\d+) (?P.+)$' - - pack: - labels: - - level + - drop: + expression: "^[^0-9]{4}" + - regex: + expression: '^(?P\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?P[[:alpha:]]+)\] (?:\d+)\#(?:\d+): \*(?:\d+) (?P.+)$' + - pack: + labels: + - level clients: - url: https://USER_ID:API_KEY@logs-prod3.grafana.net/loki/api/v1/push ``` @@ -332,8 +332,8 @@ grafana-agent-flow convert --source-format=static --output= Replace the following: -* _``_: The full path to the [Static](ref:static) configuration. -* _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. +- _``_: The full path to the [Static](ref:static) configuration. +- _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. The new {{< param "PRODUCT_NAME" >}} configuration file looks like this: @@ -428,10 +428,11 @@ grafana-agent-flow convert --source-format=static --extra-args="-enable-features {{< /code >}} - Replace the following: - * _``_: The full path to the [Static](ref:static) configuration. - * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - +Replace the following: + +- _``_: The full path to the [Static](ref:static) configuration. +- _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + ## Environment Vars You can use the `-config.expand-env` command line flag to interpret environment variables in your Grafana Agent Static configuration. 
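+The following is a minimal sketch of that workflow, assuming `-config.expand-env` can be forwarded through the `--extra-args` flag shown earlier; the file names are placeholders for your own paths.
+
+```shell
+# Illustrative paths. Forwarding -config.expand-env makes the converter read
+# the Static configuration with environment variables already expanded.
+grafana-agent-flow convert --source-format=static --extra-args="-config.expand-env" --output=agent.river agent-static.yaml
+```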
@@ -448,17 +449,16 @@ After the configuration is converted, review the {{< param "PRODUCT_NAME" >}} co The following list is specific to the convert command and not {{< param "PRODUCT_NAME" >}}: -* The [Agent Management](ref:agent-management) configuration options can't be automatically converted to {{< param "PRODUCT_NAME" >}}. +- The [Agent Management](ref:agent-management) configuration options can't be automatically converted to {{< param "PRODUCT_NAME" >}}. Any additional unsupported features are returned as errors during conversion. -* There is no gRPC server to configure for {{< param "PRODUCT_NAME" >}}, as any non-default configuration will show as unsupported during the conversion. -* Check if you are using any extra command line arguments with Static that aren't present in your configuration file. For example, `-server.http.address`. -* Check if you are using any environment variables in your [Static](ref:static) configuration. +- There is no gRPC server to configure for {{< param "PRODUCT_NAME" >}}, as any non-default configuration will show as unsupported during the conversion. +- Check if you are using any extra command line arguments with Static that aren't present in your configuration file. For example, `-server.http.address`. +- Check if you are using any environment variables in your [Static](ref:static) configuration. These will be evaluated during conversion and you may want to replace them with the {{< param "PRODUCT_NAME" >}} Standard library [env](ref:env) function after conversion. -* Review additional [Prometheus Limitations](ref:prometheus-limitations) for limitations specific to your [Metrics](ref:metrics) configuration. -* Review additional [Promtail Limitations](ref:promtail-limitations) for limitations specific to your [Logs](ref:logs) configuration. -* The logs produced by {{< param "PRODUCT_NAME" >}} mode will differ from those produced by Static. -* {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui). +- Review additional [Prometheus Limitations](ref:prometheus-limitations) for limitations specific to your [Metrics](ref:metrics) configuration. +- Review additional [Promtail Limitations](ref:promtail-limitations) for limitations specific to your [Logs](ref:logs) configuration. +- The logs produced by {{< param "PRODUCT_NAME" >}} mode will differ from those produced by Static. +- {{< param "PRODUCT_ROOT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI](ref:ui). 
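+One of the items above suggests replacing converted environment variable values with the Standard library `env` function. As a minimal sketch, assuming a hypothetical `PROMETHEUS_URL` variable, the converted output could be edited like this:
+
+```river
+prometheus.remote_write "default" {
+  endpoint {
+    // Illustrative: read the URL from the environment at load time instead
+    // of keeping the literal value that the converter baked in.
+    url = env("PROMETHEUS_URL")
+  }
+}
+```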
[debugging]: #debugging [example]: #example - diff --git a/docs/sources/flow/tasks/monitor/_index.md b/docs/sources/flow/tasks/monitor/_index.md index ac23db26072c..88b34c93ba60 100644 --- a/docs/sources/flow/tasks/monitor/_index.md +++ b/docs/sources/flow/tasks/monitor/_index.md @@ -1,15 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/monitor/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/monitor/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/monitor/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/monitor/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/monitoring/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/ -- /docs/grafana-cloud/send-data/agent/flow/monitoring/ -- ../monitoring/ # /docs/agent/latest/flow/monitoring/ + - /docs/grafana-cloud/agent/flow/tasks/monitor/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/monitor/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/monitor/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/monitor/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/monitoring/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/ + - /docs/grafana-cloud/send-data/agent/flow/monitoring/ + - ../monitoring/ # /docs/agent/latest/flow/monitoring/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/monitor/ description: Learn about monitoring Grafana Agent Flow title: Monitor Grafana Agent Flow diff --git a/docs/sources/flow/tasks/monitor/component_metrics.md b/docs/sources/flow/tasks/monitor/component_metrics.md index 2d4e1cee0571..d5447df93bab 100644 --- a/docs/sources/flow/tasks/monitor/component_metrics.md +++ b/docs/sources/flow/tasks/monitor/component_metrics.md @@ -1,17 +1,17 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/monitor/component_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/monitor/component_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/monitor/component_metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/monitor/component_metrics/ -- component-metrics/ # /docs/agent/latest/flow/tasks/monitor/component-metrics/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/monitoring/component_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/component_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/component_metrics/ -- /docs/grafana-cloud/send-data/agent/flow/monitoring/component_metrics/ -- ../../monitoring/component-metrics/ # /docs/agent/latest/flow/monitoring/component-metrics/ -- ../../monitoring/component_metrics/ # /docs/agent/latest/flow/monitoring/component_metrics/ + - /docs/grafana-cloud/agent/flow/tasks/monitor/component_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/monitor/component_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/monitor/component_metrics/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/monitor/component_metrics/ + - component-metrics/ # /docs/agent/latest/flow/tasks/monitor/component-metrics/ + # Previous page aliases for backwards compatibility: + - 
/docs/grafana-cloud/agent/flow/monitoring/component_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/component_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/component_metrics/ + - /docs/grafana-cloud/send-data/agent/flow/monitoring/component_metrics/ + - ../../monitoring/component-metrics/ # /docs/agent/latest/flow/monitoring/component-metrics/ + - ../../monitoring/component_metrics/ # /docs/agent/latest/flow/monitoring/component_metrics/ canonical: https://grafana.com/docs/agent/latest/flow/monitoring/component_metrics/ description: Learn how to monitor component metrics title: Monitor components @@ -51,4 +51,3 @@ For example, component-specific metrics for a `prometheus.remote_write` componen The [reference documentation](ref:reference-documentation) for each component described the list of component-specific metrics that the component exposes. Not all components expose metrics. - diff --git a/docs/sources/flow/tasks/monitor/controller_metrics.md b/docs/sources/flow/tasks/monitor/controller_metrics.md index 27b0e37b6728..2af103e68a40 100644 --- a/docs/sources/flow/tasks/monitor/controller_metrics.md +++ b/docs/sources/flow/tasks/monitor/controller_metrics.md @@ -1,17 +1,17 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/monitor/controller_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/monitor/controller_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/monitor/controller_metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/monitor/controller_metrics/ -- controller-metrics/ # /docs/agent/latest/flow/tasks/monitor/controller-metrics/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/monitoring/controller_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/controller_metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/controller_metrics/ -- /docs/grafana-cloud/send-data/agent/flow/monitoring/controller_metrics/ -- ../../monitoring/controller-metrics/ # /docs/agent/latest/flow/monitoring/controller-metrics/ -- ../../monitoring/controller_metrics/ # /docs/agent/latest/flow/monitoring/controller_metrics/ + - /docs/grafana-cloud/agent/flow/tasks/monitor/controller_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/monitor/controller_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/monitor/controller_metrics/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/monitor/controller_metrics/ + - controller-metrics/ # /docs/agent/latest/flow/tasks/monitor/controller-metrics/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/monitoring/controller_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/monitoring/controller_metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/monitoring/controller_metrics/ + - /docs/grafana-cloud/send-data/agent/flow/monitoring/controller_metrics/ + - ../../monitoring/controller-metrics/ # /docs/agent/latest/flow/monitoring/controller-metrics/ + - ../../monitoring/controller_metrics/ # /docs/agent/latest/flow/monitoring/controller_metrics/ canonical: https://grafana.com/docs/agent/latest/flow/monitoring/controller_metrics/ description: Learn how to monitor controller metrics title: Monitor controller @@ -39,11 +39,10 @@ Metrics for the controller are exposed at the `/metrics` HTTP endpoint 
of the {{ The controller exposes the following metrics: -* `agent_component_controller_evaluating` (Gauge): Set to `1` whenever the component controller is currently evaluating components. +- `agent_component_controller_evaluating` (Gauge): Set to `1` whenever the component controller is currently evaluating components. This value may be misrepresented depending on how fast evaluations complete or how often evaluations occur. -* `agent_component_controller_running_components` (Gauge): The current number of running components by health. - The health is represented in the `health_type` label. -* `agent_component_evaluation_seconds` (Histogram): The time it takes to evaluate components after one of their dependencies is updated. -* `agent_component_dependencies_wait_seconds` (Histogram): Time spent by components waiting to be evaluated after one of their dependencies is updated. -* `agent_component_evaluation_queue_size` (Gauge): The current number of component evaluations waiting to be performed. - +- `agent_component_controller_running_components` (Gauge): The current number of running components by health. + The health is represented in the `health_type` label. +- `agent_component_evaluation_seconds` (Histogram): The time it takes to evaluate components after one of their dependencies is updated. +- `agent_component_dependencies_wait_seconds` (Histogram): Time spent by components waiting to be evaluated after one of their dependencies is updated. +- `agent_component_evaluation_queue_size` (Gauge): The current number of component evaluations waiting to be performed. diff --git a/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md b/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md index 632b5475c1df..62bf6f98605b 100644 --- a/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md +++ b/docs/sources/flow/tasks/opentelemetry-to-lgtm-stack.md @@ -1,17 +1,18 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tasks/opentelemetry-to-lgtm-stack/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/opentelemetry-to-lgtm-stack/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/opentelemetry-to-lgtm-stack/ -- /docs/grafana-cloud/send-data/agent/flow/tasks/opentelemetry-to-lgtm-stack/ -# Previous page aliases for backwards compatibility: -- /docs/grafana-cloud/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ -- /docs/grafana-cloud/send-data/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ -- ../getting-started/opentelemetry-to-lgtm-stack/ # /docs/agent/latest/flow/getting-started/opentelemetry-to-lgtm-stack/ + - /docs/grafana-cloud/agent/flow/tasks/opentelemetry-to-lgtm-stack/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tasks/opentelemetry-to-lgtm-stack/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tasks/opentelemetry-to-lgtm-stack/ + - /docs/grafana-cloud/send-data/agent/flow/tasks/opentelemetry-to-lgtm-stack/ + # Previous page aliases for backwards compatibility: + - /docs/grafana-cloud/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ + - 
/docs/grafana-cloud/send-data/agent/flow/getting-started/opentelemetry-to-lgtm-stack/ + - ../getting-started/opentelemetry-to-lgtm-stack/ # /docs/agent/latest/flow/getting-started/opentelemetry-to-lgtm-stack/ canonical: https://grafana.com/docs/agent/latest/flow/tasks/opentelemetry-to-lgtm-stack/ -description: Learn how to collect OpenTelemetry data and forward it to the Grafana +description: + Learn how to collect OpenTelemetry data and forward it to the Grafana stack title: OpenTelemetry to Grafana stack weight: 350 @@ -74,28 +75,28 @@ You can configure {{< param "PRODUCT_NAME" >}} to collect [OpenTelemetry][]-comp This topic describes how to: -* Configure {{< param "PRODUCT_NAME" >}} to send your data to Loki. -* Configure {{< param "PRODUCT_NAME" >}} to send your data to Tempo. -* Configure {{< param "PRODUCT_NAME" >}} to send your data to Mimir or Prometheus Remote Write. +- Configure {{< param "PRODUCT_NAME" >}} to send your data to Loki. +- Configure {{< param "PRODUCT_NAME" >}} to send your data to Tempo. +- Configure {{< param "PRODUCT_NAME" >}} to send your data to Mimir or Prometheus Remote Write. ## Components used in this topic -* [loki.write](ref:loki.write) -* [otelcol.auth.basic](ref:otelcol.auth.basic) -* [otelcol.exporter.loki](ref:otelcol.exporter.loki) -* [otelcol.exporter.otlp](ref:otelcol.exporter.otlp) -* [otelcol.exporter.prometheus](ref:otelcol.exporter.prometheus) -* [otelcol.processor.batch](ref:otelcol.processor.batch) -* [otelcol.receiver.otlp](ref:otelcol.receiver.otlp) -* [prometheus.remote_write](ref:prometheus.remote_write) +- [loki.write](ref:loki.write) +- [otelcol.auth.basic](ref:otelcol.auth.basic) +- [otelcol.exporter.loki](ref:otelcol.exporter.loki) +- [otelcol.exporter.otlp](ref:otelcol.exporter.otlp) +- [otelcol.exporter.prometheus](ref:otelcol.exporter.prometheus) +- [otelcol.processor.batch](ref:otelcol.processor.batch) +- [otelcol.receiver.otlp](ref:otelcol.receiver.otlp) +- [prometheus.remote_write](ref:prometheus.remote_write) ## Before you begin -* Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry. -* Have a set of OpenTelemetry applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. -* Identify where {{< param "PRODUCT_NAME" >}} will write received telemetry data. -* Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. -* Complete the [Collect open telemetry data](ref:collect-open-telemetry-data) task. You will pick up from where that guide ended. +- Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry. +- Have a set of OpenTelemetry applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. +- Identify where {{< param "PRODUCT_NAME" >}} will write received telemetry data. +- Be familiar with the concept of [Components](ref:components) in {{< param "PRODUCT_NAME" >}}. +- Complete the [Collect open telemetry data](ref:collect-open-telemetry-data) task. You will pick up from where that guide ended. ## The pipeline @@ -146,6 +147,7 @@ Metrics: OTel → batch processor → Mimir or Prometheus remote write Logs: OTel → batch processor → Loki exporter Traces: OTel → batch processor → OTel exporter ``` + ## Grafana Loki [Grafana Loki][] is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. 
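+As a sketch of the logs leg of this pipeline, the batch processor's log output can feed a Loki exporter that hands records to `loki.write`. The endpoint URL below is a placeholder, and the `default` labels are assumptions; substitute your own Loki address and component labels.
+
+```river
+// Convert OTLP logs to Loki log entries and forward them on.
+otelcol.exporter.loki "default" {
+  forward_to = [loki.write.default.receiver]
+}
+
+loki.write "default" {
+  endpoint {
+    // Placeholder endpoint; point this at your Loki instance.
+    url = "https://loki.example.com/loki/api/v1/push"
+  }
+}
+```
+
+In the batch processor, the logs pipeline would then reference `otelcol.exporter.loki.default.input` in its `output` block.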
diff --git a/docs/sources/flow/tutorials/_index.md b/docs/sources/flow/tutorials/_index.md index d695d7fb1374..16b565e101e6 100644 --- a/docs/sources/flow/tutorials/_index.md +++ b/docs/sources/flow/tutorials/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/ + - /docs/grafana-cloud/agent/flow/tutorials/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/ description: Learn how to use Grafana Agent Flow title: Tutorials diff --git a/docs/sources/flow/tutorials/assets/docker-compose.yaml b/docs/sources/flow/tutorials/assets/docker-compose.yaml index 7775983f2b69..2bdafb719d08 100644 --- a/docs/sources/flow/tutorials/assets/docker-compose.yaml +++ b/docs/sources/flow/tutorials/assets/docker-compose.yaml @@ -1,4 +1,4 @@ -version: '3' +version: "3" services: mimir: diff --git a/docs/sources/flow/tutorials/assets/grafana/dashboards-provisioning/dashboards.yaml b/docs/sources/flow/tutorials/assets/grafana/dashboards-provisioning/dashboards.yaml index c038adf8e38a..2ffb31c28db2 100644 --- a/docs/sources/flow/tutorials/assets/grafana/dashboards-provisioning/dashboards.yaml +++ b/docs/sources/flow/tutorials/assets/grafana/dashboards-provisioning/dashboards.yaml @@ -1,14 +1,14 @@ apiVersion: 1 providers: -- name: 'dashboards' - orgId: 1 - folder: '' - folderUid: '' - type: file - disableDeletion: true - editable: true - updateIntervalSeconds: 10 - allowUiUpdates: false - options: - path: /var/lib/grafana/dashboards + - name: "dashboards" + orgId: 1 + folder: "" + folderUid: "" + type: file + disableDeletion: true + editable: true + updateIntervalSeconds: 10 + allowUiUpdates: false + options: + path: /var/lib/grafana/dashboards diff --git a/docs/sources/flow/tutorials/assets/grafana/dashboards/agent.json b/docs/sources/flow/tutorials/assets/grafana/dashboards/agent.json index 768fccb011e2..8cbf402f53cd 100644 --- a/docs/sources/flow/tutorials/assets/grafana/dashboards/agent.json +++ b/docs/sources/flow/tutorials/assets/grafana/dashboards/agent.json @@ -1,786 +1,774 @@ { - "annotations": { - "list": [ ] - }, - "editable": true, - "gnetId": null, - "graphTooltip": 0, - "hideControls": false, - "links": [ ], - "refresh": "30s", - "rows": [ - { - "collapse": false, - "height": "250px", - "panels": [ + "annotations": { + "list": [] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "hideControls": false, + "links": [], + "refresh": "30s", + "rows": [ + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "$datasource", + "fill": 1, + "id": 1, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 12, + "stack": false, + "steppedLine": false, + "styles": [ { - "aliasColors": { }, - "bars": false, - "dashLength": 10, - "dashes": false, - 
"datasource": "$datasource", - "fill": 1, - "id": 1, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [ ], - "nullPointMode": "null as zero", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ ], - "spaceLength": 10, - "span": 12, - "stack": false, - "steppedLine": false, - "styles": [ - { - "alias": "Time", - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "pattern": "Time", - "type": "hidden" - }, - { - "alias": "Count", - "colorMode": null, - "colors": [ ], - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "decimals": 2, - "link": false, - "linkTargetBlank": false, - "linkTooltip": "Drill down", - "linkUrl": "", - "pattern": "Value #A", - "thresholds": [ ], - "type": "hidden", - "unit": "short" - }, - { - "alias": "Uptime", - "colorMode": null, - "colors": [ ], - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "decimals": 2, - "link": false, - "linkTargetBlank": false, - "linkTooltip": "Drill down", - "linkUrl": "", - "pattern": "Value #B", - "thresholds": [ ], - "type": "number", - "unit": "short" - }, - { - "alias": "Container", - "colorMode": null, - "colors": [ ], - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "decimals": 2, - "link": false, - "linkTargetBlank": false, - "linkTooltip": "Drill down", - "linkUrl": "", - "pattern": "container", - "thresholds": [ ], - "type": "number", - "unit": "short" - }, - { - "alias": "Pod", - "colorMode": null, - "colors": [ ], - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "decimals": 2, - "link": false, - "linkTargetBlank": false, - "linkTooltip": "Drill down", - "linkUrl": "", - "pattern": "pod", - "thresholds": [ ], - "type": "number", - "unit": "short" - }, - { - "alias": "Version", - "colorMode": null, - "colors": [ ], - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "decimals": 2, - "link": false, - "linkTargetBlank": false, - "linkTooltip": "Drill down", - "linkUrl": "", - "pattern": "version", - "thresholds": [ ], - "type": "number", - "unit": "short" - }, - { - "alias": "", - "colorMode": null, - "colors": [ ], - "dateFormat": "YYYY-MM-DD HH:mm:ss", - "decimals": 2, - "pattern": "/.*/", - "thresholds": [ ], - "type": "string", - "unit": "short" - } - ], - "targets": [ - { - "expr": "count by (pod, container, version) (agent_build_info{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})", - "format": "table", - "instant": true, - "intervalFactor": 2, - "legendFormat": "", - "refId": "A", - "step": 10 - }, - { - "expr": "max by (pod, container) (time() - process_start_time_seconds{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})", - "format": "table", - "instant": true, - "intervalFactor": 2, - "legendFormat": "", - "refId": "B", - "step": 10 - } - ], - "thresholds": [ ], - "timeFrom": null, - "timeShift": null, - "title": "Agent Stats", - "tooltip": { - "shared": true, - "sort": 2, - "value_type": "individual" - }, - "transform": "table", - "type": "table", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [ ] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": true, - "title": "Agent Stats", - "titleSize": "h6" - }, - { - 
"collapse": false, - "height": "250px", - "panels": [ + "alias": "Time", + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "pattern": "Time", + "type": "hidden" + }, { - "aliasColors": { }, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "$datasource", - "fill": 1, - "id": 2, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [ ], - "nullPointMode": "null as zero", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ ], - "spaceLength": 10, - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(rate(prometheus_target_sync_length_seconds_sum{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m])) by (pod, scrape_job) * 1e3", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{pod}}/{{scrape_job}}", - "legendLink": null, - "step": 10 - } - ], - "thresholds": [ ], - "timeFrom": null, - "timeShift": null, - "title": "Target Sync", - "tooltip": { - "shared": true, - "sort": 2, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [ ] - }, - "yaxes": [ - { - "format": "ms", - "label": null, - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] + "alias": "Count", + "colorMode": null, + "colors": [], + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "decimals": 2, + "link": false, + "linkTargetBlank": false, + "linkTooltip": "Drill down", + "linkUrl": "", + "pattern": "Value #A", + "thresholds": [], + "type": "hidden", + "unit": "short" }, { - "aliasColors": { }, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "$datasource", - "fill": 10, - "id": 3, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 0, - "links": [ ], - "nullPointMode": "null as zero", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ ], - "spaceLength": 10, - "span": 6, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "sum by (pod) (prometheus_sd_discovered_targets{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{pod}}", - "legendLink": null, - "step": 10 - } - ], - "thresholds": [ ], - "timeFrom": null, - "timeShift": null, - "title": "Targets", - "tooltip": { - "shared": true, - "sort": 2, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [ ] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": true, - "title": "Prometheus Discovery", - "titleSize": "h6" - }, - { - "collapse": false, - "height": "250px", - "panels": [ + "alias": "Uptime", + "colorMode": null, + "colors": [], + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "decimals": 2, + "link": false, + "linkTargetBlank": 
false, + "linkTooltip": "Drill down", + "linkUrl": "", + "pattern": "Value #B", + "thresholds": [], + "type": "number", + "unit": "short" + }, { - "aliasColors": { }, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "$datasource", - "fill": 1, - "id": 4, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [ ], - "nullPointMode": "null as zero", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ ], - "spaceLength": 10, - "span": 4, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "rate(prometheus_target_interval_length_seconds_sum{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m])\n/\nrate(prometheus_target_interval_length_seconds_count{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m])\n* 1e3\n", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{pod}} {{interval}} configured", - "legendLink": null, - "step": 10 - } - ], - "thresholds": [ ], - "timeFrom": null, - "timeShift": null, - "title": "Average Scrape Interval Duration", - "tooltip": { - "shared": true, - "sort": 2, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [ ] - }, - "yaxes": [ - { - "format": "ms", - "label": null, - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] + "alias": "Container", + "colorMode": null, + "colors": [], + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "decimals": 2, + "link": false, + "linkTargetBlank": false, + "linkTooltip": "Drill down", + "linkUrl": "", + "pattern": "container", + "thresholds": [], + "type": "number", + "unit": "short" }, { - "aliasColors": { }, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "$datasource", - "fill": 10, - "id": 5, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 0, - "links": [ ], - "nullPointMode": "null as zero", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ ], - "spaceLength": 10, - "span": 4, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "sum by (job) (rate(prometheus_target_scrapes_exceeded_sample_limit_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "exceeded sample limit: {{job}}", - "legendLink": null, - "step": 10 - }, - { - "expr": "sum by (job) (rate(prometheus_target_scrapes_sample_duplicate_timestamp_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "duplicate timestamp: {{job}}", - "legendLink": null, - "step": 10 - }, - { - "expr": "sum by (job) (rate(prometheus_target_scrapes_sample_out_of_bounds_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "out of bounds: {{job}}", - "legendLink": null, - "step": 10 - }, - { - "expr": "sum by (job) 
(rate(prometheus_target_scrapes_sample_out_of_order_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "out of order: {{job}}", - "legendLink": null, - "step": 10 - } - ], - "thresholds": [ ], - "timeFrom": null, - "timeShift": null, - "title": "Scrape failures", - "tooltip": { - "shared": true, - "sort": 2, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [ ] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] + "alias": "Pod", + "colorMode": null, + "colors": [], + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "decimals": 2, + "link": false, + "linkTargetBlank": false, + "linkTooltip": "Drill down", + "linkUrl": "", + "pattern": "pod", + "thresholds": [], + "type": "number", + "unit": "short" }, { - "aliasColors": { }, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "$datasource", - "fill": 10, - "id": 6, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 0, - "links": [ ], - "nullPointMode": "null as zero", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ ], - "spaceLength": 10, - "span": 4, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "sum by (job, instance_group_name) (rate(agent_wal_samples_appended_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m]))", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{job}} {{instance_group_name}}", - "legendLink": null, - "step": 10 - } - ], - "thresholds": [ ], - "timeFrom": null, - "timeShift": null, - "title": "Appended Samples", - "tooltip": { - "shared": true, - "sort": 2, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [ ] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] + "alias": "Version", + "colorMode": null, + "colors": [], + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "decimals": 2, + "link": false, + "linkTargetBlank": false, + "linkTooltip": "Drill down", + "linkUrl": "", + "pattern": "version", + "thresholds": [], + "type": "number", + "unit": "short" + }, + { + "alias": "", + "colorMode": null, + "colors": [], + "dateFormat": "YYYY-MM-DD HH:mm:ss", + "decimals": 2, + "pattern": "/.*/", + "thresholds": [], + "type": "string", + "unit": "short" } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": true, - "title": "Prometheus Retrieval", - "titleSize": "h6" - } - ], - "schemaVersion": 14, - "style": "dark", - "tags": [ - "grafana-agent-mixin" - ], - "templating": { - "list": [ - { - "current": { - "text": "default", - "value": "default" + ], + "targets": [ + { + "expr": "count by (pod, container, version) (agent_build_info{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})", + "format": "table", + "instant": true, + "intervalFactor": 2, + 
"legendFormat": "", + "refId": "A", + "step": 10 }, - "hide": 0, - "label": "Data Source", - "name": "datasource", - "options": [ ], - "query": "prometheus", - "refresh": 1, - "regex": "", - "type": "datasource" - }, - { - "allValue": ".+", - "current": { - "selected": true, - "text": "All", - "value": "$__all" + { + "expr": "max by (pod, container) (time() - process_start_time_seconds{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})", + "format": "table", + "instant": true, + "intervalFactor": 2, + "legendFormat": "", + "refId": "B", + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Agent Stats", + "tooltip": { + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "transform": "table", + "type": "table", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true }, - "datasource": "$datasource", - "hide": 0, - "includeAll": true, - "label": "cluster", - "multi": true, - "name": "cluster", - "options": [ ], - "query": "label_values(agent_build_info, cluster)", - "refresh": 1, - "regex": "", + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Agent Stats", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "$datasource", + "fill": 1, + "id": 2, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 6, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(prometheus_target_sync_length_seconds_sum{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m])) by (pod, scrape_job) * 1e3", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "{{pod}}/{{scrape_job}}", + "legendLink": null, + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Target Sync", + "tooltip": { + "shared": true, "sort": 2, - "tagValuesQuery": "", - "tags": [ ], - "tagsQuery": "", - "type": "query", - "useTags": false - }, - { - "allValue": ".+", - "current": { - "selected": true, - "text": "All", - "value": "$__all" + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true }, - "datasource": "$datasource", - "hide": 0, - "includeAll": true, - "label": "namespace", - "multi": true, - "name": "namespace", - "options": [ ], - "query": "label_values(agent_build_info, namespace)", - "refresh": 1, - "regex": "", + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "$datasource", + "fill": 10, + "id": 3, + "legend": { + "avg": 
false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 0, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 6, + "stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum by (pod) (prometheus_sd_discovered_targets{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "{{pod}}", + "legendLink": null, + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Targets", + "tooltip": { + "shared": true, "sort": 2, - "tagValuesQuery": "", - "tags": [ ], - "tagsQuery": "", - "type": "query", - "useTags": false - }, - { - "allValue": ".+", - "current": { - "selected": true, - "text": "All", - "value": "$__all" + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true }, - "datasource": "$datasource", - "hide": 0, - "includeAll": true, - "label": "container", - "multi": true, - "name": "container", - "options": [ ], - "query": "label_values(agent_build_info, container)", - "refresh": 1, - "regex": "", + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } + ], + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Prometheus Discovery", + "titleSize": "h6" + }, + { + "collapse": false, + "height": "250px", + "panels": [ + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "$datasource", + "fill": 1, + "id": 4, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 4, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "rate(prometheus_target_interval_length_seconds_sum{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m])\n/\nrate(prometheus_target_interval_length_seconds_count{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m])\n* 1e3\n", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "{{pod}} {{interval}} configured", + "legendLink": null, + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Average Scrape Interval Duration", + "tooltip": { + "shared": true, "sort": 2, - "tagValuesQuery": "", - "tags": [ ], - "tagsQuery": "", - "type": "query", - "useTags": false - }, - { - "allValue": "grafana-agent-.*", - "current": { - "selected": true, - "text": "All", - "value": "$__all" + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "ms", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + 
"show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "$datasource", + "fill": 10, + "id": 5, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 0, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 4, + "stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum by (job) (rate(prometheus_target_scrapes_exceeded_sample_limit_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "exceeded sample limit: {{job}}", + "legendLink": null, + "step": 10 + }, + { + "expr": "sum by (job) (rate(prometheus_target_scrapes_sample_duplicate_timestamp_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "duplicate timestamp: {{job}}", + "legendLink": null, + "step": 10 + }, + { + "expr": "sum by (job) (rate(prometheus_target_scrapes_sample_out_of_bounds_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "out of bounds: {{job}}", + "legendLink": null, + "step": 10 + }, + { + "expr": "sum by (job) (rate(prometheus_target_scrapes_sample_out_of_order_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[1m]))", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "out of order: {{job}}", + "legendLink": null, + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": "Scrape failures", + "tooltip": { + "shared": true, + "sort": 2, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true }, - "datasource": "$datasource", - "hide": 0, - "includeAll": true, - "label": "pod", - "multi": true, - "name": "pod", - "options": [ ], - "query": "label_values(agent_build_info{container=~\"$container\"}, pod)", - "refresh": 1, - "regex": "", + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "$datasource", + "fill": 10, + "id": 6, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 0, + "links": [], + "nullPointMode": "null as zero", + "percentage": false, + "pointradius": 5, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "span": 4, + "stack": true, + "steppedLine": false, + "targets": [ + { + "expr": "sum by (job, instance_group_name) (rate(agent_wal_samples_appended_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m]))", + "format": "time_series", + "intervalFactor": 2, + "legendFormat": "{{job}} {{instance_group_name}}", + "legendLink": null, + "step": 10 + } + ], + "thresholds": [], + "timeFrom": null, + "timeShift": null, + "title": 
"Appended Samples", + "tooltip": { + "shared": true, "sort": 2, - "tagValuesQuery": "", - "tags": [ ], - "tagsQuery": "", - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-1h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": 0, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ] + } ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "", - "title": "Agent", - "uid": "", - "version": 0 + "repeat": null, + "repeatIteration": null, + "repeatRowId": null, + "showTitle": true, + "title": "Prometheus Retrieval", + "titleSize": "h6" + } + ], + "schemaVersion": 14, + "style": "dark", + "tags": ["grafana-agent-mixin"], + "templating": { + "list": [ + { + "current": { + "text": "default", + "value": "default" + }, + "hide": 0, + "label": "Data Source", + "name": "datasource", + "options": [], + "query": "prometheus", + "refresh": 1, + "regex": "", + "type": "datasource" + }, + { + "allValue": ".+", + "current": { + "selected": true, + "text": "All", + "value": "$__all" + }, + "datasource": "$datasource", + "hide": 0, + "includeAll": true, + "label": "cluster", + "multi": true, + "name": "cluster", + "options": [], + "query": "label_values(agent_build_info, cluster)", + "refresh": 1, + "regex": "", + "sort": 2, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": ".+", + "current": { + "selected": true, + "text": "All", + "value": "$__all" + }, + "datasource": "$datasource", + "hide": 0, + "includeAll": true, + "label": "namespace", + "multi": true, + "name": "namespace", + "options": [], + "query": "label_values(agent_build_info, namespace)", + "refresh": 1, + "regex": "", + "sort": 2, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": ".+", + "current": { + "selected": true, + "text": "All", + "value": "$__all" + }, + "datasource": "$datasource", + "hide": 0, + "includeAll": true, + "label": "container", + "multi": true, + "name": "container", + "options": [], + "query": "label_values(agent_build_info, container)", + "refresh": 1, + "regex": "", + "sort": 2, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + }, + { + "allValue": "grafana-agent-.*", + "current": { + "selected": true, + "text": "All", + "value": "$__all" + }, + "datasource": "$datasource", + "hide": 0, + "includeAll": true, + "label": "pod", + "multi": true, + "name": "pod", + "options": [], + "query": "label_values(agent_build_info{container=~\"$container\"}, pod)", + "refresh": 1, + "regex": "", + "sort": 2, + "tagValuesQuery": "", + "tags": [], + "tagsQuery": "", + "type": "query", + "useTags": false + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ], + "time_options": ["5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d"] + }, + "timezone": "", + "title": "Agent", + "uid": "", + "version": 0 } 
diff --git a/docs/sources/flow/tutorials/assets/grafana/datasources/datasource.yml b/docs/sources/flow/tutorials/assets/grafana/datasources/datasource.yml index d17bf25b33b8..199470f6ea9a 100644 --- a/docs/sources/flow/tutorials/assets/grafana/datasources/datasource.yml +++ b/docs/sources/flow/tutorials/assets/grafana/datasources/datasource.yml @@ -4,12 +4,12 @@ deleteDatasources: - name: Mimir datasources: -- name: Mimir - type: prometheus - access: proxy - orgId: 1 - url: http://mimir:9009/prometheus - basicAuth: false - isDefault: false - version: 1 - editable: false \ No newline at end of file + - name: Mimir + type: prometheus + access: proxy + orgId: 1 + url: http://mimir:9009/prometheus + basicAuth: false + isDefault: false + version: 1 + editable: false diff --git a/docs/sources/flow/tutorials/chaining.md b/docs/sources/flow/tutorials/chaining.md index c46987ace127..0bdc8089d916 100644 --- a/docs/sources/flow/tutorials/chaining.md +++ b/docs/sources/flow/tutorials/chaining.md @@ -1,10 +1,10 @@ --- aliases: -- ./chaining/ -- /docs/grafana-cloud/agent/flow/tutorials/chaining/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/chaining/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/chaining/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/chaining/ + - ./chaining/ + - /docs/grafana-cloud/agent/flow/tutorials/chaining/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/chaining/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/chaining/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/chaining/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/chaining/ description: Learn how to chain Prometheus components menuTitle: Chain Prometheus components @@ -26,7 +26,7 @@ A new concept introduced in Flow is chaining components together in a composable ## Prerequisites -* [Docker](https://www.docker.com/products/docker-desktop) +- [Docker](https://www.docker.com/products/docker-desktop) ## Run the example @@ -91,4 +91,3 @@ In `multiple-input.river` add a new `prometheus.relabel` component that adds a ` [multiple-inputs.river]: https://grafana.com/docs/agent//flow/tutorials/assets/flow_configs/multiple-inputs.river [Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D [node_exporter]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22node_cpu_seconds_total%22%7D%5D - diff --git a/docs/sources/flow/tutorials/collecting-prometheus-metrics.md b/docs/sources/flow/tutorials/collecting-prometheus-metrics.md index ad158ff98dd7..98f8eaa6e574 100644 --- a/docs/sources/flow/tutorials/collecting-prometheus-metrics.md +++ b/docs/sources/flow/tutorials/collecting-prometheus-metrics.md @@ -1,10 +1,10 @@ --- aliases: -- ./collecting-prometheus-metrics/ -- /docs/grafana-cloud/agent/flow/tutorials/collecting-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/collecting-prometheus-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/collecting-prometheus-metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/collecting-prometheus-metrics/ + - ./collecting-prometheus-metrics/ + - 
/docs/grafana-cloud/agent/flow/tutorials/collecting-prometheus-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/collecting-prometheus-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/collecting-prometheus-metrics/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/collecting-prometheus-metrics/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/collecting-prometheus-metrics/ description: Learn how to collect Prometheus metrics menuTitle: Collect Prometheus metrics @@ -44,7 +44,7 @@ refs: ## Prerequisites -* [Docker][] +- [Docker][] ## Run the example @@ -114,11 +114,10 @@ prometheus.remote_write "prom" { ## Running without Docker To try out {{< param "PRODUCT_ROOT_NAME" >}} without using Docker: + 1. Download {{< param "PRODUCT_ROOT_NAME" >}}. 1. Set the environment variable `AGENT_MODE=flow`. 1. Run the {{< param "PRODUCT_ROOT_NAME" >}} with `grafana-agent run `. - [Docker]: https://www.docker.com/products/docker-desktop [Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D - diff --git a/docs/sources/flow/tutorials/filtering-metrics.md b/docs/sources/flow/tutorials/filtering-metrics.md index 391997e4969f..08bb428e30a1 100644 --- a/docs/sources/flow/tutorials/filtering-metrics.md +++ b/docs/sources/flow/tutorials/filtering-metrics.md @@ -1,10 +1,10 @@ --- aliases: -- ./filtering-metrics/ -- /docs/grafana-cloud/agent/flow/tutorials/filtering-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/filtering-metrics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/filtering-metrics/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/filtering-metrics/ + - ./filtering-metrics/ + - /docs/grafana-cloud/agent/flow/tutorials/filtering-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/filtering-metrics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/filtering-metrics/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/filtering-metrics/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/filtering-metrics/ description: Learn how to filter Prometheus metrics menuTitle: Filter Prometheus metrics @@ -29,7 +29,7 @@ In this tutorial, you'll add a new component [prometheus.relabel](ref:prometheus ## Prerequisites -* [Docker][] +- [Docker][] ## Run the example @@ -47,7 +47,6 @@ The `runt.sh` script does: 1. Downloads the docker image for {{< param "PRODUCT_ROOT_NAME" >}} explicitly. 1. Runs the docker-compose up command to bring all the services up. - Allow {{< param "PRODUCT_ROOT_NAME" >}} to run for two minutes, then navigate to [Grafana][] page and the `service` label will be there with the `api_server` value. 
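+For reference, the relabeling that produces this `service` label looks roughly like the sketch below. The component label and forwarding target are illustrative assumptions; the actual definition lives in the downloaded [relabel.river] asset.
+
+```river
+prometheus.relabel "add_service" {
+  // Attach a static `service` label to every metric passing through.
+  rule {
+    action       = "replace"
+    target_label = "service"
+    replacement  = "api_server"
+  }
+
+  forward_to = [prometheus.remote_write.prom.receiver]
+}
+```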
![Dashboard showing api_server](/media/docs/agent/screenshot-grafana-agent-filtering-metrics-filter.png) @@ -64,8 +63,6 @@ Open the `relabel.river` file that was downloaded and change the name of the ser ![Updated dashboard showing api_server_v2](/media/docs/agent/screenshot-grafana-agent-filtering-metrics-transition.png) - [Docker]: https://www.docker.com/products/docker-desktop [Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D [relabel.river]: https://grafana.com/docs/agent//flow/tutorials/assets/flow_configs/relabel.river - diff --git a/docs/sources/flow/tutorials/flow-by-example/_index.md b/docs/sources/flow/tutorials/flow-by-example/_index.md index d9b037350272..579ae7708ae0 100644 --- a/docs/sources/flow/tutorials/flow-by-example/_index.md +++ b/docs/sources/flow/tutorials/flow-by-example/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/flow-by-example/ + - /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/flow-by-example/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/ description: Learn how to use Grafana Agent Flow title: Flow by example diff --git a/docs/sources/flow/tutorials/flow-by-example/first-components-and-stdlib/index.md b/docs/sources/flow/tutorials/flow-by-example/first-components-and-stdlib/index.md index 59bc59c5d17b..38d0bc17bc80 100644 --- a/docs/sources/flow/tutorials/flow-by-example/first-components-and-stdlib/index.md +++ b/docs/sources/flow/tutorials/flow-by-example/first-components-and-stdlib/index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/first-components-and-stdlib/ + - /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/first-components-and-stdlib/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/first-components-and-stdlib/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/first-components-and-stdlib/ description: Learn about the basics of River and the configuration language title: First components and introducing the standard library @@ -27,39 +27,41 @@ This tutorial covers the basics of the River language and the standard library. [River](https://github.com/grafana/river) is an HCL-inspired configuration language used to configure {{< param "PRODUCT_NAME" >}}. 
A River file is comprised of three things: -1. **Attributes** +1. **Attributes** - `key = value` pairs used to configure individual settings. + `key = value` pairs used to configure individual settings. ```river url = "http://localhost:9090" ``` -1. **Expressions** +1. **Expressions** - Expressions are used to compute values. They can be constant values (for example, `"localhost:9090"`), or they can be more complex (for example, referencing a component's export: `prometheus.exporter.unix.targets`. They can also be a mathematical expression: `(1 + 2) * 3`, or a standard library function call: `env("HOME")`). We will use more expressions as we go along the examples. If you are curious, you can find a list of available standard library functions in the [Standard library documentation][]. + Expressions are used to compute values. They can be constant values (for example, `"localhost:9090"`), or they can be more complex (for example, referencing a component's export: `prometheus.exporter.unix.targets`. They can also be a mathematical expression: `(1 + 2) * 3`, or a standard library function call: `env("HOME")`). We will use more expressions as we go along the examples. If you are curious, you can find a list of available standard library functions in the [Standard library documentation][]. -1. **Blocks** +1. **Blocks** - Blocks are used to configure components with groups of attributes or nested blocks. The following example block can be used to configure the logging output of {{< param "PRODUCT_NAME" >}}: + Blocks are used to configure components with groups of attributes or nested blocks. The following example block can be used to configure the logging output of {{< param "PRODUCT_NAME" >}}: - ```river - logging { - level = "debug" - format = "json" - } - ``` + ```river + logging { + level = "debug" + format = "json" + } + ``` + + {{< admonition type="note" >}} - {{< admonition type="note" >}} -The default log level is `info` and the default log format is `logfmt`. + The default log level is `info` and the default log format is `logfmt`. {{< /admonition >}} - Try pasting this into `config.river` and running `/path/to/agent run config.river` to see what happens. + Try pasting this into `config.river` and running `/path/to/agent run config.river` to see what happens. + + Congratulations, you've just written your first River file! You've also just written your first {{< param "PRODUCT_NAME" >}} configuration file. This configuration won't do anything, so let's add some components to it. - Congratulations, you've just written your first River file! You've also just written your first {{< param "PRODUCT_NAME" >}} configuration file. This configuration won't do anything, so let's add some components to it. + {{< admonition type="note" >}} - {{< admonition type="note" >}} -Comments in River are prefixed with `//` and are single-line only. For example: `// This is a comment`. + Comments in River are prefixed with `//` and are single-line only. For example: `// This is a comment`. 
{{< /admonition >}} ## Components diff --git a/docs/sources/flow/tutorials/flow-by-example/get-started.md b/docs/sources/flow/tutorials/flow-by-example/get-started.md index 1f439cd88fa6..7a35388a00e8 100644 --- a/docs/sources/flow/tutorials/flow-by-example/get-started.md +++ b/docs/sources/flow/tutorials/flow-by-example/get-started.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/faq/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/faq/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/faq/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/flow-by-example/faq/ + - /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/faq/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/faq/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/faq/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/flow-by-example/faq/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/get-started/ description: Getting started with Flow-by-Example Tutorials title: Get started @@ -31,7 +31,7 @@ To run the examples, you should have a Grafana Agent binary available. You can f You can use this docker-compose file to set up a local Grafana instance alongside Loki and Prometheus pre-configured as datasources. The examples are designed to be run locally, so you can follow along and experiment with them yourself. ```yaml -version: '3' +version: "3" services: loki: image: grafana/loki:2.9.0 diff --git a/docs/sources/flow/tutorials/flow-by-example/logs-and-relabeling-basics/index.md b/docs/sources/flow/tutorials/flow-by-example/logs-and-relabeling-basics/index.md index 02c7c3c138f9..e3fa96ed8562 100644 --- a/docs/sources/flow/tutorials/flow-by-example/logs-and-relabeling-basics/index.md +++ b/docs/sources/flow/tutorials/flow-by-example/logs-and-relabeling-basics/index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/logs-and-relabeling-basics/ + - /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/logs-and-relabeling-basics/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/logs-and-relabeling-basics/ description: Learn how to relabel metrics and collect logs title: Logs and relabeling basics @@ -134,7 +134,7 @@ If you re-execute the query, you can see the new log lines. ![Grafana Explore view of example log lines](/media/docs/agent/screenshot-flow-by-example-log-lines.png) -If you are curious how {{< param "PRODUCT_ROOT_NAME" >}} keeps track of where it is in a log file, you can look at `data-agent/loki.source.file.local_files/positions.yml`. 
+If you are curious how {{< param "PRODUCT_ROOT_NAME" >}} keeps track of where it is in a log file, you can look at `data-agent/loki.source.file.local_files/positions.yml`. If you delete this file, {{< param "PRODUCT_ROOT_NAME" >}} starts reading from the beginning of the file again, which is why keeping the {{< param "PRODUCT_ROOT_NAME" >}}'s data directory in a persistent location is desirable. ## Exercise @@ -224,7 +224,7 @@ loki.write "local_loki" { This exercise is more challenging than the previous one. If you are having trouble, skip it and move to the next section, which will cover some of the concepts used here. You can always come back to this exercise later. {{< /admonition >}} -This exercise will build on the previous one, though it's more involved. +This exercise will build on the previous one, though it's more involved. Let's say we want to extract the `level` from the logs and add it as a label. As a starting point, look at [loki.process][]. This component allows you to perform processing on logs, including extracting values from log contents. @@ -305,4 +305,3 @@ loki.write "local_loki" { ## Finishing up and next steps You have learned the concepts of components, attributes, and expressions. You have also seen how to use some standard library components to collect metrics and logs. In the next tutorial, you will learn more about how to use the `loki.process` component to extract values from logs and use them. - diff --git a/docs/sources/flow/tutorials/flow-by-example/processing-logs/index.md b/docs/sources/flow/tutorials/flow-by-example/processing-logs/index.md index 327b40716c30..614589f2212f 100644 --- a/docs/sources/flow/tutorials/flow-by-example/processing-logs/index.md +++ b/docs/sources/flow/tutorials/flow-by-example/processing-logs/index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/processing-logs/ -- /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/processing-logs/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/processing-logs/ -- /docs/grafana-cloud/send-data/agent/flow/tutorials/processing-logs/ + - /docs/grafana-cloud/agent/flow/tutorials/flow-by-example/processing-logs/ + - /docs/grafana-cloud/monitor-infrastructure/agent/flow/tutorials/flow-by-example/processing-logs/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/tutorials/flow-by-example/processing-logs/ + - /docs/grafana-cloud/send-data/agent/flow/tutorials/processing-logs/ canonical: https://grafana.com/docs/agent/latest/flow/tutorials/flow-by-example/processing-logs/ description: Learn how to process logs title: Processing Logs @@ -146,12 +146,12 @@ Let's use an example log line to illustrate this, then go stage by stage, showin ```json { - "log": { - "is_secret": "true", - "level": "info", - "message": "This is a secret message!", - }, - "timestamp": "2023-11-16T06:01:50Z", + "log": { + "is_secret": "true", + "level": "info", + "message": "This is a secret message!" + }, + "timestamp": "2023-11-16T06:01:50Z" } ``` @@ -166,7 +166,7 @@ stage.json { } ``` -This stage parses the log line as JSON, extracts two values from it, `log` and `timestamp`, and puts them into the extracted map with keys `log` and `ts`, respectively. +This stage parses the log line as JSON, extracts two values from it, `log` and `timestamp`, and puts them into the extracted map with keys `log` and `ts`, respectively. 
{{< admonition type="note" >}} Supplying an empty string is shorthand for using the same key as in the input log line (so `log = ""` is the same as `log = "log"`). The _keys_ of the `expressions` object end up as the keys in the extracted map, and the _values_ are used as keys to look up in the parsed log line. @@ -192,12 +192,12 @@ Extracted map _after_ performing this stage: ```json { - "log": { - "is_secret": "true", - "level": "info", - "message": "This is a secret message!", - }, - "ts": "2023-11-16T06:01:50Z", + "log": { + "is_secret": "true", + "level": "info", + "message": "This is a secret message!" + }, + "ts": "2023-11-16T06:01:50Z" } ``` @@ -256,12 +256,12 @@ Extracted map _before_ performing this stage: ```json { - "log": { - "is_secret": "true", - "level": "info", - "message": "This is a secret message!", - }, - "ts": "2023-11-16T06:01:50Z", + "log": { + "is_secret": "true", + "level": "info", + "message": "This is a secret message!" + }, + "ts": "2023-11-16T06:01:50Z" } ``` @@ -269,15 +269,15 @@ Extracted map _after_ performing this stage: ```json { - "log": { - "is_secret": "true", - "level": "info", - "message": "This is a secret message!", - }, - "ts": "2023-11-16T06:01:50Z", + "log": { "is_secret": "true", "level": "info", - "log_line": "This is a secret message!", + "message": "This is a secret message!" + }, + "ts": "2023-11-16T06:01:50Z", + "is_secret": "true", + "level": "info", + "log_line": "This is a secret message!" } ``` @@ -344,7 +344,7 @@ curl localhost:9999/loki/api/v1/raw -XPOST -H "Content-Type: application/json" - ``` Now that you have sent some logs, let's see how they look in Grafana. -Navigate to [localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`. +Navigate to [localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`. Try querying for `{source="demo-api"}` and see if you can find the logs you sent. Try playing around with the values of `"level"`, `"message"`, `"timestamp"`, and `"is_secret"` and see how the logs change. @@ -355,7 +355,7 @@ You can also try adding more stages to the `loki.process` component to extract m ## Exercise Since you are already using Docker and Docker exports logs, let's get those logs into Loki. -You can refer to the [discovery.docker](https://grafana.com/docs/agent//flow/reference/components/discovery.docker/) and [loki.source.docker](https://grafana.com/docs/agent//flow/reference/components/loki.source.docker/) documentation for more information. +You can refer to the [discovery.docker](https://grafana.com/docs/agent//flow/reference/components/discovery.docker/) and [loki.source.docker](https://grafana.com/docs/agent//flow/reference/components/loki.source.docker/) documentation for more information. To ensure proper timestamps and other labels, make sure you use a `loki.process` component to process the logs before sending them to Loki. 
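+If you want a starting point before opening the collapsed solution, the sketch below outlines one possible pipeline. The component labels, Docker socket path, and Loki URL are assumptions based on the local setup used throughout these tutorials; the tutorial's own solution may differ in its details.
+
+```river
+// Discover running containers over the local Docker socket.
+discovery.docker "containers" {
+  host = "unix:///var/run/docker.sock"
+}
+
+// Tail logs from the discovered containers.
+loki.source.docker "containers" {
+  host       = "unix:///var/run/docker.sock"
+  targets    = discovery.docker.containers.targets
+  forward_to = [loki.process.docker_logs.receiver]
+}
+
+// Parse the Docker log format to recover timestamps and stream labels.
+loki.process "docker_logs" {
+  stage.docker {}
+  forward_to = [loki.write.local.receiver]
+}
+
+loki.write "local" {
+  endpoint {
+    url = "http://localhost:3100/loki/api/v1/push"
+  }
+}
+```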
@@ -404,4 +404,4 @@ loki.write "local_loki" { } ``` -{{< /collapse >}} \ No newline at end of file +{{< /collapse >}} diff --git a/docs/sources/operator/_index.md b/docs/sources/operator/_index.md index a39241c87a62..61d95259a5a4 100644 --- a/docs/sources/operator/_index.md +++ b/docs/sources/operator/_index.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/operator/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/ -- /docs/grafana-cloud/send-data/agent/operator/ + - /docs/grafana-cloud/agent/operator/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/ + - /docs/grafana-cloud/send-data/agent/operator/ canonical: https://grafana.com/docs/agent/latest/operator/ description: Learn about the static mode Kubernetes operator menuTitle: Static mode Kubernetes operator @@ -20,10 +20,10 @@ collect telemetry data from Kubernetes resources. Grafana Agent Operator supports consuming various [custom resources][] for telemetry collection: -* Prometheus Operator [ServiceMonitor][] resources for collecting metrics from Kubernetes [Services][]. -* Prometheus Operator [PodMonitor][] resources for collecting metrics from Kubernetes [Pods][]. -* Prometheus Operator [Probe][] resources for collecting metrics from Kubernetes [Ingresses][]. -* Custom [PodLogs][] resources for collecting logs. +- Prometheus Operator [ServiceMonitor][] resources for collecting metrics from Kubernetes [Services][]. +- Prometheus Operator [PodMonitor][] resources for collecting metrics from Kubernetes [Pods][]. +- Prometheus Operator [Probe][] resources for collecting metrics from Kubernetes [Ingresses][]. +- Custom [PodLogs][] resources for collecting logs. {{< admonition type="note" >}} Grafana Agent Operator does not collect traces. @@ -49,16 +49,17 @@ installation can be tedious. The following sections describe how to use Grafana Agent Operator: -| Topic | Describes | -|---|---| -| [Configure Kubernetes Monitoring using Agent Operator](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/configure-infrastructure-manually/k8s-agent-operator/) | Use the Kubernetes Monitoring solution to set up monitoring of your Kubernetes cluster and to install preconfigured dashboards and alerts. | -| [Install Grafana Agent Operator with Helm]({{< relref "./helm-getting-started" >}}) | How to deploy the Grafana Agent Operator into your Kubernetes cluster using the grafana-agent-operator Helm chart. | -| [Install Grafana Agent Operator]({{< relref "./getting-started" >}}) | How to deploy the Grafana Agent Operator into your Kubernetes cluster without using Helm. | -| [Deploy the Grafana Agent Operator resources]({{< relref "./deploy-agent-operator-resources" >}}) | How to roll out the Grafana Agent Operator custom resources, needed to begin monitoring your cluster. Complete this procedure *after* installing Grafana Agent Operator—either with or without Helm. | -| [Grafana Agent Operator architecture]({{< relref "./architecture" >}}) | Learn about the resources used by Agent Operator to collect telemetry data and how it discovers the hierarchy of custom resources, continually reconciling the hierarchy. | -| [Set up Agent Operator integrations]({{< relref "./operator-integrations" >}}) | Learn how to set up node-exporter and mysqld-exporter integrations. 
| +| Topic | Describes | +| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| [Configure Kubernetes Monitoring using Agent Operator](/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/configure-infrastructure-manually/k8s-agent-operator/) | Use the Kubernetes Monitoring solution to set up monitoring of your Kubernetes cluster and to install preconfigured dashboards and alerts. | +| [Install Grafana Agent Operator with Helm]({{< relref "./helm-getting-started" >}}) | How to deploy the Grafana Agent Operator into your Kubernetes cluster using the grafana-agent-operator Helm chart. | +| [Install Grafana Agent Operator]({{< relref "./getting-started" >}}) | How to deploy the Grafana Agent Operator into your Kubernetes cluster without using Helm. | +| [Deploy the Grafana Agent Operator resources]({{< relref "./deploy-agent-operator-resources" >}}) | How to roll out the Grafana Agent Operator custom resources, needed to begin monitoring your cluster. Complete this procedure _after_ installing Grafana Agent Operator—either with or without Helm. | +| [Grafana Agent Operator architecture]({{< relref "./architecture" >}}) | Learn about the resources used by Agent Operator to collect telemetry data and how it discovers the hierarchy of custom resources, continually reconciling the hierarchy. | +| [Set up Agent Operator integrations]({{< relref "./operator-integrations" >}}) | Learn how to set up node-exporter and mysqld-exporter integrations. 
| [Kubernetes operator]: https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/ + [static mode]: {{< relref "../static/" >}} [Services]: https://kubernetes.io/docs/concepts/services-networking/service/ [Pods]: https://kubernetes.io/docs/concepts/workloads/pods/ diff --git a/docs/sources/operator/add-custom-scrape-jobs.md b/docs/sources/operator/add-custom-scrape-jobs.md index 6f4fb9cc02df..7ce9cbbe5568 100644 --- a/docs/sources/operator/add-custom-scrape-jobs.md +++ b/docs/sources/operator/add-custom-scrape-jobs.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/operator/add-custom-scrape-jobs/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/add-custom-scrape-jobs/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/add-custom-scrape-jobs/ -- /docs/grafana-cloud/send-data/agent/operator/add-custom-scrape-jobs/ + - /docs/grafana-cloud/agent/operator/add-custom-scrape-jobs/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/add-custom-scrape-jobs/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/add-custom-scrape-jobs/ + - /docs/grafana-cloud/send-data/agent/operator/add-custom-scrape-jobs/ canonical: https://grafana.com/docs/agent/latest/operator/add-custom-scrape-jobs/ description: Learn how to add custom scrape jobs title: Add custom scrape jobs @@ -98,12 +98,12 @@ Note that you **should** always add these two relabel_configs for each custom jo - action: hashmod modulus: $(SHARDS) source_labels: - - __address__ + - __address__ target_label: __tmp_hash - action: keep regex: $(SHARD) source_labels: - - __tmp_hash + - __tmp_hash ``` These rules ensure if your GrafanaAgent has multiple metrics shards, only one diff --git a/docs/sources/operator/api.md b/docs/sources/operator/api.md index e2aa26ffd9b7..4227c9eedbff 100644 --- a/docs/sources/operator/api.md +++ b/docs/sources/operator/api.md @@ -1,566 +1,733 @@ --- aliases: -- /docs/agent/latest/operator/crd/ -- /docs/grafana-cloud/agent/operator/api/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/api/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/api/ -- /docs/grafana-cloud/send-data/agent/operator/api/ + - /docs/agent/latest/operator/crd/ + - /docs/grafana-cloud/agent/operator/api/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/api/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/api/ + - /docs/grafana-cloud/send-data/agent/operator/api/ canonical: https://grafana.com/docs/agent/latest/operator/api/ title: Custom Resource Definition Reference description: Learn about the Grafana Agent API weight: 500 --- + # Custom Resource Definition Reference + ## Resource Types: -* [Deployment](#monitoring.grafana.com/v1alpha1.Deployment) -* [GrafanaAgent](#monitoring.grafana.com/v1alpha1.GrafanaAgent) -* [IntegrationsDeployment](#monitoring.grafana.com/v1alpha1.IntegrationsDeployment) -* [LogsDeployment](#monitoring.grafana.com/v1alpha1.LogsDeployment) -* [MetricsDeployment](#monitoring.grafana.com/v1alpha1.MetricsDeployment) + +- [Deployment](#monitoring.grafana.com/v1alpha1.Deployment) +- [GrafanaAgent](#monitoring.grafana.com/v1alpha1.GrafanaAgent) +- [IntegrationsDeployment](#monitoring.grafana.com/v1alpha1.IntegrationsDeployment) +- [LogsDeployment](#monitoring.grafana.com/v1alpha1.LogsDeployment) +- [MetricsDeployment](#monitoring.grafana.com/v1alpha1.MetricsDeployment) + ### Deployment -Deployment is a set of discovered resources relative to a 
GrafanaAgent. The tree of resources contained in a Deployment form the resource hierarchy used for reconciling a GrafanaAgent. -#### Fields -|Field|Description| -|-|-| -|apiVersion|string
`monitoring.grafana.com/v1alpha1`| -|kind|string
`Deployment`| -|`Agent`
_[GrafanaAgent](#monitoring.grafana.com/v1alpha1.GrafanaAgent)_| Root resource in the deployment. | -|`Metrics`
_[[]MetricsDeployment](#monitoring.grafana.com/v1alpha1.MetricsDeployment)_| Metrics resources discovered by Agent. | -|`Logs`
_[[]LogsDeployment](#monitoring.grafana.com/v1alpha1.LogsDeployment)_| Logs resources discovered by Agent. | -|`Integrations`
_[[]IntegrationsDeployment](#monitoring.grafana.com/v1alpha1.IntegrationsDeployment)_| Integrations resources discovered by Agent. | -|`Secrets`
_[github.com/grafana/agent/static/operator/assets.SecretStore](https://pkg.go.dev/github.com/grafana/agent/static/operator/assets#SecretStore)_| The full list of Secrets referenced by resources in the Deployment. | + +Deployment is a set of discovered resources relative to a GrafanaAgent. The tree of resources contained in a Deployment forms the resource hierarchy used for reconciling a GrafanaAgent. + +#### Fields + +| Field | Description | +| --------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------- | +| apiVersion | string
`monitoring.grafana.com/v1alpha1` | +| kind | string
`Deployment` | +| `Agent`
_[GrafanaAgent](#monitoring.grafana.com/v1alpha1.GrafanaAgent)_ | Root resource in the deployment. | +| `Metrics`
_[[]MetricsDeployment](#monitoring.grafana.com/v1alpha1.MetricsDeployment)_ | Metrics resources discovered by Agent. | +| `Logs`
_[[]LogsDeployment](#monitoring.grafana.com/v1alpha1.LogsDeployment)_ | Logs resources discovered by Agent. | +| `Integrations`
_[[]IntegrationsDeployment](#monitoring.grafana.com/v1alpha1.IntegrationsDeployment)_ | Integrations resources discovered by Agent. | +| `Secrets`
_[github.com/grafana/agent/static/operator/assets.SecretStore](https://pkg.go.dev/github.com/grafana/agent/static/operator/assets#SecretStore)_ | The full list of Secrets referenced by resources in the Deployment. | + ### GrafanaAgent + (Appears on:[Deployment](#monitoring.grafana.com/v1alpha1.Deployment)) -GrafanaAgent defines a Grafana Agent deployment. -#### Fields -|Field|Description| -|-|-| -|apiVersion|string
`monitoring.grafana.com/v1alpha1`| -|kind|string
`GrafanaAgent`| -|`metadata`
_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_| Refer to the Kubernetes API documentation for the fields of the `metadata` field. | -|`spec`
_[GrafanaAgentSpec](#monitoring.grafana.com/v1alpha1.GrafanaAgentSpec)_| Spec holds the specification of the desired behavior for the Grafana Agent cluster. | -|`logLevel`
_string_| LogLevel controls the log level of the generated pods. Defaults to "info" if not set. | -|`logFormat`
_string_| LogFormat controls the logging format of the generated pods. Defaults to "logfmt" if not set. | -|`apiServer`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.APIServerConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.APIServerConfig)_| APIServerConfig lets you specify a host and auth methods to access the Kubernetes API server. If left empty, the Agent assumes that it is running inside of the cluster and will discover API servers automatically and use the pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount. | -|`podMetadata`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.EmbeddedObjectMetadata](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.EmbeddedObjectMetadata)_| PodMetadata configures Labels and Annotations which are propagated to created Grafana Agent pods. | -|`version`
_string_| Version of Grafana Agent to be deployed. | -|`paused`
_bool_| Paused prevents actions except for deletion to be performed on the underlying managed objects. | -|`image`
_string_| Image, when specified, overrides the image used to run Agent. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | -|`configReloaderVersion`
_string_| Version of Config Reloader to be deployed. | -|`configReloaderImage`
_string_| Image, when specified, overrides the image used to run Config Reloader. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | -|`imagePullSecrets`
_[[]Kubernetes core/v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#localobjectreference-v1-core)_| ImagePullSecrets holds an optional list of references to Secrets within the same namespace used for pulling the Grafana Agent image from registries. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -|`storage`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.StorageSpec](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.StorageSpec)_| Storage spec to specify how storage will be used. | -|`volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_| Volumes allows configuration of additional volumes on the output StatefulSet definition. The volumes specified are appended to other volumes that are generated as a result of StorageSpec objects. | -|`volumeMounts`
_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_| VolumeMounts lets you configure additional VolumeMounts on the output StatefulSet definition. Specified VolumeMounts are appended to other VolumeMounts generated as a result of StorageSpec objects in the Grafana Agent container. | -|`resources`
_[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)_| Resources holds requests and limits for individual pods. | -|`nodeSelector`
_map[string]string_| NodeSelector defines which nodes pods should be scheduling on. | -|`serviceAccountName`
_string_| ServiceAccountName is the name of the ServiceAccount to use for running Grafana Agent pods. | -|`secrets`
_[]string_| Secrets is a list of secrets in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The secrets are mounted into /var/lib/grafana-agent/extra-secrets/<secret-name>. | -|`configMaps`
_[]string_| ConfigMaps is a list of config maps in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The ConfigMaps are mounted into /var/lib/grafana-agent/extra-configmaps/<configmap-name>. | -|`affinity`
_[Kubernetes core/v1.Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#affinity-v1-core)_| Affinity, if specified, controls pod scheduling constraints. | -|`tolerations`
_[[]Kubernetes core/v1.Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#toleration-v1-core)_| Tolerations, if specified, controls the pod's tolerations. | -|`topologySpreadConstraints`
_[[]Kubernetes core/v1.TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core)_| TopologySpreadConstraints, if specified, controls the pod's topology spread constraints. | -|`securityContext`
_[Kubernetes core/v1.PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podsecuritycontext-v1-core)_| SecurityContext holds pod-level security attributes and common container settings. When unspecified, defaults to the default PodSecurityContext. | -|`containers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_| Containers lets you inject additional containers or modify operator-generated containers. This can be used to add an authentication proxy to a Grafana Agent pod or to change the behavior of an operator-generated container. Containers described here modify an operator-generated container if they share the same name and if modifications are done via a strategic merge patch. The current container names are: `grafana-agent` and `config-reloader`. Overriding containers is entirely outside the scope of what the Grafana Agent team supports and by doing so, you accept that this behavior may break at any time without notice. | -|`initContainers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_| InitContainers let you add initContainers to the pod definition. These can be used to, for example, fetch secrets for injection into the Grafana Agent configuration from external sources. Errors during the execution of an initContainer cause the pod to restart. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the Grafana Agent maintainers support and by doing so, you accept that this behavior may break at any time without notice. | -|`priorityClassName`
_string_| PriorityClassName is the priority class assigned to pods. | -|`runtimeClassName`
_string_| RuntimeClassName is the runtime class assigned to pods. | -|`portName`
_string_| Port name used for the pods and governing service. This defaults to agent-metrics. | -|`metrics`
_[MetricsSubsystemSpec](#monitoring.grafana.com/v1alpha1.MetricsSubsystemSpec)_| Metrics controls the metrics subsystem of the Agent and settings unique to metrics-specific pods that are deployed. | -|`logs`
_[LogsSubsystemSpec](#monitoring.grafana.com/v1alpha1.LogsSubsystemSpec)_| Logs controls the logging subsystem of the Agent and settings unique to logging-specific pods that are deployed. | -|`integrations`
_[IntegrationsSubsystemSpec](#monitoring.grafana.com/v1alpha1.IntegrationsSubsystemSpec)_| Integrations controls the integration subsystem of the Agent and settings unique to deployed integration-specific pods. | -|`enableConfigReadAPI`
_bool_| enableConfigReadAPI enables the read API for viewing the currently running config port 8080 on the agent. +kubebuilder:default=false | -|`disableReporting`
_bool_| disableReporting disables reporting of enabled feature flags to Grafana. +kubebuilder:default=false | -|`disableSupportBundle`
_bool_| disableSupportBundle disables the generation of support bundles. +kubebuilder:default=false | +GrafanaAgent defines a Grafana Agent deployment. + +#### Fields + +| Field | Description | +| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| apiVersion | string
`monitoring.grafana.com/v1alpha1` | +| kind | string
`GrafanaAgent` | +| `metadata`
_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to the Kubernetes API documentation for the fields of the `metadata` field. | +| `spec`
_[GrafanaAgentSpec](#monitoring.grafana.com/v1alpha1.GrafanaAgentSpec)_ | Spec holds the specification of the desired behavior for the Grafana Agent cluster. | +| `logLevel`
_string_ | LogLevel controls the log level of the generated pods. Defaults to "info" if not set. | +| `logFormat`
_string_ | LogFormat controls the logging format of the generated pods. Defaults to "logfmt" if not set. | +| `apiServer`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.APIServerConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.APIServerConfig)_ | APIServerConfig lets you specify a host and auth methods to access the Kubernetes API server. If left empty, the Agent assumes that it is running inside of the cluster and will discover API servers automatically and use the pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount. | +| `podMetadata`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.EmbeddedObjectMetadata](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.EmbeddedObjectMetadata)_ | PodMetadata configures Labels and Annotations which are propagated to created Grafana Agent pods. | +| `version`
_string_ | Version of Grafana Agent to be deployed. | +| `paused`
_bool_ | Paused prevents any actions, except deletion, from being performed on the underlying managed objects. | +| `image`
_string_ | Image, when specified, overrides the image used to run Agent. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | +| `configReloaderVersion`
_string_ | Version of Config Reloader to be deployed. | +| `configReloaderImage`
_string_ | Image, when specified, overrides the image used to run Config Reloader. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | +| `imagePullSecrets`
_[[]Kubernetes core/v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#localobjectreference-v1-core)_ | ImagePullSecrets holds an optional list of references to Secrets within the same namespace used for pulling the Grafana Agent image from registries. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `storage`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.StorageSpec](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.StorageSpec)_ | Storage spec to specify how storage will be used. | +| `volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_ | Volumes allows configuration of additional volumes on the output StatefulSet definition. The volumes specified are appended to other volumes that are generated as a result of StorageSpec objects. | +| `volumeMounts`
_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_ | VolumeMounts lets you configure additional VolumeMounts on the output StatefulSet definition. Specified VolumeMounts are appended to other VolumeMounts generated as a result of StorageSpec objects in the Grafana Agent container. | +| `resources`
_[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)_ | Resources holds requests and limits for individual pods. | +| `nodeSelector`
_map[string]string_ | NodeSelector defines which nodes pods should be scheduled on. | +| `serviceAccountName`
_string_ | ServiceAccountName is the name of the ServiceAccount to use for running Grafana Agent pods. | +| `secrets`
_[]string_ | Secrets is a list of secrets in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The secrets are mounted into /var/lib/grafana-agent/extra-secrets/<secret-name>. | +| `configMaps`
_[]string_ | ConfigMaps is a list of config maps in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The ConfigMaps are mounted into /var/lib/grafana-agent/extra-configmaps/<configmap-name>. | +| `affinity`
_[Kubernetes core/v1.Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#affinity-v1-core)_ | Affinity, if specified, controls pod scheduling constraints. | +| `tolerations`
_[[]Kubernetes core/v1.Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#toleration-v1-core)_ | Tolerations, if specified, control the pod's tolerations. | +| `topologySpreadConstraints`
_[[]Kubernetes core/v1.TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core)_ | TopologySpreadConstraints, if specified, control the pod's topology spread constraints. | +| `securityContext`
_[Kubernetes core/v1.PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podsecuritycontext-v1-core)_ | SecurityContext holds pod-level security attributes and common container settings. When unspecified, defaults to the default PodSecurityContext. | +| `containers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_ | Containers lets you inject additional containers or modify operator-generated containers. This can be used to add an authentication proxy to a Grafana Agent pod or to change the behavior of an operator-generated container. Containers described here modify an operator-generated container if they share the same name and if modifications are done via a strategic merge patch. The current container names are: `grafana-agent` and `config-reloader`. Overriding containers is entirely outside the scope of what the Grafana Agent team supports and by doing so, you accept that this behavior may break at any time without notice. | +| `initContainers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_ | InitContainers let you add initContainers to the pod definition. These can be used to, for example, fetch secrets for injection into the Grafana Agent configuration from external sources. Errors during the execution of an initContainer cause the pod to restart. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the Grafana Agent maintainers support and by doing so, you accept that this behavior may break at any time without notice. | +| `priorityClassName`
_string_ | PriorityClassName is the priority class assigned to pods. | +| `runtimeClassName`
_string_ | RuntimeClassName is the runtime class assigned to pods. | +| `portName`
_string_ | Port name used for the pods and governing service. This defaults to agent-metrics. | +| `metrics`
_[MetricsSubsystemSpec](#monitoring.grafana.com/v1alpha1.MetricsSubsystemSpec)_ | Metrics controls the metrics subsystem of the Agent and settings unique to metrics-specific pods that are deployed. | +| `logs`
_[LogsSubsystemSpec](#monitoring.grafana.com/v1alpha1.LogsSubsystemSpec)_ | Logs controls the logging subsystem of the Agent and settings unique to logging-specific pods that are deployed. | +| `integrations`
_[IntegrationsSubsystemSpec](#monitoring.grafana.com/v1alpha1.IntegrationsSubsystemSpec)_ | Integrations controls the integration subsystem of the Agent and settings unique to deployed integration-specific pods. | +| `enableConfigReadAPI`
_bool_ | enableConfigReadAPI enables the read API for viewing the currently running configuration on port 8080 of the agent. Defaults to false. | +| `disableReporting`
_bool_ | disableReporting disables reporting of enabled feature flags to Grafana. Defaults to false. | +| `disableSupportBundle`
_bool_ | disableSupportBundle disables the generation of support bundles. Defaults to false. |
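+
+To make the reference concrete, a minimal `GrafanaAgent` manifest might look like the sketch below. The names and labels are illustrative assumptions, not defaults, and the `instanceSelector` blocks come from the Metrics/LogsSubsystemSpec types referenced above:
+
+```yaml
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: GrafanaAgent
+metadata:
+  name: grafana-agent        # hypothetical name
+  namespace: default
+spec:
+  logLevel: info             # "info" is the documented default
+  logFormat: logfmt          # "logfmt" is the documented default
+  serviceAccountName: grafana-agent
+  metrics:
+    instanceSelector:        # selects MetricsInstance resources by label
+      matchLabels:
+        agent: grafana-agent
+  logs:
+    instanceSelector:        # selects LogsInstance resources by label
+      matchLabels:
+        agent: grafana-agent
+```
+ ### IntegrationsDeployment + (Appears on:[Deployment](#monitoring.grafana.com/v1alpha1.Deployment)) -IntegrationsDeployment is a set of discovered resources relative to an IntegrationsDeployment. -#### Fields -|Field|Description| -|-|-| -|apiVersion|string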
`monitoring.grafana.com/v1alpha1`| -|kind|string
`IntegrationsDeployment`| -|`Instance`
_[Integration](#monitoring.grafana.com/v1alpha1.Integration)_| | +IntegrationsDeployment is a set of discovered resources relative to an Integration. + +#### Fields + +| Field | Description | +| ---------------------------------------------------------------------------- | -------------------------------------------- | +| apiVersion | string
`monitoring.grafana.com/v1alpha1` | +| kind | string
`IntegrationsDeployment` | +| `Instance`
_[Integration](#monitoring.grafana.com/v1alpha1.Integration)_ | | + ### LogsDeployment + (Appears on:[Deployment](#monitoring.grafana.com/v1alpha1.Deployment)) -LogsDeployment is a set of discovered resources relative to a LogsInstance. -#### Fields -|Field|Description| -|-|-| -|apiVersion|string
`monitoring.grafana.com/v1alpha1`| -|kind|string
`LogsDeployment`| -|`Instance`
_[LogsInstance](#monitoring.grafana.com/v1alpha1.LogsInstance)_| | -|`PodLogs`
_[[]PodLogs](#monitoring.grafana.com/v1alpha1.PodLogs)_| | +LogsDeployment is a set of discovered resources relative to a LogsInstance. + +#### Fields + +| Field | Description | +| ------------------------------------------------------------------------------ | -------------------------------------------- | +| apiVersion | string
`monitoring.grafana.com/v1alpha1` | +| kind | string
`LogsDeployment` | +| `Instance`
_[LogsInstance](#monitoring.grafana.com/v1alpha1.LogsInstance)_ | | +| `PodLogs`
_[[]PodLogs](#monitoring.grafana.com/v1alpha1.PodLogs)_ | | + ### MetricsDeployment + (Appears on:[Deployment](#monitoring.grafana.com/v1alpha1.Deployment)) -MetricsDeployment is a set of discovered resources relative to a MetricsInstance. -#### Fields -|Field|Description| -|-|-| -|apiVersion|string
`monitoring.grafana.com/v1alpha1`| -|kind|string
`MetricsDeployment`| -|`Instance`
_[MetricsInstance](#monitoring.grafana.com/v1alpha1.MetricsInstance)_| | -|`ServiceMonitors`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.ServiceMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.ServiceMonitor)_| | -|`PodMonitors`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.PodMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor)_| | -|`Probes`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.Probe](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.Probe)_| | +MetricsDeployment is a set of discovered resources relative to a MetricsInstance. + +#### Fields + +| Field | Description | +| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------- | +| apiVersion | string
`monitoring.grafana.com/v1alpha1` | +| kind | string
`MetricsDeployment` | +| `Instance`
_[MetricsInstance](#monitoring.grafana.com/v1alpha1.MetricsInstance)_ | | +| `ServiceMonitors`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.ServiceMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.ServiceMonitor)_ | | +| `PodMonitors`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.PodMonitor](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor)_ | | +| `Probes`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.Probe](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.Probe)_ | | + ### CRIStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -CRIStageSpec is a parsing stage that reads log lines using the standard CRI logging format. It needs no defined fields. +CRIStageSpec is a parsing stage that reads log lines using the standard CRI logging format. It needs no defined fields. + ### DockerStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -DockerStageSpec is a parsing stage that reads log lines using the standard Docker logging format. It needs no defined fields. +DockerStageSpec is a parsing stage that reads log lines using the standard Docker logging format. It needs no defined fields. + ### DropStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -DropStageSpec is a filtering stage that lets you drop certain logs. -#### Fields -|Field|Description| -|-|-| -|`source`
_string_| Name from the extract data to parse. If empty, uses the log message. | -|`expression`
_string_| RE2 regular expression. If source is provided, the regex attempts to match the source. If no source is provided, then the regex attempts to attach the log line. If the provided regex matches the log line or a provided source, the line is dropped. | -|`value`
_string_| Value can only be specified when source is specified. If the value provided is an exact match for the given source then the line will be dropped. Mutually exclusive with expression. | -|`olderThan`
_string_| OlderThan will be parsed as a Go duration. If the log line's timestamp is older than the current time minus the provided duration, it will be dropped. | -|`longerThan`
_string_| LongerThan will drop a log line if it its content is longer than this value (in bytes). Can be expressed as an integer (8192) or a number with a suffix (8kb). | -|`dropCounterReason`
_string_| Every time a log line is dropped, the metric logentry_dropped_lines_total is incremented. A "reason" label is added, and can be customized by providing a custom value here. Defaults to "drop_stage". | +DropStageSpec is a filtering stage that lets you drop certain logs. + +#### Fields + +| Field | Description | +| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `source`
_string_ | Name from the extract data to parse. If empty, uses the log message. | +| `expression`
_string_ | RE2 regular expression. If source is provided, the regex attempts to match the source. If no source is provided, then the regex attempts to match the log line. If the provided regex matches the log line or a provided source, the line is dropped. | +| `value`
_string_ | Value can only be specified when source is specified. If the value provided is an exact match for the given source then the line will be dropped. Mutually exclusive with expression. | +| `olderThan`
_string_ | OlderThan will be parsed as a Go duration. If the log line's timestamp is older than the current time minus the provided duration, it will be dropped. | +| `longerThan`
_string_ | LongerThan will drop a log line if its content is longer than this value (in bytes). Can be expressed as an integer (8192) or a number with a suffix (8kb). | +| `dropCounterReason`
_string_ | Every time a log line is dropped, the metric logentry_dropped_lines_total is incremented. A "reason" label is added, and can be customized by providing a custom value here. Defaults to "drop_stage". |
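+
+As an illustration, drop stages might appear in a `PodLogs` resource's `pipelineStages` list as in the sketch below. The expressions and reason labels are made-up values, and each condition is kept in its own stage so the two rules apply independently:
+
+```yaml
+# Fragment of a PodLogs spec; each list entry is a PipelineStageSpec.
+pipelineStages:
+  - cri: {}                          # parse lines using the CRI format; no fields needed
+  - drop:
+      expression: ".*level=debug.*"  # drop lines matching this RE2 regular expression
+      dropCounterReason: debug_line  # custom "reason" label on the dropped-lines metric
+  - drop:
+      olderThan: 24h                 # drop lines with timestamps older than 24 hours
+      dropCounterReason: stale_line
+```
+ ### GrafanaAgentSpec + (Appears on:[GrafanaAgent](#monitoring.grafana.com/v1alpha1.GrafanaAgent)) -GrafanaAgentSpec is a specification of the desired behavior of the Grafana Agent cluster. -#### Fields -|Field|Description| -|-|-| -|`logLevel`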
_string_| LogLevel controls the log level of the generated pods. Defaults to "info" if not set. | -|`logFormat`
_string_| LogFormat controls the logging format of the generated pods. Defaults to "logfmt" if not set. | -|`apiServer`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.APIServerConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.APIServerConfig)_| APIServerConfig lets you specify a host and auth methods to access the Kubernetes API server. If left empty, the Agent assumes that it is running inside of the cluster and will discover API servers automatically and use the pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount. | -|`podMetadata`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.EmbeddedObjectMetadata](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.EmbeddedObjectMetadata)_| PodMetadata configures Labels and Annotations which are propagated to created Grafana Agent pods. | -|`version`
_string_| Version of Grafana Agent to be deployed. | -|`paused`
_bool_| Paused prevents actions except for deletion to be performed on the underlying managed objects. | -|`image`
_string_| Image, when specified, overrides the image used to run Agent. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | -|`configReloaderVersion`
_string_| Version of Config Reloader to be deployed. | -|`configReloaderImage`
_string_| Image, when specified, overrides the image used to run Config Reloader. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | -|`imagePullSecrets`
_[[]Kubernetes core/v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#localobjectreference-v1-core)_| ImagePullSecrets holds an optional list of references to Secrets within the same namespace used for pulling the Grafana Agent image from registries. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -|`storage`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.StorageSpec](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.StorageSpec)_| Storage spec to specify how storage will be used. | -|`volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_| Volumes allows configuration of additional volumes on the output StatefulSet definition. The volumes specified are appended to other volumes that are generated as a result of StorageSpec objects. | -|`volumeMounts`
_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_| VolumeMounts lets you configure additional VolumeMounts on the output StatefulSet definition. Specified VolumeMounts are appended to other VolumeMounts generated as a result of StorageSpec objects in the Grafana Agent container. | -|`resources`
_[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)_| Resources holds requests and limits for individual pods. | -|`nodeSelector`
_map[string]string_| NodeSelector defines which nodes pods should be scheduling on. | -|`serviceAccountName`
_string_| ServiceAccountName is the name of the ServiceAccount to use for running Grafana Agent pods. | -|`secrets`
_[]string_| Secrets is a list of secrets in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The secrets are mounted into /var/lib/grafana-agent/extra-secrets/<secret-name>. | -|`configMaps`
_[]string_| ConfigMaps is a list of config maps in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The ConfigMaps are mounted into /var/lib/grafana-agent/extra-configmaps/<configmap-name>. | -|`affinity`
_[Kubernetes core/v1.Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#affinity-v1-core)_| Affinity, if specified, controls pod scheduling constraints. | -|`tolerations`
_[[]Kubernetes core/v1.Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#toleration-v1-core)_| Tolerations, if specified, controls the pod's tolerations. | -|`topologySpreadConstraints`
_[[]Kubernetes core/v1.TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core)_| TopologySpreadConstraints, if specified, controls the pod's topology spread constraints. | -|`securityContext`
_[Kubernetes core/v1.PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podsecuritycontext-v1-core)_| SecurityContext holds pod-level security attributes and common container settings. When unspecified, defaults to the default PodSecurityContext. | -|`containers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_| Containers lets you inject additional containers or modify operator-generated containers. This can be used to add an authentication proxy to a Grafana Agent pod or to change the behavior of an operator-generated container. Containers described here modify an operator-generated container if they share the same name and if modifications are done via a strategic merge patch. The current container names are: `grafana-agent` and `config-reloader`. Overriding containers is entirely outside the scope of what the Grafana Agent team supports and by doing so, you accept that this behavior may break at any time without notice. | -|`initContainers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_| InitContainers let you add initContainers to the pod definition. These can be used to, for example, fetch secrets for injection into the Grafana Agent configuration from external sources. Errors during the execution of an initContainer cause the pod to restart. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the Grafana Agent maintainers support and by doing so, you accept that this behavior may break at any time without notice. | -|`priorityClassName`
_string_| PriorityClassName is the priority class assigned to pods. | -|`runtimeClassName`
_string_| RuntimeClassName is the runtime class assigned to pods. | -|`portName`
_string_| Port name used for the pods and governing service. This defaults to agent-metrics. | -|`metrics`
_[MetricsSubsystemSpec](#monitoring.grafana.com/v1alpha1.MetricsSubsystemSpec)_| Metrics controls the metrics subsystem of the Agent and settings unique to metrics-specific pods that are deployed. | -|`logs`
_[LogsSubsystemSpec](#monitoring.grafana.com/v1alpha1.LogsSubsystemSpec)_| Logs controls the logging subsystem of the Agent and settings unique to logging-specific pods that are deployed. | -|`integrations`
_[IntegrationsSubsystemSpec](#monitoring.grafana.com/v1alpha1.IntegrationsSubsystemSpec)_| Integrations controls the integration subsystem of the Agent and settings unique to deployed integration-specific pods. | -|`enableConfigReadAPI`
_bool_| enableConfigReadAPI enables the read API for viewing the currently running config port 8080 on the agent. +kubebuilder:default=false | -|`disableReporting`
_bool_| disableReporting disables reporting of enabled feature flags to Grafana. +kubebuilder:default=false | -|`disableSupportBundle`
_bool_| disableSupportBundle disables the generation of support bundles. +kubebuilder:default=false | +GrafanaAgentSpec is a specification of the desired behavior of the Grafana Agent cluster. + +#### Fields + +| Field | Description | +| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `logLevel`
_string_ | LogLevel controls the log level of the generated pods. Defaults to "info" if not set. | +| `logFormat`
_string_ | LogFormat controls the logging format of the generated pods. Defaults to "logfmt" if not set. | +| `apiServer`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.APIServerConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.APIServerConfig)_ | APIServerConfig lets you specify a host and auth methods to access the Kubernetes API server. If left empty, the Agent assumes that it is running inside of the cluster and will discover API servers automatically and use the pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount. | +| `podMetadata`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.EmbeddedObjectMetadata](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.EmbeddedObjectMetadata)_ | PodMetadata configures Labels and Annotations which are propagated to created Grafana Agent pods. | +| `version`
_string_ | Version of Grafana Agent to be deployed. | +| `paused`
_bool_ | Paused prevents any actions, except deletion, from being performed on the underlying managed objects. | +| `image`
_string_ | Image, when specified, overrides the image used to run Agent. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | +| `configReloaderVersion`
_string_ | Version of Config Reloader to be deployed. | +| `configReloaderImage`
_string_ | Image, when specified, overrides the image used to run Config Reloader. Specify the image along with a tag. You still need to set the version to ensure Grafana Agent Operator knows which version of Grafana Agent is being configured. | +| `imagePullSecrets`
_[[]Kubernetes core/v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#localobjectreference-v1-core)_ | ImagePullSecrets holds an optional list of references to Secrets within the same namespace used for pulling the Grafana Agent image from registries. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `storage`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.StorageSpec](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.StorageSpec)_ | Storage spec to specify how storage will be used. | +| `volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_ | Volumes allows configuration of additional volumes on the output StatefulSet definition. The volumes specified are appended to other volumes that are generated as a result of StorageSpec objects. | +| `volumeMounts`
_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_ | VolumeMounts lets you configure additional VolumeMounts on the output StatefulSet definition. Specified VolumeMounts are appended to other VolumeMounts generated as a result of StorageSpec objects in the Grafana Agent container. | +| `resources`
_[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)_ | Resources holds requests and limits for individual pods. | +| `nodeSelector`
_map[string]string_ | NodeSelector defines which nodes pods should be scheduled on. | +| `serviceAccountName`
_string_ | ServiceAccountName is the name of the ServiceAccount to use for running Grafana Agent pods. | +| `secrets`
_[]string_ | Secrets is a list of secrets in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The secrets are mounted into /var/lib/grafana-agent/extra-secrets/<secret-name>. | +| `configMaps`
_[]string_ | ConfigMaps is a list of config maps in the same namespace as the GrafanaAgent object which will be mounted into each running Grafana Agent pod. The ConfigMaps are mounted into /var/lib/grafana-agent/extra-configmaps/<configmap-name>. | +| `affinity`
_[Kubernetes core/v1.Affinity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#affinity-v1-core)_ | Affinity, if specified, controls pod scheduling constraints. | +| `tolerations`
_[[]Kubernetes core/v1.Toleration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#toleration-v1-core)_ | Tolerations, if specified, control the pod's tolerations. | +| `topologySpreadConstraints`
_[[]Kubernetes core/v1.TopologySpreadConstraint](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core)_ | TopologySpreadConstraints, if specified, control the pod's topology spread constraints. | +| `securityContext`
_[Kubernetes core/v1.PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podsecuritycontext-v1-core)_ | SecurityContext holds pod-level security attributes and common container settings. When unspecified, defaults to the default PodSecurityContext. | +| `containers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_ | Containers lets you inject additional containers or modify operator-generated containers. This can be used to add an authentication proxy to a Grafana Agent pod or to change the behavior of an operator-generated container. Containers described here modify an operator-generated container if they share the same name and if modifications are done via a strategic merge patch. The current container names are: `grafana-agent` and `config-reloader`. Overriding containers is entirely outside the scope of what the Grafana Agent team supports and by doing so, you accept that this behavior may break at any time without notice. | +| `initContainers`
_[[]Kubernetes core/v1.Container](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core)_ | InitContainers let you add initContainers to the pod definition. These can be used to, for example, fetch secrets for injection into the Grafana Agent configuration from external sources. Errors during the execution of an initContainer cause the pod to restart. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Using initContainers for any use case other than secret fetching is entirely outside the scope of what the Grafana Agent maintainers support and by doing so, you accept that this behavior may break at any time without notice. | +| `priorityClassName`
_string_ | PriorityClassName is the priority class assigned to pods. | +| `runtimeClassName`
_string_ | RuntimeClassName is the runtime class assigned to pods. | +| `portName`
_string_ | Port name used for the pods and governing service. This defaults to agent-metrics. | +| `metrics`
_[MetricsSubsystemSpec](#monitoring.grafana.com/v1alpha1.MetricsSubsystemSpec)_ | Metrics controls the metrics subsystem of the Agent and settings unique to metrics-specific pods that are deployed. | +| `logs`
_[LogsSubsystemSpec](#monitoring.grafana.com/v1alpha1.LogsSubsystemSpec)_ | Logs controls the logging subsystem of the Agent and settings unique to logging-specific pods that are deployed. | +| `integrations`
_[IntegrationsSubsystemSpec](#monitoring.grafana.com/v1alpha1.IntegrationsSubsystemSpec)_ | Integrations controls the integration subsystem of the Agent and settings unique to deployed integration-specific pods. | +| `enableConfigReadAPI`
_bool_ | enableConfigReadAPI enables the read API for viewing the currently running configuration on port 8080 of the agent. Defaults to false. | +| `disableReporting`
_bool_ | disableReporting disables reporting of enabled feature flags to Grafana. Defaults to false. | +| `disableSupportBundle`
_bool_ | disableSupportBundle disables the generation of support bundles. Defaults to false. | + ### Integration + (Appears on:[IntegrationsDeployment](#monitoring.grafana.com/v1alpha1.IntegrationsDeployment)) -Integration runs a single Grafana Agent integration. Integrations that generate telemetry must be configured to send that telemetry somewhere, such as autoscrape for exporter-based integrations. Integrations have access to the LogsInstances and MetricsInstances in the same GrafanaAgent resource set, referenced by the <namespace>/<name> of the Instance resource. For example, if there is a default/production MetricsInstance, you can configure a supported integration's autoscrape block with: autoscrape: enable: true metrics_instance: default/production There is currently no way for telemetry created by an Operator-managed integration to be collected from outside of the integration itself. -#### Fields -|Field|Description| -|-|-| -|`metadata`
_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_| Refer to the Kubernetes API documentation for the fields of the `metadata` field. | -|`spec`
_[IntegrationSpec](#monitoring.grafana.com/v1alpha1.IntegrationSpec)_| Specifies the desired behavior of the Integration. | -|`name`
_string_| Name of the integration to run (e.g., "node_exporter", "mysqld_exporter"). | -|`type`
_[IntegrationType](#monitoring.grafana.com/v1alpha1.IntegrationType)_| Type informs Grafana Agent Operator about how to manage the integration being configured. | -|`config`
_[k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1#JSON)_| The configuration for the named integration. Note that Integrations are deployed with the integrations-next feature flag, which has different common settings: https://grafana.com/docs/agent/latest/configuration/integrations/integrations-next/ | -|`volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_| An extra list of Volumes to be associated with the Grafana Agent pods running this integration. Volume names are mutated to be unique across all Integrations. Note that the specified volumes should be able to tolerate existing on multiple pods at once when type is daemonset. Don't use volumes for loading Secrets or ConfigMaps from the same namespace as the Integration; use the Secrets and ConfigMaps fields instead. | -|`volumeMounts`
_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_| An extra list of VolumeMounts to be associated with the Grafana Agent pods running this integration. VolumeMount names are mutated to be unique across all used IntegrationSpecs. Mount paths should include the namespace/name of the Integration CR to avoid potentially colliding with other resources. | -|`secrets`
_[[]Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| An extra list of keys from Secrets in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. Secrets will be mounted at /etc/grafana-agent/integrations/secrets/<secret_namespace>/<secret_name>/<key>. | -|`configMaps`
_[[]Kubernetes core/v1.ConfigMapKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapkeyselector-v1-core)_| An extra list of keys from ConfigMaps in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. ConfigMaps are mounted at /etc/grafana-agent/integrations/configMaps/<configmap_namespace>/<configmap_name>/<key>. | +Integration runs a single Grafana Agent integration. Integrations that generate telemetry must be configured to send that telemetry somewhere, such as autoscrape for exporter-based integrations. Integrations have access to the LogsInstances and MetricsInstances in the same GrafanaAgent resource set, referenced by the <namespace>/<name> of the Instance resource. For example, if there is a default/production MetricsInstance, you can configure a supported integration's autoscrape block with `autoscrape: { enable: true, metrics_instance: default/production }`. There is currently no way for telemetry created by an Operator-managed integration to be collected from outside of the integration itself. + +#### Fields + +| Field | Description | +| ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | +| `metadata`
_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to the Kubernetes API documentation for the fields of the `metadata` field. | +| `spec`
_[IntegrationSpec](#monitoring.grafana.com/v1alpha1.IntegrationSpec)_ | Specifies the desired behavior of the Integration. | +| `name`
_string_ | Name of the integration to run (e.g., "node_exporter", "mysqld_exporter"). | +| `type`
_[IntegrationType](#monitoring.grafana.com/v1alpha1.IntegrationType)_ | Type informs Grafana Agent Operator about how to manage the integration being configured. | +| `config`
_[k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1#JSON)_ | The configuration for the named integration. Note that Integrations are deployed with the integrations-next feature flag, which has different common settings: https://grafana.com/docs/agent/latest/configuration/integrations/integrations-next/ | +| `volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_ | An extra list of Volumes to be associated with the Grafana Agent pods running this integration. Volume names are mutated to be unique across all Integrations. Note that the specified volumes should be able to tolerate existing on multiple pods at once when type is daemonset. Don't use volumes for loading Secrets or ConfigMaps from the same namespace as the Integration; use the Secrets and ConfigMaps fields instead. | +| `volumeMounts`
_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_ | An extra list of VolumeMounts to be associated with the Grafana Agent pods running this integration. VolumeMount names are mutated to be unique across all used IntegrationSpecs. Mount paths should include the namespace/name of the Integration CR to avoid potentially colliding with other resources. | +| `secrets`
_[[]Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | An extra list of keys from Secrets in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. Secrets will be mounted at /etc/grafana-agent/integrations/secrets/<secret_namespace>/<secret_name>/<key>. | +| `configMaps`
_[[]Kubernetes core/v1.ConfigMapKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapkeyselector-v1-core)_ | An extra list of keys from ConfigMaps in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. ConfigMaps are mounted at /etc/grafana-agent/integrations/configMaps/<configmap_namespace>/<configmap_name>/<key>. |
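+
+Tying these fields together, the node_exporter example mentioned in the field descriptions above might be written as the following sketch. The metadata and the `default/production` MetricsInstance are assumptions, and the `config` body uses the integrations-next autoscrape settings quoted in the description:
+
+```yaml
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: Integration
+metadata:
+  name: node-exporter        # hypothetical name
+  namespace: default
+spec:
+  name: node_exporter        # integration to run
+  type:
+    allNodes: true           # node_exporter generates Node-specific metrics
+    unique: true             # assumed: only one instance per Agent process
+  config:
+    autoscrape:
+      enable: true
+      metrics_instance: default/production   # <namespace>/<name> of a MetricsInstance
+```
+ ### IntegrationSpec + (Appears on:[Integration](#monitoring.grafana.com/v1alpha1.Integration)) -IntegrationSpec specifies the desired behavior of a metrics integration. -#### Fields -|Field|Description| -|-|-| -|`name`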
_string_| Name of the integration to run (e.g., "node_exporter", "mysqld_exporter"). | -|`type`
_[IntegrationType](#monitoring.grafana.com/v1alpha1.IntegrationType)_| Type informs Grafana Agent Operator about how to manage the integration being configured. | -|`config`
_[k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1#JSON)_| The configuration for the named integration. Note that Integrations are deployed with the integrations-next feature flag, which has different common settings: https://grafana.com/docs/agent/latest/configuration/integrations/integrations-next/ | -|`volumes`
_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_| An extra list of Volumes to be associated with the Grafana Agent pods running this integration. Volume names are mutated to be unique across all Integrations. Note that the specified volumes should be able to tolerate existing on multiple pods at once when type is daemonset. Don't use volumes for loading Secrets or ConfigMaps from the same namespace as the Integration; use the Secrets and ConfigMaps fields instead. |
-|`volumeMounts`<br>_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_| An extra list of VolumeMounts to be associated with the Grafana Agent pods running this integration. VolumeMount names are mutated to be unique across all used IntegrationSpecs. Mount paths should include the namespace/name of the Integration CR to avoid potentially colliding with other resources. |
-|`secrets`<br>_[[]Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| An extra list of keys from Secrets in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. Secrets will be mounted at /etc/grafana-agent/integrations/secrets/<secret_namespace>/<secret_name>/<key>. |
-|`configMaps`<br>_[[]Kubernetes core/v1.ConfigMapKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapkeyselector-v1-core)_| An extra list of keys from ConfigMaps in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. ConfigMaps are mounted at /etc/grafana-agent/integrations/configMaps/<configmap_namespace>/<configmap_name>/<key>. |
+IntegrationSpec specifies the desired behavior of a metrics integration.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `name`<br>_string_ | Name of the integration to run (e.g., "node_exporter", "mysqld_exporter"). |
+| `type`<br>_[IntegrationType](#monitoring.grafana.com/v1alpha1.IntegrationType)_ | Type informs Grafana Agent Operator about how to manage the integration being configured. |
+| `config`<br>_[k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1#JSON)_ | The configuration for the named integration. Note that Integrations are deployed with the integrations-next feature flag, which has different common settings: https://grafana.com/docs/agent/latest/configuration/integrations/integrations-next/ |
+| `volumes`<br>_[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core)_ | An extra list of Volumes to be associated with the Grafana Agent pods running this integration. Volume names are mutated to be unique across all Integrations. Note that the specified volumes should be able to tolerate existing on multiple pods at once when type is daemonset. Don't use volumes for loading Secrets or ConfigMaps from the same namespace as the Integration; use the Secrets and ConfigMaps fields instead. |
+| `volumeMounts`<br>_[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)_ | An extra list of VolumeMounts to be associated with the Grafana Agent pods running this integration. VolumeMount names are mutated to be unique across all used IntegrationSpecs. Mount paths should include the namespace/name of the Integration CR to avoid potentially colliding with other resources. |
+| `secrets`<br>_[[]Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | An extra list of keys from Secrets in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. Secrets will be mounted at /etc/grafana-agent/integrations/secrets/<secret_namespace>/<secret_name>/<key>. |
+| `configMaps`<br>_[[]Kubernetes core/v1.ConfigMapKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapkeyselector-v1-core)_ | An extra list of keys from ConfigMaps in the same namespace as the Integration which will be mounted into the Grafana Agent pod running this Integration. ConfigMaps are mounted at /etc/grafana-agent/integrations/configMaps/<configmap_namespace>/<configmap_name>/<key>. |
+
 ### IntegrationType
+
 (Appears on:[IntegrationSpec](#monitoring.grafana.com/v1alpha1.IntegrationSpec))
-IntegrationType determines specific behaviors of a configured integration.
-#### Fields
-|Field|Description|
-|-|-|
-|`allNodes`<br>_bool_| When true, the configured integration should be run on every Node in the cluster. This is required for Integrations that generate Node-specific metrics like node_exporter, otherwise it must be false to avoid generating duplicate metrics. |
-|`unique`<br>_bool_| Whether this integration can only be defined once for a Grafana Agent process, such as statsd_exporter. It is invalid for a GrafanaAgent to discover multiple unique Integrations with the same Integration name (i.e., a single GrafanaAgent cannot deploy two statsd_exporters). |
+IntegrationType determines specific behaviors of a configured integration.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `allNodes`<br>_bool_ | When true, the configured integration should be run on every Node in the cluster. This is required for Integrations that generate Node-specific metrics like node_exporter, otherwise it must be false to avoid generating duplicate metrics. |
+| `unique`<br>_bool_ | Whether this integration can only be defined once for a Grafana Agent process, such as statsd_exporter. It is invalid for a GrafanaAgent to discover multiple unique Integrations with the same Integration name (i.e., a single GrafanaAgent cannot deploy two statsd_exporters). |
+
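+For illustration, a minimal Integration manifest exercising these fields might look like the following sketch. The metadata, labels, and config values are hypothetical and depend on your deployment:
+
+```yaml
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: Integration
+metadata:
+  name: node-exporter
+  namespace: default
+  labels:
+    agent: grafana-agent-integrations # assumed to match the GrafanaAgent integrations selector
+spec:
+  name: node_exporter # integration to run
+  type:
+    allNodes: true # node_exporter generates Node-specific metrics
+    unique: true   # only one node_exporter per agent process
+  config:
+    autoscrape:
+      enable: true
+      metrics_instance: default/primary # hypothetical MetricsInstance
+```
+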
 ### IntegrationsSubsystemSpec
+
 (Appears on:[GrafanaAgentSpec](#monitoring.grafana.com/v1alpha1.GrafanaAgentSpec))
-IntegrationsSubsystemSpec defines global settings to apply across the integrations subsystem.
-#### Fields
-|Field|Description|
-|-|-|
-|`selector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Label selector to find Integration resources to run. When nil, no integration resources will be defined. |
-|`namespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Label selector for namespaces to search when discovering integration resources. If nil, integration resources are only discovered in the namespace of the GrafanaAgent resource. Set to `{}` to search all namespaces. |
+IntegrationsSubsystemSpec defines global settings to apply across the integrations subsystem.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `selector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Label selector to find Integration resources to run. When nil, no integration resources will be defined. |
+| `namespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Label selector for namespaces to search when discovering integration resources. If nil, integration resources are only discovered in the namespace of the GrafanaAgent resource. Set to `{}` to search all namespaces. |
+
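+As a sketch, these settings are nested under `spec.integrations` on the GrafanaAgent resource; the label value below is hypothetical:
+
+```yaml
+spec:
+  integrations:
+    # Run Integration resources whose labels match this selector.
+    selector:
+      matchLabels:
+        agent: grafana-agent-integrations
+    # Search every namespace for Integration resources.
+    namespaceSelector: {}
+```
+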
 ### JSONStageSpec
+
 (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec))
-JSONStageSpec is a parsing stage that reads the log line as JSON and accepts JMESPath expressions to extract data.
-#### Fields
-|Field|Description|
-|-|-|
-|`source`<br>_string_| Name from the extracted data to parse as JSON. If empty, uses entire log message. |
-|`expressions`<br>_map[string]string_| Set of the key/value pairs of JMESPath expressions. The key will be the key in the extracted data while the expression will be the value, evaluated as a JMESPath from the source data. Literal JMESPath expressions can be used by wrapping a key in double quotes, which then must be wrapped again in single quotes in YAML so they get passed to the JMESPath parser. |
+JSONStageSpec is a parsing stage that reads the log line as JSON and accepts JMESPath expressions to extract data.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `source`<br>_string_ | Name from the extracted data to parse as JSON. If empty, uses the entire log message. |
+| `expressions`<br>_map[string]string_ | Set of key/value pairs of JMESPath expressions. The key will be the key in the extracted data, while the expression will be the value, evaluated as a JMESPath from the source data. Literal JMESPath expressions can be used by wrapping a key in double quotes, which then must be wrapped again in single quotes in YAML so they get passed to the JMESPath parser. |
+
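+For example, a pipeline might parse JSON log lines as in this sketch; the keys and expressions are hypothetical, and the last entry shows the double-quotes-inside-single-quotes pattern for a literal expression:
+
+```yaml
+pipelineStages:
+  - json:
+      expressions:
+        level: level                  # extract the "level" field into key "level"
+        msg: message                  # extract the "message" field into key "msg"
+        useragent: '"grpc.useragent"' # literal JMESPath expression
+```
+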
 ### LimitStageSpec
+
 (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec))
-The limit stage is a rate-limiting stage that throttles logs based on several options.
-#### Fields
-|Field|Description|
-|-|-|
-|`rate`<br>_int_| The rate limit in lines per second that Promtail will push to Loki. |
-|`burst`<br>_int_| The cap in the quantity of burst lines that Promtail will push to Loki. |
-|`drop`<br>_bool_| When drop is true, log lines that exceed the current rate limit are discarded. When drop is false, log lines that exceed the current rate limit wait to enter the back pressure mode. Defaults to false. |
+The limit stage is a rate-limiting stage that throttles logs based on several options.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `rate`<br>_int_ | The rate limit, in lines per second, that Promtail will push to Loki. |
+| `burst`<br>_int_ | The maximum number of burst lines that Promtail will push to Loki. |
+| `drop`<br>_bool_ | When drop is true, log lines that exceed the current rate limit are discarded. When drop is false, log lines that exceed the current rate limit are held back, applying back pressure. Defaults to false. |
+
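+A minimal sketch of this stage with illustrative values:
+
+```yaml
+pipelineStages:
+  - limit:
+      rate: 10   # lines per second
+      burst: 20  # burst allowance
+      drop: true # discard lines over the limit instead of applying back pressure
+```
+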
 ### LogsBackoffConfigSpec
+
 (Appears on:[LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec))
-LogsBackoffConfigSpec configures timing for retrying failed requests.
-#### Fields
-|Field|Description|
-|-|-|
-|`minPeriod`<br>_string_| Initial backoff time between retries. Time between retries is increased exponentially. |
-|`maxPeriod`<br>_string_| Maximum backoff time between retries. |
-|`maxRetries`<br>_int_| Maximum number of retries to perform before giving up a request. |
+LogsBackoffConfigSpec configures timing for retrying failed requests.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `minPeriod`<br>_string_ | Initial backoff time between retries. Time between retries is increased exponentially. |
+| `maxPeriod`<br>_string_ | Maximum backoff time between retries. |
+| `maxRetries`<br>_int_ | Maximum number of retries to perform before giving up on a request. |
+
 ### LogsClientSpec
+
 (Appears on:[LogsInstanceSpec](#monitoring.grafana.com/v1alpha1.LogsInstanceSpec), [LogsSubsystemSpec](#monitoring.grafana.com/v1alpha1.LogsSubsystemSpec))
-LogsClientSpec defines the client integration for logs, indicating which Loki server to send logs to.
-#### Fields
-|Field|Description|
-|-|-|
-|`url`<br>_string_| URL is the URL where Loki is listening. Must be a full HTTP URL, including protocol. Required. Example: https://logs-prod-us-central1.grafana.net/loki/api/v1/push. |
-|`tenantId`<br>_string_| Tenant ID used by default to push logs to Loki. If omitted assumes remote Loki is running in single-tenant mode or an authentication layer is used to inject an X-Scope-OrgID header. |
-|`batchWait`<br>_string_| Maximum amount of time to wait before sending a batch, even if that batch isn't full. |
-|`batchSize`<br>_int_| Maximum batch size (in bytes) of logs to accumulate before sending the batch to Loki. |
-|`basicAuth`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.BasicAuth](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.BasicAuth)_| BasicAuth for the Loki server. |
-|`oauth2`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.OAuth2](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.OAuth2)_| Oauth2 for URL |
-|`bearerToken`<br>_string_| BearerToken used for remote_write. |
-|`bearerTokenFile`<br>_string_| BearerTokenFile used to read bearer token. |
-|`proxyUrl`<br>_string_| ProxyURL to proxy requests through. Optional. |
-|`tlsConfig`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.TLSConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.TLSConfig)_| TLSConfig to use for the client. Only used when the protocol of the URL is https. |
-|`backoffConfig`<br>_[LogsBackoffConfigSpec](#monitoring.grafana.com/v1alpha1.LogsBackoffConfigSpec)_| Configures how to retry requests to Loki when a request fails. Defaults to a minPeriod of 500ms, maxPeriod of 5m, and maxRetries of 10. |
-|`externalLabels`<br>_map[string]string_| ExternalLabels are labels to add to any time series when sending data to Loki. |
-|`timeout`<br>_string_| Maximum time to wait for a server to respond to a request. |
+LogsClientSpec defines the client integration for logs, indicating which Loki server to send logs to.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `url`<br>_string_ | URL is the URL where Loki is listening. Must be a full HTTP URL, including protocol. Required. Example: https://logs-prod-us-central1.grafana.net/loki/api/v1/push. |
+| `tenantId`<br>_string_ | Tenant ID used by default to push logs to Loki. If omitted, the remote Loki instance is assumed to be running in single-tenant mode, or an authentication layer is used to inject an X-Scope-OrgID header. |
+| `batchWait`<br>_string_ | Maximum amount of time to wait before sending a batch, even if that batch isn't full. |
+| `batchSize`<br>_int_ | Maximum batch size (in bytes) of logs to accumulate before sending the batch to Loki. |
+| `basicAuth`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.BasicAuth](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.BasicAuth)_ | BasicAuth for the Loki server. |
+| `oauth2`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.OAuth2](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.OAuth2)_ | OAuth2 configuration used when connecting to the URL. |
+| `bearerToken`<br>_string_ | BearerToken used for remote_write. |
+| `bearerTokenFile`<br>_string_ | BearerTokenFile used to read the bearer token. |
+| `proxyUrl`<br>_string_ | ProxyURL to proxy requests through. Optional. |
+| `tlsConfig`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.TLSConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.TLSConfig)_ | TLSConfig to use for the client. Only used when the protocol of the URL is https. |
+| `backoffConfig`<br>_[LogsBackoffConfigSpec](#monitoring.grafana.com/v1alpha1.LogsBackoffConfigSpec)_ | Configures how to retry requests to Loki when a request fails. Defaults to a minPeriod of 500ms, maxPeriod of 5m, and maxRetries of 10. |
+| `externalLabels`<br>_map[string]string_ | ExternalLabels are labels to add to any time series when sending data to Loki. |
+| `timeout`<br>_string_ | Maximum time to wait for a server to respond to a request. |
+
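+Assembled into a client entry, these fields might look like the following sketch; the URL is the example from the table, while the Secret name and label value are hypothetical:
+
+```yaml
+clients:
+  - url: https://logs-prod-us-central1.grafana.net/loki/api/v1/push
+    basicAuth:
+      username:
+        name: logs-secret # hypothetical Secret
+        key: username
+      password:
+        name: logs-secret
+        key: password
+    externalLabels:
+      cluster: my-cluster # hypothetical label
+    backoffConfig: # the defaults from LogsBackoffConfigSpec, made explicit
+      minPeriod: 500ms
+      maxPeriod: 5m
+      maxRetries: 10
+```
+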
 ### LogsInstance
+
 (Appears on:[LogsDeployment](#monitoring.grafana.com/v1alpha1.LogsDeployment))
-LogsInstance controls an individual logs instance within a Grafana Agent deployment.
-#### Fields
-|Field|Description|
-|-|-|
-|`metadata`<br>_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_| Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
-|`spec`<br>_[LogsInstanceSpec](#monitoring.grafana.com/v1alpha1.LogsInstanceSpec)_| Spec holds the specification of the desired behavior for the logs instance. |
-|`clients`<br>_[[]LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec)_| Clients controls where logs are written to for this instance. |
-|`podLogsSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Determines which PodLogs should be selected for including in this instance. |
-|`podLogsNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Set of labels to determine which namespaces should be watched for PodLogs. If not provided, checks only namespace of the instance. |
-|`additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Grafana Agent logging scrape configurations. Scrape configurations specified are appended to the configurations generated by the Grafana Agent Operator. Job configurations specified must have the form as specified in the official Promtail documentation: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#scrape_configs As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Grafana Agent. It is advised to review both Grafana Agent and Promtail release notes to ensure that no incompatible scrape configs are going to break Grafana Agent after the upgrade. |
-|`targetConfig`<br>_[LogsTargetConfigSpec](#monitoring.grafana.com/v1alpha1.LogsTargetConfigSpec)_| Configures how tailed targets are watched. |
+LogsInstance controls an individual logs instance within a Grafana Agent deployment.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata`<br>_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec`<br>_[LogsInstanceSpec](#monitoring.grafana.com/v1alpha1.LogsInstanceSpec)_ | Spec holds the specification of the desired behavior for the logs instance. |
+| `clients`<br>_[[]LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec)_ | Clients controls where logs are written to for this instance. |
+| `podLogsSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Determines which PodLogs should be selected for inclusion in this instance. |
+| `podLogsNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Set of labels to determine which namespaces should be watched for PodLogs. If not provided, only the namespace of the instance is checked. |
+| `additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Grafana Agent logging scrape configurations. Scrape configurations specified are appended to the configurations generated by the Grafana Agent Operator. Job configurations specified must have the form as specified in the official Promtail documentation: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#scrape_configs. As scrape configs are appended, the user is responsible for making sure they are valid. Note that using this feature may break upgrades of Grafana Agent. It is advised to review both Grafana Agent and Promtail release notes to ensure that no incompatible scrape configs will break Grafana Agent after the upgrade. |
+| `targetConfig`<br>_[LogsTargetConfigSpec](#monitoring.grafana.com/v1alpha1.LogsTargetConfigSpec)_ | Configures how tailed targets are watched. |
+
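+A minimal LogsInstance manifest combining these fields might look like this sketch; the names, label values, and sync period are hypothetical:
+
+```yaml
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: LogsInstance
+metadata:
+  name: primary
+  namespace: default
+  labels:
+    agent: grafana-agent-logs # assumed to match the GrafanaAgent logs instanceSelector
+spec:
+  clients:
+    - url: https://logs-prod-us-central1.grafana.net/loki/api/v1/push
+  # Collect from PodLogs labeled instance: primary, in any namespace.
+  podLogsSelector:
+    matchLabels:
+      instance: primary
+  podLogsNamespaceSelector: {}
+  targetConfig:
+    syncPeriod: 10s # hypothetical resync period
+```
+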
 ### LogsInstanceSpec
+
 (Appears on:[LogsInstance](#monitoring.grafana.com/v1alpha1.LogsInstance))
-LogsInstanceSpec controls how an individual instance will be used to discover LogMonitors.
-#### Fields
-|Field|Description|
-|-|-|
-|`clients`<br>_[[]LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec)_| Clients controls where logs are written to for this instance. |
-|`podLogsSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Determines which PodLogs should be selected for including in this instance. |
-|`podLogsNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Set of labels to determine which namespaces should be watched for PodLogs. If not provided, checks only namespace of the instance. |
-|`additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Grafana Agent logging scrape configurations. Scrape configurations specified are appended to the configurations generated by the Grafana Agent Operator. Job configurations specified must have the form as specified in the official Promtail documentation: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#scrape_configs As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Grafana Agent. It is advised to review both Grafana Agent and Promtail release notes to ensure that no incompatible scrape configs are going to break Grafana Agent after the upgrade. |
-|`targetConfig`<br>_[LogsTargetConfigSpec](#monitoring.grafana.com/v1alpha1.LogsTargetConfigSpec)_| Configures how tailed targets are watched. |
+LogsInstanceSpec controls how an individual instance will be used to discover PodLogs.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `clients`<br>_[[]LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec)_ | Clients controls where logs are written to for this instance. |
+| `podLogsSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Determines which PodLogs should be selected for inclusion in this instance. |
+| `podLogsNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Set of labels to determine which namespaces should be watched for PodLogs. If not provided, only the namespace of the instance is checked. |
+| `additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Grafana Agent logging scrape configurations. Scrape configurations specified are appended to the configurations generated by the Grafana Agent Operator. Job configurations specified must have the form as specified in the official Promtail documentation: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#scrape_configs. As scrape configs are appended, the user is responsible for making sure they are valid. Note that using this feature may break upgrades of Grafana Agent. It is advised to review both Grafana Agent and Promtail release notes to ensure that no incompatible scrape configs will break Grafana Agent after the upgrade. |
+| `targetConfig`<br>_[LogsTargetConfigSpec](#monitoring.grafana.com/v1alpha1.LogsTargetConfigSpec)_ | Configures how tailed targets are watched. |
+
 ### LogsSubsystemSpec
+
 (Appears on:[GrafanaAgentSpec](#monitoring.grafana.com/v1alpha1.GrafanaAgentSpec))
-LogsSubsystemSpec defines global settings to apply across the logging subsystem.
-#### Fields
-|Field|Description|
-|-|-|
-|`clients`<br>_[[]LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec)_| A global set of clients to use when a discovered LogsInstance does not have any clients defined. |
-|`logsExternalLabelName`<br>_string_| LogsExternalLabelName is the name of the external label used to denote Grafana Agent cluster. Defaults to "cluster." External label will _not_ be added when value is set to the empty string. |
-|`instanceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| InstanceSelector determines which LogInstances should be selected for running. Each instance runs its own set of Prometheus components, including service discovery, scraping, and remote_write. |
-|`instanceNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| InstanceNamespaceSelector are the set of labels to determine which namespaces to watch for LogInstances. If not provided, only checks own namespace. |
-|`ignoreNamespaceSelectors`<br>_bool_| IgnoreNamespaceSelectors, if true, will ignore NamespaceSelector settings from the PodLogs configs, and they will only discover endpoints within their current namespace. |
-|`enforcedNamespaceLabel`<br>_string_| EnforcedNamespaceLabel enforces adding a namespace label of origin for each metric that is user-created. The label value will always be the namespace of the object that is being created. |
+LogsSubsystemSpec defines global settings to apply across the logging subsystem.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `clients`<br>_[[]LogsClientSpec](#monitoring.grafana.com/v1alpha1.LogsClientSpec)_ | A global set of clients to use when a discovered LogsInstance does not have any clients defined. |
+| `logsExternalLabelName`<br>_string_ | LogsExternalLabelName is the name of the external label used to denote the Grafana Agent cluster. Defaults to "cluster". The external label will _not_ be added when the value is set to the empty string. |
+| `instanceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | InstanceSelector determines which LogsInstances should be selected for running. Each instance runs its own set of components, including service discovery, scraping, and remote_write. |
+| `instanceNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | InstanceNamespaceSelector is the set of labels that determines which namespaces to watch for LogsInstances. If not provided, only the agent's own namespace is checked. |
+| `ignoreNamespaceSelectors`<br>_bool_ | IgnoreNamespaceSelectors, if true, will ignore NamespaceSelector settings from the PodLogs configs, and they will only discover endpoints within their current namespace. |
+| `enforcedNamespaceLabel`<br>_string_ | EnforcedNamespaceLabel enforces adding a namespace label of origin for each metric that is user-created. The label value will always be the namespace of the object that is being created. |
+
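+On the GrafanaAgent resource, these settings are assumed to live under `spec.logs`; a hedged sketch with a hypothetical label value:
+
+```yaml
+spec:
+  logs:
+    instanceSelector:
+      matchLabels:
+        agent: grafana-agent-logs
+    # Fallback clients for LogsInstances that define none.
+    clients:
+      - url: https://logs-prod-us-central1.grafana.net/loki/api/v1/push
+```
+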
 ### LogsTargetConfigSpec
+
 (Appears on:[LogsInstanceSpec](#monitoring.grafana.com/v1alpha1.LogsInstanceSpec))
-LogsTargetConfigSpec configures how tailed targets are watched.
-#### Fields
-|Field|Description|
-|-|-|
-|`syncPeriod`<br>_string_| Period to resync directories being watched and files being tailed to discover new ones or stop watching removed ones. |
+LogsTargetConfigSpec configures how tailed targets are watched.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `syncPeriod`<br>_string_ | Period to resync directories being watched and files being tailed to discover new ones or stop watching removed ones. |
+
 ### MatchStageSpec
+
 (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec))
-MatchStageSpec is a filtering stage that conditionally applies a set of stages or drop entries when a log entry matches a configurable LogQL stream selector and filter expressions.
-#### Fields
-|Field|Description|
-|-|-|
-|`selector`<br>_string_| LogQL stream selector and filter expressions. Required. |
-|`pipelineName`<br>_string_| Names the pipeline. When defined, creates an additional label in the pipeline_duration_seconds histogram, where the value is concatenated with job_name using an underscore. |
-|`action`<br>_string_| Determines what action is taken when the selector matches the log line. Can be keep or drop. Defaults to keep. When set to drop, entries are dropped and no later metrics are recorded. Stages must be empty when dropping metrics. |
-|`dropCounterReason`<br>_string_| Every time a log line is dropped, the metric logentry_dropped_lines_total is incremented. A "reason" label is added, and can be customized by providing a custom value here. Defaults to "match_stage." |
-|`stages`<br>_string_| Nested set of pipeline stages to execute when action is keep and the log line matches selector. An example value for stages may be: stages: | - json: {} - labelAllow: [foo, bar] Note that stages is a string because SIG API Machinery does not support recursive types, and so it cannot be validated for correctness. Be careful not to mistype anything. |
+MatchStageSpec is a filtering stage that conditionally applies a set of stages or drops entries when a log entry matches a configurable LogQL stream selector and filter expressions.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `selector`<br>_string_ | LogQL stream selector and filter expressions. Required. |
+| `pipelineName`<br>_string_ | Names the pipeline. When defined, creates an additional label in the pipeline_duration_seconds histogram, where the value is concatenated with job_name using an underscore. |
+| `action`<br>_string_ | Determines what action is taken when the selector matches the log line. Can be keep or drop. Defaults to keep. When set to drop, entries are dropped and no later metrics are recorded. Stages must be empty when dropping metrics. |
+| `dropCounterReason`<br>_string_ | Every time a log line is dropped, the metric logentry_dropped_lines_total is incremented. A "reason" label is added, and can be customized by providing a custom value here. Defaults to "match_stage". |
+| `stages`<br>_string_ | Nested set of pipeline stages to execute when action is keep and the log line matches selector. An example value for stages may be: stages: \| - json: {} - labelAllow: [foo, bar] Note that stages is a string because SIG API Machinery does not support recursive types, and so it cannot be validated for correctness. Be careful not to mistype anything. |
+
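+The example above is hard to read flattened into a table cell; as a sketch, the nested stages are passed as a literal YAML block string (the selector and pipeline name are hypothetical, the nested stages come from the description):
+
+```yaml
+pipelineStages:
+  - match:
+      selector: '{name="promtail"}'
+      action: keep
+      pipelineName: keep_promtail
+      # stages is a plain string, not structured YAML; it is parsed later.
+      stages: |
+        - json: {}
+        - labelAllow: [foo, bar]
+```
+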
 ### MetadataConfig
+
 (Appears on:[RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec))
-MetadataConfig configures the sending of series metadata to remote storage.
-#### Fields
-|Field|Description|
-|-|-|
-|`send`<br>_bool_| Send enables metric metadata to be sent to remote storage. |
-|`sendInterval`<br>_string_| SendInterval controls how frequently metric metadata is sent to remote storage. |
+MetadataConfig configures the sending of series metadata to remote storage.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `send`<br>_bool_ | Send enables metric metadata to be sent to remote storage. |
+| `sendInterval`<br>_string_ | SendInterval controls how frequently metric metadata is sent to remote storage. |
+
 ### MetricsInstance
+
 (Appears on:[MetricsDeployment](#monitoring.grafana.com/v1alpha1.MetricsDeployment))
-MetricsInstance controls an individual Metrics instance within a Grafana Agent deployment.
-#### Fields
-|Field|Description|
-|-|-|
-|`metadata`<br>_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_| Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
-|`spec`<br>_[MetricsInstanceSpec](#monitoring.grafana.com/v1alpha1.MetricsInstanceSpec)_| Spec holds the specification of the desired behavior for the Metrics instance. |
-|`walTruncateFrequency`<br>_string_| WALTruncateFrequency specifies how frequently to run the WAL truncation process. Higher values cause the WAL to increase and for old series to stay in the WAL longer, but reduces the chance of data loss when remote_write fails for longer than the given frequency. |
-|`minWALTime`<br>_string_| MinWALTime is the minimum amount of time that series and samples can exist in the WAL before being considered for deletion. |
-|`maxWALTime`<br>_string_| MaxWALTime is the maximum amount of time that series and samples can exist in the WAL before being forcibly deleted. |
-|`remoteFlushDeadline`<br>_string_| RemoteFlushDeadline is the deadline for flushing data when an instance shuts down. |
-|`writeStaleOnShutdown`<br>_bool_| WriteStaleOnShutdown writes staleness markers on shutdown for all series. |
-|`serviceMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ServiceMonitorSelector determines which ServiceMonitors to select for target discovery. |
-|`serviceMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ServiceMonitorNamespaceSelector is the set of labels that determine which namespaces to watch for ServiceMonitor discovery. If nil, it only checks its own namespace. |
-|`podMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| PodMonitorSelector determines which PodMonitors to selected for target discovery. Experimental. |
-|`podMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| PodMonitorNamespaceSelector are the set of labels to determine which namespaces to watch for PodMonitor discovery. If nil, it only checks its own namespace. |
-|`probeSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ProbeSelector determines which Probes to select for target discovery. |
-|`probeNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ProbeNamespaceSelector is the set of labels that determines which namespaces to watch for Probe discovery. If nil, it only checks own namespace. |
-|`remoteWrite`<br>_[[]RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)_| RemoteWrite controls remote_write settings for this instance. |
-|`additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| AdditionalScrapeConfigs lets you specify a key of a Secret containing additional Grafana Agent Prometheus scrape configurations. The specified scrape configurations are appended to the configurations generated by Grafana Agent Operator. Specified job configurations must have the form specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config. As scrape configs are appended, you must make sure the configuration is still valid. Note that it's possible that this feature will break future upgrades of Grafana Agent. Review both Grafana Agent and Prometheus release notes to ensure that no incompatible scrape configs will break Grafana Agent after the upgrade. |
+MetricsInstance controls an individual Metrics instance within a Grafana Agent deployment.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata`<br>_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec`<br>_[MetricsInstanceSpec](#monitoring.grafana.com/v1alpha1.MetricsInstanceSpec)_ | Spec holds the specification of the desired behavior for the Metrics instance. |
+| `walTruncateFrequency`<br>_string_ | WALTruncateFrequency specifies how frequently to run the WAL truncation process. Higher values cause the WAL to grow larger and old series to stay in the WAL longer, but reduce the chance of data loss when remote_write fails for longer than the given frequency. |
+| `minWALTime`<br>_string_ | MinWALTime is the minimum amount of time that series and samples can exist in the WAL before being considered for deletion. |
+| `maxWALTime`<br>_string_ | MaxWALTime is the maximum amount of time that series and samples can exist in the WAL before being forcibly deleted. |
+| `remoteFlushDeadline`<br>_string_ | RemoteFlushDeadline is the deadline for flushing data when an instance shuts down. |
+| `writeStaleOnShutdown`<br>_bool_ | WriteStaleOnShutdown writes staleness markers on shutdown for all series. |
+| `serviceMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ServiceMonitorSelector determines which ServiceMonitors to select for target discovery. |
+| `serviceMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ServiceMonitorNamespaceSelector is the set of labels that determines which namespaces to watch for ServiceMonitor discovery. If nil, it only checks its own namespace. |
+| `podMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | PodMonitorSelector determines which PodMonitors to select for target discovery. Experimental. |
+| `podMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | PodMonitorNamespaceSelector is the set of labels that determines which namespaces to watch for PodMonitor discovery. If nil, it only checks its own namespace. |
+| `probeSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ProbeSelector determines which Probes to select for target discovery. |
+| `probeNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ProbeNamespaceSelector is the set of labels that determines which namespaces to watch for Probe discovery. If nil, it only checks its own namespace. |
+| `remoteWrite`<br>_[[]RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)_ | RemoteWrite controls remote_write settings for this instance. |
+| `additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | AdditionalScrapeConfigs lets you specify a key of a Secret containing additional Grafana Agent Prometheus scrape configurations. The specified scrape configurations are appended to the configurations generated by Grafana Agent Operator. Specified job configurations must have the form specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config. As scrape configs are appended, you must make sure the configuration is still valid. Note that it's possible that this feature will break future upgrades of Grafana Agent. Review both Grafana Agent and Prometheus release notes to ensure that no incompatible scrape configs will break Grafana Agent after the upgrade. |
+
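+A minimal MetricsInstance manifest using these fields might look like the following sketch; the names, Secret, and label values are hypothetical:
+
+```yaml
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: MetricsInstance
+metadata:
+  name: primary
+  namespace: default
+  labels:
+    agent: grafana-agent-metrics # assumed to match the GrafanaAgent metrics instanceSelector
+spec:
+  remoteWrite:
+    - url: https://prometheus-us-central1.grafana.net/api/prom/push
+      basicAuth:
+        username:
+          name: metrics-secret # hypothetical Secret
+          key: username
+        password:
+          name: metrics-secret
+          key: password
+  # Discover ServiceMonitors labeled instance: primary, in any namespace.
+  serviceMonitorSelector:
+    matchLabels:
+      instance: primary
+  serviceMonitorNamespaceSelector: {}
+```
+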
 ### MetricsInstanceSpec
+
 (Appears on:[MetricsInstance](#monitoring.grafana.com/v1alpha1.MetricsInstance))
-MetricsInstanceSpec controls how an individual instance is used to discover PodMonitors.
-#### Fields
-|Field|Description|
-|-|-|
-|`walTruncateFrequency`<br>_string_| WALTruncateFrequency specifies how frequently to run the WAL truncation process. Higher values cause the WAL to increase and for old series to stay in the WAL longer, but reduces the chance of data loss when remote_write fails for longer than the given frequency. |
-|`minWALTime`<br>_string_| MinWALTime is the minimum amount of time that series and samples can exist in the WAL before being considered for deletion. |
-|`maxWALTime`<br>_string_| MaxWALTime is the maximum amount of time that series and samples can exist in the WAL before being forcibly deleted. |
-|`remoteFlushDeadline`<br>_string_| RemoteFlushDeadline is the deadline for flushing data when an instance shuts down. |
-|`writeStaleOnShutdown`<br>_bool_| WriteStaleOnShutdown writes staleness markers on shutdown for all series. |
-|`serviceMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ServiceMonitorSelector determines which ServiceMonitors to select for target discovery. |
-|`serviceMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ServiceMonitorNamespaceSelector is the set of labels that determine which namespaces to watch for ServiceMonitor discovery. If nil, it only checks its own namespace. |
-|`podMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| PodMonitorSelector determines which PodMonitors to selected for target discovery. Experimental. |
-|`podMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| PodMonitorNamespaceSelector are the set of labels to determine which namespaces to watch for PodMonitor discovery. If nil, it only checks its own namespace. |
-|`probeSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ProbeSelector determines which Probes to select for target discovery. |
-|`probeNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| ProbeNamespaceSelector is the set of labels that determines which namespaces to watch for Probe discovery. If nil, it only checks own namespace. |
-|`remoteWrite`<br>_[[]RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)_| RemoteWrite controls remote_write settings for this instance. |
-|`additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| AdditionalScrapeConfigs lets you specify a key of a Secret containing additional Grafana Agent Prometheus scrape configurations. The specified scrape configurations are appended to the configurations generated by Grafana Agent Operator. Specified job configurations must have the form specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config. As scrape configs are appended, you must make sure the configuration is still valid. Note that it's possible that this feature will break future upgrades of Grafana Agent. Review both Grafana Agent and Prometheus release notes to ensure that no incompatible scrape configs will break Grafana Agent after the upgrade. |
+MetricsInstanceSpec controls how an individual instance is used to discover PodMonitors.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `walTruncateFrequency`<br>_string_ | WALTruncateFrequency specifies how frequently to run the WAL truncation process. Higher values cause the WAL to grow larger and old series to stay in the WAL longer, but reduce the chance of data loss when remote_write fails for longer than the given frequency. |
+| `minWALTime`<br>_string_ | MinWALTime is the minimum amount of time that series and samples can exist in the WAL before being considered for deletion. |
+| `maxWALTime`<br>_string_ | MaxWALTime is the maximum amount of time that series and samples can exist in the WAL before being forcibly deleted. |
+| `remoteFlushDeadline`<br>_string_ | RemoteFlushDeadline is the deadline for flushing data when an instance shuts down. |
+| `writeStaleOnShutdown`<br>_bool_ | WriteStaleOnShutdown writes staleness markers on shutdown for all series. |
+| `serviceMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ServiceMonitorSelector determines which ServiceMonitors to select for target discovery. |
+| `serviceMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ServiceMonitorNamespaceSelector is the set of labels that determines which namespaces to watch for ServiceMonitor discovery. If nil, it only checks its own namespace. |
+| `podMonitorSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | PodMonitorSelector determines which PodMonitors to select for target discovery. Experimental. |
+| `podMonitorNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | PodMonitorNamespaceSelector is the set of labels that determines which namespaces to watch for PodMonitor discovery. If nil, it only checks its own namespace. |
+| `probeSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ProbeSelector determines which Probes to select for target discovery. |
+| `probeNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | ProbeNamespaceSelector is the set of labels that determines which namespaces to watch for Probe discovery. If nil, it only checks its own namespace. |
+| `remoteWrite`<br>_[[]RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)_ | RemoteWrite controls remote_write settings for this instance. |
+| `additionalScrapeConfigs`<br>_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | AdditionalScrapeConfigs lets you specify a key of a Secret containing additional Grafana Agent Prometheus scrape configurations. The specified scrape configurations are appended to the configurations generated by Grafana Agent Operator. Specified job configurations must have the form specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config. As scrape configs are appended, you must make sure the configuration is still valid. Note that it's possible that this feature will break future upgrades of Grafana Agent. Review both Grafana Agent and Prometheus release notes to ensure that no incompatible scrape configs will break Grafana Agent after the upgrade. |
+
 ### MetricsStageSpec
+
 (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec))
-MetricsStageSpec is an action stage that allows for defining and updating metrics based on data from the extracted map. Created metrics are not pushed to Loki or Prometheus and are instead exposed via the /metrics endpoint of the Grafana Agent pod. The Grafana Agent Operator should be configured with a MetricsInstance that discovers the logging DaemonSet to collect metrics created by this stage.
-#### Fields
-|Field|Description|
-|-|-|
-|`type`<br>_string_| The metric type to create. Must be one of counter, gauge, histogram. Required. |
-|`description`<br>_string_| Sets the description for the created metric. |
-|`prefix`<br>_string_| Sets the custom prefix name for the metric. Defaults to "promtail_custom_". |
-|`source`<br>_string_| Key from the extracted data map to use for the metric. Defaults to the metrics name if not present. |
-|`maxIdleDuration`<br>_string_| Label values on metrics are dynamic which can cause exported metrics to go stale. To prevent unbounded cardinality, any metrics not updated within MaxIdleDuration are removed. Must be greater or equal to 1s. Defaults to 5m. |
-|`matchAll`<br>_bool_| If true, all log lines are counted without attempting to match the source to the extracted map. Mutually exclusive with value. Only valid for type: counter. |
-|`countEntryBytes`<br>_bool_| If true all log line bytes are counted. Can only be set with matchAll: true and action: add. Only valid for type: counter. |
-|`value`<br>_string_| Filters down source data and only changes the metric if the targeted value matches the provided string exactly. If not present, all data matches. |
-|`action`<br>_string_| The action to take against the metric. Required. Must be either "inc" or "add" for type: counter or type: histogram. When type: gauge, must be one of "set", "inc", "dec", "add", or "sub". "add", "set", or "sub" requires the extracted value to be convertible to a positive float. |
-|`buckets`<br>_[]string_| Buckets to create. Bucket values must be convertible to float64s. Extremely large or small numbers are subject to some loss of precision. Only valid for type: histogram. |
+MetricsStageSpec is an action stage that allows for defining and updating metrics based on data from the extracted map. Created metrics are not pushed to Loki or Prometheus and are instead exposed via the /metrics endpoint of the Grafana Agent pod. The Grafana Agent Operator should be configured with a MetricsInstance that discovers the logging DaemonSet to collect metrics created by this stage.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `type`<br>_string_ | The metric type to create. Must be one of counter, gauge, histogram. Required. |
+| `description`<br>_string_ | Sets the description for the created metric. |
+| `prefix`<br>_string_ | Sets the custom prefix name for the metric. Defaults to "promtail\_custom\_". |
+| `source`<br>_string_ | Key from the extracted data map to use for the metric. Defaults to the metric's name if not present. |
+| `maxIdleDuration`<br>_string_ | Label values on metrics are dynamic, which can cause exported metrics to go stale. To prevent unbounded cardinality, any metrics not updated within MaxIdleDuration are removed. Must be greater than or equal to 1s. Defaults to 5m. |
+| `matchAll`<br>_bool_ | If true, all log lines are counted without attempting to match the source to the extracted map. Mutually exclusive with value. Only valid for type: counter. |
+| `countEntryBytes`<br>_bool_ | If true, all log line bytes are counted. Can only be set with matchAll: true and action: add. Only valid for type: counter. |
+| `value`<br>_string_ | Filters down source data and only changes the metric if the targeted value matches the provided string exactly. If not present, all data matches. |
+| `action`<br>_string_ | The action to take against the metric. Required. Must be either "inc" or "add" for type: counter or type: histogram. When type: gauge, must be one of "set", "inc", "dec", "add", or "sub". "add", "set", or "sub" requires the extracted value to be convertible to a positive float. |
+| `buckets`<br>_[]string_ | Buckets to create. Bucket values must be convertible to float64s. Extremely large or small numbers are subject to some loss of precision. Only valid for type: histogram. |
+
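+As a hedged sketch, a counter that counts every log line could be defined as below; the metric name, description, and the map-of-metric-names shape are assumptions for illustration:
+
+```yaml
+pipelineStages:
+  - metrics:
+      lines_total: # metric name; the final name gets the promtail_custom_ prefix by default
+        type: counter
+        description: Total lines processed
+        matchAll: true # count all lines; mutually exclusive with value
+        action: inc
+```
+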
 ### MetricsSubsystemSpec
+
 (Appears on:[GrafanaAgentSpec](#monitoring.grafana.com/v1alpha1.GrafanaAgentSpec))
-MetricsSubsystemSpec defines global settings to apply across the Metrics subsystem.
-#### Fields
-|Field|Description|
-|-|-|
-|`remoteWrite`<br>_[[]RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)_| RemoteWrite controls default remote_write settings for all instances. If an instance does not provide its own RemoteWrite settings, these will be used instead. |
-|`replicas`<br>_int32_| Replicas of each shard to deploy for metrics pods. Number of replicas multiplied by the number of shards is the total number of pods created. |
-|`shards`<br>_int32_| Shards to distribute targets onto. Number of replicas multiplied by the number of shards is the total number of pods created. Note that scaling down shards does not reshard data onto remaining instances; it must be manually moved. Increasing shards does not reshard data either, but it will continue to be available from the same instances. Sharding is performed on the content of the __address__ target meta-label. |
-|`replicaExternalLabelName`<br>_string_| ReplicaExternalLabelName is the name of the metrics external label used to denote the replica name. Defaults to __replica__. The external label is _not_ added when the value is set to the empty string. |
-|`metricsExternalLabelName`<br>_string_| MetricsExternalLabelName is the name of the external label used to denote Grafana Agent cluster. Defaults to "cluster." The external label is _not_ added when the value is set to the empty string. |
-|`scrapeInterval`<br>_string_| ScrapeInterval is the time between consecutive scrapes. |
-|`scrapeTimeout`<br>_string_| ScrapeTimeout is the time to wait for a target to respond before marking a scrape as failed. |
-|`externalLabels`<br>_map[string]string_| ExternalLabels are labels to add to any time series when sending data over remote_write. |
-|`arbitraryFSAccessThroughSMs`<br>_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.ArbitraryFSAccessThroughSMsConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.ArbitraryFSAccessThroughSMsConfig)_| ArbitraryFSAccessThroughSMs configures whether configuration based on a ServiceMonitor can access arbitrary files on the file system of the Grafana Agent container, e.g., bearer token files. |
-|`overrideHonorLabels`<br>_bool_| OverrideHonorLabels, if true, overrides all configured honor_labels read from ServiceMonitor or PodMonitor and sets them to false. |
-|`overrideHonorTimestamps`<br>_bool_| OverrideHonorTimestamps allows global enforcement for honoring timestamps in all scrape configs. |
-|`ignoreNamespaceSelectors`<br>_bool_| IgnoreNamespaceSelectors, if true, ignores NamespaceSelector settings from the PodMonitor and ServiceMonitor configs, so that they only discover endpoints within their current namespace. |
-|`enforcedNamespaceLabel`<br>_string_| EnforcedNamespaceLabel enforces adding a namespace label of origin for each metric that is user-created. The label value is always the namespace of the object that is being created. |
-|`enforcedSampleLimit`<br>_uint64_| EnforcedSampleLimit defines a global limit on the number of scraped samples that are accepted. This overrides any SampleLimit set per ServiceMonitor and/or PodMonitor. It is meant to be used by admins to enforce the SampleLimit to keep the overall number of samples and series under the desired limit. Note that if a SampleLimit from a ServiceMonitor or PodMonitor is lower, that value is used instead. |
-|`enforcedTargetLimit`<br>_uint64_| EnforcedTargetLimit defines a global limit on the number of scraped targets. This overrides any TargetLimit set per ServiceMonitor and/or PodMonitor. It is meant to be used by admins to enforce the TargetLimit to keep the overall number of targets under the desired limit. Note that if a TargetLimit from a ServiceMonitor or PodMonitor is higher, that value is used instead. |
-|`instanceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| InstanceSelector determines which MetricsInstances should be selected for running. Each instance runs its own set of Metrics components, including service discovery, scraping, and remote_write. |
-|`instanceNamespaceSelector`<br>_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| InstanceNamespaceSelector is the set of labels that determines which namespaces to watch for MetricsInstances. If not provided, it only checks its own namespace. |
+MetricsSubsystemSpec defines global settings to apply across the Metrics subsystem.
+
+#### Fields
+
+| Field | Description |
+| ----- | ----------- |
+| `remoteWrite`
_[[]RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)_ | RemoteWrite controls default remote_write settings for all instances. If an instance does not provide its own RemoteWrite settings, these will be used instead. | +| `replicas`
_int32_ | Replicas of each shard to deploy for metrics pods. Number of replicas multiplied by the number of shards is the total number of pods created. | +| `shards`
_int32_ | Shards to distribute targets onto. Number of replicas multiplied by the number of shards is the total number of pods created. Note that scaling down shards does not reshard data onto remaining instances; it must be manually moved. Increasing shards does not reshard data either, but it will continue to be available from the same instances. Sharding is performed on the content of the `__address__` target meta-label. | +| `replicaExternalLabelName`
_string_ | ReplicaExternalLabelName is the name of the metrics external label used to denote the replica name. Defaults to `__replica__`. The external label is _not_ added when the value is set to the empty string. | +| `metricsExternalLabelName`
_string_ | MetricsExternalLabelName is the name of the external label used to denote the Grafana Agent cluster. Defaults to "cluster". The external label is _not_ added when the value is set to the empty string. | +| `scrapeInterval`
_string_ | ScrapeInterval is the time between consecutive scrapes. | +| `scrapeTimeout`
_string_ | ScrapeTimeout is the time to wait for a target to respond before marking a scrape as failed. | +| `externalLabels`
_map[string]string_ | ExternalLabels are labels to add to any time series when sending data over remote_write. | +| `arbitraryFSAccessThroughSMs`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.ArbitraryFSAccessThroughSMsConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.ArbitraryFSAccessThroughSMsConfig)_ | ArbitraryFSAccessThroughSMs configures whether configuration based on a ServiceMonitor can access arbitrary files on the file system of the Grafana Agent container, e.g., bearer token files. | +| `overrideHonorLabels`
_bool_ | OverrideHonorLabels, if true, overrides all configured honor_labels read from ServiceMonitor or PodMonitor and sets them to false. | +| `overrideHonorTimestamps`
_bool_ | OverrideHonorTimestamps allows global enforcement for honoring timestamps in all scrape configs. | +| `ignoreNamespaceSelectors`
_bool_ | IgnoreNamespaceSelectors, if true, ignores NamespaceSelector settings from the PodMonitor and ServiceMonitor configs, so that they only discover endpoints within their current namespace. | +| `enforcedNamespaceLabel`
_string_ | EnforcedNamespaceLabel enforces adding a namespace label of origin for each metric that is user-created. The label value is always the namespace of the object that is being created. | +| `enforcedSampleLimit`
_uint64_ | EnforcedSampleLimit defines a global limit on the number of scraped samples that are accepted. This overrides any SampleLimit set per ServiceMonitor and/or PodMonitor. It is meant to be used by admins to enforce the SampleLimit to keep the overall number of samples and series under the desired limit. Note that if a SampleLimit from a ServiceMonitor or PodMonitor is lower, that value is used instead. | +| `enforcedTargetLimit`
_uint64_ | EnforcedTargetLimit defines a global limit on the number of scraped targets. This overrides any TargetLimit set per ServiceMonitor and/or PodMonitor. It is meant to be used by admins to enforce the TargetLimit to keep the overall number of targets under the desired limit. Note that if a TargetLimit from a ServiceMonitor or PodMonitor is higher, that value is used instead. | +| `instanceSelector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | InstanceSelector determines which MetricsInstances should be selected for running. Each instance runs its own set of Metrics components, including service discovery, scraping, and remote_write. | +| `instanceNamespaceSelector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | InstanceNamespaceSelector is the set of labels that determines which namespaces to watch for MetricsInstances. If not provided, it only checks its own namespace. |
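+
+For example, a `GrafanaAgent` resource might set these global metrics settings as in the following minimal sketch (all names and values are illustrative, and the manifest is abridged):
+
+```yaml
+# Shards scrape targets across 2 pods, runs 2 replicas of each shard,
+# and attaches a cluster label to everything sent over remote_write.
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: GrafanaAgent
+metadata:
+  name: grafana-agent
+spec:
+  metrics:
+    shards: 2
+    replicas: 2
+    scrapeInterval: 60s
+    externalLabels:
+      cluster: my-cluster
+    instanceSelector:
+      matchLabels:
+        agent: grafana-agent-metrics
+```
+ ### MultilineStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -MultilineStageSpec merges multiple lines into a multiline block before passing it on to the next stage in the pipeline. -#### Fields -|Field|Description| -|-|-| -|`firstLine`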
_string_| RE2 regular expression. Creates a new multiline block when matched. Required. | -|`maxWaitTime`
_string_| Maximum time to wait before passing on the multiline block to the next stage if no new lines are received. Defaults to 3s. | -|`maxLines`
_int_| Maximum number of lines a block can have. A new block is started if the number of lines surpasses this value. Defaults to 128. | +MultilineStageSpec merges multiple lines into a multiline block before passing it on to the next stage in the pipeline. + +#### Fields + +| Field | Description | +| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | +| `firstLine`
_string_ | RE2 regular expression. Creates a new multiline block when matched. Required. | +| `maxWaitTime`
_string_ | Maximum time to wait before passing on the multiline block to the next stage if no new lines are received. Defaults to 3s. | +| `maxLines`
_int_ | Maximum number of lines a block can have. A new block is started if the number of lines surpasses this value. Defaults to 128. |
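+
+For example, a minimal sketch of a multiline stage (the regular expression is an assumed example, not a default):
+
+```yaml
+# Starts a new block on every line that begins with a timestamp, flushing
+# a block after 3s of silence or once it reaches 128 lines.
+pipelineStages:
+  - multiline:
+      firstLine: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
+      maxWaitTime: 3s
+      maxLines: 128
+```
+ ### ObjectSelector -ObjectSelector is a set of selectors to use for finding an object in the resource hierarchy. When NamespaceSelector is nil, search for objects directly in the ParentNamespace. -#### Fields -|Field|Description| -|-|-| -|`ObjectType`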
_[sigs.k8s.io/controller-runtime/pkg/client.Object](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client#Object)_| | -|`ParentNamespace`
_string_| | -|`NamespaceSelector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| | -|`Labels`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| | + +ObjectSelector is a set of selectors to use for finding an object in the resource hierarchy. When NamespaceSelector is nil, search for objects directly in the ParentNamespace. + +#### Fields + +| Field | Description | +| -------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | +| `ObjectType`
_[sigs.k8s.io/controller-runtime/pkg/client.Object](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client#Object)_ | | +| `ParentNamespace`
_string_ | | +| `NamespaceSelector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | | +| `Labels`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | | + ### OutputStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -OutputStageSpec is an action stage that takes data from the extracted map and changes the log line that will be sent to Loki. -#### Fields -|Field|Description| -|-|-| -|`source`
_string_| Name from extract data to use for the log entry. Required. | +OutputStageSpec is an action stage that takes data from the extracted map and changes the log line that will be sent to Loki. + +#### Fields + +| Field | Description | +| --------------------- | ---------------------------------------------------------- | +| `source`
_string_ | Name from extracted data to use for the log entry. Required. |
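+
+For example, a minimal output stage sketch (the field name is illustrative):
+
+```yaml
+# Replaces the shipped log line with the extracted "message" value.
+pipelineStages:
+  - output:
+      source: message
+```
+ ### PackStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -PackStageSpec is a transform stage that lets you embed extracted values and labels into the log line by packing the log line and labels inside of a JSON object. -#### Fields -|Field|Description| -|-|-| -|`labels`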
_[]string_| Name from extracted data or line labels. Required. Labels provided here are automatically removed from output labels. | -|`ingestTimestamp`
_bool_| If the resulting log line should use any existing timestamp or use time.Now() when the line was created. Set to true when combining several log streams from different containers to avoid out of order errors. | +PackStageSpec is a transform stage that lets you embed extracted values and labels into the log line by packing the log line and labels inside of a JSON object. + +#### Fields + +| Field | Description | +| ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `labels`
_[]string_ | Name from extracted data or line labels. Required. Labels provided here are automatically removed from output labels. | +| `ingestTimestamp`
_bool_ | Whether the resulting log line should use any existing timestamp or use time.Now() when the line was created. Set to true when combining several log streams from different containers to avoid out-of-order errors. |
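+
+For example, a minimal pack stage sketch (the label names are illustrative):
+
+```yaml
+# Embeds the "pod" and "trace_id" labels and the original line into a
+# single JSON object, and stamps the entry with the ingestion time.
+pipelineStages:
+  - pack:
+      labels:
+        - pod
+        - trace_id
+      ingestTimestamp: true
+```
+ ### PipelineStageSpec + (Appears on:[PodLogsSpec](#monitoring.grafana.com/v1alpha1.PodLogsSpec)) -PipelineStageSpec defines an individual pipeline stage. Each stage type is mutually exclusive and no more than one may be set per stage. More information on pipelines can be found in the Promtail documentation: https://grafana.com/docs/loki/latest/clients/promtail/pipelines/ -#### Fields -|Field|Description| -|-|-| -|`cri`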
_[CRIStageSpec](#monitoring.grafana.com/v1alpha1.CRIStageSpec)_| CRI is a parsing stage that reads log lines using the standard CRI logging format. Supply cri: {} to enable. | -|`docker`
_[DockerStageSpec](#monitoring.grafana.com/v1alpha1.DockerStageSpec)_| Docker is a parsing stage that reads log lines using the standard Docker logging format. Supply docker: {} to enable. | -|`drop`
_[DropStageSpec](#monitoring.grafana.com/v1alpha1.DropStageSpec)_| Drop is a filtering stage that lets you drop certain logs. | -|`json`
_[JSONStageSpec](#monitoring.grafana.com/v1alpha1.JSONStageSpec)_| JSON is a parsing stage that reads the log line as JSON and accepts JMESPath expressions to extract data. Information on JMESPath: http://jmespath.org/ | -|`labelAllow`
_[]string_| LabelAllow is an action stage that only allows the provided labels to be included in the label set that is sent to Loki with the log entry. | -|`labelDrop`
_[]string_| LabelDrop is an action stage that drops labels from the label set that is sent to Loki with the log entry. | -|`labels`
_map[string]string_| Labels is an action stage that takes data from the extracted map and modifies the label set that is sent to Loki with the log entry. The key is REQUIRED and represents the name for the label that will be created. Value is optional and will be the name from extracted data to use for the value of the label. If the value is not provided, it defaults to match the key. | -|`limit`
_[LimitStageSpec](#monitoring.grafana.com/v1alpha1.LimitStageSpec)_| Limit is a rate-limiting stage that throttles logs based on several options. | -|`match`
_[MatchStageSpec](#monitoring.grafana.com/v1alpha1.MatchStageSpec)_| Match is a filtering stage that conditionally applies a set of stages or drop entries when a log entry matches a configurable LogQL stream selector and filter expressions. | -|`metrics`
_[map[string]github.com/grafana/agent/static/operator/apis/monitoring/v1alpha1.MetricsStageSpec](#monitoring.grafana.com/v1alpha1.MetricsStageSpec)_| Metrics is an action stage that supports defining and updating metrics based on data from the extracted map. Created metrics are not pushed to Loki or Prometheus and are instead exposed via the /metrics endpoint of the Grafana Agent pod. The Grafana Agent Operator should be configured with a MetricsInstance that discovers the logging DaemonSet to collect metrics created by this stage. | -|`multiline`
_[MultilineStageSpec](#monitoring.grafana.com/v1alpha1.MultilineStageSpec)_| Multiline stage merges multiple lines into a multiline block before passing it on to the next stage in the pipeline. | -|`output`
_[OutputStageSpec](#monitoring.grafana.com/v1alpha1.OutputStageSpec)_| Output stage is an action stage that takes data from the extracted map and changes the log line that will be sent to Loki. | -|`pack`
_[PackStageSpec](#monitoring.grafana.com/v1alpha1.PackStageSpec)_| Pack is a transform stage that lets you embed extracted values and labels into the log line by packing the log line and labels inside of a JSON object. | -|`regex`
_[RegexStageSpec](#monitoring.grafana.com/v1alpha1.RegexStageSpec)_| Regex is a parsing stage that parses a log line using a regular expression. Named capture groups in the regex allows for adding data into the extracted map. | -|`replace`
_[ReplaceStageSpec](#monitoring.grafana.com/v1alpha1.ReplaceStageSpec)_| Replace is a parsing stage that parses a log line using a regular expression and replaces the log line. Named capture groups in the regex allows for adding data into the extracted map. | -|`template`
_[TemplateStageSpec](#monitoring.grafana.com/v1alpha1.TemplateStageSpec)_| Template is a transform stage that manipulates the values in the extracted map using Go's template syntax. | -|`tenant`
_[TenantStageSpec](#monitoring.grafana.com/v1alpha1.TenantStageSpec)_| Tenant is an action stage that sets the tenant ID for the log entry picking it from a field in the extracted data map. If the field is missing, the default LogsClientSpec.tenantId will be used. | -|`timestamp`
_[TimestampStageSpec](#monitoring.grafana.com/v1alpha1.TimestampStageSpec)_| Timestamp is an action stage that can change the timestamp of a log line before it is sent to Loki. If not present, the timestamp of a log line defaults to the time when the log line was read. | +PipelineStageSpec defines an individual pipeline stage. Each stage type is mutually exclusive and no more than one may be set per stage. More information on pipelines can be found in the Promtail documentation: https://grafana.com/docs/loki/latest/clients/promtail/pipelines/ + +#### Fields + +| Field | Description | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `cri`
_[CRIStageSpec](#monitoring.grafana.com/v1alpha1.CRIStageSpec)_ | CRI is a parsing stage that reads log lines using the standard CRI logging format. Supply cri: {} to enable. | +| `docker`
_[DockerStageSpec](#monitoring.grafana.com/v1alpha1.DockerStageSpec)_ | Docker is a parsing stage that reads log lines using the standard Docker logging format. Supply docker: {} to enable. | +| `drop`
_[DropStageSpec](#monitoring.grafana.com/v1alpha1.DropStageSpec)_ | Drop is a filtering stage that lets you drop certain logs. | +| `json`
_[JSONStageSpec](#monitoring.grafana.com/v1alpha1.JSONStageSpec)_ | JSON is a parsing stage that reads the log line as JSON and accepts JMESPath expressions to extract data. Information on JMESPath: http://jmespath.org/ | +| `labelAllow`
_[]string_ | LabelAllow is an action stage that only allows the provided labels to be included in the label set that is sent to Loki with the log entry. | +| `labelDrop`
_[]string_ | LabelDrop is an action stage that drops labels from the label set that is sent to Loki with the log entry. | +| `labels`
_map[string]string_ | Labels is an action stage that takes data from the extracted map and modifies the label set that is sent to Loki with the log entry. The key is REQUIRED and represents the name for the label that will be created. Value is optional and will be the name from extracted data to use for the value of the label. If the value is not provided, it defaults to match the key. | +| `limit`
_[LimitStageSpec](#monitoring.grafana.com/v1alpha1.LimitStageSpec)_ | Limit is a rate-limiting stage that throttles logs based on several options. | +| `match`
_[MatchStageSpec](#monitoring.grafana.com/v1alpha1.MatchStageSpec)_ | Match is a filtering stage that conditionally applies a set of stages or drop entries when a log entry matches a configurable LogQL stream selector and filter expressions. | +| `metrics`
_[map[string]github.com/grafana/agent/static/operator/apis/monitoring/v1alpha1.MetricsStageSpec](#monitoring.grafana.com/v1alpha1.MetricsStageSpec)_ | Metrics is an action stage that supports defining and updating metrics based on data from the extracted map. Created metrics are not pushed to Loki or Prometheus and are instead exposed via the /metrics endpoint of the Grafana Agent pod. The Grafana Agent Operator should be configured with a MetricsInstance that discovers the logging DaemonSet to collect metrics created by this stage. | +| `multiline`
_[MultilineStageSpec](#monitoring.grafana.com/v1alpha1.MultilineStageSpec)_ | Multiline stage merges multiple lines into a multiline block before passing it on to the next stage in the pipeline. | +| `output`
_[OutputStageSpec](#monitoring.grafana.com/v1alpha1.OutputStageSpec)_ | Output stage is an action stage that takes data from the extracted map and changes the log line that will be sent to Loki. | +| `pack`
_[PackStageSpec](#monitoring.grafana.com/v1alpha1.PackStageSpec)_ | Pack is a transform stage that lets you embed extracted values and labels into the log line by packing the log line and labels inside of a JSON object. | +| `regex`
_[RegexStageSpec](#monitoring.grafana.com/v1alpha1.RegexStageSpec)_ | Regex is a parsing stage that parses a log line using a regular expression. Named capture groups in the regex allow for adding data into the extracted map. | +| `replace`
_[ReplaceStageSpec](#monitoring.grafana.com/v1alpha1.ReplaceStageSpec)_ | Replace is a parsing stage that parses a log line using a regular expression and replaces the log line. Named capture groups in the regex allow for adding data into the extracted map. | +| `template`
_[TemplateStageSpec](#monitoring.grafana.com/v1alpha1.TemplateStageSpec)_ | Template is a transform stage that manipulates the values in the extracted map using Go's template syntax. | +| `tenant`
_[TenantStageSpec](#monitoring.grafana.com/v1alpha1.TenantStageSpec)_ | Tenant is an action stage that sets the tenant ID for the log entry picking it from a field in the extracted data map. If the field is missing, the default LogsClientSpec.tenantId will be used. | +| `timestamp`
_[TimestampStageSpec](#monitoring.grafana.com/v1alpha1.TimestampStageSpec)_ | Timestamp is an action stage that can change the timestamp of a log line before it is sent to Loki. If not present, the timestamp of a log line defaults to the time when the log line was read. |
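+
+For example, the following minimal sketch chains three stages; each list entry sets exactly one stage type, and stages run in order (the JMESPath expressions and label names are illustrative):
+
+```yaml
+# Parse Docker-format lines, extract two JSON fields, then promote
+# the extracted "level" value to a Loki label.
+pipelineStages:
+  - docker: {}
+  - json:
+      expressions:
+        level: level
+        msg: message
+  - labels:
+      level: ""
+```
+ ### PodLogs + (Appears on:[LogsDeployment](#monitoring.grafana.com/v1alpha1.LogsDeployment)) -PodLogs defines how to collect logs for a pod. -#### Fields -|Field|Description| -|-|-| -|`metadata`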
_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_| Refer to the Kubernetes API documentation for the fields of the `metadata` field. | -|`spec`
_[PodLogsSpec](#monitoring.grafana.com/v1alpha1.PodLogsSpec)_| Spec holds the specification of the desired behavior for the PodLogs. | -|`jobLabel`
_string_| The label to use to retrieve the job name from. | -|`podTargetLabels`
_[]string_| PodTargetLabels transfers labels on the Kubernetes Pod onto the target. | -|`selector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Selector to select Pod objects. Required. | -|`namespaceSelector`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.NamespaceSelector](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.NamespaceSelector)_| Selector to select which namespaces the Pod objects are discovered from. | -|`pipelineStages`
_[[]PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)_| Pipeline stages for this pod. Pipeline stages support transforming and filtering log lines. | -|`relabelings`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig)_| RelabelConfigs to apply to logs before delivering. Grafana Agent Operator automatically adds relabelings for a few standard Kubernetes fields and replaces original scrape job name with __tmp_logs_job_name. More info: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#relabel_configs | +PodLogs defines how to collect logs for a pod. + +#### Fields + +| Field | Description | +| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `metadata`
_[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to the Kubernetes API documentation for the fields of the `metadata` field. | +| `spec`
_[PodLogsSpec](#monitoring.grafana.com/v1alpha1.PodLogsSpec)_ | Spec holds the specification of the desired behavior for the PodLogs. | +| `jobLabel`
_string_ | The label to use to retrieve the job name from. | +| `podTargetLabels`
_[]string_ | PodTargetLabels transfers labels on the Kubernetes Pod onto the target. | +| `selector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Selector to select Pod objects. Required. | +| `namespaceSelector`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.NamespaceSelector](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.NamespaceSelector)_ | Selector to select which namespaces the Pod objects are discovered from. | +| `pipelineStages`
_[[]PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)_ | Pipeline stages for this pod. Pipeline stages support transforming and filtering log lines. | +| `relabelings`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig)_ | RelabelConfigs to apply to logs before delivering. Grafana Agent Operator automatically adds relabelings for a few standard Kubernetes fields and replaces original scrape job name with \_\_tmp_logs_job_name. More info: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#relabel_configs |
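+
+For example, a minimal `PodLogs` sketch (the resource name, namespace, and labels are illustrative):
+
+```yaml
+# Collects logs from Pods labeled app=my-app in the default namespace,
+# parsing each line with the standard CRI stage.
+apiVersion: monitoring.grafana.com/v1alpha1
+kind: PodLogs
+metadata:
+  name: my-app-logs
+  namespace: default
+  labels:
+    instance: primary
+spec:
+  selector:
+    matchLabels:
+      app: my-app
+  pipelineStages:
+    - cri: {}
+```
+ ### PodLogsSpec + (Appears on:[PodLogs](#monitoring.grafana.com/v1alpha1.PodLogs)) -PodLogsSpec defines how to collect logs for a pod. -#### Fields -|Field|Description| -|-|-| -|`jobLabel`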
_string_| The label to use to retrieve the job name from. | -|`podTargetLabels`
_[]string_| PodTargetLabels transfers labels on the Kubernetes Pod onto the target. | -|`selector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_| Selector to select Pod objects. Required. | -|`namespaceSelector`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.NamespaceSelector](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.NamespaceSelector)_| Selector to select which namespaces the Pod objects are discovered from. | -|`pipelineStages`
_[[]PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)_| Pipeline stages for this pod. Pipeline stages support transforming and filtering log lines. | -|`relabelings`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig)_| RelabelConfigs to apply to logs before delivering. Grafana Agent Operator automatically adds relabelings for a few standard Kubernetes fields and replaces original scrape job name with __tmp_logs_job_name. More info: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#relabel_configs | +PodLogsSpec defines how to collect logs for a pod. + +#### Fields + +| Field | Description | +| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `jobLabel`
_string_ | The label to use to retrieve the job name from. | +| `podTargetLabels`
_[]string_ | PodTargetLabels transfers labels on the Kubernetes Pod onto the target. | +| `selector`
_[Kubernetes meta/v1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)_ | Selector to select Pod objects. Required. | +| `namespaceSelector`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.NamespaceSelector](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.NamespaceSelector)_ | Selector to select which namespaces the Pod objects are discovered from. | +| `pipelineStages`
_[[]PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)_ | Pipeline stages for this pod. Pipeline stages support transforming and filtering log lines. | +| `relabelings`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig)_ | RelabelConfigs to apply to logs before delivering. Grafana Agent Operator automatically adds relabelings for a few standard Kubernetes fields and replaces original scrape job name with \_\_tmp_logs_job_name. More info: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#relabel_configs | + ### QueueConfig + (Appears on:[RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)) -QueueConfig allows the tuning of remote_write queue_config parameters. -#### Fields -|Field|Description| -|-|-| -|`capacity`
_int_| Capacity is the number of samples to buffer per shard before samples start being dropped. | -|`minShards`
_int_| MinShards is the minimum number of shards, i.e., the amount of concurrency. | -|`maxShards`
_int_| MaxShards is the maximum number of shards, i.e., the amount of concurrency. | -|`maxSamplesPerSend`
_int_| MaxSamplesPerSend is the maximum number of samples per send. | -|`batchSendDeadline`
_string_| BatchSendDeadline is the maximum time a sample will wait in the buffer. | -|`maxRetries`
_int_| MaxRetries is the maximum number of times to retry a batch on recoverable errors. | -|`minBackoff`
_string_| MinBackoff is the initial retry delay. MinBackoff is doubled for every retry. | -|`maxBackoff`
_string_| MaxBackoff is the maximum retry delay. | -|`retryOnRateLimit`
_bool_| RetryOnRateLimit retries requests when encountering rate limits. | +QueueConfig allows the tuning of remote_write queue_config parameters. + +#### Fields + +| Field | Description | +| -------------------------------- | ----------------------------------------------------------------------------------------- | +| `capacity`
_int_ | Capacity is the number of samples to buffer per shard before samples start being dropped. | +| `minShards`
_int_ | MinShards is the minimum number of shards, i.e., the amount of concurrency. | +| `maxShards`
_int_ | MaxShards is the maximum number of shards, i.e., the amount of concurrency. | +| `maxSamplesPerSend`
_int_ | MaxSamplesPerSend is the maximum number of samples per send. | +| `batchSendDeadline`
_string_ | BatchSendDeadline is the maximum time a sample will wait in the buffer. | +| `maxRetries`
_int_ | MaxRetries is the maximum number of times to retry a batch on recoverable errors. | +| `minBackoff`
_string_ | MinBackoff is the initial retry delay. MinBackoff is doubled for every retry. | +| `maxBackoff`
_string_ | MaxBackoff is the maximum retry delay. | +| `retryOnRateLimit`
_bool_ | RetryOnRateLimit retries requests when encountering rate limits. |
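+
+For example, a remote_write entry might tune its queue as in the following sketch (the URL and all values are illustrative, roughly mirroring the Prometheus defaults):
+
+```yaml
+remoteWrite:
+  - url: https://example.com/api/prom/push
+    queueConfig:
+      capacity: 2500
+      maxShards: 200
+      maxSamplesPerSend: 500
+      batchSendDeadline: 5s
+      minBackoff: 30ms
+      maxBackoff: 5s
+      retryOnRateLimit: true
+```
+ ### RegexStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -RegexStageSpec is a parsing stage that parses a log line using a regular expression. Named capture groups in the regex allows for adding data into the extracted map. -#### Fields -|Field|Description| -|-|-| -|`source`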
_string_| Name from extracted data to parse. If empty, defaults to using the log message. | -|`expression`
_string_| RE2 regular expression. Each capture group MUST be named. Required. | +RegexStageSpec is a parsing stage that parses a log line using a regular expression. Named capture groups in the regex allow for adding data into the extracted map. + +#### Fields + +| Field | Description | +| ------------------------- | ------------------------------------------------------------------------------- | +| `source`
_string_ | Name from extracted data to parse. If empty, defaults to using the log message. | +| `expression`
_string_ | RE2 regular expression. Each capture group MUST be named. Required. |
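+
+For example, a minimal regex stage sketch (the expression is illustrative):
+
+```yaml
+# Extracts a named "level" capture group from each log line into the
+# extracted map.
+pipelineStages:
+  - regex:
+      expression: '(?P<level>DEBUG|INFO|WARN|ERROR)'
+```
+ ### RemoteWriteSpec + (Appears on:[MetricsInstanceSpec](#monitoring.grafana.com/v1alpha1.MetricsInstanceSpec), [MetricsSubsystemSpec](#monitoring.grafana.com/v1alpha1.MetricsSubsystemSpec)) -RemoteWriteSpec defines the remote_write configuration for Prometheus. -#### Fields -|Field|Description| -|-|-| -|`name`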
_string_| Name of the remote_write queue. Must be unique if specified. The name is used in metrics and logging in order to differentiate queues. | -|`url`
_string_| URL of the endpoint to send samples to. | -|`remoteTimeout`
_string_| RemoteTimeout is the timeout for requests to the remote_write endpoint. | -|`headers`
_map[string]string_| Headers is a set of custom HTTP headers to be sent along with each remote_write request. Be aware that any headers set by Grafana Agent itself can't be overwritten. | -|`writeRelabelConfigs`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig)_| WriteRelabelConfigs holds relabel_configs to relabel samples before they are sent to the remote_write endpoint. | -|`basicAuth`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.BasicAuth](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.BasicAuth)_| BasicAuth for the URL. | -|`oauth2`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.OAuth2](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.OAuth2)_| Oauth2 for URL | -|`bearerToken`
_string_| BearerToken used for remote_write. | -|`bearerTokenFile`
_string_| BearerTokenFile used to read bearer token. | -|`sigv4`
_[SigV4Config](#monitoring.grafana.com/v1alpha1.SigV4Config)_| SigV4 configures SigV4-based authentication to the remote_write endpoint. SigV4-based authentication is used if SigV4 is defined, even with an empty object. | -|`tlsConfig`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.TLSConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.TLSConfig)_| TLSConfig to use for remote_write. | -|`proxyUrl`
_string_| ProxyURL to proxy requests through. Optional. | -|`queueConfig`
_[QueueConfig](#monitoring.grafana.com/v1alpha1.QueueConfig)_| QueueConfig allows tuning of the remote_write queue parameters. | -|`metadataConfig`
_[MetadataConfig](#monitoring.grafana.com/v1alpha1.MetadataConfig)_| MetadataConfig configures the sending of series metadata to remote storage. | +RemoteWriteSpec defines the remote_write configuration for Prometheus. + +#### Fields + +| Field | Description | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `name`
_string_ | Name of the remote_write queue. Must be unique if specified. The name is used in metrics and logging in order to differentiate queues. | +| `url`
_string_ | URL of the endpoint to send samples to. | +| `remoteTimeout`
_string_ | RemoteTimeout is the timeout for requests to the remote_write endpoint. | +| `headers`
_map[string]string_ | Headers is a set of custom HTTP headers to be sent along with each remote_write request. Be aware that any headers set by Grafana Agent itself can't be overwritten. | +| `writeRelabelConfigs`
_[[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.RelabelConfig)_ | WriteRelabelConfigs holds relabel_configs to relabel samples before they are sent to the remote_write endpoint. | +| `basicAuth`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.BasicAuth](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.BasicAuth)_ | BasicAuth for the URL. | +| `oauth2`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.OAuth2](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.OAuth2)_ | OAuth2 for the URL. | +| `bearerToken`
_string_ | BearerToken used for remote_write. | +| `bearerTokenFile`
_string_ | BearerTokenFile used to read bearer token. | +| `sigv4`
_[SigV4Config](#monitoring.grafana.com/v1alpha1.SigV4Config)_ | SigV4 configures SigV4-based authentication to the remote_write endpoint. SigV4-based authentication is used if SigV4 is defined, even with an empty object. | +| `tlsConfig`
_[github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.TLSConfig](https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.TLSConfig)_ | TLSConfig to use for remote_write. | +| `proxyUrl`
_string_ | ProxyURL to proxy requests through. Optional. | +| `queueConfig`
_[QueueConfig](#monitoring.grafana.com/v1alpha1.QueueConfig)_ | QueueConfig allows tuning of the remote_write queue parameters. | +| `metadataConfig`
_[MetadataConfig](#monitoring.grafana.com/v1alpha1.MetadataConfig)_ | MetadataConfig configures the sending of series metadata to remote storage. |
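+
+For example, a minimal remote_write sketch (the URL and Secret names are illustrative):
+
+```yaml
+# Sends samples with basic auth read from a Kubernetes Secret, dropping
+# one high-churn metric family before it leaves the Agent.
+remoteWrite:
+  - url: https://example.com/api/prom/push
+    basicAuth:
+      username:
+        name: prom-credentials
+        key: username
+      password:
+        name: prom-credentials
+        key: password
+    writeRelabelConfigs:
+      - sourceLabels: [__name__]
+        regex: "go_gc_.*"
+        action: drop
+```
+ ### ReplaceStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -ReplaceStageSpec is a parsing stage that parses a log line using a regular expression and replaces the log line. Named capture groups in the regex allows for adding data into the extracted map. -#### Fields -|Field|Description| -|-|-| -|`source`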
_string_| Name from extracted data to parse. If empty, defaults to using the log message. | -|`expression`
_string_| RE2 regular expression. Each capture group MUST be named. Required. | -|`replace`
_string_| Value to replace the captured group with. | +ReplaceStageSpec is a parsing stage that parses a log line using a regular expression and replaces the log line. Named capture groups in the regex allow for adding data into the extracted map. + +#### Fields + +| Field | Description | +| ------------------------- | ------------------------------------------------------------------------------- | +| `source`
_string_ | Name from extracted data to parse. If empty, defaults to using the log message. | +| `expression`
_string_ | RE2 regular expression. Each capture group MUST be named. Required. | +| `replace`
_string_ | Value to replace the captured group with. |
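+
+For example, a minimal replace stage sketch that masks email-like strings (the expression is illustrative):
+
+```yaml
+# Replaces anything matching the named capture group with "***"
+# before the line is sent to Loki.
+pipelineStages:
+  - replace:
+      expression: '(?P<email>\S+@\S+\.\S+)'
+      replace: '***'
+```
+ ### SigV4Config + (Appears on:[RemoteWriteSpec](#monitoring.grafana.com/v1alpha1.RemoteWriteSpec)) -SigV4Config specifies configuration to perform SigV4 authentication. -#### Fields -|Field|Description| -|-|-| -|`region`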
_string_| Region of the AWS endpoint. If blank, the region from the default credentials chain is used. | -|`accessKey`
_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| AccessKey holds the secret of the AWS API access key to use for signing. If not provided, the environment variable AWS_ACCESS_KEY_ID is used. | -|`secretKey`
_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_| SecretKey of the AWS API to use for signing. If blank, the environment variable AWS_SECRET_ACCESS_KEY is used. | -|`profile`
_string_| Profile is the named AWS profile to use for authentication. | -|`roleARN`
_string_| RoleARN is the AWS Role ARN to use for authentication, as an alternative for using the AWS API keys. | +SigV4Config specifies configuration to perform SigV4 authentication. + +#### Fields + +| Field | Description | +| -------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | +| `region`
_string_ | Region of the AWS endpoint. If blank, the region from the default credentials chain is used. | +| `accessKey`
_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | AccessKey holds the secret of the AWS API access key to use for signing. If not provided, the environment variable AWS_ACCESS_KEY_ID is used. | +| `secretKey`
_[Kubernetes core/v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core)_ | SecretKey of the AWS API to use for signing. If blank, the environment variable AWS_SECRET_ACCESS_KEY is used. | +| `profile`
_string_ | Profile is the named AWS profile to use for authentication. | +| `roleARN`
_string_ | RoleARN is the AWS Role ARN to use for authentication, as an alternative for using the AWS API keys. |
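+
+For example, a minimal SigV4 sketch (the URL, region, and Secret names are illustrative):
+
+```yaml
+# Signs remote_write requests with AWS keys read from a Kubernetes Secret.
+remoteWrite:
+  - url: https://example.amazonaws.com/api/v1/remote_write
+    sigv4:
+      region: us-east-1
+      accessKey:
+        name: aws-credentials
+        key: access-key
+      secretKey:
+        name: aws-credentials
+        key: secret-key
+```
+ ### TemplateStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -TemplateStageSpec is a transform stage that manipulates the values in the extracted map using Go's template syntax. -#### Fields -|Field|Description| -|-|-| -|`source`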
_string_| Name from extracted data to parse. Required. If empty, defaults to using the log message. | -|`template`
_string_| Go template string to use. Required. In addition to normal template functions, ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight, TrimPrefix, and TrimSpace are also available. | +TemplateStageSpec is a transform stage that manipulates the values in the extracted map using Go's template syntax. + +#### Fields + +| Field | Description | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `source`
_string_ | Name from extracted data to parse. Required. If empty, defaults to using the log message. | +| `template`
_string_ | Go template string to use. Required. In addition to normal template functions, ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight, TrimPrefix, and TrimSpace are also available. |
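+
+For example, a minimal template stage sketch (the field name is illustrative):
+
+```yaml
+# Lowercases the extracted "level" value using Go template syntax plus
+# the extra ToLower function.
+pipelineStages:
+  - template:
+      source: level
+      template: '{{ ToLower .Value }}'
+```
+ ### TenantStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -TenantStageSpec is an action stage that sets the tenant ID for the log entry picking it from a field in the extracted data map. -#### Fields -|Field|Description| -|-|-| -|`label`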
_string_| Name from labels whose value should be set as tenant ID. Mutually exclusive with source and value. | -|`source`
_string_| Name from extracted data to use as the tenant ID. Mutually exclusive with label and value. | -|`value`
_string_| Value to use for the template ID. Useful when this stage is used within a conditional pipeline such as match. Mutually exclusive with label and source. | +TenantStageSpec is an action stage that sets the tenant ID for the log entry picking it from a field in the extracted data map. + +#### Fields + +| Field | Description | +| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `label`
_string_ | Name from labels whose value should be set as tenant ID. Mutually exclusive with source and value. | +| `source`
_string_ | Name from extracted data to use as the tenant ID. Mutually exclusive with label and value. | +| `value`
_string_ | Value to use for the tenant ID. Useful when this stage is used within a conditional pipeline such as match. Mutually exclusive with label and source. |
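+
+For example, the following minimal sketch sets the tenant ID from extracted data and also shows the timestamp stage documented next (the field names are illustrative):
+
+```yaml
+# Routes each entry to the tenant found in the extracted map, then
+# parses the extracted "time" value as the entry's timestamp.
+pipelineStages:
+  - tenant:
+      source: customer_id
+  - timestamp:
+      source: time
+      format: RFC3339
+      actionOnFailure: fudge
+```
+ ### TimestampStageSpec + (Appears on:[PipelineStageSpec](#monitoring.grafana.com/v1alpha1.PipelineStageSpec)) -TimestampStageSpec is an action stage that can change the timestamp of a log line before it is sent to Loki. -#### Fields -|Field|Description| -|-|-| -|`source`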
_string_| Name from extracted data to use as the timestamp. Required. | -|`format`
_string_| Determines format of the time string. Required. Can be one of: ANSIC, UnixDate, RubyDate, RFC822, RFC822Z, RFC850, RFC1123, RFC1123Z, RFC3339, RFC3339Nano, Unix, UnixMs, UnixUs, UnixNs. | -|`fallbackFormats`
_[]string_| Fallback formats to try if format fails. | -|`location`
_string_| IANA Timezone Database string. | -|`actionOnFailure`
_string_| Action to take when the timestamp can't be extracted or parsed. Can be skip or fudge. Defaults to fudge. | +TimestampStageSpec is an action stage that can change the timestamp of a log line before it is sent to Loki. + +#### Fields + +| Field | Description | +| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `source`
_string_ | Name from extracted data to use as the timestamp. Required. | +| `format`
_string_ | Determines format of the time string. Required. Can be one of: ANSIC, UnixDate, RubyDate, RFC822, RFC822Z, RFC850, RFC1123, RFC1123Z, RFC3339, RFC3339Nano, Unix, UnixMs, UnixUs, UnixNs. | +| `fallbackFormats`
_[]string_ | Fallback formats to try if format fails. | +| `location`
_string_ | IANA Timezone Database string. | +| `actionOnFailure`
_string_ | Action to take when the timestamp can't be extracted or parsed. Can be skip or fudge. Defaults to fudge. | diff --git a/docs/sources/operator/architecture.md b/docs/sources/operator/architecture.md index ba0b5c97fd06..0ff2130ab402 100644 --- a/docs/sources/operator/architecture.md +++ b/docs/sources/operator/architecture.md @@ -1,9 +1,9 @@ --- aliases: -- /docs/grafana-cloud/agent/operator/architecture/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/architecture/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/architecture/ -- /docs/grafana-cloud/send-data/agent/operator/architecture/ + - /docs/grafana-cloud/agent/operator/architecture/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/architecture/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/architecture/ + - /docs/grafana-cloud/send-data/agent/operator/architecture/ canonical: https://grafana.com/docs/agent/latest/operator/architecture/ description: Learn about Grafana Agent architecture title: Architecture @@ -24,22 +24,22 @@ discovers other sub-resources, `MetricsInstance` and `LogsInstance`. The `Grafan The full hierarchy of custom resources is as follows: - `GrafanaAgent` - - `MetricsInstance` - - `PodMonitor` - - `Probe` - - `ServiceMonitor` - - `LogsInstance` - - `PodLogs` + - `MetricsInstance` + - `PodMonitor` + - `Probe` + - `ServiceMonitor` + - `LogsInstance` + - `PodLogs` The following table describes these custom resources: -| Custom resource | description | -|---|---| -| `GrafanaAgent` | Discovers one or more `MetricsInstance` and `LogsInstance` resources. | -| `MetricsInstance` | Defines where to ship collected metrics. This rolls out a Grafana Agent StatefulSet that will scrape and ship metrics to a `remote_write` endpoint. | -| `ServiceMonitor` | Collects [cAdvisor](https://github.com/google/cadvisor) and [kubelet metrics](https://github.com/kubernetes/kube-state-metrics). This configures the `MetricsInstance` / Agent StatefulSet | -| `LogsInstance` | Defines where to ship collected logs. This rolls out a Grafana Agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that will tail log files on your cluster nodes. | -| `PodLogs` | Collects container logs from Kubernetes Pods. This configures the `LogsInstance` / Agent DaemonSet. | +| Custom resource | description | +| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `GrafanaAgent` | Discovers one or more `MetricsInstance` and `LogsInstance` resources. | +| `MetricsInstance` | Defines where to ship collected metrics. This rolls out a Grafana Agent StatefulSet that will scrape and ship metrics to a `remote_write` endpoint. | +| `ServiceMonitor` | Collects [cAdvisor](https://github.com/google/cadvisor) and [kubelet metrics](https://github.com/kubernetes/kube-state-metrics). This configures the `MetricsInstance` / Agent StatefulSet | +| `LogsInstance` | Defines where to ship collected logs. This rolls out a Grafana Agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that will tail log files on your cluster nodes. | +| `PodLogs` | Collects container logs from Kubernetes Pods. This configures the `LogsInstance` / Agent DaemonSet. 
| Most of the Grafana Agent Operator resources have the ability to reference a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) or a [Secret](https://kubernetes.io/docs/concepts/configuration/secret/). All referenced ConfigMaps or Secrets are added into the resource @@ -133,8 +133,8 @@ Two labels are added by default to every metric: - `cluster`, representing the `GrafanaAgent` deployment. Holds the value of `/`. - `__replica__`, representing the replica number of the Agent. This label works - out of the box with Grafana Cloud and Cortex's [HA - deduplication](https://cortexmetrics.io/docs/guides/ha-pair-handling/). + out of the box with Grafana Cloud and Cortex's [HA + deduplication](https://cortexmetrics.io/docs/guides/ha-pair-handling/). The shard number is not added as a label, as sharding is designed to be transparent on the receiver end. @@ -148,13 +148,13 @@ shards: 3 replicas: 2 ``` -You can also enable sharding and replication by setting the `shards` and `replicas` arguments when you start the Grafana Agent. +You can also enable sharding and replication by setting the `shards` and `replicas` arguments when you start the Grafana Agent. ### Examples The following examples show you how to enable sharding and replication in a Kubernetes environment. -* To shard the data into three shards and replicate each shard to two other Grafana Agent instances, you would use the following deployment manifest: +- To shard the data into three shards and replicate each shard to two other Grafana Agent instances, you would use the following deployment manifest: ``` apiVersion: apps/v1 @@ -179,7 +179,7 @@ The following examples show you how to enable sharding and replication in a Kube - "--replicas=2" ``` -* To shard the data into 10 shards and replicate each shard to three other Grafana Agent instances, you would use the following deployment manifest: +- To shard the data into 10 shards and replicate each shard to three other Grafana Agent instances, you would use the following deployment manifest: ``` apiVersion: apps/v1 @@ -203,4 +203,3 @@ The following examples show you how to enable sharding and replication in a Kube - "--shards=10" - "--replicas=3" ``` - diff --git a/docs/sources/operator/deploy-agent-operator-resources.md b/docs/sources/operator/deploy-agent-operator-resources.md index 6b6f6564c85a..8bbf3e7787cf 100644 --- a/docs/sources/operator/deploy-agent-operator-resources.md +++ b/docs/sources/operator/deploy-agent-operator-resources.md @@ -1,15 +1,16 @@ --- aliases: -- /docs/grafana-cloud/agent/operator/deploy-agent-operator-resources/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/deploy-agent-operator-resources/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/deploy-agent-operator-resources/ -- /docs/grafana-cloud/send-data/agent/operator/deploy-agent-operator-resources/ -- custom-resource-quickstart/ + - /docs/grafana-cloud/agent/operator/deploy-agent-operator-resources/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/deploy-agent-operator-resources/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/deploy-agent-operator-resources/ + - /docs/grafana-cloud/send-data/agent/operator/deploy-agent-operator-resources/ + - custom-resource-quickstart/ canonical: https://grafana.com/docs/agent/latest/operator/deploy-agent-operator-resources/ description: Learn how to deploy Operator resources title: Deploy Operator resources weight: 120 --- + # Deploy Operator resources To start collecting 
telemetry data, you need to roll out Grafana Agent Operator custom resources into your Kubernetes cluster. Before you can create the custom resources, you must first apply the Agent Custom Resource Definitions (CRDs) and install Agent Operator, with or without Helm. If you haven't yet taken these steps, follow the instructions in one of the following topics: @@ -54,104 +55,101 @@ To deploy the `GrafanaAgent` resource: 1. Copy the following manifests to a file: - ```yaml - apiVersion: monitoring.grafana.com/v1alpha1 - kind: GrafanaAgent - metadata: - name: grafana-agent - namespace: default - labels: - app: grafana-agent - spec: - image: grafana/agent:{{< param "AGENT_RELEASE" >}} - integrations: - selector: - matchLabels: - agent: grafana-agent-integrations - logLevel: info - serviceAccountName: grafana-agent - metrics: - instanceSelector: - matchLabels: - agent: grafana-agent-metrics - externalLabels: - cluster: cloud - - logs: - instanceSelector: - matchLabels: - agent: grafana-agent-logs - - --- - - apiVersion: v1 - kind: ServiceAccount - metadata: - name: grafana-agent - namespace: default - - --- - - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: grafana-agent - rules: - - apiGroups: - - "" - resources: - - nodes - - nodes/proxy - - nodes/metrics - - services - - endpoints - - pods - - events - verbs: - - get - - list - - watch - - apiGroups: - - networking.k8s.io - resources: - - ingresses - verbs: - - get - - list - - watch - - nonResourceURLs: - - /metrics - - /metrics/cadvisor - verbs: - - get - - --- - - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: grafana-agent - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: grafana-agent - subjects: - - kind: ServiceAccount - name: grafana-agent - namespace: default - ``` - - In the first manifest, the `GrafanaAgent` resource: - - - Specifies an Agent image version. - - Specifies `MetricsInstance` and `LogsInstance` selectors. These search for `MetricsInstances` and `LogsInstances` in the same namespace with labels matching `agent: grafana-agent-metrics` and `agent: grafana-agent-logs`, respectively. - - Sets a `cluster: cloud` label for all metrics shipped to your Prometheus-compatible endpoint. Change this label to your cluster name. To search for `MetricsInstances` or `LogsInstances` in a *different* namespace, use the `instanceNamespaceSelector` field. To learn more about this field, see the `GrafanaAgent` [CRD specification](https://github.com/grafana/agent/tree/main/operations/agent-static-operator/crds/monitoring.grafana.com_grafanaagents.yaml). 
+   ```yaml
+   apiVersion: monitoring.grafana.com/v1alpha1
+   kind: GrafanaAgent
+   metadata:
+     name: grafana-agent
+     namespace: default
+     labels:
+       app: grafana-agent
+   spec:
+     image: grafana/agent:{{< param "AGENT_RELEASE" >}}
+     integrations:
+       selector:
+         matchLabels:
+           agent: grafana-agent-integrations
+     logLevel: info
+     serviceAccountName: grafana-agent
+     metrics:
+       instanceSelector:
+         matchLabels:
+           agent: grafana-agent-metrics
+       externalLabels:
+         cluster: cloud
+
+     logs:
+       instanceSelector:
+         matchLabels:
+           agent: grafana-agent-logs
+
+   ---
+   apiVersion: v1
+   kind: ServiceAccount
+   metadata:
+     name: grafana-agent
+     namespace: default
+
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRole
+   metadata:
+     name: grafana-agent
+   rules:
+     - apiGroups:
+         - ""
+       resources:
+         - nodes
+         - nodes/proxy
+         - nodes/metrics
+         - services
+         - endpoints
+         - pods
+         - events
+       verbs:
+         - get
+         - list
+         - watch
+     - apiGroups:
+         - networking.k8s.io
+       resources:
+         - ingresses
+       verbs:
+         - get
+         - list
+         - watch
+     - nonResourceURLs:
+         - /metrics
+         - /metrics/cadvisor
+       verbs:
+         - get
+
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRoleBinding
+   metadata:
+     name: grafana-agent
+   roleRef:
+     apiGroup: rbac.authorization.k8s.io
+     kind: ClusterRole
+     name: grafana-agent
+   subjects:
+     - kind: ServiceAccount
+       name: grafana-agent
+       namespace: default
+   ```
+
+   In the first manifest, the `GrafanaAgent` resource:
+
+   - Specifies an Agent image version.
+   - Specifies `MetricsInstance` and `LogsInstance` selectors. These search for `MetricsInstances` and `LogsInstances` in the same namespace with labels matching `agent: grafana-agent-metrics` and `agent: grafana-agent-logs`, respectively.
+   - Sets a `cluster: cloud` label for all metrics shipped to your Prometheus-compatible endpoint. Change this label to your cluster name. To search for `MetricsInstances` or `LogsInstances` in a _different_ namespace, use the `instanceNamespaceSelector` field. To learn more about this field, see the `GrafanaAgent` [CRD specification](https://github.com/grafana/agent/tree/main/operations/agent-static-operator/crds/monitoring.grafana.com_grafanaagents.yaml).

1. Customize the manifests as needed and roll them out to your cluster using `kubectl apply -f` followed by the filename.

-   This step creates a `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` for the `GrafanaAgent` resource.
+   This step creates a `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` for the `GrafanaAgent` resource.

-   Deploying a `GrafanaAgent` resource on its own does not spin up Agent Pods. Agent Operator creates Agent Pods once `MetricsInstance` and `LogsIntance` resources have been created. Follow the instructions in the [Deploy a MetricsInstance resource](#deploy-a-metricsinstance-resource) and [Deploy LogsInstance and PodLogs resources](#deploy-logsinstance-and-podlogs-resources) sections to create these resources.
+   Deploying a `GrafanaAgent` resource on its own does not spin up Agent Pods. Agent Operator creates Agent Pods once `MetricsInstance` and `LogsInstance` resources have been created. Follow the instructions in the [Deploy a MetricsInstance resource](#deploy-a-metricsinstance-resource) and [Deploy LogsInstance and PodLogs resources](#deploy-logsinstance-and-podlogs-resources) sections to create these resources.

### Disable feature flags reporting
@@ -173,63 +171,63 @@ To deploy a `MetricsInstance` resource:

1. Copy the following manifest to a file:

-   ```yaml
-   apiVersion: monitoring.grafana.com/v1alpha1
-   kind: MetricsInstance
-   metadata:
-     name: primary
-     namespace: default
-     labels:
-       agent: grafana-agent-metrics
-   spec:
-     remoteWrite:
-     - url: your_remote_write_URL
-       basicAuth:
-         username:
-           name: primary-credentials-metrics
-           key: username
-         password:
-           name: primary-credentials-metrics
-           key: password
-
-     # Supply an empty namespace selector to look in all namespaces. Remove
-     # this to only look in the same namespace as the MetricsInstance CR
-     serviceMonitorNamespaceSelector: {}
-     serviceMonitorSelector:
-       matchLabels:
-         instance: primary
-
-     # Supply an empty namespace selector to look in all namespaces. Remove
-     # this to only look in the same namespace as the MetricsInstance CR.
-     podMonitorNamespaceSelector: {}
-     podMonitorSelector:
-       matchLabels:
-         instance: primary
-
-     # Supply an empty namespace selector to look in all namespaces. Remove
-     # this to only look in the same namespace as the MetricsInstance CR.
-     probeNamespaceSelector: {}
-     probeSelector:
-       matchLabels:
-         instance: primary
-   ```
+   ```yaml
+   apiVersion: monitoring.grafana.com/v1alpha1
+   kind: MetricsInstance
+   metadata:
+     name: primary
+     namespace: default
+     labels:
+       agent: grafana-agent-metrics
+   spec:
+     remoteWrite:
+       - url: your_remote_write_URL
+         basicAuth:
+           username:
+             name: primary-credentials-metrics
+             key: username
+           password:
+             name: primary-credentials-metrics
+             key: password
+
+     # Supply an empty namespace selector to look in all namespaces. Remove
+     # this to only look in the same namespace as the MetricsInstance CR.
+     serviceMonitorNamespaceSelector: {}
+     serviceMonitorSelector:
+       matchLabels:
+         instance: primary
+
+     # Supply an empty namespace selector to look in all namespaces. Remove
+     # this to only look in the same namespace as the MetricsInstance CR.
+     podMonitorNamespaceSelector: {}
+     podMonitorSelector:
+       matchLabels:
+         instance: primary
+
+     # Supply an empty namespace selector to look in all namespaces. Remove
+     # this to only look in the same namespace as the MetricsInstance CR.
+     probeNamespaceSelector: {}
+     probeSelector:
+       matchLabels:
+         instance: primary
+   ```

1. Replace the `remoteWrite` URL and customize the namespace and label configuration as necessary.

-   This step associates the `MetricsInstance` resource with the `agent: grafana-agent` `GrafanaAgent` resource deployed in the previous step. The `MetricsInstance` resource watches for creation and updates to `*Monitors` with the `instance: primary` label.
+   This step associates the `MetricsInstance` resource with the `GrafanaAgent` resource deployed in the previous step, whose `instanceSelector` matches its `agent: grafana-agent-metrics` label. The `MetricsInstance` resource watches for creation and updates to `*Monitors` with the `instance: primary` label.

1. Once you've rolled out the manifest, create the `basicAuth` credentials [using a Kubernetes Secret](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/):

-   ```yaml
-   apiVersion: v1
-   kind: Secret
-   metadata:
-     name: primary-credentials-metrics
-     namespace: default
-   stringData:
-     username: 'your_cloud_prometheus_username'
-     password: 'your_cloud_prometheus_API_key'
-   ```
+   ```yaml
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: primary-credentials-metrics
+     namespace: default
+   stringData:
+     username: "your_cloud_prometheus_username"
+     password: "your_cloud_prometheus_API_key"
+   ```

   If you're using Grafana Cloud, you can find your hosted Prometheus endpoint username and password by clicking **Details** on the Prometheus tile on the [Grafana Cloud Portal](/profile/org). If you want to base64-encode these values yourself, use `data` instead of `stringData`.

@@ -243,87 +241,87 @@ To scrape the kubelet and cAdvisor endpoints:

1. Copy the following kubelet ServiceMonitor manifest to a file, then roll it out in your cluster using `kubectl apply -f` followed by the filename.

-   ```yaml
-   apiVersion: monitoring.coreos.com/v1
-   kind: ServiceMonitor
-   metadata:
-     labels:
-       instance: primary
-     name: kubelet-monitor
-     namespace: default
-   spec:
-     endpoints:
-     - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
-       honorLabels: true
-       interval: 60s
-       metricRelabelings:
-       - action: keep
-         regex: kubelet_cgroup_manager_duration_seconds_count|go_goroutines|kubelet_pod_start_duration_seconds_count|kubelet_runtime_operations_total|kubelet_pleg_relist_duration_seconds_bucket|volume_manager_total_volumes|kubelet_volume_stats_capacity_bytes|container_cpu_usage_seconds_total|container_network_transmit_bytes_total|kubelet_runtime_operations_errors_total|container_network_receive_bytes_total|container_memory_swap|container_network_receive_packets_total|container_cpu_cfs_periods_total|container_cpu_cfs_throttled_periods_total|kubelet_running_pod_count|node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate|container_memory_working_set_bytes|storage_operation_errors_total|kubelet_pleg_relist_duration_seconds_count|kubelet_running_pods|rest_client_request_duration_seconds_bucket|process_resident_memory_bytes|storage_operation_duration_seconds_count|kubelet_running_containers|kubelet_runtime_operations_duration_seconds_bucket|kubelet_node_config_error|kubelet_cgroup_manager_duration_seconds_bucket|kubelet_running_container_count|kubelet_volume_stats_available_bytes|kubelet_volume_stats_inodes|container_memory_rss|kubelet_pod_worker_duration_seconds_count|kubelet_node_name|kubelet_pleg_relist_interval_seconds_bucket|container_network_receive_packets_dropped_total|kubelet_pod_worker_duration_seconds_bucket|container_start_time_seconds|container_network_transmit_packets_dropped_total|process_cpu_seconds_total|storage_operation_duration_seconds_bucket|container_memory_cache|container_network_transmit_packets_total|kubelet_volume_stats_inodes_used|up|rest_client_requests_total
-         sourceLabels:
-         - __name__
-       port: https-metrics
-       relabelings:
-       - sourceLabels:
-         - __metrics_path__
-         targetLabel: metrics_path
-       - action: replace
-         targetLabel: job
-         replacement: integrations/kubernetes/kubelet
-       scheme: https
-       tlsConfig:
-         insecureSkipVerify: true
-     namespaceSelector:
-       matchNames:
-       - default
-     selector:
-       matchLabels:
-         app.kubernetes.io/name: kubelet
-   ```
+   ```yaml
+   apiVersion: monitoring.coreos.com/v1
+   kind: ServiceMonitor
+   metadata:
+     labels:
+       instance:
primary + name: kubelet-monitor + namespace: default + spec: + endpoints: + - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token + honorLabels: true + interval: 60s + metricRelabelings: + - action: keep + regex: kubelet_cgroup_manager_duration_seconds_count|go_goroutines|kubelet_pod_start_duration_seconds_count|kubelet_runtime_operations_total|kubelet_pleg_relist_duration_seconds_bucket|volume_manager_total_volumes|kubelet_volume_stats_capacity_bytes|container_cpu_usage_seconds_total|container_network_transmit_bytes_total|kubelet_runtime_operations_errors_total|container_network_receive_bytes_total|container_memory_swap|container_network_receive_packets_total|container_cpu_cfs_periods_total|container_cpu_cfs_throttled_periods_total|kubelet_running_pod_count|node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate|container_memory_working_set_bytes|storage_operation_errors_total|kubelet_pleg_relist_duration_seconds_count|kubelet_running_pods|rest_client_request_duration_seconds_bucket|process_resident_memory_bytes|storage_operation_duration_seconds_count|kubelet_running_containers|kubelet_runtime_operations_duration_seconds_bucket|kubelet_node_config_error|kubelet_cgroup_manager_duration_seconds_bucket|kubelet_running_container_count|kubelet_volume_stats_available_bytes|kubelet_volume_stats_inodes|container_memory_rss|kubelet_pod_worker_duration_seconds_count|kubelet_node_name|kubelet_pleg_relist_interval_seconds_bucket|container_network_receive_packets_dropped_total|kubelet_pod_worker_duration_seconds_bucket|container_start_time_seconds|container_network_transmit_packets_dropped_total|process_cpu_seconds_total|storage_operation_duration_seconds_bucket|container_memory_cache|container_network_transmit_packets_total|kubelet_volume_stats_inodes_used|up|rest_client_requests_total + sourceLabels: + - __name__ + port: https-metrics + relabelings: + - sourceLabels: + - __metrics_path__ + targetLabel: metrics_path + - action: replace + targetLabel: job + replacement: integrations/kubernetes/kubelet + scheme: https + tlsConfig: + insecureSkipVerify: true + namespaceSelector: + matchNames: + - default + selector: + matchLabels: + app.kubernetes.io/name: kubelet + ``` 1. Copy the following cAdvisor ServiceMonitor manifest to a file, then roll it out in your cluster using `kubectl apply -f` followed by the filename. 
- ```yaml - apiVersion: monitoring.coreos.com/v1 - kind: ServiceMonitor - metadata: - labels: - instance: primary - name: cadvisor-monitor - namespace: default - spec: - endpoints: - - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token - honorLabels: true - honorTimestamps: false - interval: 60s - metricRelabelings: - - action: keep - regex: kubelet_cgroup_manager_duration_seconds_count|go_goroutines|kubelet_pod_start_duration_seconds_count|kubelet_runtime_operations_total|kubelet_pleg_relist_duration_seconds_bucket|volume_manager_total_volumes|kubelet_volume_stats_capacity_bytes|container_cpu_usage_seconds_total|container_network_transmit_bytes_total|kubelet_runtime_operations_errors_total|container_network_receive_bytes_total|container_memory_swap|container_network_receive_packets_total|container_cpu_cfs_periods_total|container_cpu_cfs_throttled_periods_total|kubelet_running_pod_count|node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate|container_memory_working_set_bytes|storage_operation_errors_total|kubelet_pleg_relist_duration_seconds_count|kubelet_running_pods|rest_client_request_duration_seconds_bucket|process_resident_memory_bytes|storage_operation_duration_seconds_count|kubelet_running_containers|kubelet_runtime_operations_duration_seconds_bucket|kubelet_node_config_error|kubelet_cgroup_manager_duration_seconds_bucket|kubelet_running_container_count|kubelet_volume_stats_available_bytes|kubelet_volume_stats_inodes|container_memory_rss|kubelet_pod_worker_duration_seconds_count|kubelet_node_name|kubelet_pleg_relist_interval_seconds_bucket|container_network_receive_packets_dropped_total|kubelet_pod_worker_duration_seconds_bucket|container_start_time_seconds|container_network_transmit_packets_dropped_total|process_cpu_seconds_total|storage_operation_duration_seconds_bucket|container_memory_cache|container_network_transmit_packets_total|kubelet_volume_stats_inodes_used|up|rest_client_requests_total - sourceLabels: - - __name__ - path: /metrics/cadvisor - port: https-metrics - relabelings: - - sourceLabels: - - __metrics_path__ - targetLabel: metrics_path - - action: replace - targetLabel: job - replacement: integrations/kubernetes/cadvisor - scheme: https - tlsConfig: - insecureSkipVerify: true - namespaceSelector: - matchNames: - - default - selector: - matchLabels: - app.kubernetes.io/name: kubelet - ``` + ```yaml + apiVersion: monitoring.coreos.com/v1 + kind: ServiceMonitor + metadata: + labels: + instance: primary + name: cadvisor-monitor + namespace: default + spec: + endpoints: + - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token + honorLabels: true + honorTimestamps: false + interval: 60s + metricRelabelings: + - action: keep + regex: 
kubelet_cgroup_manager_duration_seconds_count|go_goroutines|kubelet_pod_start_duration_seconds_count|kubelet_runtime_operations_total|kubelet_pleg_relist_duration_seconds_bucket|volume_manager_total_volumes|kubelet_volume_stats_capacity_bytes|container_cpu_usage_seconds_total|container_network_transmit_bytes_total|kubelet_runtime_operations_errors_total|container_network_receive_bytes_total|container_memory_swap|container_network_receive_packets_total|container_cpu_cfs_periods_total|container_cpu_cfs_throttled_periods_total|kubelet_running_pod_count|node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate|container_memory_working_set_bytes|storage_operation_errors_total|kubelet_pleg_relist_duration_seconds_count|kubelet_running_pods|rest_client_request_duration_seconds_bucket|process_resident_memory_bytes|storage_operation_duration_seconds_count|kubelet_running_containers|kubelet_runtime_operations_duration_seconds_bucket|kubelet_node_config_error|kubelet_cgroup_manager_duration_seconds_bucket|kubelet_running_container_count|kubelet_volume_stats_available_bytes|kubelet_volume_stats_inodes|container_memory_rss|kubelet_pod_worker_duration_seconds_count|kubelet_node_name|kubelet_pleg_relist_interval_seconds_bucket|container_network_receive_packets_dropped_total|kubelet_pod_worker_duration_seconds_bucket|container_start_time_seconds|container_network_transmit_packets_dropped_total|process_cpu_seconds_total|storage_operation_duration_seconds_bucket|container_memory_cache|container_network_transmit_packets_total|kubelet_volume_stats_inodes_used|up|rest_client_requests_total
+           sourceLabels:
+             - __name__
+         path: /metrics/cadvisor
+         port: https-metrics
+         relabelings:
+           - sourceLabels:
+               - __metrics_path__
+             targetLabel: metrics_path
+           - action: replace
+             targetLabel: job
+             replacement: integrations/kubernetes/cadvisor
+         scheme: https
+         tlsConfig:
+           insecureSkipVerify: true
+     namespaceSelector:
+       matchNames:
+         - default
+     selector:
+       matchLabels:
+         app.kubernetes.io/name: kubelet
+   ```

   These two ServiceMonitors configure Agent to scrape all the kubelet and cAdvisor endpoints in your Kubernetes cluster (one of each per Node). In addition, they define a `job` label, which you can update (it's preset here for compatibility with Grafana Cloud's Kubernetes integration). They also provide an allowlist containing a core set of Kubernetes metrics to reduce remote metrics usage. If you don't need this allowlist, you can omit it; however, your metrics usage will increase significantly.

-   When you're done, Agent should now be shipping kubelet and cAdvisor metrics to your remote Prometheus endpoint. To check this in Grafana Cloud, go to your dashboards, select **Integration - Kubernetes**, then select **Kubernetes / Kubelet**.
+When you're done, Agent should be shipping kubelet and cAdvisor metrics to your remote Prometheus endpoint. To check this in Grafana Cloud, go to your dashboards, select **Integration - Kubernetes**, then select **Kubernetes / Kubelet**.

## Deploy LogsInstance and PodLogs resources

@@ -333,90 +331,90 @@ To deploy the `LogsInstance` resource into your cluster:

1. Copy the following manifest to a file, then roll it out in your cluster using `kubectl apply -f` followed by the filename.
-   ```yaml
-   apiVersion: monitoring.grafana.com/v1alpha1
-   kind: LogsInstance
-   metadata:
-     name: primary
-     namespace: default
-     labels:
-       agent: grafana-agent-logs
-   spec:
-     clients:
-     - url: your_remote_logs_URL
-       basicAuth:
-         username:
-           name: primary-credentials-logs
-           key: username
-         password:
-           name: primary-credentials-logs
-           key: password
-
-     # Supply an empty namespace selector to look in all namespaces. Remove
-     # this to only look in the same namespace as the LogsInstance CR
-     podLogsNamespaceSelector: {}
-     podLogsSelector:
-       matchLabels:
-         instance: primary
-   ```
-
-   This `LogsInstance` picks up `PodLogs` resources with the `instance: primary` label. Be sure to set the Loki URL to the correct push endpoint. For Grafana Cloud, this will look similar to `logs-prod-us-central1.grafana.net/loki/api/v1/push`, however check the [Grafana Cloud Portal](/profile/org) to confirm by clicking **Details** on the Loki tile.
-
-   Also note that this example uses the `agent: grafana-agent-logs` label, which associates this `LogsInstance` with the `GrafanaAgent` resource defined earlier. This means that it will inherit requests, limits, affinities and other properties defined in the `GrafanaAgent` custom resource.
+   ```yaml
+   apiVersion: monitoring.grafana.com/v1alpha1
+   kind: LogsInstance
+   metadata:
+     name: primary
+     namespace: default
+     labels:
+       agent: grafana-agent-logs
+   spec:
+     clients:
+       - url: your_remote_logs_URL
+         basicAuth:
+           username:
+             name: primary-credentials-logs
+             key: username
+           password:
+             name: primary-credentials-logs
+             key: password
+
+     # Supply an empty namespace selector to look in all namespaces. Remove
+     # this to only look in the same namespace as the LogsInstance CR.
+     podLogsNamespaceSelector: {}
+     podLogsSelector:
+       matchLabels:
+         instance: primary
+   ```
+
+   This `LogsInstance` picks up `PodLogs` resources with the `instance: primary` label. Be sure to set the Loki URL to the correct push endpoint. For Grafana Cloud, this will look similar to `logs-prod-us-central1.grafana.net/loki/api/v1/push`; however, check the [Grafana Cloud Portal](/profile/org) to confirm by clicking **Details** on the Loki tile.
+
+   Also note that this example uses the `agent: grafana-agent-logs` label, which associates this `LogsInstance` with the `GrafanaAgent` resource defined earlier. This means that it will inherit requests, limits, affinities, and other properties defined in the `GrafanaAgent` custom resource.

1. To create the Secret for the `LogsInstance` resource, copy the following Secret manifest to a file, then roll it out in your cluster using `kubectl apply -f` followed by the filename.

-   ```yaml
-   apiVersion: v1
-   kind: Secret
-   metadata:
-     name: primary-credentials-logs
-     namespace: default
-   stringData:
-     username: 'your_username_here'
-     password: 'your_password_here'
-   ```
-
-   If you're using Grafana Cloud, you can find your hosted Loki endpoint username and password by clicking **Details** on the Loki tile on the [Grafana Cloud Portal](/profile/org). If you want to base64-encode these values yourself, use `data` instead of `stringData`.
-
-1. Copy the following `PodLogs` manifest to a file, then roll it to your cluster using `kubectl apply -f` followed by the filename. The manifest defines your logging targets. Agent Operator turns this into Agent configuration for the logs subsystem, and rolls it out to the DaemonSet of logging Agents.
-
-   {{< admonition type="note" >}}
-   The following is a minimal working example which you should adapt to your production needs.
-   {{< /admonition >}}
-
-   ```yaml
-   apiVersion: monitoring.grafana.com/v1alpha1
-   kind: PodLogs
-   metadata:
-     labels:
-       instance: primary
-     name: kubernetes-pods
-     namespace: default
-   spec:
-     pipelineStages:
-     - docker: {}
-     namespaceSelector:
-       matchNames:
-       - default
-     selector:
-       matchLabels: {}
-   ```
-
-   This example tails container logs for all Pods in the `default` namespace. You can restrict the set of matched Pods by using the `matchLabels` selector. You can also set additional `pipelineStages` and create `relabelings` to add or modify log line labels. To learn more about the `PodLogs` specification and available resource fields, see the [PodLogs CRD](https://github.com/grafana/agent/tree/main/operations/agent-static-operator/crds/monitoring.grafana.com_podlogs.yaml).
-
-   The above `PodLogs` resource adds the following labels to log lines:
-
-   - `namespace`
-   - `service`
-   - `pod`
-   - `container`
-   - `job` (set to `PodLogs_namespace/PodLogs_name`)
-   - `__path__` (the path to log files, set to `/var/log/pods/*$1/*.log` where `$1` is `__meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name`)
-
-   To learn more about this configuration format and other available labels, see the [Promtail Scraping](/docs/loki/latest/clients/promtail/scraping/#promtail-scraping-service-discovery) documentation. Agent Operator loads this configuration into the `LogsInstance` agents automatically.
-
-The DaemonSet of logging agents should be tailing your container logs, applying default labels to the log lines, and shipping them to your remote Loki endpoint.
+   ```yaml
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: primary-credentials-logs
+     namespace: default
+   stringData:
+     username: "your_username_here"
+     password: "your_password_here"
+   ```
+
+   If you're using Grafana Cloud, you can find your hosted Loki endpoint username and password by clicking **Details** on the Loki tile on the [Grafana Cloud Portal](/profile/org). If you want to base64-encode these values yourself, use `data` instead of `stringData`.
+
+1. Copy the following `PodLogs` manifest to a file, then roll it out to your cluster using `kubectl apply -f` followed by the filename. The manifest defines your logging targets. Agent Operator turns this into Agent configuration for the logs subsystem, and rolls it out to the DaemonSet of logging Agents.
+
+   {{< admonition type="note" >}}
+   The following is a minimal working example that you should adapt to your production needs.
+   {{< /admonition >}}
+
+   ```yaml
+   apiVersion: monitoring.grafana.com/v1alpha1
+   kind: PodLogs
+   metadata:
+     labels:
+       instance: primary
+     name: kubernetes-pods
+     namespace: default
+   spec:
+     pipelineStages:
+       - docker: {}
+     namespaceSelector:
+       matchNames:
+         - default
+     selector:
+       matchLabels: {}
+   ```
+
+   This example tails container logs for all Pods in the `default` namespace. You can restrict the set of matched Pods by using the `matchLabels` selector. You can also set additional `pipelineStages` and create `relabelings` to add or modify log line labels, as shown in the sketch below. To learn more about the `PodLogs` specification and available resource fields, see the [PodLogs CRD](https://github.com/grafana/agent/tree/main/operations/agent-static-operator/crds/monitoring.grafana.com_podlogs.yaml).
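+   For example, here is a hedged sketch of those two fields. The `cri` stage and the `node` relabeling are illustrative assumptions, not part of the example above; they would switch the parser to CRI-formatted logs and copy the Kubernetes node name onto each log line:
+
+   ```yaml
+   # Illustrative PodLogs fragment, not a drop-in replacement: parse
+   # CRI-formatted container logs and attach a `node` label taken from
+   # Kubernetes service discovery metadata.
+   spec:
+     pipelineStages:
+       - cri: {}
+     relabelings:
+       - sourceLabels:
+           - __meta_kubernetes_pod_node_name
+         targetLabel: node
+   ```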
+
+   The above `PodLogs` resource adds the following labels to log lines:
+
+   - `namespace`
+   - `service`
+   - `pod`
+   - `container`
+   - `job` (set to `PodLogs_namespace/PodLogs_name`)
+   - `__path__` (the path to log files, set to `/var/log/pods/*$1/*.log` where `$1` is `__meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name`)
+
+   To learn more about this configuration format and other available labels, see the [Promtail Scraping](/docs/loki/latest/clients/promtail/scraping/#promtail-scraping-service-discovery) documentation. Agent Operator loads this configuration into the `LogsInstance` agents automatically.
+
+The DaemonSet of logging agents should be tailing your container logs, applying default labels to the log lines, and shipping them to your remote Loki endpoint.

## Summary

diff --git a/docs/sources/operator/getting-started.md b/docs/sources/operator/getting-started.md
index e7393880876b..a3b5459f4d22 100644
--- a/docs/sources/operator/getting-started.md
+++ b/docs/sources/operator/getting-started.md
@@ -1,9 +1,9 @@
---
aliases:
-- /docs/grafana-cloud/agent/operator/getting-started/
-- /docs/grafana-cloud/monitor-infrastructure/agent/operator/getting-started/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/getting-started/
-- /docs/grafana-cloud/send-data/agent/operator/getting-started/
+  - /docs/grafana-cloud/agent/operator/getting-started/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/operator/getting-started/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/getting-started/
+  - /docs/grafana-cloud/send-data/agent/operator/getting-started/
canonical: https://grafana.com/docs/agent/latest/operator/getting-started/
description: Learn how to install the Operator
title: Install the Operator
@@ -15,6 +15,7 @@ weight: 110

In this guide, you'll learn how to deploy [Grafana Agent Operator]({{< relref "./_index.md" >}}) into your Kubernetes cluster. This guide does not use Helm. To learn how to deploy Agent Operator using the [grafana-agent-operator Helm chart](https://github.com/grafana/helm-charts/tree/main/charts/agent-operator), see [Install Grafana Agent Operator with Helm]({{< relref "./helm-getting-started.md" >}}).

> **Note**: If you are shipping your data to Grafana Cloud, use [Kubernetes Monitoring](/docs/grafana-cloud/kubernetes-monitoring/) to set up Agent Operator. Kubernetes Monitoring provides a simplified approach and preconfigured dashboards and alerts.
+
## Before you begin

To deploy Agent Operator, make sure that you have the following:
@@ -39,15 +40,15 @@ You can find the set of Custom Resource Definitions for Grafana Agent Operator i

To deploy the CRDs:

-1. Clone the agent repo and then apply the CRDs from the root of the agent repository:
-   ```
-   kubectl apply -f production/operator/crds
-   ```
+1. Clone the agent repo and then apply the CRDs from the root of the agent repository:
+
+   ```
+   kubectl apply -f production/operator/crds
+   ```
+
+   This step _must_ be completed before installing Agent Operator—it will
-   This step _must_ be completed before installing Agent Operator—it will
-fail to start if the CRDs do not exist.
+   fail to start if the CRDs do not exist.

-2. To check that the CRDs are deployed to your Kubernetes cluster and to access documentation for each resource, use `kubectl explain <resource>`.
+2. To check that the CRDs are deployed to your Kubernetes cluster and to access documentation for each resource, use `kubectl explain <resource>`.
For example, `kubectl explain GrafanaAgent` describes the GrafanaAgent CRD, and `kubectl explain GrafanaAgent.spec` gives you information on its spec field. @@ -59,95 +60,92 @@ To install Agent Operator: 1. Copy the following deployment schema to a file, updating the namespace if needed: - ```yaml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: grafana-agent-operator - namespace: default - labels: - app: grafana-agent-operator - spec: - replicas: 1 - selector: - matchLabels: - app: grafana-agent-operator - template: - metadata: - labels: - app: grafana-agent-operator - spec: - serviceAccountName: grafana-agent-operator - containers: - - name: operator - image: grafana/agent-operator:{{< param "AGENT_RELEASE" >}} - args: - - --kubelet-service=default/kubelet - --- - - apiVersion: v1 - kind: ServiceAccount - metadata: - name: grafana-agent-operator - namespace: default - - --- - - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: grafana-agent-operator - rules: - - apiGroups: [monitoring.grafana.com] - resources: - - grafanaagents - - metricsinstances - - logsinstances - - podlogs - - integrations - verbs: [get, list, watch] - - apiGroups: [monitoring.coreos.com] - resources: - - podmonitors - - probes - - servicemonitors - verbs: [get, list, watch] - - apiGroups: [""] - resources: - - namespaces - - nodes - verbs: [get, list, watch] - - apiGroups: [""] - resources: - - secrets - - services - - configmaps - - endpoints - verbs: [get, list, watch, create, update, patch, delete] - - apiGroups: ["apps"] - resources: - - statefulsets - - daemonsets - - deployments - verbs: [get, list, watch, create, update, patch, delete] - - --- - - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: grafana-agent-operator - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: grafana-agent-operator - subjects: - - kind: ServiceAccount - name: grafana-agent-operator - namespace: default - ``` - -2. Roll out the deployment in your cluster using `kubectl apply -f` followed by your deployment filename. 
+ ```yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: grafana-agent-operator + namespace: default + labels: + app: grafana-agent-operator + spec: + replicas: 1 + selector: + matchLabels: + app: grafana-agent-operator + template: + metadata: + labels: + app: grafana-agent-operator + spec: + serviceAccountName: grafana-agent-operator + containers: + - name: operator + image: grafana/agent-operator:{{< param "AGENT_RELEASE" >}} + args: + - --kubelet-service=default/kubelet + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: grafana-agent-operator + namespace: default + + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: grafana-agent-operator + rules: + - apiGroups: [monitoring.grafana.com] + resources: + - grafanaagents + - metricsinstances + - logsinstances + - podlogs + - integrations + verbs: [get, list, watch] + - apiGroups: [monitoring.coreos.com] + resources: + - podmonitors + - probes + - servicemonitors + verbs: [get, list, watch] + - apiGroups: [""] + resources: + - namespaces + - nodes + verbs: [get, list, watch] + - apiGroups: [""] + resources: + - secrets + - services + - configmaps + - endpoints + verbs: [get, list, watch, create, update, patch, delete] + - apiGroups: ["apps"] + resources: + - statefulsets + - daemonsets + - deployments + verbs: [get, list, watch, create, update, patch, delete] + + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: grafana-agent-operator + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: grafana-agent-operator + subjects: + - kind: ServiceAccount + name: grafana-agent-operator + namespace: default + ``` + +2. Roll out the deployment in your cluster using `kubectl apply -f` followed by your deployment filename. > **Note**: If you want to run Agent Operator locally, make sure your kubectl context is correct. Running locally uses your current kubectl context. If it is set to your production environment, you could accidentally deploy a new Grafana Agent to production. Install CRDs on the cluster prior to running locally. Afterwards, you can run Agent Operator using `go run ./cmd/grafana-agent-operator`. 
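If you want to bound the Operator's footprint, you can also extend the `Deployment` above with standard Kubernetes resource requests and limits. A minimal sketch follows; the values are illustrative starting points rather than recommendations from this guide:

```yaml
# Illustrative fragment of the operator Deployment above: constrain the
# operator container with standard Kubernetes resource settings.
containers:
  - name: operator
    image: grafana/agent-operator:{{< param "AGENT_RELEASE" >}}
    args:
      - --kubelet-service=default/kubelet
    resources:
      requests:
        cpu: 50m
        memory: 100Mi
      limits:
        memory: 200Mi
```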
diff --git a/docs/sources/operator/helm-getting-started.md b/docs/sources/operator/helm-getting-started.md index bb63f01190ce..437322683619 100644 --- a/docs/sources/operator/helm-getting-started.md +++ b/docs/sources/operator/helm-getting-started.md @@ -1,14 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/operator/helm-getting-started/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/helm-getting-started/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/helm-getting-started/ -- /docs/grafana-cloud/send-data/agent/operator/helm-getting-started/ + - /docs/grafana-cloud/agent/operator/helm-getting-started/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/helm-getting-started/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/helm-getting-started/ + - /docs/grafana-cloud/send-data/agent/operator/helm-getting-started/ canonical: https://grafana.com/docs/agent/latest/operator/helm-getting-started/ description: Learn how to install the Operator with Helm charts title: Install the Operator with Helm weight: 100 --- + # Install the Operator with Helm In this guide, you'll learn how to deploy [Grafana Agent Operator]({{< relref "./_index.md" >}}) into your Kubernetes cluster using the [grafana-agent-operator Helm chart](https://github.com/grafana/helm-charts/tree/main/charts/agent-operator). To learn how to deploy Agent Operator without using Helm, see [Install Grafana Agent Operator]({{< relref "./getting-started.md" >}}). @@ -33,39 +34,40 @@ To install the Agent Operator Helm chart: 1. Add and update the `grafana` Helm chart repo: - ```bash - helm repo add grafana https://grafana.github.io/helm-charts - helm repo update - ``` + ```bash + helm repo add grafana https://grafana.github.io/helm-charts + helm repo update + ``` 1. Install the chart, replacing `my-release` with your release name: - ```bash - helm install my-release grafana/grafana-agent-operator - ``` + ```bash + helm install my-release grafana/grafana-agent-operator + ``` + + If you want to modify the default parameters, you can create a `values.yaml` file and pass it to `helm install`: - If you want to modify the default parameters, you can create a `values.yaml` file and pass it to `helm install`: + ```bash + helm install my-release grafana/grafana-agent-operator -f values.yaml + ``` - ```bash - helm install my-release grafana/grafana-agent-operator -f values.yaml - ``` + If you want to deploy Agent Operator into a namespace other than `default`, use the `-n` flag: - If you want to deploy Agent Operator into a namespace other than `default`, use the `-n` flag: + ```bash + helm install my-release grafana/grafana-agent-operator -f values.yaml -n my-namespace + ``` - ```bash - helm install my-release grafana/grafana-agent-operator -f values.yaml -n my-namespace - ``` - You can find a list of configurable template parameters in the [Helm chart repository](https://github.com/grafana/helm-charts/blob/main/charts/agent-operator/values.yaml). + You can find a list of configurable template parameters in the [Helm chart repository](https://github.com/grafana/helm-charts/blob/main/charts/agent-operator/values.yaml). 1. Once you've successfully deployed the Helm release, confirm that Agent Operator is up and running: - ```bash - kubectl get pod - kubectl get svc - ``` + ```bash + kubectl get pod + kubectl get svc + ``` - You should see an Agent Operator Pod in `RUNNING` state, and a `kubelet` service. Depending on your setup, this could take a moment. 
+ You should see an Agent Operator Pod in `RUNNING` state, and a `kubelet` service. Depending on your setup, this could take a moment. ## Deploy the Grafana Agent Operator resources - Agent Operator is now up and running. Next, you need to install a Grafana Agent for Agent Operator to run for you. To do so, follow the instructions in the [Deploy the Grafana Agent Operator resources]({{< relref "./deploy-agent-operator-resources.md" >}}) topic. To learn more about the custom resources Agent Operator provides and their hierarchy, see [Grafana Agent Operator architecture]({{< relref "./architecture" >}}). +Agent Operator is now up and running. Next, you need to install a Grafana Agent for Agent Operator to run for you. To do so, follow the instructions in the [Deploy the Grafana Agent Operator resources]({{< relref "./deploy-agent-operator-resources.md" >}}) topic. To learn more about the custom resources Agent Operator provides and their hierarchy, see [Grafana Agent Operator architecture]({{< relref "./architecture" >}}). diff --git a/docs/sources/operator/operator-integrations.md b/docs/sources/operator/operator-integrations.md index fc49836f8157..566a88e00b42 100644 --- a/docs/sources/operator/operator-integrations.md +++ b/docs/sources/operator/operator-integrations.md @@ -1,14 +1,15 @@ --- aliases: -- /docs/grafana-cloud/agent/operator/operator-integrations/ -- /docs/grafana-cloud/monitor-infrastructure/agent/operator/operator-integrations/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/operator-integrations/ -- /docs/grafana-cloud/send-data/agent/operator/operator-integrations/ + - /docs/grafana-cloud/agent/operator/operator-integrations/ + - /docs/grafana-cloud/monitor-infrastructure/agent/operator/operator-integrations/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/operator-integrations/ + - /docs/grafana-cloud/send-data/agent/operator/operator-integrations/ canonical: https://grafana.com/docs/agent/latest/operator/operator-integrations/ description: Learn how to set up integrations title: Set up integrations weight: 350 --- + # Set up integrations This topic provides examples of setting up Grafana Agent Operator integrations, including [node_exporter](#set-up-an-agent-operator-node_exporter-integration) and [mysqld_exporter](#set-up-an-agent-operator-mysqld_exporter-integration). @@ -31,15 +32,15 @@ To set up a node_exporter integration: 1. Copy the following manifest to a file: - ```yaml - apiVersion: monitoring.grafana.com/v1alpha1 - kind: Integration - metadata: + ```yaml + apiVersion: monitoring.grafana.com/v1alpha1 + kind: Integration + metadata: name: node-exporter namespace: default labels: agent: grafana-agent-integrations - spec: + spec: name: node_exporter type: allNodes: true @@ -68,11 +69,11 @@ To set up a node_exporter integration: - name: root hostPath: path: /root - ``` + ``` 2. Customize the manifest as needed and roll it out to your cluster using `kubectl apply -f` followed by the filename. - The manifest causes Agent Operator to create an instance of a grafana-agent-integrations-deploy resource that exports Node metrics. + The manifest causes Agent Operator to create an instance of a grafana-agent-integrations-deploy resource that exports Node metrics. ## Set up an Agent Operator mysqld_exporter integration @@ -82,15 +83,15 @@ To set up a mysqld_exporter integration: 1. 
Copy the following manifest to a file:

-   ```yaml
-   apiVersion: monitoring.grafana.com/v1alpha1
-   kind: Integration
-   metadata:
+   ```yaml
+   apiVersion: monitoring.grafana.com/v1alpha1
+   kind: Integration
+   metadata:
     name: mysqld-exporter
     namespace: default
     labels:
       agent: grafana-agent-integrations
-   spec:
+   spec:
     name: mysql
     type:
       allNodes: true
@@ -100,8 +101,8 @@ To set up a mysqld_exporter integration:
       enable: true
     metrics_instance: default/primary
     data_source_name: root@(server-a:3306)/
-   ```
+   ```

2. Customize the manifest as needed and roll it out to your cluster using `kubectl apply -f` followed by the filename.

-   The manifest causes Agent Operator to create an instance of a grafana-agent-integrations-deploy resource that exports MySQL metrics.
+   The manifest causes Agent Operator to create an instance of a grafana-agent-integrations-deploy resource that exports MySQL metrics.

diff --git a/docs/sources/operator/release-notes.md b/docs/sources/operator/release-notes.md
index 9c83ca534b6f..a6dd723d0c87 100644
--- a/docs/sources/operator/release-notes.md
+++ b/docs/sources/operator/release-notes.md
@@ -1,10 +1,10 @@
---
aliases:
-- ./upgrade-guide/
-- /docs/grafana-cloud/agent/operator/release-notes/
-- /docs/grafana-cloud/monitor-infrastructure/agent/operator/release-notes/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/release-notes/
-- /docs/grafana-cloud/send-data/agent/operator/release-notes/
+  - ./upgrade-guide/
+  - /docs/grafana-cloud/agent/operator/release-notes/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/operator/release-notes/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/operator/release-notes/
+  - /docs/grafana-cloud/send-data/agent/operator/release-notes/
canonical: https://grafana.com/docs/agent/latest/operator/release-notes/
description: Release notes for Grafana Agent Operator
menuTitle: Release notes
@@ -35,7 +35,6 @@ For a complete list of changes to Grafana Agent, with links to pull requests and
> - [Static mode release notes](ref:release-notes-static)
> - [Flow mode release notes](ref:release-notes-flow)
-
## v0.33

### Symbolic links in Docker containers removed
@@ -120,7 +119,6 @@ refer to the new `agentctl operator-detach` command: this will iterate
through all of your objects and remove any OwnerReferences to a CRD, allowing
you to delete your Operator CRDs or CRs.
-
Example old ClusterRole:

```yaml
@@ -129,11 +127,11 @@ kind: ClusterRole
metadata:
  name: grafana-agent-operator
rules:
-- apiGroups: [monitoring.grafana.com]
-  resources:
-  - grafana-agents
-  - prometheus-instances
-  verbs: [get, list, watch]
+  - apiGroups: [monitoring.grafana.com]
+    resources:
+      - grafana-agents
+      - prometheus-instances
+    verbs: [get, list, watch]
```

Example new ClusterRole:

```yaml
@@ -144,9 +142,9 @@ kind: ClusterRole
metadata:
  name: grafana-agent-operator
rules:
-- apiGroups: [monitoring.grafana.com]
-  resources:
-  - grafanaagents
-  - metricsinstances
-  verbs: [get, list, watch]
+  - apiGroups: [monitoring.grafana.com]
+    resources:
+      - grafanaagents
+      - metricsinstances
+    verbs: [get, list, watch]
```

diff --git a/docs/sources/shared/deploy-agent.md b/docs/sources/shared/deploy-agent.md
index 1799ea174579..f0ff01261597 100644
--- a/docs/sources/shared/deploy-agent.md
+++ b/docs/sources/shared/deploy-agent.md
@@ -1,10 +1,10 @@
---
aliases:
-- /docs/agent/shared/deploy-agent/
-- /docs/grafana-cloud/agent/shared/deploy-agent/
-- /docs/grafana-cloud/monitor-infrastructure/agent/shared/deploy-agent/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/deploy-agent/
-- /docs/grafana-cloud/send-data/agent/shared/deploy-agent/
+  - /docs/agent/shared/deploy-agent/
+  - /docs/grafana-cloud/agent/shared/deploy-agent/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/shared/deploy-agent/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/deploy-agent/
+  - /docs/grafana-cloud/send-data/agent/shared/deploy-agent/
canonical: https://grafana.com/docs/agent/latest/shared/deploy-agent/
description: Shared content, deployment topologies for Grafana Agent
headless: true
@@ -22,6 +22,7 @@ to consider using each topology, issues you may run into, and scaling
considerations.

## As a centralized collection service
+
Deploying Grafana Agent as a centralized service is recommended for collecting application telemetry. This topology allows you to use a smaller number of agents to coordinate service discovery, collection, and remote writing.
@@ -36,6 +37,7 @@ series. We recommend you start looking towards horizontal scaling around the 1
million active series mark.

### Using Kubernetes StatefulSets
+
Deploying Grafana Agent as a StatefulSet is the recommended option for metrics collection.
The persistent pod identifiers make it possible to consistently match volumes
with pods so that you can use them for the WAL directory.

You can also use a Kubernetes deployment in cases where persistent storage is not required, such as a traces-only pipeline.

### Pros
-* Straightforward scaling using [clustering][] or [hashmod sharding][]
-* Minimizes the “noisy neighbor” effect
-* Easy to meta-monitor
+
+- Straightforward scaling using [clustering][] or [hashmod sharding][]
+- Minimizes the “noisy neighbor” effect
+- Easy to meta-monitor

### Cons
-* Requires running on separate infrastructure
+
+- Requires running on separate infrastructure

### Use for
-* Scalable telemetry collection
+
+- Scalable telemetry collection

### Don’t use for
-* Host-level metrics and logs
+
+- Host-level metrics and logs

## As a host daemon
+
Deploying one Grafana Agent per machine is required for collecting
machine-level metrics and logs, such as node_exporter hardware and network
metrics or journald system logs.
@@ -71,56 +78,67 @@ outgoing connections on different ports. So, if all agents are shipping metrics
and log data, an egress IP can support up to 32,255 agents.
### Using Kubernetes DaemonSets
+
The simplest use case of the host daemon topology is a Kubernetes DaemonSet, and it is required for node-level observability (for example, cAdvisor metrics) and collecting pod logs.

### Pros
-* Doesn’t require running on separate infrastructure
-* Typically leads to smaller-sized agents
-* Lower network latency to instrumented applications
+
+- Doesn’t require running on separate infrastructure
+- Typically leads to smaller-sized agents
+- Lower network latency to instrumented applications

### Cons
-* Requires planning a process for provisioning Grafana Agent on new machines, as well as keeping configuration up to date to avoid configuration drift
-* Not possible to scale agents independently when using Kubernetes DaemonSets
-* Scaling the topology can strain external APIs (like service discovery) and network infrastructure (like firewalls, proxy servers, and egress points)
+
+- Requires planning a process for provisioning Grafana Agent on new machines, as well as keeping configuration up to date to avoid configuration drift
+- Not possible to scale agents independently when using Kubernetes DaemonSets
+- Scaling the topology can strain external APIs (like service discovery) and network infrastructure (like firewalls, proxy servers, and egress points)

### Use for
-* Collecting machine-level metrics and logs (for example, node_exporter hardware metrics, Kubernetes pod logs)
+
+- Collecting machine-level metrics and logs (for example, node_exporter hardware metrics, Kubernetes pod logs)

### Don’t use for
-* Scenarios where Grafana Agent grows so large it can become a noisy neighbor
-* Collecting an unpredictable amount of telemetry
+
+- Scenarios where Grafana Agent grows so large it can become a noisy neighbor
+- Collecting an unpredictable amount of telemetry

## As a container sidecar
+
Deploying Grafana Agent as a container sidecar is only recommended for short-lived applications or specialized agent deployments.

![sidecar](/media/docs/agent/agent-topologies/sidecar.png)

### Using Kubernetes pod sidecars
+
In a Kubernetes environment, the sidecar model consists of deploying Grafana Agent as an extra container on the pod. The pod’s controller, network configuration, enabled capabilities, and available resources are shared between the actual application and the sidecar agent.
### Pros -* Doesn’t require running on separate infrastructure -* Straightforward networking with partner applications + +- Doesn’t require running on separate infrastructure +- Straightforward networking with partner applications ### Cons -* Doesn’t scale separately -* Makes resource consumption harder to monitor and predict -* Agents do not have a life cycle of their own, making it harder to reason about things like recovering from network outages + +- Doesn’t scale separately +- Makes resource consumption harder to monitor and predict +- Agents do not have a life cycle of their own, making it harder to reason about things like recovering from network outages ### Use for -* Serverless services -* Job/batch applications that work with a push model -* Air-gapped applications that can’t be otherwise reached over the network + +- Serverless services +- Job/batch applications that work with a push model +- Air-gapped applications that can’t be otherwise reached over the network ### Don’t use for -* Long-lived applications -* Scenarios where the agent size grows so large it can become a noisy neighbor + +- Long-lived applications +- Scenarios where the agent size grows so large it can become a noisy neighbor [hashmod sharding]: {{< relref "../static/operation-guide/_index.md" >}} [clustering]: {{< relref "../flow/concepts/clustering.md" >}} diff --git a/docs/sources/shared/flow/reference/components/authorization-block.md b/docs/sources/shared/flow/reference/components/authorization-block.md index 11a74326f997..dad17fca8957 100644 --- a/docs/sources/shared/flow/reference/components/authorization-block.md +++ b/docs/sources/shared/flow/reference/components/authorization-block.md @@ -1,19 +1,19 @@ --- aliases: -- /docs/agent/shared/flow/reference/components/authorization-block/ -- /docs/grafana-cloud/agent/shared/flow/reference/components/authorization-block/ -- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/authorization-block/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/authorization-block/ -- /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/authorization-block/ + - /docs/agent/shared/flow/reference/components/authorization-block/ + - /docs/grafana-cloud/agent/shared/flow/reference/components/authorization-block/ + - /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/authorization-block/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/authorization-block/ + - /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/authorization-block/ canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/authorization-block/ description: Shared content, authorization block headless: true --- -Name | Type | Description | Default | Required --------------------|----------|--------------------------------------------|---------|--------- -`credentials_file` | `string` | File containing the secret value. | | no -`credentials` | `secret` | Secret value. | | no -`type` | `string` | Authorization type, for example, "Bearer". | | no +| Name | Type | Description | Default | Required | +| ------------------ | -------- | ------------------------------------------ | ------- | -------- | +| `credentials_file` | `string` | File containing the secret value. | | no | +| `credentials` | `secret` | Secret value. | | no | +| `type` | `string` | Authorization type, for example, "Bearer". 
| | no |

`credentials` and `credentials_file` are mutually exclusive, and only one can be provided inside an `authorization` block.

diff --git a/docs/sources/shared/flow/reference/components/azuread-block.md b/docs/sources/shared/flow/reference/components/azuread-block.md
index 07d974385134..f3fea4367be9 100644
--- a/docs/sources/shared/flow/reference/components/azuread-block.md
+++ b/docs/sources/shared/flow/reference/components/azuread-block.md
@@ -1,20 +1,21 @@
---
aliases:
-- /docs/agent/shared/flow/reference/components/azuread-block/
-- /docs/grafana-cloud/agent/shared/flow/reference/components/azuread-block/
-- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/azuread-block/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/azuread-block/
-- /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/azuread-block/
+  - /docs/agent/shared/flow/reference/components/azuread-block/
+  - /docs/grafana-cloud/agent/shared/flow/reference/components/azuread-block/
+  - /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/azuread-block/
+  - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/azuread-block/
+  - /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/azuread-block/
canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/azuread-block/
description: Shared content, azuread block
headless: true
---

-Name | Type | Description | Default | Required
---------|----------|------------------|-----------------|---------
-`cloud` | `string` | The Azure Cloud. | `"AzurePublic"` | no
+| Name | Type | Description | Default | Required |
+| ------- | -------- | ---------------- | --------------- | -------- |
+| `cloud` | `string` | The Azure Cloud.
| `"AzurePublic"` | no | The supported values for `cloud` are: -* `"AzurePublic"` -* `"AzureChina"` -* `"AzureGovernment"` + +- `"AzurePublic"` +- `"AzureChina"` +- `"AzureGovernment"` diff --git a/docs/sources/shared/flow/reference/components/basic-auth-block.md b/docs/sources/shared/flow/reference/components/basic-auth-block.md index 62f7e0a25d61..5b0a696e0789 100644 --- a/docs/sources/shared/flow/reference/components/basic-auth-block.md +++ b/docs/sources/shared/flow/reference/components/basic-auth-block.md @@ -1,19 +1,19 @@ --- aliases: -- /docs/agent/shared/flow/reference/components/basic-auth-block/ -- /docs/grafana-cloud/agent/shared/flow/reference/components/basic-auth-block/ -- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/basic-auth-block/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/basic-auth-block/ -- /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/basic-auth-block/ + - /docs/agent/shared/flow/reference/components/basic-auth-block/ + - /docs/grafana-cloud/agent/shared/flow/reference/components/basic-auth-block/ + - /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/basic-auth-block/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/basic-auth-block/ + - /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/basic-auth-block/ canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/basic-auth-block/ description: Shared content, basic auth block headless: true --- -Name | Type | Description | Default | Required -----------------|----------|------------------------------------------|---------|--------- -`password_file` | `string` | File containing the basic auth password. | | no -`password` | `secret` | Basic auth password. | | no -`username` | `string` | Basic auth username. | | no +| Name | Type | Description | Default | Required | +| --------------- | -------- | ---------------------------------------- | ------- | -------- | +| `password_file` | `string` | File containing the basic auth password. | | no | +| `password` | `secret` | Basic auth password. | | no | +| `username` | `string` | Basic auth username. | | no | `password` and `password_file` are mutually exclusive, and only one can be provided inside a `basic_auth` block. 
diff --git a/docs/sources/shared/flow/reference/components/exporter-component-exports.md b/docs/sources/shared/flow/reference/components/exporter-component-exports.md index f1a8ca440cd9..7f307d05dd87 100644 --- a/docs/sources/shared/flow/reference/components/exporter-component-exports.md +++ b/docs/sources/shared/flow/reference/components/exporter-component-exports.md @@ -1,10 +1,10 @@ --- aliases: -- /docs/agent/shared/flow/reference/components/exporter-component-exports/ -- /docs/grafana-cloud/agent/shared/flow/reference/components/exporter-component-exports/ -- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/exporter-component-exports/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/exporter-component-exports/ -- /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/exporter-component-exports/ + - /docs/agent/shared/flow/reference/components/exporter-component-exports/ + - /docs/grafana-cloud/agent/shared/flow/reference/components/exporter-component-exports/ + - /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/exporter-component-exports/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/exporter-component-exports/ + - /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/exporter-component-exports/ canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/exporter-component-exports/ description: Shared content, exporter component exports headless: true @@ -12,9 +12,9 @@ headless: true The following fields are exported and can be referenced by other components. -Name | Type | Description -----------|---------------------|---------------------------------------------------------- -`targets` | `list(map(string))` | The targets that can be used to collect exporter metrics. +| Name | Type | Description | +| --------- | ------------------- | --------------------------------------------------------- | +| `targets` | `list(map(string))` | The targets that can be used to collect exporter metrics. | For example, the `targets` can either be passed to a `discovery.relabel` component to rewrite the targets' label sets or to a `prometheus.scrape` component that collects the exposed metrics. 
diff --git a/docs/sources/shared/flow/reference/components/extract-field-block.md b/docs/sources/shared/flow/reference/components/extract-field-block.md index 51ae70ed2a63..f7c7775bfb97 100644 --- a/docs/sources/shared/flow/reference/components/extract-field-block.md +++ b/docs/sources/shared/flow/reference/components/extract-field-block.md @@ -1,10 +1,10 @@ --- aliases: -- /docs/agent/shared/flow/reference/components/extract-field-block/ -- /docs/grafana-cloud/agent/shared/flow/reference/components/extract-field-block/ -- /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/extract-field-block/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/extract-field-block/ -- /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/extract-field-block/ + - /docs/agent/shared/flow/reference/components/extract-field-block/ + - /docs/grafana-cloud/agent/shared/flow/reference/components/extract-field-block/ + - /docs/grafana-cloud/monitor-infrastructure/agent/shared/flow/reference/components/extract-field-block/ + - /docs/grafana-cloud/monitor-infrastructure/integrations/agent/shared/flow/reference/components/extract-field-block/ + - /docs/grafana-cloud/send-data/agent/shared/flow/reference/components/extract-field-block/ canonical: https://grafana.com/docs/agent/latest/shared/flow/reference/components/extract-field-block/ description: Shared content, extract field block headless: true @@ -12,17 +12,18 @@ headless: true The following attributes are supported: -Name | Type | Description | Default | Required -------------|----------|-----------------------------------------------------------------------------------------------|---------|--------- -`from` | `string` | The source of the labels or annotations. Allowed values are `pod`, `namespace`, and `node`. | `pod` | no -`key_regex` | `string` | A regular expression used to extract a key that matches the regular expression. | `""` | no -`key` | `string` | The annotation or label name. This key must exactly match an annotation or label name. | `""` | no -`regex` | `string` | An optional field used to extract a sub-string from a complex field value. | `""` | no -`tag_name` | `string` | The name of the resource attribute added to logs, metrics, or spans. | `""` | no +| Name | Type | Description | Default | Required | +| ----------- | -------- | ------------------------------------------------------------------------------------------- | ------- | -------- | +| `from` | `string` | The source of the labels or annotations. Allowed values are `pod`, `namespace`, and `node`. | `pod` | no | +| `key_regex` | `string` | A regular expression used to extract a key that matches the regular expression. | `""` | no | +| `key` | `string` | The annotation or label name. This key must exactly match an annotation or label name. | `""` | no | +| `regex` | `string` | An optional field used to extract a sub-string from a complex field value. | `""` | no | +| `tag_name` | `string` | The name of the resource attribute added to logs, metrics, or spans. | `""` | no | When you don't specify the `tag_name`, a default tag name is used with the format: -* `k8s.pod.annotations.` -* `k8s.pod.labels.