From 1084eb849c7d8cd03685fa6fdfbd8501016da43f Mon Sep 17 00:00:00 2001
From: Clayton Cornell <131809008+clayton-cornell@users.noreply.github.com>
Date: Wed, 27 Nov 2024 11:03:54 -0800
Subject: [PATCH] Clean up some of the linting warnings and errors (#2155)

* Clean up some of the linting warnings and errors
* Additional linting warning and error cleanup
* More work on removing linting errors
* More linting cleanup
* Even more linting warning cleanup
* Fix links to components
* Fix link syntax in topic
* Correct reference to AWS X-Ray
* Add missing link in collect topic
* Fix up some redirected links and minor syntax fixes
* Fix typo in file name
* Apply suggestions from code review

Co-authored-by: Beverly Buchanan <131809838+BeverlyJaneJ@users.noreply.github.com>

---------

Co-authored-by: Beverly Buchanan <131809838+BeverlyJaneJ@users.noreply.github.com>
(cherry picked from commit c27c8ac8eb0edb46a7101f323ef9e65c33b998c6)
---
 docs/sources/collect/_index.md                |   2 +-
 docs/sources/collect/choose-component.md      |  13 +-
 .../sources/collect/datadog-traces-metrics.md |  42 +++---
 ...etry-data.md => ecs-opentelemetry-data.md} |  22 ++--
 docs/sources/collect/logs-in-kubernetes.md    |  95 +++++++-------
 docs/sources/collect/metamonitoring.md        |  45 +++----
 docs/sources/collect/opentelemetry-data.md    |  54 ++++----
 .../collect/opentelemetry-to-lgtm-stack.md    | 120 +++++++++---------
 docs/sources/collect/prometheus-metrics.md    |  62 +++++----
 docs/sources/configure/_index.md              |   4 +-
 .../distribute-prometheus-scrape-load.md      |   2 +-
 docs/sources/configure/kubernetes.md          |  21 ++-
 docs/sources/configure/linux.md               |   2 +-
 docs/sources/configure/macos.md               |   4 +-
 docs/sources/configure/nonroot.md             |   4 +-
 docs/sources/configure/windows.md             |   6 +-
 docs/sources/introduction/_index.md           |  18 +--
 .../introduction/backward-compatibility.md    |   6 +-
 .../introduction/estimate-resource-usage.md   |   2 +-
 docs/sources/set-up/deploy.md                 |  28 ++--
 docs/sources/set-up/install/_index.md         |   2 +-
 docs/sources/set-up/install/ansible.md        |  12 +-
 docs/sources/set-up/install/binary.md         |  12 +-
 docs/sources/set-up/install/chef.md           |   4 +-
 docs/sources/set-up/install/docker.md         |  15 +--
 docs/sources/set-up/install/kubernetes.md     |  13 +-
 docs/sources/set-up/install/linux.md          |  13 +-
 docs/sources/set-up/install/macos.md          |   4 +-
 docs/sources/set-up/install/puppet.md         |  12 +-
 docs/sources/set-up/install/windows.md        |   6 +-
 docs/sources/set-up/migrate/from-flow.md      |  52 ++++----
 docs/sources/set-up/migrate/from-operator.md  |   7 +-
 docs/sources/set-up/migrate/from-otelcol.md   |  51 ++++----
 .../sources/set-up/migrate/from-prometheus.md |  38 +++---
 docs/sources/set-up/migrate/from-promtail.md  |  46 +++----
 docs/sources/set-up/migrate/from-static.md    |  17 +--
 docs/sources/set-up/run/binary.md             |   2 +-
 docs/sources/set-up/run/linux.md              |   2 +-
 docs/sources/set-up/run/macos.md              |   2 +-
 docs/sources/set-up/run/windows.md            |   2 +-
 .../sources/troubleshoot/component_metrics.md |   4 +-
 .../troubleshoot/controller_metrics.md        |   2 +-
 docs/sources/troubleshoot/debug.md            |  29 ++---
 docs/sources/troubleshoot/profile.md          |  30 ++---
 docs/sources/troubleshoot/support_bundle.md   |  10 +-
 docs/sources/tutorials/_index.md              |   2 +-
 .../tutorials/first-components-and-stdlib.md  |  47 ++++---
 .../tutorials/logs-and-relabeling-basics.md   |  41 +++---
 docs/sources/tutorials/processing-logs.md     |  30 ++---
 docs/sources/tutorials/send-logs-to-loki.md   |  27 ++--
 .../tutorials/send-metrics-to-prometheus.md   |  17 ++-
 51 files changed, 541 insertions(+), 562 deletions(-)
 rename docs/sources/collect/{ecs-openteletry-data.md => ecs-opentelemetry-data.md} (89%)

diff --git a/docs/sources/collect/_index.md b/docs/sources/collect/_index.md
index ad88e94104..8411043ddb 100644
--- a/docs/sources/collect/_index.md
+++ b/docs/sources/collect/_index.md
@@ -8,4 +8,4 @@ weight: 100
 
 # Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}}
 
-{{< section >}}
\ No newline at end of file
+{{< section >}}
diff --git a/docs/sources/collect/choose-component.md b/docs/sources/collect/choose-component.md
index 05f9d4df0b..36a880d54c 100644
--- a/docs/sources/collect/choose-component.md
+++ b/docs/sources/collect/choose-component.md
@@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want
 ## Metrics for infrastructure
 
 Use `prometheus.*` components to collect infrastructure metrics.
-This will give you the best experience with [Grafana Infrastructure Observability][].
+This gives you the best experience with [Grafana Infrastructure Observability][].
 
-For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
-and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
+For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
 
 You can also scrape any Prometheus endpoint using `prometheus.scrape`.
 Use `discovery.*` components to find targets for `prometheus.scrape`.
@@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`.
 ## Metrics for applications
 
 Use `otelcol.receiver.*` components to collect application metrics.
-This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
+This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
 
 For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications.
 
@@ -48,12 +47,12 @@ with logs collected by `loki.*` components.
 
 For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`.
 On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`,
-which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem.
+which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem.
 
 ## Logs from applications
 
 Use `otelcol.receiver.*` components to collect application logs.
-This will gather the application logs in an OpenTelemetry-native way, making it easier to
+This gathers the application logs in an OpenTelemetry-native way, making it easier to
 correlate the logs with OpenTelemetry metrics and traces coming from the application.
 All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.
@@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri
 
 Use `otelcol.receiver.*` components to collect traces.
 
-If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
+If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
 
 ## Profiles
diff --git a/docs/sources/collect/datadog-traces-metrics.md b/docs/sources/collect/datadog-traces-metrics.md
index 034a093e8c..2ab9da3590 100644
--- a/docs/sources/collect/datadog-traces-metrics.md
+++ b/docs/sources/collect/datadog-traces-metrics.md
@@ -20,9 +20,9 @@ This topic describes how to:
 
 ## Before you begin
 
-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
-* Identify where you will write the collected telemetry.
-  Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
+* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+* Identify where to write the collected telemetry.
+  Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
   Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
 * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    Replace the following:
 
-   - _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+   * _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
 
@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    Replace the following:
 
-   - _`<USERNAME>`_: The basic authentication username.
-   - _`<PASSWORD>`_: The basic authentication password or API key.
+   * _`<USERNAME>`_: The basic authentication username.
+   * _`<PASSWORD>`_: The basic authentication password or API key.
 
 ## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver
 
@@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    ```alloy
    otelcol.processor.deltatocumulative "default" {
-     max_stale = “<MAX_STALE>”
+     max_stale = "<MAX_STALE>"
      max_streams = <MAX_STREAMS>
      output {
        metrics = [otelcol.processor.batch.default.input]
@@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
   Replace the following:
 
-   - _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
-   - _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+   * _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+   * _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
 
 1. Add the following `otelcol.receiver.datadog` component to your configuration file.
 
    ```alloy
    otelcol.receiver.datadog "default" {
-     endpoint = “<HOST>:<PORT>”
+     endpoint = "<HOST>:<PORT>"
      output {
        metrics = [otelcol.processor.deltatocumulative.default.input]
        traces  = [otelcol.processor.batch.default.input]
@@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
    Replace the following:
 
-   - _`<HOST>`_: The host address where the receiver will listen.
-   - _`<PORT>`_: The port where the receiver will listen.
+   * _`<HOST>`_: The host address where the receiver listens.
+   * _`<PORT>`_: The port where the receiver listens.
 
 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.
 
@@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data
 
   Replace the following:
 
-   - _`<USERNAME>`_: The basic authentication username.
-   - _`<PASSWORD>`_: The basic authentication password or API key.
+   * _`<USERNAME>`_: The basic authentication username.
+   * _`<PASSWORD>`_: The basic authentication password or API key.
 
 ## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver
 
@@ -139,10 +139,10 @@ We recommend this approach for current Datadog users who want to try using {{< p
 
    Replace the following:
 
-   - _`<HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-   - _`<PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+   * _`<HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+   * _`<PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
 
-Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}. 
+Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
 You can do this by setting up your Datadog Agent in the following way:
 
 1. Replace the DD_URL in the configuration YAML:
@@ -150,8 +150,8 @@ You can do this by setting up your Datadog Agent in the following way:
 
    ```yaml
    dd_url: http://<HOST>:<PORT>
   ```
-Or by setting an environment variable:
 
+   Or by setting an environment variable:
 
   ```bash
   DD_DD_URL='{"http://<HOST>:<PORT>": ["datadog-receiver"]}'
@@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with addit
 [Datadog]: https://www.datadoghq.com/
 [Datadog Agent]: https://docs.datadoghq.com/agent/
 [Prometheus]: https://prometheus.io
-[OTLP]: https://opentelemetry.io/docs/specs/otlp/
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[Components]: ../../get-started/components
+[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/
+[Components]: ../../get-started/components/
diff --git a/docs/sources/collect/ecs-openteletry-data.md b/docs/sources/collect/ecs-opentelemetry-data.md
similarity index 89%
rename from docs/sources/collect/ecs-openteletry-data.md
rename to docs/sources/collect/ecs-opentelemetry-data.md
index 3a7a53a483..428bf0e926 100644
--- a/docs/sources/collect/ecs-openteletry-data.md
+++ b/docs/sources/collect/ecs-opentelemetry-data.md
@@ -1,5 +1,7 @@
 ---
 canonical: https://grafana.com/docs/alloy/latest/collect/ecs-opentelemetry-data/
+alias:
+  - ./ecs-openteletry-data/ # /docs/alloy/latest/collect/ecs-openteletry-data/
 description: Learn how to collect Amazon ECS or AWS Fargate OpenTelemetry data and forward it to any OpenTelemetry-compatible endpoint
 menuTitle: Collect ECS or Fargate OpenTelemetry data
 title: Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data
@@ -14,7 +16,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle
 
 1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store).
 1. [Create an ECS task definition](#create-an-ecs-task-definition).
-1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar).
+1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar)
 
 ## Before you begin
@@ -55,11 +57,11 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
 1. Choose *Create parameter*.
 1. Create a parameter with the following values:
 
-   * `Name`: otel-collector-config
-   * `Tier`: Standard
-   * `Type`: String
-   * `Data type`: Text
-   * `Value`: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+   * Name: `otel-collector-config`
+   * Tier: `Standard`
+   * Type: `String`
+   * Data type: `Text`
+   * Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
 
 ### Run your task
@@ -73,16 +75,16 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
 
 1. Download the [ECS Fargate task definition template][template] from GitHub.
 1. Edit the task definition template and add the following parameters.
-   * `{{region}}`: The region the data is sent to.
+   * `{{region}}`: The region to send the data to.
    * `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
   * `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
   * `command` - Assign a value to the command variable to select the path to the configuration file.
      The AWS Collector comes with two configurations.
     Select one of them based on your environment:
-     * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and X-Ray SDK traces.
-     * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics.
+     * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+     * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.
 
-## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
+## Run Alloy directly in your instance, or as a Kubernetes sidecar
 
 SSH or connect to the Amazon ECS or AWS Fargate-managed container.
 Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.
diff --git a/docs/sources/collect/logs-in-kubernetes.md b/docs/sources/collect/logs-in-kubernetes.md
index 3e02efa808..d8b8b17fb2 100644
--- a/docs/sources/collect/logs-in-kubernetes.md
+++ b/docs/sources/collect/logs-in-kubernetes.md
@@ -19,19 +19,19 @@ This topic describes how to:
 
 ## Components used in this topic
 
-* [discovery.kubernetes][]
-* [discovery.relabel][]
-* [local.file_match][]
-* [loki.source.file][]
-* [loki.source.kubernetes][]
-* [loki.source.kubernetes_events][]
-* [loki.process][]
-* [loki.write][]
+* [`discovery.kubernetes`][discovery.kubernetes]
+* [`discovery.relabel`][discovery.relabel]
+* [`local.file_match`][local.file_match]
+* [`loki.source.file`][loki.source.file]
+* [`loki.source.kubernetes`][loki.source.kubernetes]
+* [`loki.source.kubernetes_events`][loki.source.kubernetes_events]
+* [`loki.process`][loki.process]
+* [`loki.write`][loki.write]
 
 ## Before you begin
 
 * Ensure that you are familiar with logs labelling when working with Loki.
-* Identify where you will write collected logs.
+* Identify where to write collected logs.
   You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
 * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
@@ -39,8 +39,8 @@ This topic describes how to:
 
 Before components can collect logs, you must have a component responsible for writing those logs somewhere.
 
-The [loki.write][] component delivers logs to a Loki endpoint.
-After a `loki.write` component is defined, you can use other {{< param "PRODUCT_NAME" >}} components to forward logs to it.
+The [`loki.write`][loki.write] component delivers logs to a Loki endpoint.
+After you define a `loki.write` component, you can use other {{< param "PRODUCT_NAME" >}} components to forward logs to it.
 
 To configure a `loki.write` component for logs delivery, complete the following steps:
 
@@ -56,9 +56,9 @@ To configure a `loki.write` component for logs delivery, complete the following
 
   Replace the following:
 
-   - _`