**Description:**
The current description of the Datadog connector implies that it is only
useful in the presence of sampling. However, its use is actually
required to see trace-emitting services and their statistics in Datadog
APM. This PR rewords the README to reflect that more clearly.
I also fixed some indentation issues in the provided example.
**Link to tracking Issue:** No tracking issue on GitHub. Internal Jira issue: OTEL-1776
---------
Co-authored-by: Pablo Baeyens <[email protected]>
**File changed:** `connector/datadogconnector/README.md` (+14 −53 lines)
The `## Description` section of the README is rewritten. The previous wording:

> The Datadog Connector is a connector component that computes Datadog APM Stats pre-sampling in the event that your traces pipeline is sampled using components such as the tailsamplingprocessor or probabilisticsamplerprocessor.
>
> The connector is most applicable when using sampling components such as the [tailsamplingprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor#tail-sampling-processor) or the [probabilisticsamplerprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor) in one of your pipelines. The sampled pipeline should be duplicated, and the `datadog` connector should be added to the pipeline that is not being sampled to ensure that Datadog APM Stats are accurate in the backend.

is replaced with:

> The Datadog Connector is a connector component that derives APM statistics, in the form of metrics, from service traces, for display in the Datadog APM product. This component is *required* for trace-emitting services and their statistics to appear in Datadog APM.
>
> The Datadog connector can also forward the traces passed into it into another trace pipeline. Notably, if you plan to sample your traces with the [tailsamplingprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor#tail-sampling-processor) or the [probabilisticsamplerprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor), you should place the Datadog connector upstream to ensure that the metrics are computed before sampling, ensuring their accuracy. An example is given below.

In the `## Usage` section, the introductory sentence ("To use the Datadog Connector, add the connector to one set of the duplicated pipelines while sampling the other. The Datadog Connector will compute APM Stats on all spans that it sees.") and the side-by-side `Before`/`After` HTML table that wrapped the example are removed, and the example's indentation is fixed. With the corrected indentation, the example configuration reads:

```yaml
# ...
processors:
  # ...
  probabilistic_sampler:
    sampling_percentage: 20

connectors:
  # add the "datadog" connector definition and further configurations
  datadog/connector:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/connector]

    traces/2: # this pipeline uses sampling
      receivers: [datadog/connector]
      processors: [batch, probabilistic_sampler]
      exporters: [datadog]

    metrics:
      receivers: [datadog/connector]
      processors: [batch]
      exporters: [datadog]
```

The paragraph explaining the example is also rewritten. The previous text:

> Here we have two traces pipelines that ingest the same data, but one is being sampled. The one that is sampled has its data sent to the Datadog backend for you to see the sampled subset of the total traces sent across. The other, non-sampled pipeline of traces sends its data to the metrics pipeline to be used in the APM stats. This unsampled pipeline gives the full picture of how much data the application emits in traces.

becomes:

> In this example configuration, incoming traces are received through OTLP, and processed by the Datadog connector in the `traces` pipeline. The traces are then forwarded to the `traces/2` pipeline, where a sample of them is exported to Datadog. In parallel, the APM stats computed from the full stream of traces are sent to the `metrics` pipeline, where they are exported to Datadog as well.
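The accuracy argument behind placing the connector upstream of the sampler can be made concrete with a small simulation (a standalone sketch, not part of the PR or of the connector's code): span counts taken before a 20% probabilistic sampler reflect the true traffic, while counts taken after it recover only about a fifth of it.

```python
import random

random.seed(0)

SPANS = 1000
SAMPLING_PERCENTAGE = 20  # mirrors `sampling_percentage: 20` in the example

# Stats computed on the full stream (connector upstream of the sampler):
pre_sampling_count = SPANS

# Stats computed after a 20% probabilistic sampler has dropped spans:
sampled = [s for s in range(SPANS) if random.random() < SAMPLING_PERCENTAGE / 100]
post_sampling_count = len(sampled)

print(pre_sampling_count)   # the true total: 1000
print(post_sampling_count)  # roughly 200: undercounts traffic by ~5x
```

This is why the example duplicates the trace flow: the `traces` pipeline feeds every span to the connector for stats, and only `traces/2` applies `probabilistic_sampler` before exporting.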