Hi,
while developing I ran into an odd issue when using your container.
If you use a batch processor in your code to send metrics, traces and logs to the container, the logs can arrive there at a slightly different time than the trace. The datasource configuration you currently ship does not account for that. For now, it would help to configure at least a small time window in which logs, traces and metrics may arrive. Grafana allows this to be configured right here, but nobody can really change it at runtime because you use provisioned datasources.
Picture of the current datasource behaviour:
Changing the time shift to a wider window will fix the issue.
Maybe you could also add the Prometheus datasource to the linked resources. And if you want to go above and beyond, maybe add a standard query, or an editable mode to the Grafana instance so that datasources can be modified.
I'll provide an example of how you could make the change in grafana-datasources.yaml:
```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    url: http://localhost:9090
    jsonData:
      timeInterval: 60s
      exemplarTraceIdDestinations:
        - name: traceID
          datasourceUid: tempo
          urlDisplayLabel: 'Trace: $${__value.raw}'
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://localhost:3200
    jsonData:
      tracesToLogsV2:
        customQuery: true
        datasourceUid: 'loki'
        query: '{$${__tags}} | trace_id = "$${__trace.traceId}"'
        tags:
          - key: 'service.name'
            value: 'service_name'
        spanStartTimeShift: "-1m" # Start time shift for logs window
        spanEndTimeShift: "1m"    # End time shift for logs window
      tracesToMetrics:
        datasourceUid: 'prometheus'
        spanStartTimeShift: "-2m" # Start time shift for metrics window
        spanEndTimeShift: "2m"    # End time shift for metrics window
        tags: []
        queries: []
      serviceMap:
        datasourceUid: 'prometheus'
      search:
        hide: false
      nodeGraph:
        enabled: true
      lokiSearch:
        datasourceUid: 'loki'
  - name: Loki
    type: loki
    uid: loki
    url: http://localhost:3100
    jsonData:
      derivedFields:
        - name: 'trace_id'
          matcherType: 'label'
          matcherRegex: 'trace_id'
          url: '$${__value.raw}'
          datasourceUid: 'tempo'
          urlDisplayLabel: 'Trace: $${__value.raw}'
```
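As for the "editable mode" suggestion: Grafana's provisioning format supports an `editable` flag per datasource, so that provisioned datasources can be adjusted from the UI at runtime. A minimal sketch (the `editable` field comes from Grafana's provisioning schema; the datasource values here are just the ones from the example above):

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    url: http://localhost:9090
    editable: true # allow changing this datasource in the Grafana UI at runtime
```

With that set, anyone could tune the time-shift windows themselves without rebuilding the container.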
This would be a really good, maybe even quick, quality-of-life change.
Sadly I cannot test whether this really works and would really appreciate your help.