K8s env vars #1279
Conversation
LGTM, as long as the current failing test is fixed.
// ip to generic IP info (Node, Service, *including* Pods)
ipInfos map[string]*informer.ObjectMeta
otelServiceInfoByIP map[string]OTelServiceNamePair
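The `otelServiceInfoByIP` map above acts as a per-IP memoization of the service name lookup. A minimal, hypothetical sketch of that pattern (the field names and `OTelServiceNamePair` shape here are assumptions, not Beyla's actual types):

```go
package main

import "fmt"

// Hypothetical pair of OTel service name and namespace; the real
// OTelServiceNamePair in Beyla may carry different fields.
type OTelServiceNamePair struct {
	Name      string
	Namespace string
}

// serviceNameCache memoizes the service-name lookup per IP, so repeated
// events from the same IP do not re-resolve Kubernetes metadata.
type serviceNameCache struct {
	byIP map[string]OTelServiceNamePair
}

func newServiceNameCache() *serviceNameCache {
	return &serviceNameCache{byIP: map[string]OTelServiceNamePair{}}
}

// lookup returns the cached pair for ip, computing it with resolve on a miss.
func (c *serviceNameCache) lookup(ip string, resolve func(string) OTelServiceNamePair) OTelServiceNamePair {
	if p, ok := c.byIP[ip]; ok {
		return p
	}
	p := resolve(ip)
	c.byIP[ip] = p
	return p
}

func main() {
	calls := 0
	resolve := func(ip string) OTelServiceNamePair {
		calls++
		return OTelServiceNamePair{Name: "checkout", Namespace: "shop"}
	}
	c := newServiceNameCache()
	c.lookup("10.0.0.1", resolve)
	c.lookup("10.0.0.1", resolve) // cache hit: resolve is not called again
	fmt.Println(calls)            // prints 1
}
```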
this won't be active in non-kubernetes environments. Is it ok?
It should be fine, because we fetch the service name from env vars too, so non-Kubernetes environments shouldn't be affected. Actually, I meant to ask: is there a way to tell early in Beyla's startup whether k8s is enabled and whether our metadata service is on?
We use the IsKubeEnabled method in MetadataProvider to know that during startup.
For the metadata service, I'm working on a PR that would let Beyla know when all the Kubernetes entities have been locally synced from the Kube metadata service. That would let you know with certainty that the metadata service is on.
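To illustrate the startup-time decision being discussed, here is a hypothetical, trimmed-down stand-in for a `MetadataProvider` with an `IsKubeEnabled` check. The real Beyla implementation differs; the `enableFlag` field and the `KUBERNETES_SERVICE_HOST` autodetect heuristic are assumptions for this sketch:

```go
package main

import (
	"fmt"
	"os"
)

// MetadataProvider is a hypothetical stand-in for Beyla's kube metadata
// provider; only the enable/disable decision is sketched here.
type MetadataProvider struct {
	enableFlag string // e.g. "true", "false", or "autodetect"
}

// IsKubeEnabled reports whether Kubernetes decoration should be active.
func (mp *MetadataProvider) IsKubeEnabled() bool {
	switch mp.enableFlag {
	case "true":
		return true
	case "autodetect":
		// inside a cluster, the kubelet always injects this variable
		return os.Getenv("KUBERNETES_SERVICE_HOST") != ""
	default:
		return false
	}
}

func main() {
	mp := &MetadataProvider{enableFlag: "autodetect"}
	if mp.IsKubeEnabled() {
		fmt.Println("kubernetes decoration enabled")
	} else {
		fmt.Println("running outside kubernetes")
	}
}
```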
Codecov Report. Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #1279 +/- ##
==========================================
+ Coverage 80.22% 80.59% +0.37%
==========================================
Files 140 141 +1
Lines 14058 14211 +153
==========================================
+ Hits 11278 11454 +176
+ Misses 2242 2216 -26
- Partials 538 541 +3
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@@ -73,7 +73,7 @@ func (wk *watcherKubeEnricher) ID() string { return "unique-watcher-kube-enriche
 // handling in the enrich main loop
 func (wk *watcherKubeEnricher) On(event *informer.Event) {
 	// ignoring updates on non-pod resources
-	if event.Resource.Pod == nil {
+	if event == nil || event.Resource == nil || event.Resource.Pod == nil {
When you rebase, you will have a conflict, as this has been already fixed in main: https://github.com/grafana/beyla/blob/main/pkg/internal/discover/watcher_kube.go#L75
@@ -72,7 +72,7 @@ func (wk *watcherKubeEnricher) ID() string { return "unique-watcher-kube-enriche
 // handling in the enrich main loop
 func (wk *watcherKubeEnricher) On(event *informer.Event) {
 	// ignoring updates on non-pod resources
-	if event == nil || event.Resource == nil || event.Resource.Pod == nil {
+	if event == nil || event.GetResource() == nil || event.GetResource().GetPod() == nil {
Nit: event.GetResource().GetPod() alone will do the job, as internally these generated methods check whether the receiver is nil.
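For context, protobuf-generated getters are written to be safe on a nil receiver, which is why the whole chained check can collapse into one comparison. A minimal sketch of that pattern, with hypothetical trimmed-down versions of the informer types:

```go
package main

import "fmt"

// Pod and Resource are hypothetical stand-ins for the generated informer types.
type Pod struct{ Name string }

type Resource struct{ Pod *Pod }

// GetPod mirrors a protobuf-generated getter: safe to call on a nil receiver,
// returning nil instead of panicking.
func (r *Resource) GetPod() *Pod {
	if r == nil {
		return nil
	}
	return r.Pod
}

type Event struct{ Resource *Resource }

func (e *Event) GetResource() *Resource {
	if e == nil {
		return nil
	}
	return e.Resource
}

func main() {
	var e *Event // nil event
	// The chained getters never dereference nil, so this single comparison
	// replaces: event == nil || event.Resource == nil || event.Resource.Pod == nil
	if e.GetResource().GetPod() == nil {
		fmt.Println("ignoring non-pod event") // prints ignoring non-pod event
	}
}
```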
Ah OK, I misunderstood what you meant as conflict. I'll revert.
This PR adds support for parsing the OTEL name variables in k8s deployments, much like we already look for OTEL_SERVICE_NAME and OTEL_ATTRIBUTES in the regular environment variables.
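For readers unfamiliar with the env-var format involved: the standard OTEL_RESOURCE_ATTRIBUTES variable holds comma-separated key=value pairs. An illustrative parser for that format (this is a sketch, not Beyla's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOTelResourceAttributes splits a comma-separated key=value list (the
// format of OTEL_RESOURCE_ATTRIBUTES) into a map, skipping malformed entries.
// Illustrative only; Beyla's real parsing may differ.
func parseOTelResourceAttributes(raw string) map[string]string {
	attrs := map[string]string{}
	for _, kv := range strings.Split(raw, ",") {
		k, v, ok := strings.Cut(strings.TrimSpace(kv), "=")
		if !ok || k == "" {
			continue // no "=" or empty key: ignore this entry
		}
		attrs[k] = v
	}
	return attrs
}

func main() {
	attrs := parseOTelResourceAttributes("service.name=checkout,service.namespace=shop")
	fmt.Println(attrs["service.name"]) // prints checkout
}
```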