Cloud Run example #62
Any progress on this? OpenTelemetry doesn't work on Cloud Run / Cloud Functions yet, correct?
@patryk-smc I don't have an actual example, but tracing should work fine. Use a regular BatchSpanProcessor and call shutdown() on the TracerProvider before the process exits. Metrics are a little more complicated, unfortunately.
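For reference, a minimal sketch of that setup (not taken from the comment; the package names assume @google-cloud/opentelemetry-cloud-trace-exporter and the 1.x OpenTelemetry JS SDK, whose provider API has changed in later versions):

```ts
// Minimal tracing setup for Cloud Run: the Cloud Trace exporter behind a
// regular BatchSpanProcessor. Package names and addSpanProcessor() are
// assumptions based on the 1.x OpenTelemetry JS SDK.
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { TraceExporter } from '@google-cloud/opentelemetry-cloud-trace-exporter';

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(new TraceExporter()));
provider.register();

// Before the process exits, flush any batched spans:
// await provider.shutdown();
```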
Is there any news on the metrics side of things? I did a quick test run and got errors from the exporter.
Checking the code, a label is added to each exported metric, and in Cloud Run the label value will always be the same for every instance. What's the plan forward for supporting OpenTelemetry metrics from Cloud Run? Is there a workaround for these problems?
Thanks for the report on that @pebo, I was not aware of that, but our plans should fix this. We are not actively working on the metrics exporter right now because the upstream OTel metrics API is going through many breaking changes.
@aabmass Any update on this? Looking at the documentation here: https://cloud.google.com/trace/docs/setup/nodejs-ot, support seems to be implied? For anyone else looking at this, switching from a
For anyone else having issues with tracing and Cloud Run, there were two primary changes I had to implement for them to be exported properly:
Obviously an always-on sampler is not something you would want long term. The second change, forcing all traces to be sampled with an always-on sampler, was needed because of the following behavior: if I ran my instance locally and exported traces to Cloud Trace, all of them would appear. However, after pushing my instance to Cloud Run, only traces associated with serving some of the requests would appear.
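For readers landing here, a short sketch of the sampling change, assuming "forcing all traces to be sampled" refers to the SDK's AlwaysOnSampler (the class name is an assumption, not something named in the comment):

```ts
// Force every span to be sampled instead of relying on the default
// parent-based sampling decision. AlwaysOnSampler ships with the
// OpenTelemetry JS SDK.
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { AlwaysOnSampler } from '@opentelemetry/sdk-trace-base';

const provider = new NodeTracerProvider({
  sampler: new AlwaysOnSampler(),
});
provider.register();
```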
@aabmass opentelemetry-operations-js/packages/opentelemetry-cloud-trace-exporter/src/trace.ts, line 92 in c64613e
Regardless, calling the method at that line appears to be a no-op, since its implementation is empty.
It's interesting you needed that; in my experience it samples by default.
We don't have any buffering in the trace exporter, so the empty implementation is intentional. Calling TracerProvider.shutdown() will also call shutdown() on the BatchSpanProcessor, which will flush the spans it has batched. In cases with very sparse/bursty traffic, many serverless processes may be short-lived and the buffered spans would never be sent without calling shutdown(). It sounds like at least a minimal example would be useful here, so I'll leave this issue open and up for grabs.
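A sketch of what that flush-on-exit could look like on Cloud Run, which sends SIGTERM before stopping an instance (the './tracing' module below is hypothetical):

```ts
// Flush batched spans before a Cloud Run instance is stopped. Cloud Run sends
// SIGTERM and allows a short grace period before killing the container.
// './tracing' is a hypothetical module that exports the configured
// NodeTracerProvider from the setup sketch above.
import { provider } from './tracing';

process.on('SIGTERM', async () => {
  try {
    // shutdown() also shuts down the BatchSpanProcessor, which flushes
    // whatever spans it has batched.
    await provider.shutdown();
  } finally {
    process.exit(0);
  }
});
```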
@AkselAllas what do you mean samples by default? Afaik it will do some adaptive sampling based on current QPS, which is why the author of the first blog post had missing spans unless they used always-on sampling.
Ok, it might be adaptive sampling. 🤔 I experienced a sampling rate of roughly 0.5 even with very low QPS on Cloud Run.
Hi, just to check: is anyone else having issues getting some of the built-in Cloud Run metrics?
@eduardosanzb those metrics are not related to this repo or OpenTelemetry. I'd recommend reaching out to support, but I don't think the problem is in this exporter.
@aabmass Thanks!
@legendsjohn, regarding this point: did you also set the CPU allocation to "CPU always allocated"? The documentation states that, by default, CPU is only allocated while a request is being processed.
My hope is to run with CPU only allocated during request processing, as this has major cost implications, but it is not clear to me whether the solution @aabmass proposed of trapping SIGTERM and calling shutdown() will still work when the CPU is throttled between requests.
That's fair, I don't think that alone would help. However, with retries (#523), hopefully the request would succeed in the background when CPU is next allocated. Another option is to call forceFlush() at the end of each request, while the CPU is still allocated.
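A sketch of that per-request flush, assuming an Express app and the same hypothetical './tracing' module; forceFlush() exports batched spans without shutting the provider down:

```ts
// Flush spans while the request still has CPU allocated. This is a sketch,
// not the thread's agreed-upon solution; './tracing' is hypothetical.
import express from 'express';
import { provider } from './tracing';

const app = express();

app.use((_req, res, next) => {
  res.on('finish', () => {
    // Export whatever the BatchSpanProcessor has buffered so far.
    provider.forceFlush().catch(console.error);
  });
  next();
});

app.get('/', (_req, res) => res.send('ok'));
app.listen(Number(process.env.PORT) || 8080);
```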
Anyone else coming to this issue: have you tried the OTLP exporter with an OpenTelemetry Collector sidecar in Cloud Run? The collector has more robust retry and batching logic and could solve your issues. You can flush data from the OpenTelemetry SDK to the collector at the end of each request, which should be very fast.
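A sketch of that alternative, assuming the Collector sidecar listens on the default OTLP/gRPC port 4317 on localhost (the exporter package and port are assumptions, not details from the comment):

```ts
// Export spans to an OpenTelemetry Collector sidecar over OTLP/gRPC instead of
// calling Cloud Trace directly; the collector then handles batching and retries.
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';

const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({ url: 'http://localhost:4317' })
  )
);
provider.register();
```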
To make sure OpenTelemetry and the JS exporter support Cloud Run, write an example that can be deployed to Cloud Run and includes:
Then:
Optional:
This ticket can be split into sub-tickets if appropriate.