feat(api): Add prototype for long running API calls for Archived Recordings #698
base: main
Conversation
src/main/java/io/cryostat/recordings/ArchiveRequestGenerator.java
@andrewazores which other endpoints do you think could use a framework like this? What other operations are likely to block the client for a long time?
Archiving an active recording has the obvious long-running concerns where the current architecture can lead to the client waiting a long time for Cryostat to respond. Another similar case right now is uploading recordings to the jfr-datasource for viewing in Grafana. Both the Grafana feature and the report generation feature also run into it for both active and archived recordings. In the future, features like taking a heap dump should also fit into this same long-running task architecture.
@Inject private RecordingHelper recordingHelper;
@Inject ReportsService reportsService;

private Map<String, Map<String, AnalysisResult>> jobResults;
I would prefer to avoid adding something like this that hangs on to the report generation results in memory. The report generation system already has tiered in-memory and S3-backed caching, so it'd be best to lean on that.
For archiving recordings, or for uploading recording files to jfr-datasource, the design is simpler because there is no response payload to send back to the client - the client just needs to know that the job has completed and whether it completed successfully.
For report generation, where it takes time and the client not only needs to know that it has completed and whether it was a success, but also needs a response payload (the report result), I think an API that behaves something like Map.computeIfAbsent() would be best. I'm thinking that the ReportsService interface should gain methods for checking whether a result is already present for a given key. The default implementation would trivially return false, and the two caching tier implementations would check their respective caches and delegates.
HTTP 200 responses would contain the Map<String, AnalysisResult> as the response body for the client to use. HTTP 202 responses would contain a Job UUID in the response body, maybe a Location header to tell the client the URL where to find the result when it's ready (1), and that's it. Later, the completion notification containing the Job UUID would be emitted. When the client receives the notification containing that UUID, it sends a follow-up GET to the Location header URL, and this time around the result is ready, so the response is an HTTP 200 with the expected data. This way, the existing tiered caching infrastructure is reused and no new endpoints are introduced, only different response status behaviours on the existing endpoints.
(1) right now, the URL would be the same URL that the request was sent to initially, ex. /api/v4/reports/{encodedKey}
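To make the suggested 200-vs-202 flow concrete, here is a minimal self-contained sketch. It assumes hypothetical stand-in types (`ReportEndpointSketch`, `Response`, a plain map in place of the real tiered cache) rather than the actual Cryostat classes; only the status-code behaviour and the "check whether a result is already present for a given key" method mirror the comment above.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed 200-vs-202 endpoint behaviour.
// Names are illustrative stand-ins, not the real Cryostat API.
class ReportEndpointSketch {
    record AnalysisResult(double score) {}
    record Response(int status, Object body) {}

    // Stand-in for the tiered (in-memory + S3) cache behind ReportsService
    private final Map<String, Map<String, AnalysisResult>> cache = new ConcurrentHashMap<>();

    // Proposed ReportsService addition: is a result already cached for this key?
    // The default implementation would trivially return false; caching tiers
    // would consult their caches and delegates.
    boolean keyExists(String key) {
        return cache.containsKey(key);
    }

    // Models GET on the existing endpoint, ex. /api/v4/reports/{encodedKey}
    Response get(String key) {
        if (keyExists(key)) {
            // Result ready: HTTP 200 with the Map<String, AnalysisResult> body
            return new Response(200, cache.get(key));
        }
        // Not ready: HTTP 202 with a Job UUID in the body; a Location header
        // would point back at this same URL for the follow-up GET.
        UUID jobId = UUID.randomUUID();
        startGeneration(key);
        return new Response(202, jobId);
    }

    private void startGeneration(String key) {
        // In the real system this would be asynchronous report generation
        // ending in a completion notification carrying the Job UUID. Here we
        // complete synchronously so the follow-up GET finds a cached result.
        cache.put(key, Map.of("rule", new AnalysisResult(42.0)));
    }
}
```

The client's second GET, issued after it sees the completion notification, hits the same URL and now takes the `keyExists` branch, so no new endpoint is needed.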
Hi,
This adds a prototype for long-running API operations (first step of #286).
The workflow proceeds as described in the issue:
TODO: Web Client needs to be adjusted to account for the new notifications, refreshing the tables when it receives them.
TODO: Implement the same framework for Grafana uploads of active/archived recordings, as well as report generation for active/archived recordings.
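For the archive (and Grafana upload) case, where there is no response payload and the client only needs a completion notification, the fire-and-forget shape behind something like `ArchiveRequestGenerator` can be sketched as below. All names here (`ArchiveJobSketch`, `JobNotification`) are illustrative assumptions, not the PR's actual classes.

```java
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical sketch: the endpoint returns a Job UUID immediately, the
// long-running archive work runs off-thread, and a completion notification
// carrying the UUID and a success flag is emitted at the end.
class ArchiveJobSketch {
    record JobNotification(UUID jobId, boolean success) {}

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Submit the long-running archive operation and respond to the client
    // with the Job UUID right away instead of blocking until it completes.
    UUID submitArchiveRequest(Runnable archiveTask, Consumer<JobNotification> notifier) {
        UUID jobId = UUID.randomUUID();
        executor.submit(() -> {
            boolean success = true;
            try {
                archiveTask.run();
            } catch (Exception e) {
                success = false;
            }
            // Clients refresh their tables when they receive this notification
            notifier.accept(new JobNotification(jobId, success));
        });
        return jobId;
    }

    void shutdown() {
        executor.shutdown();
    }
}
```

Report generation would layer the 200/202 response behaviour from the review discussion on top of this same job-submission pattern.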