Dynamic CLI completion #2285
Conversation
# TODO: Feedback needed: Refactor this to avoid duplication with APIBaseCommand._register().
# The client has to be explicitly initialized here since argcomplete does not trigger APIBaseCommand._command(),
# and APIBaseCommand().api is not being set. This is a temporary hack.
I don't see a better way to init the API client since we need parsed_args to support --project, and it's only passed in __call__. I think the current solution is ok. Client.from_config() is cheap (does not involve network calls), so it should be fine to call it on every __call__.
I quickly ran a profiler, and, as expected from a naive argcomplete integration, most of the delays for completions are caused by cascading imports, not by the server requests. This could be handled either by reworking the imports or by precalculating and caching the completions, kind of like it's done in the Heroku CLI: https://github.com/heroku/cli/tree/main/packages/cli/src/lib/autocomplete
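For reference, a minimal sketch of how such a completer can build the client inside __call__. This assumes the public dstack Client.from_config()/runs.list() API and a --project attribute on parsed_args; the RunNameCompleter name is taken from the discussion below, not the exact PR code:

```python
from dstack.api import Client


class RunNameCompleter:
    """Suggests run names; argcomplete invokes the completer on every TAB press."""

    def __call__(self, prefix, parsed_args, **kwargs):
        # argcomplete never runs APIBaseCommand._command(), so the command's
        # api attribute is not set; build the client here instead. parsed_args
        # is only available inside __call__, and it is what carries --project.
        project = getattr(parsed_args, "project", None)
        # from_config() only reads local config files (no network calls),
        # so constructing the client on every completion request is cheap.
        api = Client.from_config(project_name=project)
        return [run.name for run in api.runs.list() if run.name.startswith(prefix)]
```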
except Exception as e:
    debug("Error fetching run completions: " + str(e))
    return []
If I try dstack stats <TAB> when the server is unavailable/shut down, the shell gets stuck for some time. In debug mode I see:
Error fetching run completions: Failed to connect to dstack server http://127.0.0.1:8000
I think the errors should be printed with argcomplete.warn so that users can see that autocomplete does not work and why.
Autocomplete appears to get stuck because for some reason it gets the exception three times:
Error fetching run completions: Failed to connect to dstack server http://127.0.0.1:8000
Error fetching run completions: Failed to connect to dstack server http://127.0.0.1:8000
Error fetching run completions: Failed to connect to dstack server http://127.0.0.1:8000
We do retry server requests three times, but that should not cause the exception to be propagated three times. It seems something calls RunNameCompleter three times.
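A minimal sketch of the suggested argcomplete.warn usage, assuming the same dstack Client API as above (argcomplete.warn writes to the user's terminal during completion and is part of argcomplete's public API):

```python
from argcomplete import warn

from dstack.api import Client


def complete_run_names(prefix, parsed_args, **kwargs):
    try:
        api = Client.from_config(project_name=getattr(parsed_args, "project", None))
        names = [run.name for run in api.runs.list()]
    except Exception as e:
        # Surface the failure to the user instead of failing silently,
        # so they can see that autocomplete is not working and why.
        warn("dstack autocomplete failed: " + str(e))
        return []
    return [name for name in names if name.startswith(prefix)]
```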
Interesting, that seems to be caused by argcomplete iterating through different parsers if a completer returns an empty list of completions.
Another thing I've noticed: connection timeouts are not handled well because the client does not set request timeouts. I think a timeout of 5 to 10 seconds would be a reasonable choice. https://github.com/solovyevt/dstack/blob/d03ac2557d45af7c725ae525f87024a45013db88/src/dstack/api/server/__init__.py#L127
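For illustration only, this is roughly what a client-side timeout looks like in a requests-based client; the class and method names here are made up for the sketch and are not the actual code at the linked line:

```python
import requests

# The discussion above suggests something in the 5-10 second range.
REQUEST_TIMEOUT = 10.0


class APIClient:
    """A toy requests-based client that never hangs indefinitely on a dead server."""

    def __init__(self, base_url: str, token: str):
        self._base_url = base_url.rstrip("/")
        self._session = requests.Session()
        self._session.headers["Authorization"] = f"Bearer {token}"

    def post(self, path: str, body: dict) -> requests.Response:
        # Without an explicit timeout, requests waits indefinitely on a connect
        # or read, which is what makes the shell hang during completion.
        resp = self._session.post(self._base_url + path, json=body, timeout=REQUEST_TIMEOUT)
        resp.raise_for_status()
        return resp
```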
Could not find a way to circumvent this with native mechanisms, so as a quick fix the completer returns a single empty string if the API connection fails; no repeated calls now. The only issue is that argcomplete now automatically appends an additional space to the empty string, and I couldn't find a way to disable append_space exclusively for custom parsers.
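Sketched out, the workaround reads roughly like this (a paraphrase under the same API assumptions as above, not the exact diff):

```python
from dstack.api import Client


def complete_run_names(prefix, parsed_args, **kwargs):
    try:
        api = Client.from_config(project_name=getattr(parsed_args, "project", None))
        names = [run.name for run in api.runs.list()]
    except Exception:
        # A single empty string (instead of an empty list) stops the repeated
        # completer invocations described above, but argcomplete still appends
        # a space after the empty completion.
        return [""]
    return [name for name in names if name.startswith(prefix)]
```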
Are you sure the repeated calls are caused by argcomplete? I couldn't find any logic related to this. Maybe it's the shell that does the repeated calls (I'm on zsh)? There are still repeated calls if the server is available but returns empty results, or if the server is not available and you continue with dstack stats a. (3 retries in all cases.) Returning an empty string seems to help only with dstack stats for some reason.
@r4victor You are right, the empty string mitigates repeated calls only for dstack stats.
Upon further inspection, when I tried to reproduce this issue both in a fresh macOS VM and in a container with a plain zsh and autoload -U +X compinit && compinit && eval "$(dstack completion zsh)", the issue does not occur at all. Same with bash.
My current guess is that it might be related to zsh configuration frameworks such as oh-my-zsh and their fiddling with completions.
UPD: Confirmed, oh-my-zsh in the same fresh environments causes repeated completions.
Thanks for the investigation! This is not a blocker then, although we should figure out what to do with oh-my-zsh eventually.
One more detail for future reference: the repeated calls seem to be triggered by a non-zero return code from _describe on an empty completions list, which, as far as I can guess, triggers some sort of fallback completion mechanism introduced by oh-my-zsh.
I filed an issue in argcomplete: kislyuk/argcomplete#523. Seems like it can be fixed on their side.
I'd guess completion caching is mainly to get rid of server requests. I can see cache invalidation being tricky in that approach. Maybe we can optimize imports. Can you share the profiler results?
For me,
@solovyevt, what are your plans on this PR's scope / completion MVP? Do you want to implement name completion for all commands? Any help needed?
@r4victor Basic completion for all commands and some fields. Besides that, I will try to resolve this issue with repeated completion calls; it might be tricky.
Seems like backward compatibility should not be impacted. If you don't change the server API, the new CLI version can get working completions with older servers.
@r4victor I think it is basically complete, only missing the documentation. Please let me know if I missed something. UPD: discovered an issue with the project completion; the logic doesn't make sense, as the user likely expects completions of the projects defined in
I'll test it again early next week and merge it. @solovyevt would you be willing to update the docs with installation instructions in this or a separate PR? I think it can be an expandable section in docs/docs/installation/index.md (line 63 at c4ddee1).
@solovyevt, fixed typing with minor refactoring, LGTM. Please confirm if it's ok to merge on your side.
@r4victor Added completion docs, see the two previous commits. I started with a more detailed version, then reduced it to match the Installation page. Nothing else from my side, feel free to merge if it looks good to you.
This reverts commit e671090.
docs/docs/installation/index.md
```shell
$ eval "$(dstack completion bash)"
$ echo "$(dstack completion bash)" >> ~/.bashrc
```
I'm not sure it's a good idea to write directly to ~/.bashrc or ~/.zshrc. Let me fix it.
I reverted to a more detailed version, recommended not writing to the shell profile directly, and added commands for oh-my-zsh. Merging it.
@solovyevt, thanks for the contribution. We really appreciate it. Feel free to open more PRs from new branches if it needs more changes.
WIP #2284