
fix: set default request timeout to 120 seconds #3253


Open · wants to merge 1 commit into master

Conversation


@maxsxu commented Apr 8, 2025

Motivation

Fixes #3207

Modifications

  • Change the default request timeout from 5s to 120s, since 5s is too short for most requests, especially against large clusters in production environments.

According to the kubectl docs, kubectl's default request timeout is 0, which means requests never time out. Maybe we should do the same for k9s, but let's see.
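For reference, the timeout can already be overridden per invocation with the `--request-timeout` flag mentioned later in this thread (the duration value below is only illustrative, not a recommendation from this PR):

```shell
# Run k9s with a longer request timeout for this session only.
# Flag name is the one cited in this thread; 120s is an example value.
k9s --request-timeout=120s
```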

@derailed (Owner)

@maxsxu Thank you for this update! The current default timeout is 10s.
I am hesitant to make it any bigger, since you can easily override it with a CLI arg.
What do you think?

@derailed derailed added the question Further information is requested label Apr 13, 2025
@maxsxu (Author) commented Apr 14, 2025

@maxsxu Thank you for this update! The current default timeout is 10s. I am hesitant to make it any bigger, since you can easily override it with a CLI arg. What do you think?

@derailed Thanks for the reply!

Yes, we can override it via a CLI arg, but if this is a frequent use case for users, then we could make it a common option.

So I'm wondering: what's the downside of making the default timeout bigger? If it won't break anything, I suggest increasing it.

@robert-openai commented Apr 14, 2025

@derailed Could you at least make it something you can set in config.yaml? I didn't see anything mentioned in the documentation about it. The K8s clusters I deal with regularly are far too big to respond reliably within the default request timeout.

@vitali-raikov commented Apr 15, 2025

We are in the same boat: we deal with pretty big clusters (~20K pods), so a lot of k9s functionality just doesn't work unless we set the timeout to something like 20-30 seconds.

Setting the CLI arg is a workaround, and I could probably alias my k9s command to something like k9s --request-timeout=30s, but it would be nice to be able to override it via config.yaml as @robert-openai proposed.
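The aliasing workaround described above could be sketched as a shell-rc fragment along these lines (the 30s value mirrors this comment and is not a project default):

```shell
# ~/.bashrc or ~/.zshrc: wrap k9s so every invocation gets a longer timeout.
# 30s is the example value from this thread; adjust to your cluster size.
alias k9s='k9s --request-timeout=30s'
```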

@MrVinkel

I work with Crossplane clusters that have 100k+ resources, and I have to specify --request-timeout to do anything in them. Working with any biggish cluster, a timeout of 10s is just too small.

I agree with @vitali-raikov that it would be nice if it could be configured in config.yaml.

@gottschd

Please allow me to share my user experience with K9s, in the spirit of honest feedback.

As a casual, non-professional user, I've found K9s incredibly user-friendly, particularly when installed and updated via Chocolatey on Windows. It has consistently worked seamlessly for me with the default settings, requiring no additional configuration, which has been greatly appreciated.

However, with the recent change in the default request timeout, I encountered some unexpected challenges. This led to a bit of a troubleshooting journey to identify the source of the issue. My DevOps colleagues (who were using a much older version of K9s 😄) initially speculated about network infrastructure and "TCP keep-alive" settings, topics that are quite complex for me to grasp.

I am grateful for the timeout fix that has been proposed, as it allows me to return to the straightforward experience of having K9s work effortlessly out of the box for my needs.

Thank you for your effort and continued dedication to improving the software.

Successfully merging this pull request may close these issues.

Frequent timeout when listing some resources but kubectl works