
Unify CLI options (verbosity, version) #685

Merged 4 commits into containers:main on Feb 18, 2025
Conversation

mkesper
Collaborator

@mkesper commented Jan 31, 2025

Feel free to adjust block size. But 1k blocks are ridiculously low for downloading GiB of LLM data.

Related: #684

Summary by Sourcery

New Features:

  • Added a --quiet flag to suppress the progress bar during downloads.

Contributor

sourcery-ai bot commented Jan 31, 2025

Reviewer's Guide by Sourcery

This pull request introduces a new --quiet flag to suppress progress bars during downloads, and increases the download block size to 100KB to improve download speeds and reduce CPU usage.

Sequence diagram for the modified download process

sequenceDiagram
    participant Client
    participant DownloadManager
    participant HTTPClient

    Client->>DownloadManager: download_file(url, show_progress)
    DownloadManager->>HTTPClient: init(url, show_progress)
    activate HTTPClient
    loop Until EOF
        HTTPClient->>HTTPClient: read(100KB blocks)
        Note right of HTTPClient: Increased from 1KB to 100KB blocks
        alt show_progress is true
            HTTPClient->>Client: Update progress bar
        end
    end
    HTTPClient-->>DownloadManager: Download complete
    deactivate HTTPClient
    DownloadManager-->>Client: File downloaded

Class diagram showing modified download components

classDiagram
    class HTTPClient {
        -response
        -now_downloaded: int
        -start_time: float
        +init(url, headers, output_file, show_progress)
        +perform_download(file, progress)
    }

    class DownloadManager {
        +download_file(url, dest_path, headers, show_progress)
    }

    class PullCommand {
        +pull(args)
    }

    PullCommand ..> DownloadManager : uses
    DownloadManager ..> HTTPClient : uses

    note for HTTPClient "Block size increased to 100KB"

File-Level Changes

Change Details Files
Added a --quiet flag to suppress progress bars during downloads.
  • Added a --quiet argument to the pull_parser in ramalama/cli.py.
  • Modified ramalama/url.py to pass the quiet argument to the download_file function.
  • Modified ramalama/oci.py to pass the quiet argument to the container engine.
ramalama/cli.py
ramalama/url.py
ramalama/oci.py
Increased the download block size to 100KB.
  • Changed the read size in perform_download from 1024 bytes to 100 * 1024 bytes.
ramalama/http_client.py
Modified download_file to accept a show_progress argument.
  • Modified download_file in ramalama/common.py to accept a show_progress argument.
  • Modified init_pull and pull_blob in ramalama/ollama.py to pass the show_progress argument to download_file.
ramalama/common.py
ramalama/ollama.py


Contributor

@sourcery-ai bot left a comment


Hey @mkesper - I've reviewed your changes - here's some feedback:

Overall Comments:

  • The --quiet flag should use action="store_true" instead of action="store" with default=False to be consistent with other boolean flags in the codebase
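For illustration, the suggested flag style — a minimal sketch, not ramalama's actual parser code; only the `pull_parser` name comes from the PR description:

```python
import argparse

parser = argparse.ArgumentParser(prog="ramalama")
subparsers = parser.add_subparsers(dest="subcommand")
pull_parser = subparsers.add_parser("pull")
# store_true defaults to False and flips to True when the flag is present,
# so no explicit action="store" / default=False pair is needed.
pull_parser.add_argument("--quiet", action="store_true",
                         help="suppress the download progress bar")

args = parser.parse_args(["pull", "--quiet"])
```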
Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


@@ -45,9 +45,9 @@ def pull_blob(repos, layer_digest, accept, registry_head, models, model_name, mo
run_cmd(["ln", "-sf", relative_target_path, model_path])


-def init_pull(repos, accept, registry_head, model_name, model_tag, models, model_path, model):
+def init_pull(repos, accept, registry_head, model_name, model_tag, models, model_path, model, show_progress):
Contributor

issue (bug_risk): Function signature mismatch: pull_config_blob is called with show_progress parameter but doesn't accept it

@ericcurtin
Collaborator

Change looks fine to me @mkesper you just need to sign your commits with something like:

git commit -a -s --amend

to satisfy the DCO bot.

@ericcurtin
Collaborator

ericcurtin commented Feb 1, 2025

Lots of tests failed too; we need to get this build green somehow. At least some of the failures are flakes, not sure how many.

This will all be rebuilt anyway when the commits are signed.

@@ -177,7 +177,7 @@ def download_file(url, dest_path, headers=None, show_progress=True):
show_progress = False

try:
-http_client.init(url=url, headers=headers, output_file=dest_path, progress=show_progress)
+http_client.init(url=url, headers=headers, output_file=dest_path, show_progress=show_progress)
Collaborator

This rename needs to be made to HttpClient.init as well.

@ericcurtin
Collaborator

ericcurtin commented Feb 1, 2025

Anybody know what "podman pull" or "podman artifact pull" block size is? @baude @rhatdan

We could just standardise on that, it's likely a sane value.

I probably care more about reliability than CPU usage. I know that in practice, at the TCP level of the stack, the max packet size is 1500 bytes, but that's probably not so important at the HTTP level.

@ericcurtin
Collaborator

It could be the progress bar causing the CPU usage issue also, maybe we could change it so that it doesn't update every single block iteration.

@ericcurtin
Collaborator

ericcurtin commented Feb 1, 2025

Does quiet alone lower the CPU usage significantly @mkesper ?

@jhjaggars
Collaborator

@ericcurtin Best I can tell, the image copier used in podman updates once per second by default:
https://github.com/containers/podman/blob/main/vendor/github.com/containers/common/libimage/copier.go#L315
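That once-per-second behaviour can be sketched as a time-based throttle — a hypothetical `ProgressReporter`, not podman's or ramalama's actual code:

```python
import time

class ProgressReporter:
    """Redraw the progress bar at most once per interval,
    mirroring the once-per-second default mentioned above."""

    def __init__(self, total_bytes, interval=1.0):
        self.total = total_bytes
        self.interval = interval
        self.downloaded = 0
        self._last_draw = float("-inf")  # so the first update always draws
        self.draws = 0

    def update(self, nbytes, now=None):
        self.downloaded += nbytes
        now = time.monotonic() if now is None else now
        if now - self._last_draw >= self.interval:
            self._last_draw = now
            self.draws += 1  # a real implementation would redraw the bar here

# Simulate 40 block reads arriving 250 ms apart:
# the bar redraws only once per simulated second.
reporter = ProgressReporter(total_bytes=40 * 1024)
for i in range(40):
    reporter.update(1024, now=i * 0.25)
```

The read loop stays tight; only the redraw is rate-limited, which is the fix #717 explores.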

@ericcurtin
Collaborator

ericcurtin commented Feb 3, 2025

If you guys could check if this makes a difference I'd appreciate it; there aren't many CPU usage issues on my machine (and maybe the time calculation is as expensive as printing a progress bar update, I've no idea):

#717

@@ -26,11 +26,11 @@ def pull_config_blob(repos, accept, registry_head, manifest_data):
download_file(url, config_blob_path, headers=headers, show_progress=False)
Collaborator

@swarajpande5 commented Feb 4, 2025

def pull_config_blob(repos, accept, registry_head, manifest_data):
cfg_hash = manifest_data["config"]["digest"]
config_blob_path = os.path.join(repos, "blobs", cfg_hash)
os.makedirs(os.path.dirname(config_blob_path), exist_ok=True)
url = f"{registry_head}/blobs/{cfg_hash}"
headers = {"Accept": accept}
download_file(url, config_blob_path, headers=headers, show_progress=False)

Please add the show_progress parameter in this function and use the same in the download_file() call, removing the hard coded False.
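A hedged sketch of what that suggested change might look like, with a stub `download_file` standing in for ramalama's real downloader; the function body mirrors the snippet above, and the `show_progress` default is illustrative:

```python
import os
import tempfile

calls = []

def download_file(url, dest_path, headers=None, show_progress=True):
    # Stub for ramalama's downloader; just records its arguments.
    calls.append((url, dest_path, headers, show_progress))

def pull_config_blob(repos, accept, registry_head, manifest_data,
                     show_progress=False):
    cfg_hash = manifest_data["config"]["digest"]
    config_blob_path = os.path.join(repos, "blobs", cfg_hash)
    os.makedirs(os.path.dirname(config_blob_path), exist_ok=True)
    url = f"{registry_head}/blobs/{cfg_hash}"
    headers = {"Accept": accept}
    # Thread the caller's choice through instead of hard-coding False.
    download_file(url, config_blob_path, headers=headers,
                  show_progress=show_progress)

repos = tempfile.mkdtemp()
pull_config_blob(repos, "application/vnd.oci.image.manifest.v1+json",
                 "https://registry.example/v2/model",
                 {"config": {"digest": "sha256:abc123"}}, show_progress=True)
```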

Collaborator Author

I can do this to make it more consistent although this will change behaviour to before the patch.

Collaborator

I think False is always correct here; it's not a bulky file, the progress means very little, and we only show progress for the gguf file.

Collaborator

sounds good !

@mkesper
Collaborator Author

mkesper commented Feb 5, 2025

Does quiet alone lower the CPU usage significantly @mkesper ?

Sorry, I had no time to check yet.

@mkesper
Collaborator Author

mkesper commented Feb 6, 2025

I have updated the commits after testing them. With the higher block size I do not get any relevant CPU usage shown for the terminal, even without --quiet. With the changes I get about 15% on one CPU thread with progress and about 10% with --quiet turned on. Before, it was like in the screenshot when pulling from ollama: about 50% of a CPU thread for the Python process, about 30% for tmux, and an additional 25% for konsole (the terminal application).
(screenshot: Screenshot_20250206_180551)

@mkesper
Collaborator Author

mkesper commented Feb 6, 2025

To add: Interrupting the download is still perfectly possible with Ctrl-C.

@rhatdan
Member

rhatdan commented Feb 7, 2025

Thanks @mkesper
LGTM

@@ -60,7 +60,7 @@ def perform_download(self, file, progress):
self.now_downloaded = 0
self.start_time = time.time()
while True:
-        data = self.response.read(1024)
+        data = self.response.read(100 * 1024)
Collaborator

The patch looks good to me. How about we add a constant 100KB = 100 * 1024 and use it here? It makes the code quicker to read for everyone.
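A sketch of the suggested constant in a minimal read loop — illustrative names, with an in-memory stream standing in for `http.client.HTTPResponse`:

```python
import io

# Named constant as suggested above; 100 KiB per read.
BLOCK_SIZE_100KB = 100 * 1024

def perform_download(response, out_file, on_block=None):
    """Read the response in BLOCK_SIZE_100KB chunks until EOF."""
    total = 0
    while True:
        data = response.read(BLOCK_SIZE_100KB)
        if not data:  # read() returns b"" at end of stream
            break
        out_file.write(data)
        total += len(data)
        if on_block is not None:
            on_block(total)  # hook for progress reporting
    return total

# Demo: a 250 KiB payload takes three reads (100 + 100 + 50 KiB).
payload = b"x" * (250 * 1024)
sink = io.BytesIO()
progress_points = []
n = perform_download(io.BytesIO(payload), sink,
                     on_block=progress_points.append)
```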

Collaborator

@ericcurtin commented Feb 7, 2025

I'm not sure this is the best fix, I think this works because the read is so big it delays the time that the next iteration of the loop is called. In practice you cannot depend on TCP packet sizes being larger than 1400 bytes (because of middleboxes, legacy, etc.) and this encourages fragmenting the bytes, which isn't spectacular for reliability, performance (from the transport perspective), etc.

I think it's likely the whole printing the progress bar thing is contributing to spiking the CPU usage, so if we just avoid updating that I think it could be a better fix to try (just updated this PR):

#717

But I don't have CPU spike issues personally so cannot test the above PR.

Collaborator

But reading the Python http library, it seems to default to 8 KB, which seems more reasonable than 100 KB:

https://github.com/python/cpython/blob/main/Lib/http/client.py

Collaborator

We could try both PRs together

Collaborator

I'm happy to drop #717 if it doesn't make much difference to CPU usage, although it does seem excessive to update the progress bar on each iteration.

Collaborator Author

I'm not sure this is the best fix, I think this works because the read is so big it delays the time that the next iteration of the loop is called. In practice you cannot depend on TCP packet sizes being larger than 1400 bytes (because of middleboxes, legacy, etc.) and this encourages fragmenting the bytes, which isn't spectacular for reliability, performance (from the transport perspective), etc.

As far as I understand the Python http client library code, read() does not operate at the TCP level but at the HTTP stream level, right? That would render this concern moot.

Collaborator

True, but it's still a consideration as HTTP is on top of TCP

Collaborator

IMHO, splitting this into two patches makes sense: one for the progress bar and another for the buffer size, which can be discussed separately. The progress-bar skipping should already help, as @ericcurtin mentioned.

@dougsland
Collaborator

LGTM. I added a minor comment.

@dougsland
Collaborator

CI/CD is complaining about:

xref-helpmsgs-manpages: 'ramalama pull --help' lists '--quiet', which is not in docs/ramalama-pull.1.md
make: *** [Makefile:133: validate] Error 1

@dougsland
Collaborator

The other issue in CI/CD doesn't seem related to the patch:

# tags: distro-integration
# (from function `bail-now' in file test/system/helpers.podman.bash, line 122,
#  from function `die' in file test/system/helpers.podman.bash, line 848,
#  from function `run_ramalama' in file test/system/helpers.bash, line 187,
#  in test file test/system/055-convert.bats, line 12)
#   `run_ramalama 1 convert bogus foobar' failed
#
# [10:29:52.568712822] $ /home/runner/work/ramalama/ramalama/bin/ramalama convert
# [10:29:52.667041482] usage: ramalama convert [-h] [--type {car,raw}] [--network-mode NETWORK_MODE]
#                         SOURCE TARGET
# ramalama convert: error: the following arguments are required: SOURCE, TARGET
# [10:29:52.670168914] [ rc=2 (expected) ]
#
# [10:29:52.681213373] $ /home/runner/work/ramalama/ramalama/bin/ramalama convert tiny
# [10:29:52.777658054] usage: ramalama convert [-h] [--type {car,raw}] [--network-mode NETWORK_MODE]
#                         SOURCE TARGET
# ramalama convert: error: the following arguments are required: TARGET
# [10:29:52.780919875] [ rc=2 (expected) ]
#
# [10:29:52.792616049] $ /home/runner/work/ramalama/ramalama/bin/ramalama convert bogus foobar
# [10:29:52.892675855] usage: ramalama [-h] [--container] [--debug] [--dryrun] [--engine ENGINE]
#                 [--gpu] [--ngl NGL] [--keep-groups] [--image IMAGE]
#                 [--nocontainer] [--runtime {llama.cpp,vllm}] [--store STORE]
#                 [-v]
#                 {help,bench,benchmark,containers,ps,convert,info,list,ls,login,logout,perplexity,pull,push,rm,run,serve,stop,version}
#                 ...
# ramalama: requires a subcommand
# #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
# #| FAIL: exit code is 0; expected 1
# #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# # [teardown]
ok 34 [055] ramalama convert file to image in 3116ms
ok 35 [055] ramalama convert tiny to image in 16685ms
ok 36 [060] ramalama info in 1683ms
make: *** [Makefile:149: bats-nocontainer] Error 1

@rhatdan
Member

rhatdan commented Feb 7, 2025

Was this a duplicate of the other PR?

@ericcurtin
Collaborator

ericcurtin commented Feb 7, 2025

They are related; we can take this PR in too if the tests are fixed so it's passing CI. 100 KB is an excessively high block size though, we'd have to reduce that.

@ericcurtin
Collaborator

High block sizes can break some network connections outright...

@rhatdan
Member

rhatdan commented Feb 13, 2025

That subcommand error most likely means that the --quiet option was not added to one of the parsers.

@mkesper changed the title from "Fix high CPU usage during ramalama pull" to "Unify CLI options (verbosity, version)" on Feb 13, 2025
mkesper added a commit to mkesper/ramalama that referenced this pull request Feb 13, 2025
Related: containers#685
Signed-off-by: Michael Kesper <[email protected]>
mkesper added a commit to mkesper/ramalama that referenced this pull request Feb 13, 2025
Related: containers#685
Signed-off-by: Michael Kesper <[email protected]>
@rhatdan
Member

rhatdan commented Feb 13, 2025

Squash and sign your commits.

mkesper added a commit to mkesper/ramalama that referenced this pull request Feb 13, 2025
Related: containers#685
Signed-off-by: Michael Kesper <[email protected]>
mkesper added a commit to mkesper/ramalama that referenced this pull request Feb 15, 2025
Related: containers#685
Signed-off-by: Michael Kesper <[email protected]>
@mkesper
Collaborator Author

mkesper commented Feb 15, 2025

Seems I'm stuck now, can't see how to fix this (seems to be connected to autocomplete):

./hack/xref-helpmsgs-manpages
xref-helpmsgs-manpages: 'ramalama  --help' lists '--quiet,', which is not in docs/ramalama.1.md
xref-helpmsgs-manpages: 'ramalama ': '--quiet' in docs/ramalama.1.md, but not in --help

mkesper added a commit to mkesper/ramalama that referenced this pull request Feb 15, 2025
Related: containers#685
Signed-off-by: Michael Kesper <[email protected]>
@rhatdan
Member

rhatdan commented Feb 17, 2025

#835 Should fix the --quiet issue.

@rhatdan
Member

rhatdan commented Feb 17, 2025

@mkesper Rebase and repush and we should get this in before the release.

@ericcurtin
Collaborator

Can we change --dry-run back to --dryrun please

@ericcurtin
Collaborator

If there is an inconsistency: "--dry-run" -> "--dryrun"

@rhatdan
Member

rhatdan commented Feb 17, 2025

We should alias the two, since I can never remember to use --dryrun or --dry-run.

Podman actually uses both spellings. podman auto update uses --dry-run and quadlet uses --dryrun.
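Aliasing the two spellings is straightforward, since argparse's `add_argument` accepts multiple option strings that share one destination — a sketch, not ramalama's actual code:

```python
import argparse

parser = argparse.ArgumentParser(prog="ramalama")
# Both spellings map to the same args.dryrun attribute.
parser.add_argument("--dryrun", "--dry-run", dest="dryrun",
                    action="store_true",
                    help="simulate the command without running it")

a = parser.parse_args(["--dry-run"])
b = parser.parse_args(["--dryrun"])
```

Only one spelling needs to appear in `--help`; argparse lists both automatically, which also keeps the man-page cross-check simple.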

@ericcurtin
Collaborator

Yeah I actually copied quadlet at the time

@mkesper
Collaborator Author

mkesper commented Feb 17, 2025

I added a clarifying comment to the second dryrun argument. I honestly thought that it was mistakenly added twice.

mkesper and others added 4 commits February 18, 2025 00:33
Add --quiet as mutually exclusive with --debug.
Maybe a --verbose switch could also be added.

Related: containers#684
Signed-off-by: Michael Kesper <[email protected]>
`version` is already a subcommand.

Signed-off-by: Michael Kesper <[email protected]>
Related: containers#684
Signed-off-by: Michael Kesper <[email protected]>
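The mutual exclusion described in the first commit message maps directly onto argparse's `add_mutually_exclusive_group` — a minimal sketch with illustrative help strings:

```python
import argparse

parser = argparse.ArgumentParser(prog="ramalama")
group = parser.add_mutually_exclusive_group()
group.add_argument("--quiet", action="store_true",
                   help="suppress progress output")
group.add_argument("--debug", action="store_true",
                   help="enable verbose debug output")

args = parser.parse_args(["--quiet"])

# Passing both flags is rejected; argparse prints an error and exits.
try:
    parser.parse_args(["--quiet", "--debug"])
    conflict_rejected = False
except SystemExit:
    conflict_rejected = True
```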
@ericcurtin ericcurtin merged commit f7c2302 into containers:main Feb 18, 2025
16 checks passed