Using uint8 as token type to reduce memory usage #872
base: master
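For context, a rough sketch of the kind of change the title describes, going from string-based token type constants to a uint8 enum; the identifiers below are illustrative and are not necessarily the actual names used in Goby's token package:

```go
package token

// Before this change, the token type would be a string, e.g.:
//
//	type Type string
//	const Int Type = "INT"
//
// so every Token carried a 16-byte string header (on 64-bit platforms)
// just for its type. With a uint8 enum, the type costs a single byte.
type Type uint8

const (
	EOF Type = iota
	Ident
	Int
	Plus
	Minus
)

// Token is illustrative; the real struct also carries source position
// information and other fields.
type Token struct {
	Type    Type
	Literal string
	Line    int
}
```

The trade-off discussed below is that numeric constants are less self-describing than strings when printing or debugging tokens.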
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #872      +/-   ##
==========================================
- Coverage   80.90%   80.89%   -0.02%
==========================================
  Files          54       54
  Lines        6788     6789       +1
==========================================
  Hits         5492     5492
- Misses       1069     1070       +1
  Partials      227      227
Continue to review full report at Codecov.
@aisk thanks for the PR! Do you have any numbers showing how much memory it reduces, though? Asking because our CI does comparison benchmarking between the PR branch and the master branch at the end of each build, like this. And this PR's builds don't show an obvious difference as far as I can see:
I'm not sure whether our default benchmarking cases are the right ones for this change, though. So perhaps you can show some statistics as well? 😄
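If the default CI benchmark isn't sensitive to this change, one way to get numbers would be a memory-focused benchmark along these lines. This is only a sketch: the import paths, `lexer.New`, `NextToken`, and `token.EOF` are assumptions about the tokenizer's API, not verified names.

```go
package lexer_test

import (
	"strings"
	"testing"

	// Assumed import paths; adjust to wherever Goby's lexer and
	// token packages actually live.
	"github.com/goby-lang/goby/compiler/lexer"
	"github.com/goby-lang/goby/compiler/token"
)

// A deliberately large synthetic program so the tokenizer allocates
// enough tokens for a per-token saving to become visible.
var benchInput = strings.Repeat("x = 1 + 2\nputs(x)\n", 10000)

// BenchmarkTokenize reports allocations; comparing B/op and allocs/op
// between master and the PR branch would show whether the uint8 token
// type actually saves memory.
func BenchmarkTokenize(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		l := lexer.New(benchInput) // assumed constructor
		for {
			tok := l.NextToken() // assumed method name
			if tok.Type == token.EOF {
				break
			}
		}
	}
}
```

Running this on both branches and comparing the results (for example with benchstat) would give the concrete numbers requested here.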
I did not run any benchmarks because I thought this optimization was obviously beneficial, but to my surprise, when I tested it I found the memory reduction was minimal. Maybe Go has some string interning mechanism?
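For what it's worth, Go does not intern strings at runtime, but identical string constants do share their backing bytes at compile time, so the per-token saving here is mostly the string header rather than the string data. A small sketch (illustrative struct layouts, not Goby's actual ones):

```go
package main

import (
	"fmt"
	"unsafe"
)

// Two illustrative token layouts: one with a string-typed Type field,
// one with a uint8-typed Type field.
type stringToken struct {
	Type    string // 16-byte header (pointer + length) on 64-bit
	Literal string
	Line    int
}

type uint8Token struct {
	Type    uint8 // 1 byte, padded up to the struct's alignment
	Literal string
	Line    int
}

func main() {
	// Identical string constants share their backing bytes at compile
	// time, so switching to uint8 mostly saves the header, not the data.
	fmt.Println(unsafe.Sizeof(stringToken{})) // 40 on 64-bit platforms
	fmt.Println(unsafe.Sizeof(uint8Token{}))  // 32 on 64-bit platforms
}
```

On 64-bit platforms that is roughly 8 bytes saved per token after padding, which only becomes visible when many tokens are alive at once; that may be why the measured reduction looked so small.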
@aisk I think one reason could be that the tokenizer is only called once when running a program, so we may need a large codebase to see the effect. While I do appreciate your attempt to optimize Goby's performance, I think such optimization is not necessary at this stage, especially when it could increase the complexity of the code 🙂 (also sorry for the late reply, I've been quite busy working on multiple projects recently 🙏)
No description provided.