Meta Llama Models Added #159
base: dev
Conversation
Pull Request Overview
This PR updates the library's marketing and descriptions to better reflect its purpose as a "Reproducible Structured Memory for LLMs" rather than just managing conversational memory. The changes focus on clarifying Memor's role in providing structured data formats for LLM interactions and enabling reproducible conversation logs.
Key changes include:
- Updated package description and marketing copy across multiple files
- Added comprehensive Meta Llama model support with 20+ new model variants
- Enhanced README documentation with detailed API reference and usage examples
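Since the PR's main code change is new members on the `LLMModel` enum in `memor/params.py`, the addition might look roughly like the sketch below. This assumes `LLMModel` is a string-valued `Enum`; the class shape and member subset here are illustrative, not the library's actual definition:

```python
from enum import Enum


class LLMModel(str, Enum):
    """Known LLM model identifiers (illustrative subset)."""

    LLAMA_2_7B = "llama-2-7b"
    LLAMA_2_13B = "llama-2-13b"
    LLAMA_3_8B = "llama-3-8b"
    LLAMA_3_8B_INSTRUCT = "llama-3-8b-instruct"


# String-valued members round-trip from their identifiers,
# so serialized logs can be mapped back to enum members.
print(LLMModel("llama-2-7b").name)  # LLAMA_2_7B
```

A `str` mixin keeps the members JSON-serializable as plain strings, which matters for reproducible conversation logs.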
Reviewed Changes
Copilot reviewed 1 out of 1 changed files in this pull request and generated no comments.
File | Description
---|---
setup.py | Updated package description and fallback description text
otherfiles/meta.yaml | Updated summary and description for package metadata
memor/params.py | Added extensive Meta Llama model variants to LLMModel enum
README.md | Comprehensive rewrite with new tagline, detailed API docs, and usage examples
CHANGELOG.md | Added changelog entry for README update
Codecov Report: ✅ All modified and coverable lines are covered by tests.

    @@           Coverage Diff            @@
    ##             dev     #159     +/-   ##
    ==========================================
    + Coverage   98.43%   98.48%   +0.05%
    ==========================================
      Files          11       11
      Lines        1399     1443      +44
      Branches      138      138
    ==========================================
    + Hits         1377     1421      +44
      Misses          5        5
      Partials       17       17

View full report in Codecov by Sentry.
@sadrasabouri Thank you for your efforts! 💯
Please consider the following points:
- Follow a single name structure; for example, this one is good: `LLAMA_2_7B = "llama-2-7b"`.
- Use float numbers in names like other models: `LLAMA_3_1_405B = "llama-3.1-405b"`.
- Add `instruct` models.
- Add `vision` models.
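Taken together, the reviewer's four points suggest a convention like the following. This is a hypothetical sketch of the naming scheme, not the final enum in memor/params.py:

```python
from enum import Enum


class LLMModel(str, Enum):
    """Illustrative naming convention for Llama variants."""

    # Single name structure: LLAMA_<version>_<size>[_<VARIANT>]
    LLAMA_2_7B = "llama-2-7b"
    # Dotted ("float") version numbers stay in the value;
    # the dot becomes an underscore in the member name.
    LLAMA_3_1_405B = "llama-3.1-405b"
    # Instruct variant
    LLAMA_3_1_405B_INSTRUCT = "llama-3.1-405b-instruct"
    # Vision variant
    LLAMA_3_2_11B_VISION = "llama-3.2-11b-vision"
```

Keeping the dot in the string value while using an underscore in the Python identifier lets the value match vendor model IDs without producing invalid attribute names.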
memor/params.py (Outdated)

    LLAMA_PROMPT_GUARD_2_86M = "llama-prompt-guard-2-86m"
    LLAMA_4_SCOUT_17B_128E = "llama-4-scout-17b-128e"
    LLAMA_4_SCOUT_17B_16E = "llama-4-scout-17b-16e"
    META_LLAMA_3_8B = "meta-llama-3-8b"
I believe `META_LLAMA_3` is equivalent to `LLAMA_3`, so I suggest converting these model names to `LLAMA_3_xx`.
memor/params.py (Outdated)

    LLAMA_2_13B = "llama-2-13b"
    LLAMA_2_70B = "llama-2-70b"
    LLAMA_3_8B = "meta-llama-3-8b"
    LLAMA_3_8B_INSTRUCT = "meta-llama-3-8b-instruct"
Remove `meta` --> `llama-3-8b-instruct`
my bad, fixed in ddacce7
memor/params.py (Outdated)

    LLAMA_2_7B = "llama-2-7b"
    LLAMA_2_13B = "llama-2-13b"
    LLAMA_2_70B = "llama-2-70b"
    LLAMA_3_8B = "meta-llama-3-8b"
Remove `meta` --> `llama-3-8b`
fixed in ddacce7
memor/params.py (Outdated)

    LLAMA_2_70B = "llama-2-70b"
    LLAMA_3_8B = "meta-llama-3-8b"
    LLAMA_3_8B_INSTRUCT = "meta-llama-3-8b-instruct"
    LLAMA_3_70B = "meta-llama-3-70b"
Remove `meta` --> `llama-3-70b`
fixed in ddacce7
memor/params.py (Outdated)

    LLAMA_3_8B = "meta-llama-3-8b"
    LLAMA_3_8B_INSTRUCT = "meta-llama-3-8b-instruct"
    LLAMA_3_70B = "meta-llama-3-70b"
    LLAMA_3_70B_INSTRUCT = "meta-llama-3-70b-instruct"
Remove `meta` --> `llama-3-70b-instruct`
fixed in ddacce7
memor/params.py
Outdated
GEMMA3N_E2B = "gemma3n-e2b" | ||
GEMMA3N_E4B = "gemma3n-e4b" | ||
GEMMA_2_9B = "gemma-2-9b" | ||
GEMM_3_1B = "gemma-3-1b" |
`GEMM` --> `GEMMA`
memor/params.py (Outdated)

    LLAMA_3_2_3B = "llama-3.2-3b"
    LLAMA_3_2_3B_INSTRUCT = "llama-3.2-3b-instruct"
    LLAMA_3_2_11B = "llama-3.2-11b"
    LLAMA_3_2_11B_INSTRUCT = "llama-3.2-11b-instruct"
Please check this model once again. I believe it should be defined as `LLAMA_3_2_11B_VISION = "llama-3.2-11b-vision"`.
https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf
memor/params.py (Outdated)

    LLAMA_3_2_11B = "llama-3.2-11b"
    LLAMA_3_2_11B_INSTRUCT = "llama-3.2-11b-instruct"
    LLAMA_3_2_11B_VISION_INSTRUCT = "llama-3.2-11b-vision-instruct"
    LLAMA_3_2_90B = "llama-3.2-90b"
I'm not sure about the existence of this model.
memor/params.py (Outdated)

    LLAMA_3_2_11B_INSTRUCT = "llama-3.2-11b-instruct"
    LLAMA_3_2_11B_VISION_INSTRUCT = "llama-3.2-11b-vision-instruct"
    LLAMA_3_2_90B = "llama-3.2-90b"
    LLAMA_3_2_90B_INSTRUCT = "llama-3.2-90b-instruct"
I'm not sure about the existence of this model.
memor/params.py (Outdated)

    LLAMA_3_2_90B_INSTRUCT = "llama-3.2-90b-instruct"
    LLAMA_3_2_90B_VISION = "llama-3.2-90b-vision"
    LLAMA_3_2_90B_VISION_INSTRUCT = "llama-3.2-90b-vision-instruct"
    LLAMA_3_3_70B = "llama-3.3-70b"
I'm not sure about the existence of this model.
memor/params.py (Outdated)

    LLAMA_3_3_70B_INSTRUCT = "llama-3.3-70b-instruct"
    LLAMA_4_MAVERICK_17B_128E = "llama-4-maverick-17b-128e"
    LLAMA_4_MAVERICK_17B_128E_INSTRUCT = "llama-4-maverick-17b-128e-instruct"
    LLAMA_4_MAVERICK_17B_16E = "llama-4-maverick-17b-16e"
I'm not sure about the existence of this model.
memor/params.py (Outdated)

    LLAMA_4_MAVERICK_17B_128E = "llama-4-maverick-17b-128e"
    LLAMA_4_MAVERICK_17B_128E_INSTRUCT = "llama-4-maverick-17b-128e-instruct"
    LLAMA_4_MAVERICK_17B_16E = "llama-4-maverick-17b-16e"
    LLAMA_4_MAVERICK_17B_16E_INSTRUCT = "llama-4-maverick-17b-16e-instruct"
I'm not sure about the existence of this model.
memor/params.py (Outdated)

    LLAMA_GUARD_4_12B = "llama-guard-4-12b"
    LLAMA_PROMPT_GUARD_2_22M = "llama-prompt-guard-2-22m"
    LLAMA_PROMPT_GUARD_2_86M = "llama-prompt-guard-2-86m"
    LLAMA_4_SCOUT_17B_128E = "llama-4-scout-17b-128e"
I'm not sure about the existence of this model.
I fixed the suggestions you gave and also checked the enum again, searching for more cases. I couldn't find any other case. It's now completed and ready for review.
Reference Issues/PRs
#153
What does this implement/fix? Explain your changes.
Any other comments?