Llama-3.3-70B-Instruct-4bit LoRA Fine-Tuning: No Change (or Instability) - Adapter Issue? #1147

Answered by awni
corozcop1980 asked this question in Q&A

I tried training this:

mlx_lm.lora --model mlx-community/Llama-3.3-70B-Instruct-4bit --data mlx-community/wikisql --iters 100 --batch-size 1 --num-layers 8 --train
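
For repeat runs, the same flags can also be collected in a YAML file and passed with mlx_lm.lora -c config.yaml. The key names below are an assumption (they simply mirror the CLI flags above); check them against the lora_config.yaml example shipped with your mlx_lm version:

# Assumed config sketch; keys mirror the CLI flags above.
model: "mlx-community/Llama-3.3-70B-Instruct-4bit"
data: "mlx-community/wikisql"
train: true
iters: 100
batch_size: 1
num_layers: 8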

And then evaluated it like this:

mlx_lm.generate --model mlx-community/Llama-3.3-70B-Instruct-4bit --adapter-path adapters --max-tokens 50 \
               --prompt "table: 1-10015132-16
columns: Player, No., Nationality, Position, Years in Toronto, School/Club Team
Q: What is terrence ross' nationality
A: "

Running the CLI command above generated the following, which is very reasonable:

Prompt: <|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024

<|eot_id|><|start_…
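
The system header in the echoed prompt is not part of the training data; it comes from the tokenizer's chat template, which the mlx_lm.generate CLI applies by default for instruct models. A sketch of reproducing that wrapping in Python, reusing prompt and tokenizer from the snippet above (apply_chat_template is the standard Hugging Face tokenizer API, which mlx_lm's tokenizer wrapper forwards to; treat the exact behavior as an assumption and verify on your version):

# Wrap the raw prompt in the Llama-3 chat template, as the CLI does.
messages = [{"role": "user", "content": prompt}]
templated = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(templated)  # starts with <|begin_of_text|><|start_header_id|>system...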

Replies: 1 comment · 3 replies

@corozcop1980

@awni (Maintainer), Dec 11, 2024

@corozcop1980

Answer selected by corozcop1980