
Add new feature of SafeLoRA #2201

Open · wants to merge 13 commits into main

Conversation

chiayi-hsu (Author):

The previous pull request was closed while syncing with the latest version of PEFT, so I have opened the pull request again.
In this version I have made all the changes we discussed in our previous conversations.

If there are any issues, please let me know.

Thank you.

BenjaminBossan (Member) left a comment:


Thanks for the update to the SafeLoRA PR. I did another review and found a few areas to improve. Please take a look. Also, please run make style once you're finished with your changes.

examples/safelora/README.md (outdated, resolved)
examples/safelora/README.md (outdated, resolved)
save_weights=True)

final_lora_weight = apply_safelora(config)

BenjaminBossan (Member):

Can we add a bit more to the example? For instance, how to save and load these weights?
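
For illustration, a minimal sketch of what saving and loading these weights could look like, assuming apply_safelora returns a plain dict of tensors; the file name and the use of safetensors are assumptions, not taken from the PR:

```python
from safetensors.torch import save_file, load_file

# final_lora_weight is the dict returned by apply_safelora(config), as in the example above.
save_file(final_lora_weight, "safelora_weights.safetensors")  # illustrative file name

# Later, load the projected weights back as a dict of name -> tensor.
loaded_weights = load_file("safelora_weights.safetensors")
```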

chiayi-hsu (Author):

I have added more description to the example.
If you feel there are still any missing parts, please let me know.

Comment on lines 15 to 16
config = SafeLoraConfig(base_model_path='../LLM_Models/llama-2-7b-hf/',\
aligned_model_path='../LLM_Models/llama-2-7b-chat-fp16/',
BenjaminBossan (Member):

Let's use the HF model ids for these two.
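
For example, the snippet could point at Hub model ids instead of local paths. A sketch under the assumption that the remaining arguments match the rest of the example; peft_model_path is a hypothetical placeholder and the import path simply follows the file added in this PR:

```python
from peft.utils.safelora import SafeLoraConfig  # import path per src/peft/utils/safelora.py in this PR

config = SafeLoraConfig(
    base_model_path="meta-llama/Llama-2-7b-hf",          # HF Hub id instead of a local path
    aligned_model_path="meta-llama/Llama-2-7b-chat-hf",  # HF Hub id instead of a local path
    peft_model_path="path/to/your/lora-adapter",          # hypothetical placeholder
    select_layers_type="threshold",
    save_weights=True,
)
```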

chiayi-hsu (Author):

Has been modified.

Comment on lines 215 to 217
peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
else:
peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
BenjaminBossan (Member):

These 2 lines are identical

chiayi-hsu (Author):

Has been modified.

- if (safelora_config.devices).lower() == "cpu":
-        peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
- else:
-        peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
+ peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}

]
align_model_parameters = [
name for name in sl_align.weight_map.keys() if any(v in name for v in list(peft_config.target_modules))
]
BenjaminBossan (Member):

Should we also check that base_model_parameters and align_model_parameters are the same?
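
One way such a check could look, as an illustrative sketch reusing the variable names from the snippet above:

```python
# Illustrative: both checkpoints should expose the same set of target-module weight names.
if set(base_model_parameters) != set(align_model_parameters):
    raise ValueError(
        "The base model and the aligned model must contain the same target-module weights."
    )
```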

chiayi-hsu (Author):

I have added a check that raises an error if the weights of the base model and the aligned model are identical.

+ if (sl_base.get_tensor(name_base) == sl_align.get_tensor(name_align)).all():
+        raise ValueError("The weights of the base Model and the aligned Model should be different.")

return safety_vector


def project_weights(configs, peft_weights, v):
BenjaminBossan (Member):

Let's rename configs to config or safelora_config.

chiayi-hsu (Author):

Has been modified.

metadata={"help": "The path of the LoRA wieghts and configs."},
)

select_layers_type: str = field(
BenjaminBossan (Member):

Instead of str, we can annotate this as Literal["threshold", "number"].
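
A sketch of what the suggested annotation could look like; the default value and help text here are illustrative, not taken from the PR:

```python
from dataclasses import dataclass, field
from typing import Literal


@dataclass
class SafeLoraConfig:  # sketch: only the relevant field is shown
    select_layers_type: Literal["threshold", "number"] = field(
        default="number",
        metadata={"help": "Select projected layers by a similarity threshold or by a fixed number of layers."},
    )
```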

chiayi-hsu (Author):

Has been modified.

src/peft/utils/safelora.py (outdated, resolved)
select_layers_type='threshold',
save_weights=True)

final_lora_weight = apply_safelora(config)
BenjaminBossan (Member):

The example should show inference; here we only create the weights. What are the next steps?

chiayi-hsu (Author):

I have added more explanations in the README.md and also included code on how to use the SafeLoRA model.
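
For reference, a sketch of what inference with the projected weights could look like, assuming they have been saved back into a standard PEFT adapter directory; the model id, adapter path, and prompt are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"  # illustrative base model
adapter_path = "path/to/safelora-adapter"   # hypothetical directory holding the projected weights

base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_path)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

inputs = tokenizer("Write a short, friendly greeting.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```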

BenjaminBossan (Member):

@chiayi-hsu Once you're finished with your changes and want me to give another review, please ping me.
