
Difference between 2 LLM models #13

Open

NamburiSrinath opened this issue Sep 24, 2023 · 0 comments

Comments

NamburiSrinath commented Sep 24, 2023

Hi @samuela, @PythonNut,

Great stuff, and congratulations on the ICLR acceptance :)

I have a quick question and am wondering whether it is feasible with the current repo:

  1. Suppose I have a model (Llama-1) with weights original_model.pt
  2. Assume I fine-tune/modify the model for some use case, and let the resulting weights be modified_model.pt

My high-level question is how to quantify the difference between these 2 functions (i.e., the difference between original_model.pt and modified_model.pt). I am assuming your paper deals with similar stuff: the computed barrier is essentially a quantitative measure of the difference between these 2 functions.

Is my understanding correct? If so, can you give some instructions on how your repo can be extended to Hugging Face models (Llama, Vicuna, GPT, T5, etc.)? I am assuming the logic stays the same!

If not, please provide some insights on how this use case can be handled! I'm assuming Wasserstein or MMD might be the next best bet, but I would like to try your repository first!
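For concreteness, here is a minimal sketch of what I have in mind: linearly interpolating between the two checkpoints and measuring the loss barrier along the path. This is my own toy illustration, not your repo's API, and the tiny MLP, the random data, and the loss_barrier helper are all hypothetical stand-ins for the real checkpoints:

```python
# Hypothetical sketch: loss barrier between two checkpoints via naive
# linear interpolation of the weights (no permutation alignment applied).
import copy
import torch
import torch.nn as nn

def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Return (1 - alpha) * sd_a + alpha * sd_b, key by key."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

def loss_barrier(model, sd_a, sd_b, loss_fn, data, n_points=11):
    """Max excess loss along the linear path, measured against the
    linear interpolation of the two endpoint losses."""
    xs, ys = data
    alphas = torch.linspace(0.0, 1.0, n_points)
    losses = []
    for alpha in alphas:
        model.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha.item()))
        model.eval()
        with torch.no_grad():
            losses.append(loss_fn(model(xs), ys).item())
    # Baseline: straight line between the endpoint losses.
    baseline = [(1 - a.item()) * losses[0] + a.item() * losses[-1] for a in alphas]
    return max(l - b for l, b in zip(losses, baseline))

# Toy demonstration: a tiny MLP stands in for original/modified checkpoints.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
sd_original = copy.deepcopy(model.state_dict())
for p in model.parameters():
    p.data.add_(0.5 * torch.randn_like(p))  # pretend this is the fine-tune
sd_modified = copy.deepcopy(model.state_dict())

xs, ys = torch.randn(32, 4), torch.randn(32, 1)
barrier = loss_barrier(model, sd_original, sd_modified, nn.MSELoss(), (xs, ys))
print(f"loss barrier: {barrier:.4f}")
```

My understanding from the paper is that the barrier is only meaningful after accounting for permutation symmetries, which the sketch above does not do; that alignment step is exactly the part I would hope to reuse from your repo.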

P.S.: Happy to close either of the issues depending on Hugging Face support!

Any help is super appreciated :)
Thanks
Srinath
