
Confusion about the gradient matrix used #13

Open
MohammadHossein-Bahari opened this issue Jun 22, 2023 · 1 comment
Comments

@MohammadHossein-Bahari

Hello,

Thanks for the great work.
As asked before (here), I do not see why, in several methods such as GraNd and the submodular functions, you use the concatenation of the loss gradient and its product with the last feature embedding, as shown here:

            # Gradient of the loss w.r.t. the model outputs (the logits).
            bias_parameters_grads = torch.autograd.grad(loss, outputs)[0]  # shape: (batch_num, num_classes)
            # Outer product of the recorded penultimate-layer embedding with that gradient,
            # giving one (num_classes, embedding_dim) block per sample.
            weight_parameters_grads = self.model.embedding_recorder.embedding.view(
                batch_num, 1, self.embedding_dim).repeat(1, self.args.num_classes, 1) * \
                bias_parameters_grads.view(
                batch_num, self.args.num_classes, 1).repeat(1, 1, self.embedding_dim)
            # Concatenate both pieces into one flat vector per sample.
            gradients.append(torch.cat([bias_parameters_grads, weight_parameters_grads.flatten(1)],
                                       dim=1).cpu().numpy())

You are basically using the last-layer features scaled by the gradient. Do you have any reason for choosing this instead of what is common in the literature, such as the gradient with respect to the last-layer parameters?
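
For reference, here is a minimal, self-contained sketch of what I mean (the toy two-layer model and every name in it are made up for illustration and are not DeepCore's code): it rebuilds the concatenated per-sample vector from the snippet above and places it next to autograd's gradient with respect to a plain linear head.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    batch_num, in_dim, embedding_dim, num_classes = 4, 16, 8, 3

    # Hypothetical toy network: feature extractor + linear classification head.
    feature_extractor = nn.Sequential(nn.Linear(in_dim, embedding_dim), nn.ReLU())
    classifier = nn.Linear(embedding_dim, num_classes)

    x = torch.randn(batch_num, in_dim)
    targets = torch.randint(0, num_classes, (batch_num,))

    embedding = feature_extractor(x)          # plays the role of embedding_recorder.embedding
    outputs = classifier(embedding)           # logits, shape (batch_num, num_classes)
    loss = nn.CrossEntropyLoss(reduction='sum')(outputs, targets)

    # The two pieces built in the snippet above.
    bias_parameters_grads = torch.autograd.grad(loss, outputs, retain_graph=True)[0]
    weight_parameters_grads = embedding.view(batch_num, 1, embedding_dim).repeat(1, num_classes, 1) * \
        bias_parameters_grads.view(batch_num, num_classes, 1).repeat(1, 1, embedding_dim)
    per_sample = torch.cat([bias_parameters_grads, weight_parameters_grads.flatten(1)], dim=1)
    print(per_sample.shape)                   # (batch_num, num_classes * (1 + embedding_dim))

    # Gradient of the same (summed) loss w.r.t. the linear head's own parameters.
    w_grad, b_grad = torch.autograd.grad(loss, [classifier.weight, classifier.bias])
    print(torch.allclose(weight_parameters_grads.sum(0), w_grad))  # they match here once summed over the batch
    print(torch.allclose(bias_parameters_grads.sum(0), b_grad))

In this toy setting the concatenation looks like the per-sample bias and weight blocks of the last-layer gradient, so I would like to understand whether that is also the intended interpretation here or whether something else is going on.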

Thanks!

@MohammadHossein-Bahari changed the title from "Confusion about the gradient metric used" to "Confusion about the gradient matrix used" on Jun 22, 2023
MaxiBoether added a commit to eth-easl/modyn that referenced this issue Jun 14, 2024
…503)

We had our own version of PatrickZH/DeepCore#11 because our version of
their implementation got confused about where the inversion is placed. I thought
it through and believe we don't need any inversion at all. I added some
comments explaining the reasoning.

Note that this does not address
PatrickZH/DeepCore#13!
@XianzheMa

@PatrickZH @Chengcheng-Guo Hello, we would also be curious why you did not simply use the last-layer gradients but chose this form instead. Could you share some of your thoughts with us?
