Case of no-conflict #393
-
Is it the case that, if there is no conflict between the gradients for an instance, all of these methods will not "trigger", and the result will be a regular gradient update?
Replies: 1 comment 3 replies
-
Assuming you are talking about aggregators that are non-conflicting: definitely true for UPGrad. Note that if your goal is to find a Pareto optimal point, and the Pareto front is not trivial, then an (interior) Pareto optimal point …
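To illustrate the no-conflict case, here is a minimal sketch of a projection-based aggregator in the style of PCGrad, for just two task gradients. This is not the UPGrad algorithm itself, only a toy stand-in that shows the behavior asked about: when the gradients do not conflict (non-negative inner product), the aggregator leaves them untouched and the update is just the plain average.

```python
import numpy as np

def pcgrad_pair(g1, g2):
    """PCGrad-style projection for two task gradients (a simplified
    illustration, not the exact UPGrad algorithm): each gradient drops
    its component along the other only when the two conflict."""
    out1, out2 = g1.copy(), g2.copy()
    if np.dot(g1, g2) < 0:  # conflict: negative inner product
        out1 = g1 - (np.dot(g1, g2) / np.dot(g2, g2)) * g2
        out2 = g2 - (np.dot(g2, g1) / np.dot(g1, g1)) * g1
    return 0.5 * (out1 + out2)  # aggregated update direction

# Non-conflicting gradients: the aggregate equals the plain mean.
g1, g2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(np.allclose(pcgrad_pair(g1, g2), 0.5 * (g1 + g2)))  # True

# Conflicting gradients: the projection changes the update.
g3, g4 = np.array([1.0, 0.0]), np.array([-1.0, 1.0])
print(np.allclose(pcgrad_pair(g3, g4), 0.5 * (g3 + g4)))  # False
```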
I am not entirely sure I understand your question, let me know if I got this wrong.
The main reason to scale losses is to ensure that the gradients are not too imbalanced (they typically cannot be completely balanced, but that's already a start). One way to do that is to scale the loss vector, i.e. $[\alpha L_1, \beta L_2, \gamma L_3]^\top$. If, say, $\alpha = 0$, then the first gradient of this vector-valued loss is $0$ and therefore does not conflict with any vector; it will then be completely ignored by UPGrad. But still, at that point, $L_1$ might not have zero gradient, and you may want to have a decision that does not conflict with its gradient, even if you give it a weight of $0$. To d…
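The distinction above can be made concrete with a toy example. The gradients below are made-up numbers for a hypothetical two-parameter model with three scalar losses: scaling $L_1$ by $\alpha = 0$ zeroes out the first row of the Jacobian, so it conflicts with nothing and is ignored by the aggregator, even though the unscaled gradient $\nabla L_1$ is still nonzero.

```python
import numpy as np

# Made-up gradients for a hypothetical 2-parameter model (illustration only).
grads = np.array([
    [1.0, -2.0],   # grad of L1
    [0.5,  1.0],   # grad of L2
    [2.0,  0.5],   # grad of L3
])
weights = np.array([0.0, 1.0, 1.0])  # alpha = 0: L1 is scaled away

# Gradients of the scaled loss vector [alpha*L1, beta*L2, gamma*L3]^T.
scaled = weights[:, None] * grads

print(scaled[0])  # [0. 0.]  -> conflicts with nothing, hence ignored
print(grads[0])   # [ 1. -2.] -> the unscaled gradient of L1 is not zero
```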