2019.9.6 note

Meta-Learning with Implicit Gradients

Similar to DARTS, this work formulates meta-learning as a bi-level (inner/outer) optimization problem. To make the meta-learning process model-agnostic, it optimizes the outer-level loss with implicit gradients and uses conjugate gradient to approximate the inverse-Hessian term that appears in the implicit gradient, so the Hessian never has to be formed explicitly.
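To make the implicit-gradient idea concrete: if phi* is the inner-loop solution regularized toward the meta-parameters with strength lambda, the outer gradient is (I + (1/lambda) * Hessian)^{-1} applied to the test-loss gradient, and this linear system can be solved with conjugate gradient using only Hessian-vector products. Below is a minimal PyTorch sketch under simplifying assumptions: parameters are a single flat vector, and `train_loss` / `test_loss` are hypothetical callables mapping that vector to a scalar loss.

```python
import torch

def hvp(loss_fn, params, vec):
    # Hessian-vector product via double backward; the Hessian is never materialized.
    p = params.detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(p), p, create_graph=True)[0]
    return torch.autograd.grad(grad, p, grad_outputs=vec)[0]

def conjugate_gradient(matvec, b, iters=20, tol=1e-8):
    # Solve A x = b for symmetric positive-definite A, given only matvec(v) = A v.
    x = torch.zeros_like(b)
    r = b.clone()          # residual b - A x (x starts at 0)
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def implicit_meta_grad(train_loss, test_loss, phi_star, lam=1.0, cg_iters=20):
    # Approximate (I + (1/lam) H)^{-1} g, where H is the Hessian of the inner
    # (train) loss at the adapted parameters and g is the outer (test) gradient.
    phi = phi_star.detach().requires_grad_(True)
    g = torch.autograd.grad(test_loss(phi), phi)[0]
    matvec = lambda v: v + hvp(train_loss, phi_star, v) / lam
    return conjugate_gradient(matvec, g, iters=cg_iters)
```

The number of CG iterations trades meta-gradient accuracy against compute, which is the main efficiency lever of the implicit approach compared with backpropagating through the whole inner loop.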

SoftTriple Loss: Deep Metric Learning Without Triplet Sampling
  1. It shows that the SoftMax loss is equivalent to a smoothed triplet loss when every class has a single center.
  2. When a class may have multiple centers, it proposes the HardTriple loss and shows its equivalence to the triplet loss.
  3. It proposes the SoftTriple loss, which relaxes the hard max over a class's centers to a softmax, and uses a regularizer to adapt the number of centers: it starts from a large number of centers per class and encourages similar centers to merge (see the sketch after this list).
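The sketch below shows the core of the SoftTriple loss: each class owns K centers, per-class similarity is a softmax-relaxed max over those centers, and the result goes through a scaled, margin-adjusted cross-entropy. The hyperparameter names (K, gamma, lam, delta) follow the paper's notation, but the default values here are illustrative, and the center-merging regularizer from point 3 is omitted for brevity.

```python
import torch
import torch.nn.functional as F

class SoftTripleLoss(torch.nn.Module):
    # Minimal sketch: K learned centers per class, soft assignment over centers.
    def __init__(self, dim, num_classes, K=10, gamma=0.1, lam=20.0, delta=0.01):
        super().__init__()
        self.C, self.K = num_classes, K
        self.gamma, self.lam, self.delta = gamma, lam, delta
        self.centers = torch.nn.Parameter(torch.randn(dim, num_classes * K))

    def forward(self, emb, labels):
        emb = F.normalize(emb, dim=1)               # unit-norm embeddings
        centers = F.normalize(self.centers, dim=0)  # unit-norm centers
        sim = (emb @ centers).view(-1, self.C, self.K)  # (B, C, K) similarities
        # Soft (entropy-regularized) max over each class's centers,
        # replacing the hard max of the HardTriple loss.
        attn = F.softmax(sim / self.gamma, dim=2)
        class_sim = (attn * sim).sum(dim=2)         # (B, C) relaxed similarities
        # Subtract the margin delta only at the ground-truth class,
        # then apply a scaled softmax cross-entropy.
        margin = torch.zeros_like(class_sim)
        margin[torch.arange(emb.shape[0]), labels] = self.delta
        return F.cross_entropy(self.lam * (class_sim - margin), labels)
```

Here `emb` would come from the embedding network; since the centers are learned jointly with it, the loss needs no triplet sampling, which is the point of the paper's title.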