Recently, the paper A Metric Learning Reality Check reported that improvements in deep metric learning over time have been marginal at best. Is this true? I present my personal viewpoints as follows:
- First of all, academic research progress is naturally slow, continuous, and tortuous. Moreover, it is full of flaws along the way. For example,
- In person re-identification, several years ago, some researchers vertically split an image into several parts for alignment. This works against the design of CNNs and is not meaningful: deep CNNs are designed to be invariant to translation, so hand-crafted alignment is unnecessary.
The source code is released for academic use only; please kindly cite our work: Derivative Manipulation and IMAE.
As a young researcher, I can say that your interest and kind citations (and stars) mean a lot to me and my collaborators.
For any specific discussion or potential future collaboration, please feel free to contact me.
Marked items are highly related to my personal research interests.
- arXiv 2020: On the Fairness of Deep Metric Learning
- ICCV 2019 and CVPR 2020: Deep Metric Learning
- CVPR 2019: Deep Metric Learning
- Few-shot Learning
- Large Output Spaces
- Poincaré, Hyperbolic, Curvilinear
- Wasserstein
- Semi-supervised or Unsupervised Learning
- NeurIPS 2019: Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers
Marked items are highly related to my personal research interests.
- Label Smoothing
- Confidence Penalty
- Label Correction
- Example Weighting
- Know the unknown
- Semi-supervised learning
- Importance sampling
Marked items are highly related to my personal research interests.
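Among the robust-training techniques listed above, label smoothing is the simplest to illustrate. A minimal NumPy sketch (the function name and the epsilon value are my own choices for illustration, not taken from any of the cited papers): the one-hot target is blended with a uniform distribution over the classes, which discourages over-confident predictions.

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Illustrative label smoothing: mix the one-hot target with a
    uniform distribution over the k classes. epsilon=0.1 is a common
    choice, but it is a hyperparameter, not a fixed constant."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / k

y = np.array([0.0, 0.0, 1.0])
print(smooth_labels(y))  # approximately [0.033, 0.033, 0.933]
```

The smoothed targets still sum to one, so they can be plugged into a standard cross-entropy loss; the confidence penalty listed above pursues a similar goal by regularizing the entropy of the model's output instead of altering the targets.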