In deep metric learning, have the improvements over time been marginal?

Recently, the paper A Metric Learning Reality Check reported that the improvements over time have been marginal at best. Is this true? I present my personal viewpoints as follows:

  • First of all, academic research progress is naturally slow, continuous, and tortuous. Moreover, the path of progress is full of flaws. For example,
    • In person re-identification, several years ago, some researchers vertically split an image into several parts for alignment. This goes against the design of CNNs and is not meaningful: deep CNNs are designed to be invariant to translation, so hand-crafted alignment should be unnecessary.
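For concreteness, the kind of part-based alignment criticized above can be sketched as follows. This is a minimal NumPy illustration of stripe pooling in the spirit of part-based re-ID models (e.g. PCB-style approaches); the feature-map shape and the number of parts are hypothetical, not taken from any specific paper:

```python
import numpy as np

def stripe_pool(feature_map, num_parts=6):
    """Split a CNN feature map (channels, height, width) into horizontal
    stripes along the height axis and average-pool each stripe into one
    part-level descriptor of length `channels`."""
    stripes = np.array_split(feature_map, num_parts, axis=1)
    return [s.mean(axis=(1, 2)) for s in stripes]

# Hypothetical backbone output: 256 channels, 24 x 8 spatial map.
feat = np.random.rand(256, 24, 8)
parts = stripe_pool(feat)
print(len(parts), parts[0].shape)  # 6 part descriptors, each of dim 256
```

The critique in the bullet above is that each stripe is assumed to cover a fixed body region, an assumption that hand-codes spatial alignment even though the convolutional backbone is already (approximately) translation-invariant.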

Paper Summary on Distance Metric, Representation Learning

:+1: means the paper is highly related to my personal research interests.

  1. arXiv 2020-On the Fairness of Deep Metric Learning
  2. ICCV 2019, CVPR 2020 Deep Metric Learning
  3. CVPR 2019 Deep Metric Learning
  4. Few-shot Learning
  5. Large Output Spaces
  6. Poincaré, Hyperbolic, Curvilinear
  7. Wasserstein
  8. Semi-supervised or Unsupervised Learning
  9. NeurIPS 2019-Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers
