Robust DL/ML

In general, robust deep learning covers: missing labels (semi-supervised learning); noisy labels (noise detection and correction); regularisation techniques; sample imbalance (long-tailed class distributions); adversarial learning; and so on.
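As a tiny illustration of one of these threads (noisy labels and regularisation), here is a minimal numpy sketch of label smoothing, a common baseline that softens one-hot targets and mitigates mild label noise. All function names here are illustrative, not from any particular library:

```python
import numpy as np

def soft_target_cross_entropy(logits, target_dist):
    """Cross entropy against a soft (probability) target distribution."""
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(target_dist * log_probs).sum(axis=1).mean()

def smooth_labels(labels, num_classes, eps=0.1):
    """Label smoothing: mix the one-hot target with the uniform
    distribution, a simple regularizer against overconfident fitting
    of (possibly noisy) hard labels."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes

logits = np.array([[2.0, 0.5, -1.0]])
labels = np.array([0])
loss_hard = soft_target_cross_entropy(logits, smooth_labels(labels, 3, eps=0.0))
loss_smooth = soft_target_cross_entropy(logits, smooth_labels(labels, 3, eps=0.1))
```

On a confidently correct prediction, the smoothed loss stays strictly above the hard-label loss, which is exactly the regularising effect: the model is discouraged from driving its confidence to 100%.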

In deep metric learning, have the improvements over time been marginal?

Recently, the paper A Metric Learning Reality Check reported that the improvements over time have been marginal at best. Is this true? I present my personal viewpoints as follows:

  • First of all, academic research progress is naturally slow, continuous, and tortuous; moreover, its path is full of flaws. For example,
    • In person re-identification, several years ago, some researchers vertically split an image into several parts for alignment. This works against the design of CNNs and is not meaningful: deep CNNs are designed to be approximately translation-invariant, so hand-crafted alignment should be unnecessary.
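The translation property can be checked numerically: a convolution is translation-equivariant (shifting the input shifts the feature map by the same amount), and a global pooling layer on top then gives translation invariance. A minimal numpy sketch (illustrative only, 1-D for brevity):

```python
import numpy as np

def conv1d_valid(x, w):
    """1-D 'valid' cross-correlation, the core op of a conv layer."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

x = np.zeros(12)
x[3:6] = [1.0, 2.0, 1.0]        # a small pattern in the signal
x_shift = np.roll(x, 2)         # the same pattern, shifted by 2 positions
w = np.array([0.5, 1.0, 0.5])   # an arbitrary filter

y = conv1d_valid(x, w)
y_shift = conv1d_valid(x_shift, w)
# Equivariance: the feature map of the shifted input is the shifted feature map.
# Invariance: a global max pool on top discards the location entirely,
# i.e. y.max() == y_shift.max().
```

This is why content-based features survive translation without any hand-crafted spatial alignment.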

Progressive Self Label Correction (ProSelfLC) for Training Robust Deep Neural Networks

For any specific discussion or potential future collaboration, please feel free to contact me.
As a young researcher, I would greatly appreciate your interest and star (citation); they mean a lot to me and my collaborators.
Paper link:

Cite our work kindly if you find it useful:
```bibtex
@article{wang2020proselflc,
    title={ {ProSelfLC}: Progressive Self Label Correction for Training Robust Deep Neural Networks},
    author={Wang, Xinshao and Hua, Yang and Kodirov, Elyor and Clifton, David A and Robertson, Neil M},
    journal={arXiv preprint arXiv:2005.03788},
    year={2020}
}
```
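As a rough sketch of the progressive self label correction idea, the training target can be seen as a convex combination of the annotated (possibly noisy) label and the model's own prediction, with trust in the prediction growing as training proceeds. The linear schedule below is a simplifying assumption of mine; the paper's actual weighting (e.g. per-sample confidence) differs, so treat this as illustrative only:

```python
import numpy as np

def progressive_target(one_hot, pred, t, T, eps_max=0.9):
    """Sketch: blend the annotated label with the model's prediction.
    t/T are current/total training iterations; trust in the prediction
    grows linearly over training (a simplifying assumption, not the
    paper's exact schedule)."""
    eps = eps_max * (t / T)
    return (1.0 - eps) * one_hot + eps * pred

oh = np.array([1.0, 0.0])        # annotated label (possibly wrong)
pred = np.array([0.2, 0.8])      # model's current prediction
early = progressive_target(oh, pred, t=0, T=100)    # trusts the label
late = progressive_target(oh, pred, t=100, T=100)   # leans toward the prediction
```

Early in training the target equals the given label; late in training a confidently contradicting prediction can effectively correct a noisy label.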

Paper Summary on Distance Metric, Representation Learning

:+1: means highly related to my personal research interests.

  1. arXiv 2020-On the Fairness of Deep Metric Learning
  2. ICCV 2019, CVPR 2020 Deep Metric Learning
  3. CVPR 2019 Deep Metric Learning
  4. Few-shot Learning
  5. Large Output Spaces
  6. Poincaré, Hyperbolic, Curvilinear
  7. Wasserstein
  8. Semi-supervised or Unsupervised Learning
  9. NeurIPS 2019-Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers

