# CVPR 2021-Progressive Self Label Correction (ProSelfLC) for Training Robust Deep Neural Networks

Keywords: Label correction and smoothing, in defense of entropy minimisation, predictive uncertainty, progressive trust of a model's knowledge.
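The core idea behind these keywords can be sketched as a corrected training target that progressively shifts trust from the (possibly noisy) annotated label to the model's own prediction. Below is a minimal, self-contained sketch, not the official implementation: the sigmoid ramp over training time, the sharpness constant `16.0`, and the entropy-based local confidence term are assumptions about the general form, and `proselflc_target` is a hypothetical helper name.

```python
import math

def proselflc_target(one_hot, pred, t, total_t, num_classes):
    """Blend a one-hot label with the model's prediction (illustrative sketch).

    one_hot:  annotated label as a one-hot list of probabilities
    pred:     model's predicted class distribution for the same example
    t:        current training step; total_t: total training steps
    """
    # Global trust: grows with training time via a sigmoid ramp centred at
    # the midpoint of training (the constant 16.0 is an assumed sharpness).
    g = 1.0 / (1.0 + math.exp(-16.0 * (t / total_t - 0.5)))
    # Local trust: confident (low-entropy) predictions are trusted more,
    # normalised by the entropy of the uniform distribution log(num_classes).
    entropy = -sum(p * math.log(p + 1e-12) for p in pred)
    l = 1.0 - entropy / math.log(num_classes)
    # Combined trust score for the model's own knowledge.
    eps = g * l
    # Corrected target: convex combination of label and prediction.
    return [(1.0 - eps) * y + eps * p for y, p in zip(one_hot, pred)]
```

Early in training (small `g`) the target stays close to the annotated label; late in training, confident self-predictions can progressively correct a wrong label, which is the "progressive trust of a model's knowledge" named above.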


# Robust DL/ML

In general, robust deep learning covers: missing labels (semisupervised learning); noisy labels (noise detection and correction); regularisation techniques; sample imbalance (long-tailed class distribution); adversarial learning; and so on.

# In deep metric learning, have the improvements over time been marginal?

Recently, the paper A Metric Learning Reality Check reported that the improvements over time have been marginal at best. Is that true? I present my personal viewpoints as follows:

• First of all, academic research progress is naturally slow, continuous, and tortuous. Moreover, it is full of flaws along the way. For example:
• In person re-identification, several years ago, some researchers vertically split an image into several parts for alignment. This works against the design of CNNs and is not meaningful: deep CNNs are designed to be invariant to translation, so hand-crafted alignment is unnecessary.

# Robust Deep Learning via Derivative Manipulation and IMAE

The source code is released for academic use only; please kindly cite our work: Derivative Manipulation and IMAE.
As a young researcher, your interest and kind citation (star) will mean a lot to me and my collaborators.
For any specific discussion or potential future collaboration, please feel free to contact me.


# Paper Summary on Label Manipulation, Output Regularisation (Optimisation tricks)

A highlighted entry means it is highly related to my personal research interest.
