Xinshao Wang, PhD Student, Queen's University Belfast, Anyvision.

Machine Learning (Deep Metric Learning, Robust Representation Learning under Adverse Conditions, e.g., Noisy Data and Sample Imbalance).

Computer Vision (Image/Video Recognition, Person Re-identification).

What am I working on now? Discussions are Welcome!

### Robust Learning and Inference under Adverse Conditions, e.g., noisy labels, noisy observations, outliers, adversaries, etc.

**Why is it important?**

DNNs can fit training examples with random labels well (see 'Understanding Deep Learning Requires Rethinking Generalization', https://arxiv.org/abs/1611.03530).

Large-scale training datasets generally contain noisy data points. Specifically and explicitly, an observation and its corresponding semantic label may not match.

Fortunately, the concept of adversarial examples has become universal/unrestricted: any example that fools a model can be viewed as an adversary, e.g., examples with noisy labels that are fitted during training, outliers that are fitted during training or receive high confidence scores at test time, and examples with small, perceptually negligible pixel perturbations that fool the model.
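As a concrete illustration of observation-label mismatch (not taken from any of the papers above), here is a minimal numpy sketch of symmetric label noise: a chosen fraction of labels is re-drawn uniformly at random, so some observations end up semantically unmatched with their labels. Note that the realised noise rate is slightly below the nominal rate, because a re-drawn label can coincide with the clean one.

```python
import numpy as np

rng = np.random.default_rng(0)

num_examples, num_classes = 1000, 10
clean_labels = rng.integers(0, num_classes, size=num_examples)

# Corrupt a nominal 40% of labels uniformly at random (symmetric label noise).
noise_rate = 0.4
flip_mask = rng.random(num_examples) < noise_rate
noisy_labels = clean_labels.copy()
noisy_labels[flip_mask] = rng.integers(0, num_classes, size=flip_mask.sum())

# Fraction of observation-label pairs that are now semantically unmatched.
actual_noise = np.mean(noisy_labels != clean_labels)
```

A model that drives its training loss to zero on `noisy_labels` has, by definition, fitted every one of these unmatched pairs.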

### Research Delivering:

- Derivative Manipulation for General Example Weighting
- IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude’s Variance Matters
- Instance Cross Entropy for Deep Metric Learning and its application in SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
- Code Release of My Recent Work: Derivative Manipulation

### Are deep models robust to massive noise intrinsically?

- No: DNNs can fit training examples with random labels well.
- You may form your own answer after reading Research Delivering, Confidence Penalty & Label Smoothing, and Output Regularisation.

### Intuitive concepts to keep in mind

Definition of abnormal examples: a training example, i.e., an observation-label pair, is abnormal when the observation and its corresponding annotated label (the learning supervision) are semantically unmatched.

Fitting of abnormal examples: when a deep model fits an abnormal example, i.e., maps an observation to a semantically unmatched label, that abnormal example can be viewed as a successful adversary, i.e., an unrestricted adversarial example.

Learning objective: a deep model is supposed to extract/learn meaningful patterns from training data while avoiding fitting any anomaly.
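One generic way to pursue this objective is example weighting: once a model has partially converged, abnormal examples tend to incur large losses, so they can be down-weighted rather than fitted. The sketch below uses a simple exponential weighting `w_i = exp(-beta * loss_i)`; this is a common heuristic shown only for intuition, and it is an assumption here, not the exact formulation used in Derivative Manipulation or IMAE.

```python
import numpy as np

def example_weights(losses, beta=1.0):
    """Assign each training example a weight that decays with its loss,
    so likely-abnormal (e.g., mislabelled) high-loss examples contribute
    less to the parameter update. Weights are normalised to sum to 1."""
    w = np.exp(-beta * np.asarray(losses, dtype=float))
    return w / w.sum()

# The third example has a much larger loss and is suspected abnormal.
losses = np.array([0.1, 0.2, 5.0])
w = example_weights(losses)
```

The weighted training loss is then `np.dot(w, losses)` instead of the plain mean, which limits how much a single fitted anomaly can steer the model.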