Xinshao Wang, PhD Student, Queen's University Belfast, Anyvision.

  • Machine Learning (Deep Metric Learning, Robust Representation Learning under Adverse Conditions, e.g., Noisy Data and Sample Imbalance).

  • Computer Vision (Image/Video Recognition, Person Re-identification).

What am I working on now? Discussions are welcome!

Robust Learning and Inference under Adverse Conditions, e.g., noisy labels, noisy observations, outliers, adversaries, etc.

Why is it important?

DNNs can fit training examples with random labels well. See 'Understanding deep learning requires rethinking generalization', https://arxiv.org/abs/1611.03530
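
A minimal sketch of this memorisation phenomenon (assuming PyTorch; the toy data, architecture, and optimiser settings below are illustrative, not taken from the paper): a small network trained on labels drawn uniformly at random can still reach near-perfect training accuracy.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 256 random "observations" with labels drawn uniformly at
# random, so there is no real pattern linking inputs to labels.
x = torch.randn(256, 32)
y = torch.randint(0, 10, (256,))

# Small over-parameterised MLP (architecture and sizes are illustrative).
model = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# The network memorises the random labels: training accuracy approaches
# 100% even though the labels carry no information about the inputs.
accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"training loss {loss.item():.4f}, training accuracy {accuracy:.2%}")
```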

Large-scale training datasets generally contain noisy training data points. Specifically and explicitly, an observation and its corresponding semantic label may not match.
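
A common way to study such mismatches in controlled experiments is to inject symmetric label noise into a clean dataset, i.e., to re-label a fraction of examples with a different class chosen at random. A minimal NumPy sketch (the helper name `corrupt_labels_symmetric` and the 40% noise rate are illustrative assumptions):

```python
import numpy as np

def corrupt_labels_symmetric(labels, num_classes, noise_rate, seed=0):
    """Flip a fraction `noise_rate` of labels to a different class chosen
    uniformly at random, simulating observation-label mismatch."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    n = len(labels)
    flip_idx = rng.choice(n, size=int(noise_rate * n), replace=False)
    for i in flip_idx:
        # Draw a wrong class uniformly from the remaining classes.
        wrong = rng.integers(num_classes - 1)
        labels[i] = wrong if wrong < labels[i] else wrong + 1
    return labels

clean = np.random.randint(0, 10, size=1000)
noisy = corrupt_labels_symmetric(clean, num_classes=10, noise_rate=0.4)
print("actual noise rate:", np.mean(clean != noisy))  # ~0.4
```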

Fortunately, the concept of adversarial examples has become universal/unrestricted: any example that fools a model can be viewed as an adversary, e.g., an example with a noisy label that is fitted well during training, an outlier that is fitted well during training or receives a high confidence score during testing, or an example with a small, perceptually negligible pixel perturbation that fools the model.
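
For the last case, a short FGSM-style sketch (assuming PyTorch; the placeholder linear model and the epsilon value are illustrative) shows how a small, perceptually negligible perturbation can be crafted from the sign of the input gradient:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Craft a small adversarial perturbation by taking one signed
    gradient step of the loss with respect to the input (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # A perceptually small change that can still flip the prediction.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a placeholder linear model and random inputs.
model = nn.Linear(32, 10)
x = torch.randn(4, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by epsilon
```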

Research question:

Are deep models intrinsically robust to massive noise?

Intuitive concepts to keep in mind

  • The definition of abnormal examples: A training example, i.e., an observation-label pair, is abnormal when the observation and its annotated label used for learning supervision are semantically unmatched.

  • Fitting of abnormal examples: When a deep model fits an abnormal example, i.e., maps an observation to a semantically unmatched label, this abnormal example can be viewed as a successful adversary, i.e., an unrestricted adversarial example.

  • Learning objective: A deep model is supposed to extract/learn meaningful patterns from the training data while avoiding fitting any anomalies; see the sketch after this list.
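
One simple way to work towards this objective is to down-weight examples whose loss the current model cannot reduce, on the assumption that, under heavy noise, the largest per-example losses are dominated by abnormal examples. The sketch below (assuming PyTorch; the exponential weighting rule is one illustrative choice, not a specific published method) replaces the usual mean cross-entropy with a weighted mean:

```python
import torch
import torch.nn as nn

def reweighted_cross_entropy(logits, targets, temperature=1.0):
    """Cross-entropy where each example is weighted by exp(-loss / T),
    so examples the model cannot explain (likely abnormal) contribute less."""
    per_example = nn.functional.cross_entropy(logits, targets, reduction="none")
    # Larger loss -> smaller weight; weights are detached so they act as
    # fixed coefficients rather than being optimised themselves.
    weights = torch.exp(-per_example.detach() / temperature)
    return (weights * per_example).sum() / weights.sum()

# Toy usage with random logits and labels.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = reweighted_cross_entropy(logits, targets)
loss.backward()
print(loss.item())
```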

