Overview
- A former postdoc at the University of Oxford.
- Google Scholar
Machine Learning: Deep Metric Learning, Robust Representation Learning under Adverse Conditions, e.g., missing labels (semi-supervised learning), noisy labels, sample imbalance, etc.
Computer Vision: Image/Video Recognition, Person Re-identification.
Academic Reviewer: TPAMI, TNNLS, Knowledge-Based Systems, AAAI, etc.
- I am now working on AI for synthetic biology, which is exciting and has huge potential.
Featured Research
Highlight: Robust Learning and Inference under Adverse Conditions, e.g., noisy labels or observations, outliers, adversaries, sample imbalance (long-tailed distributions), etc.
Why is this important? DNNs can brute-force fit training examples with random labels (non-meaningful patterns), as shown by the following work (see also the sketch after this list):
- Derivative Manipulation and IMAE
- Progressive Self Label Correction (ProSelfLC) for Training Robust Deep Neural Networks
- Understanding deep learning requires rethinking generalization
- A Closer Look at Memorization in Deep Networks
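As a minimal illustration of this memorization behaviour, here is a hedged sketch (assuming PyTorch; the synthetic data, model size, and hyperparameters are illustrative choices of mine, not taken from the papers above) showing that a small network can drive its training loss toward zero even when the labels are assigned purely at random:

```python
# Minimal sketch (not from the papers above): a small MLP memorizing random labels.
# Assumes PyTorch; data, model size, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic inputs with labels drawn uniformly at random -> no meaningful pattern to learn.
num_samples, num_classes, dim = 512, 10, 256
x = torch.randn(num_samples, dim)
y = torch.randint(0, num_classes, (num_samples,))

model = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, num_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# With enough capacity and epochs, training accuracy approaches 100% even though
# the labels are pure noise: the network memorizes rather than generalizes.
train_acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"final training loss: {loss.item():.4f}, training accuracy: {train_acc:.2%}")
```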
- Fortunately, the concept of adversarial examples has become universal/unrestricted, i.e., any example that fools a model can be viewed as an adversary. For example:
- Examples with noisy labels that are fitted well during training;
- Out-of-distribution data points that are fitted well during training or receive high confidence scores during testing;
- Examples with small, perceptually negligible pixel perturbations that fool a model.
In large-scale training datasets, noisy training data points generally exist. Specifically and explicitly, an observation and its corresponding semantic label may not match.
Are deep models intrinsically robust to massive noise?
- No: DNNs can fit training examples with random labels well.
- You may form your own answer after reading Featured Research, ProSelfLC, Confidence Penalty, Label Smoothing, and Output Regularisation; a minimal label-smoothing sketch follows below.
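For context, below is a minimal sketch of standard label smoothing, one of the output regularisers mentioned above. It is illustrative only and is not the ProSelfLC implementation; the function name and the epsilon value are my own choices. The one-hot target is mixed with a uniform distribution, discouraging the model from becoming over-confident on (possibly noisy) labels:

```python
# Minimal label-smoothing cross-entropy sketch (illustrative, not the ProSelfLC code).
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits: torch.Tensor, target: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
    """Cross-entropy against a smoothed target: (1 - epsilon) * one_hot + epsilon / num_classes."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(target, num_classes).float()
    smoothed = (1.0 - epsilon) * one_hot + epsilon / num_classes
    return -(smoothed * log_probs).sum(dim=-1).mean()

# Example usage: 4 samples, 5 classes.
logits = torch.randn(4, 5)
target = torch.tensor([0, 2, 1, 4])
print(label_smoothing_ce(logits, target))
```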
Intuitive concepts to keep in mind
The definition of abnormal examples: A training example, i.e., an observation-label pair, is abnormal when the observation and its corresponding annotated label, used for learning supervision, are semantically unmatched.
Fitting of abnormal examples: When a deep model fits an abnormal example, i.e., maps an observation to a semantically unmatched label, that example can be viewed as a successful adversary, i.e., an unrestricted adversarial example.
Learning objective: A deep model is supposed to extract/learn meaningful patterns from the training data while avoiding fitting any anomaly; a sketch of one simple selection heuristic follows below.
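As one concrete, hedged illustration of "avoid fitting anomalies", here is a sketch of the widely used small-loss selection heuristic (an assumption on my part, not a method claimed by the work above; the function name is hypothetical): keep only the lowest-loss fraction of each mini-batch for backpropagation and flag the rest as potentially abnormal observation-label pairs.

```python
# Illustrative sketch of the small-loss criterion for noisy labels (not ProSelfLC):
# keep the fraction of a mini-batch with the smallest per-example loss and
# treat the rest as potentially abnormal (observation-label mismatch).
import torch
import torch.nn.functional as F

def small_loss_selection(logits: torch.Tensor, labels: torch.Tensor, keep_ratio: float = 0.7):
    """Return the mean loss over the kept examples and the indices of suspected abnormal ones."""
    per_example_loss = F.cross_entropy(logits, labels, reduction="none")
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    sorted_idx = torch.argsort(per_example_loss)  # ascending: small losses first
    clean_idx, suspect_idx = sorted_idx[:num_keep], sorted_idx[num_keep:]
    return per_example_loss[clean_idx].mean(), suspect_idx

# Example usage on a random mini-batch of 8 samples, 5 classes.
logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss, suspects = small_loss_selection(logits, labels)
print(loss, suspects)
```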