Overview

Highlight: Robust Learning and Inference under Adverse Conditions, e.g., noisy labels or observations, outliers, adversaries, and sample imbalance (long-tailed distributions).

Why is this important? DNNs can brute-force fit training examples with random labels, i.e., memorise non-meaningful patterns:

Are deep models intrinsically robust to massive noise?
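
This memorisation behaviour can be reproduced with a tiny experiment. Below is a minimal sketch, assuming PyTorch; the synthetic data, architecture, and hyperparameters are illustrative assumptions rather than anything from this site. It shows an over-parameterised MLP driving training accuracy towards 100% on purely random labels:

```python
# Illustrative sketch: an over-parameterised network can fit random labels.
# Data, architecture, and hyperparameters here are assumptions for the demo.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 512 random "observations" paired with purely random labels:
# every pair is semantically unmatched, i.e., abnormal by construction.
num_examples, num_features, num_classes = 512, 32, 10
x = torch.randn(num_examples, num_features)
y = torch.randint(0, num_classes, (num_examples,))

# Over-parameterised MLP: far more parameters than training examples.
model = nn.Sequential(
    nn.Linear(num_features, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, num_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

train_acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy on random labels: {train_acc:.2%}")  # typically near 100%
```

Since the labels carry no semantic signal, any accuracy far above chance (10% here) can only come from memorising individual examples, not from learning meaningful patterns.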

Intuitive concepts to keep in mind

  • The definition of abnormal examples: A training example, i.e., an observation-label pair, is abnormal when the observation and its corresponding annotated label for learning supervision are semantically unmatched.

  • Fitting of abnormal examples: When a deep model fits an abnormal example, i.e., maps an observation to a semantically unmatched label, this abnormal example can be viewed as a successful adversary, i.e., an unrestricted adversarial example.

  • Learning objective: A deep model is supposed to extract/learn meaningful patterns from training data while avoiding fitting any anomalies; a sketch of one such objective follows this list.
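
As one concrete but deliberately generic illustration of an objective that resists fitting anomalies, the sketch below uses a bounded loss, MAE on class probabilities, which is known to be more robust to label noise than unbounded cross entropy. The function name and example numbers are illustrative assumptions, not this site's specific method:

```python
# Hedged sketch of a noise-robust objective: a bounded loss whose gradient
# cannot blow up on hard-to-fit, likely-abnormal examples. This is a generic
# illustration, not the specific method proposed on this site.
import torch
import torch.nn.functional as F

def bounded_mae_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """MAE between predicted class probabilities and one-hot targets.

    Because |p - q| summed over classes is at most 2, the loss an abnormal
    example can contribute is capped, so a few mismatched labels cannot
    dominate training the way they can under unbounded cross entropy.
    """
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    return (probs - one_hot).abs().sum(dim=1).mean()

# A confidently wrong prediction yields a bounded loss under MAE,
# but a very large loss (hence gradient) under cross entropy.
logits = torch.tensor([[8.0, 0.0, 0.0]])   # model is confident in class 0
wrong_label = torch.tensor([2])            # annotated label disagrees
print(bounded_mae_loss(logits, wrong_label))   # <= 2.0 by construction
print(F.cross_entropy(logits, wrong_label))    # ~8.0, grows without bound
```

The design trade-off is well known: bounding the loss limits the damage abnormal examples can do, but it also weakens the learning signal from hard-yet-clean examples, which is why much work in this area studies how to balance the two.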

