Learning Bayesian Deep Learning, Uncertainty & Variational Techniques in Blogs
What am I working on now? Discussions are welcome!
I am going to stop treating \(p(y \mid x)\) as a classification confidence metric: the softmax output is a deterministic function of the input, so \(p(y \mid x)\) is not a suitable criterion for deciding whether the model is certain or uncertain.
\(p(y \mid x)\) is a reasonable measure of how well x matches class y, but it is not a good indicator of input quality, e.g. whether x is blurred.
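To make this distinction concrete, here is a minimal sketch (assuming PyTorch; the toy classifier, layer sizes, and number of samples are illustrative placeholders) contrasting the single-pass softmax score, which is deterministic in x, with an MC-Dropout style estimate in the spirit of Gal's "Dropout as a Bayesian Approximation" listed below, where the spread of several stochastic forward passes provides an uncertainty signal.

```python
# Sketch: deterministic softmax "confidence" vs. MC-Dropout predictive entropy.
# Model architecture and sizes are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=128, num_classes=10, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.drop = nn.Dropout(p_drop)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

model = SmallClassifier()
x = torch.randn(1, 128)  # one random input, purely for illustration

# Deterministic score: a single forward pass in eval mode.
model.eval()
with torch.no_grad():
    softmax_conf = F.softmax(model(x), dim=-1).max().item()

# MC Dropout: keep dropout active at test time, average T stochastic passes,
# and use the entropy of the averaged prediction as an uncertainty estimate.
model.train()  # keeps dropout stochastic; no backward pass is performed
T = 50
with torch.no_grad():
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])
mean_prob = probs.mean(dim=0)
predictive_entropy = -(mean_prob * mean_prob.clamp_min(1e-12).log()).sum().item()

print(f"softmax confidence: {softmax_conf:.3f}")
print(f"predictive entropy (MC Dropout): {predictive_entropy:.3f}")
```

The point of the contrast: the first number is fixed once x is fixed, while the second reflects disagreement across sampled sub-networks and can stay high even when the single-pass softmax looks confident.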
Uses of Uncertainty
Blogs
- Everything that Works Works Because it’s Bayesian: Why Deep Nets Generalize?
- Yann LeCun’s Comments
- Yarin Gal's PhD Thesis
Papers on Theories
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning-ICML 2016-Yarin Gal
- Yarin Gal's PhD Thesis
- A Bayesian Perspective on Generalization and Stochastic Gradient Descent-ICLR 2018 Google Brain-Samuel L. Smith and Quoc V. Le
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization-arXiv 2020 New York University-Andrew Gordon Wilson, Pavel Izmailov
- Sharp Minima Can Generalize For Deep Nets-ICML 2017
- Theory of Deep Learning III: Generalization Properties of SGD
- On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima-ICLR 2017
- The Marginal Value of Adaptive Gradient Methods in Machine Learning-NIPS 2017
- Stochastic Gradient Descent as Approximate Bayesian Inference-JMLR 2017
- A Variational Analysis of Stochastic Gradient Algorithms-ICML 2016
- Deep Learning and the Information Bottleneck Principle
- On the Difference Between the Information Bottleneck and the Deep Information Bottleneck
- Mutual Information Neural Estimation
Papers on Applications
- Robust Person Re-Identification by Modelling Feature Uncertainty
- Probabilistic Face Embeddings
- Rethinking Person Re-Identification with Confidence
- Learning Confidence for Out-of-Distribution Detection in Neural Networks
- Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples