Wasserstein

:+1: marks papers that are highly related to my personal research interests.

NeurIPS 2019-Generalized Sliced Wasserstein Distances

NOTE: Wasserstein Distances

NeurIPS 2019-Tree-Sliced Variants of Wasserstein Distances

NOTE: Wasserstein Distances

NeurIPS 2019-Sliced Gromov-Wasserstein

NOTE: Wasserstein Distances
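
The three sliced entries above share one computational trick: project the high-dimensional distributions onto many random one-dimensional directions, where the Wasserstein distance has a closed form in terms of sorted samples, and average over directions. Below is a minimal NumPy sketch of the plain sliced Wasserstein distance that these papers build on; it assumes equal sample sizes, and the function name and parameters are illustrative, not taken from any of the papers.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, p=2, seed=None):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between
    two empirical distributions x, y of shape (n, d) (equal n assumed)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Directions drawn uniformly from the unit sphere S^{d-1}.
    thetas = rng.standard_normal((n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project both samples onto every direction: shape (n, n_projections).
    x_proj = np.sort(x @ thetas.T, axis=0)
    y_proj = np.sort(y @ thetas.T, axis=0)
    # In 1D, W_p^p between equal-size samples reduces to comparing sorted
    # samples; average over quantiles and projection directions.
    return np.mean(np.abs(x_proj - y_proj) ** p) ** (1.0 / p)
```

Roughly speaking, the variants above change the projection step (generalized nonlinear projections, tree metrics, or intra-domain distances in the Gromov case) while keeping this cheap one-dimensional reduction.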

NeurIPS 2019-Wasserstein Dependency Measure for Representation Learning

NOTE: Mutual information maximization has emerged as a powerful learning objective for unsupervised representation learning, obtaining state-of-the-art performance in applications such as object recognition, speech recognition, and reinforcement learning. However, such approaches are fundamentally limited since a tight lower bound on mutual information requires a sample size exponential in the mutual information. This limits the applicability of these approaches for prediction tasks with high mutual information, such as in video understanding or reinforcement learning. In these settings, such techniques are prone to overfit, both in theory and in practice, and capture only a few of the relevant factors of variation. This leads to incomplete representations that are not optimal for downstream tasks. In this work, we empirically demonstrate that mutual information-based representation learning approaches do fail to learn complete representations on a number of designed and real-world tasks. To mitigate these problems, we introduce the Wasserstein dependency measure, which learns more complete representations by using the Wasserstein distance instead of the KL divergence in the mutual information estimator. We show that a practical approximation to this theoretically motivated solution, constructed using Lipschitz constraint techniques from the GAN literature, achieves substantially improved results on tasks where incomplete representations are a major challenge.
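
In its dual (Kantorovich-Rubinstein) form, the Wasserstein dependency measure is the supremum, over 1-Lipschitz critics f, of E_{p(x,y)}[f(x,y)] - E_{p(x)p(y)}[f(x,y)]. Below is a minimal PyTorch sketch of that estimator, using spectral normalization as one example of the "Lipschitz constraint techniques from the GAN literature" mentioned in the abstract (the paper's exact construction may differ, e.g. a gradient penalty); `Critic` and `wdm_lower_bound` are illustrative names, not from the paper.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Critic f(x, y); spectral normalization keeps each linear layer
    approximately 1-Lipschitz, so the whole network is Lipschitz."""
    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        sn = nn.utils.spectral_norm
        self.net = nn.Sequential(
            sn(nn.Linear(x_dim + y_dim, hidden)), nn.ReLU(),
            sn(nn.Linear(hidden, hidden)), nn.ReLU(),
            sn(nn.Linear(hidden, 1)),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def wdm_lower_bound(critic, x, y):
    """Kantorovich-Rubinstein estimate of W(p(x,y), p(x)p(y)):
    E_{p(x,y)}[f] - E_{p(x)p(y)}[f]. The product of marginals is
    approximated by shuffling y within the minibatch."""
    joint = critic(x, y).mean()
    marginal = critic(x, y[torch.randperm(y.size(0))]).mean()
    return joint - marginal  # maximize this w.r.t. the critic's parameters
```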

