Learning from Unlabeled Videos for Recognition, Prediction, and Control

Published on 15 Sep 2021, 18:46
Deep learning has brought tremendous progress to visual recognition, thanks to large labeled datasets and fast compute. To transfer such success to our daily life, we still need to develop machine intelligence that recognizes hierarchical, composite human activities and predicts how events unfold over time. These tasks are often too rich to be discretized into categorical labels, or too ambiguous to be manually labeled by humans, making standard supervised deep learning unfit for them.
In this talk, I will introduce several recent works on learning rich semantic and dynamic information from unlabeled videos. The first part of the talk focuses on recognition, where the goal is to learn temporally aware visual representations via self-supervised learning. I will discuss the principles of view construction for contrastive learning, how the vanilla contrastive learning objective loses temporal information, and how to fix it. In the second part of my talk, I will describe our work on predicting key future moments over longer time horizons, using only narrated videos as training signals. Finally, I will show how multimodal representation learning leads to agents that better navigate and interact with their environment by following human instructions.
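For context on the issue the first part of the talk raises, here is a minimal NumPy sketch of the standard InfoNCE contrastive objective (an assumption for illustration, not the speaker's exact formulation). When the two "views" of a video are clips sampled at different times, minimizing this loss pulls their embeddings together, so the representation is encouraged to be invariant to time, which is one way the vanilla objective can discard temporal information.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of paired views.

    z1, z2: (N, D) embeddings; z1[i] and z2[i] form the positive pair,
    all other rows of z2 act as negatives for z1[i].
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives lie on the diagonal
```

If the positive pairs are already aligned (e.g. identical embeddings), the loss is near zero; for unrelated pairs it approaches log N. Nothing in the objective distinguishes an earlier clip from a later one, which motivates the temporally aware fixes the talk describes.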

Speaker: Chen Sun, Brown University

MSR Deep Learning team: microsoft.com/en-us/research/g...