Frontiers in Machine Learning: Machine Learning Reliability and Robustness

Published on 28 Jul 2020, 20:10
As Machine Learning (ML) systems increasingly become part of user-facing applications, their reliability and robustness are key to building and maintaining trust with users and customers, especially in high-stakes domains. While advances in learning continuously improve model performance in expectation, there is an emerging need to identify, understand, and mitigate cases where models may fail in unexpected ways. This session discusses ML reliability and robustness from both a theoretical and an empirical perspective. In particular, it aims to summarize important ongoing work that focuses on reliability guarantees, as well as on how such guarantees translate (or fail to translate) to real-world applications. Further, the talks and the panel will discuss (1) properties that make some ML algorithms preferable to others from a reliability and robustness lens, such as interpretability, consistency, and transportability, and (2) the tooling support ML developers need to check and build for reliable and robust ML. The discussion will be grounded in real-world applications of ML in vision and language tasks, healthcare, and decision making.

Session Lead: Besmira Nushi, Microsoft

Speaker: Thomas Dietterich, Oregon State University
Talk Title: Anomaly Detection in Machine Learning and Computer Vision

Speaker: Ece Kamar, Microsoft
Talk Title: AI in the Open World: Discovering Blind Spots of AI

Speaker: Suchi Saria, Johns Hopkins University
Talk Title: Implementing Safe & Reliable ML: 3 key areas of development

Q&A panel with all 3 speakers

See more on-demand sessions from Microsoft Research's Frontiers in Machine Learning 2020 virtual event.