Interpreting ML models with explainable AI

Published on 15 Sep 2020, 16:34
We often trust high-accuracy ML models to make decisions for our users, but it’s hard to know exactly why or how these models reach specific conclusions. Explainable AI provides a suite of tools to help you interpret your ML model’s predictions. Listen to this discussion on how to use Explainable AI to ensure your ML models are treating all users fairly. Watch a presentation on how to analyze image, text, and tabular models from a fairness perspective, using Explanations on AI Platform. Finally, learn how to use the What-If Tool, an open-source visualization tool for optimizing your ML model’s performance and fairness.
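For orientation, here are two hedged sketches of the tools mentioned above. First, a minimal example of requesting feature attributions from a model deployed on AI Platform, assuming google-api-python-client is installed and the deployed model version was configured with an explanation method; the project, model, and version names and the sample instance are placeholders, and the exact response shape depends on how the model was deployed.

    # Minimal sketch: request explanations (feature attributions) from a
    # model deployed on AI Platform. Assumes the model version was deployed
    # with an explanation method (e.g. sampled Shapley or integrated gradients).
    import googleapiclient.discovery

    def explain(project, model, version, instances):
        service = googleapiclient.discovery.build('ml', 'v1')
        name = 'projects/{}/models/{}/versions/{}'.format(project, model, version)
        response = service.projects().explain(
            name=name, body={'instances': instances}).execute()
        if 'error' in response:
            raise RuntimeError(response['error'])
        return response['explanations']

    # Placeholder project/model/version names and a toy tabular instance.
    attributions = explain('my-project', 'my_model', 'v1',
                           [{'age': 34, 'income': 52000.0}])

Second, a minimal sketch of launching the What-If Tool inside a Jupyter notebook (pip install witwidget tensorflow); the toy tf.Examples and the constant scorer below are stand-ins for a real dataset and model.

    import tensorflow as tf
    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

    def make_example(age, income):
        # Pack one record as a tf.train.Example, the format the widget consumes.
        return tf.train.Example(features=tf.train.Features(feature={
            'age': tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
            'income': tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        }))

    examples = [make_example(34.0, 52000.0), make_example(51.0, 38000.0)]

    def predict_fn(examples):
        # Any callable mapping a list of examples to class scores works here;
        # this constant scorer stands in for a real model.
        return [[0.3, 0.7] for _ in examples]

    config_builder = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
    WitWidget(config_builder, height=600)  # renders the interactive widget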

Speaker: Sara Robinson

Watch more:
Google Cloud Next ’20: OnAir → goo.gle/next2020

Subscribe to the GCP Channel → goo.gle/GCP

#GoogleCloudNext

AI218
product: Explainable AI, Cloud AutoML, AI Platform Training; fullname: Sara Robinson;