Behind the Black Box: How to Understand Any ML Model Using SHAP

Jonathan Bechtel
Details
This workshop explains how to take the highest-performing ML models, such as gradient boosting ensembles and neural networks, and understand what contributes to their predictions at both a local and a global level, so that their output is easily understood by practitioners and non-practitioners alike.
Description
Part 1: An Introduction to Interpretable ML
- Why interpretability concerns cause lower-performing models to be used more often than they ought to be
- The current state of methods for interpreting non-linear ML models, and their major shortcomings
- What's currently missing in the toolkit for understanding black box ML models
Part 2: An Introduction to SHAP
- A brief history of understanding black box models, and how it led to the need for SHAP
- Why SHAP is a theoretically sound application of game theory to understand any ML model, regardless of how it generates predictions
- A close look at SHAP's source code to understand how it computes its results (a minimal usage sketch follows this list)
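As a preview of what that looks like in practice, here is a minimal sketch of the typical `shap` workflow for a tree-based model; the dataset and model choices are illustrative assumptions, not the workshop's own materials:

```python
# Minimal illustrative sketch: explaining a gradient boosting model with shap.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Fit a simple non-linear classifier on a toy dataset (chosen only for illustration)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact Shapley-value attributions for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)  # one additive contribution per feature, per row

# Global view: rank features by their impact across the whole dataset
shap.plots.beeswarm(shap_values)
```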
Part 3: SHAP in the Wild
- How to derive local explanations for a single model prediction (or how to be more like linear regression); a sketch follows this list
- Creating odds ratios from SHAP values
- Using SHAP to understand feature interaction effects among correlated data
- Using SHAP with neural networks and unstructured data: understanding word contributions to a Transformer NLP model (a text-model sketch also follows this list)
- Examples of SHAP being used in production
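To give a concrete flavor of the local-explanation and odds-ratio topics above, here is a self-contained sketch; the dataset, model, and the log-odds-to-odds-ratio reading are illustrative assumptions rather than the workshop's exact examples:

```python
# Illustrative sketch of a local explanation and an odds-ratio reading for a
# single prediction; the dataset and model are assumptions, not workshop code.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

# Explain one row: each feature gets an additive contribution (in log-odds
# space for this classifier) that moves the prediction away from the baseline.
explanation = explainer(X.iloc[[0]])
shap.plots.waterfall(explanation[0])

# Because the contributions are additive in log-odds, exponentiating a SHAP
# value gives an approximate multiplicative effect on the odds (an odds ratio).
odds_ratios = np.exp(explanation.values[0])
top = sorted(zip(X.columns, odds_ratios), key=lambda t: abs(np.log(t[1])), reverse=True)
for name, ratio in top[:5]:
    print(f"{name}: odds multiplied by {ratio:.2f}")
```

For the Transformer NLP use case, the pattern from the `shap` documentation looks roughly like this; the specific sentiment pipeline, example sentence, and class label are assumptions:

```python
# Illustrative sketch of word-level attributions for a Transformer text model,
# following the pattern shown in the shap documentation.
import shap
import transformers

# A stock sentiment-analysis pipeline stands in for whatever text model is used
classifier = transformers.pipeline("sentiment-analysis", return_all_scores=True)

explainer = shap.Explainer(classifier)
shap_values = explainer(["This workshop made black box models far less mysterious."])

# Show how much each token pushed the prediction toward the POSITIVE class
shap.plots.text(shap_values[0, :, "POSITIVE"])
```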
While this event is FREE, tickets are required & space is limited!