The Language Interpretability Tool: Interactive analysis of NLP models

The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.

It allows users to ask questions such as “What kind of examples does my model perform poorly on?”, “Can my model’s prediction be attributed to adversarial behavior, or undesirable priors from the training set?”, and “Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?”.
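
For a sense of how the tool is used in practice, here is a minimal sketch of launching a local LIT instance, loosely following the usage pattern documented in the open-source repository. FooModel and FooDataset are hypothetical placeholders for your own lit_nlp model and dataset wrappers, and exact module paths and flags may vary between LIT versions.

# Minimal sketch, assuming the lit_nlp package is installed.
# FooModel and FooDataset are hypothetical stand-ins for user-defined
# lit_nlp Model and Dataset subclasses.
from lit_nlp import dev_server
from lit_nlp import server_flags

# Name -> instance mappings exposed in the LIT UI.
models = {"my_classifier": FooModel()}      # hypothetical model wrapper
datasets = {"my_eval_set": FooDataset()}    # hypothetical dataset wrapper

# Start the interactive web app and serve it in the browser.
lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
lit_demo.serve()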

In this talk, I will present the motivation behind the tool, show some demos, and introduce related work from Google's People + AI Research (PAIR) team.

About the speaker

James Wexler

Staff Software Engineer at Google

James Wexler is a staff software engineer on the People + AI Research team at Google. His work centers on researching and building tools that help people better understand machine learning models.

He has built a number of open-source visualization tools for analysis and debugging of machine learning models, such as the Language Interpretability Tool, the What-If Tool, and Facets.