The Language Interpretability Tool: Interactive analysis of NLP models

The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.

It allows users to ask questions such as "What kind of examples does my model perform poorly on?", "Can my model's prediction be attributed to adversarial behavior, or to undesirable priors from the training set?", and "Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?"

In this talk I will present the motivation behind the tool along with some demos, and introduce related work from the Google People + AI Research (PAIR) team.

Connecting the Dots in Clinical Document Understanding and Information Extraction

Electronic health records (EHRs) are the primary source of information for clinicians tracking the care of their patients. Due to inherent obstacles...