Applying off-the-shelf NLP in healthcare? More dangerous than you think

Transformers have come to define what NLP means in 2021. They are not only black-box decision-makers; they also interpret free text in counter-intuitive ways. This introduces hidden risk, especially for mission-critical NLP applications such as healthcare.

In this talk, we will see how this behavior can make your systems go sideways in production.

I’ll also cover how even simpler models and preprocessing techniques exhibit these failure modes if not applied with due diligence. We’ll study their implications in the context of NLP on healthcare data.
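As a small, hypothetical illustration of how even basic preprocessing can go wrong on clinical text (the function and example note below are my own, not from the talk): a typical off-the-shelf cleanup step of lowercasing and stripping punctuation silently destroys clinically meaningful abbreviations.

```python
import re

def naive_preprocess(text):
    """Typical off-the-shelf cleanup: lowercase, strip punctuation."""
    return re.sub(r"[^a-z0-9\s]", "", text.lower())

# Hypothetical clinical note fragment.
note = "Pt denies MS; family hx of AD."
cleaned = naive_preprocess(note)
print(cleaned)  # "pt denies ms family hx of ad"

# After lowercasing, "MS" (multiple sclerosis) is indistinguishable from
# "ms" (milliseconds) and "AD" (Alzheimer's disease) from the everyday
# word "ad" -- the signal that these tokens were clinical abbreviations
# is gone before the model ever sees the text.
```

The point is not that lowercasing is always wrong, but that defaults tuned for general-domain text can discard exactly the distinctions a healthcare model depends on.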

Lastly, I will cover guardrails you can put in place both before and after you train your models, to make sure the models you deploy in the wild are not only ethically sound but also trustworthy.

Sending BERT to Med School – Injecting Medical Knowledge into BERT

General NLP research has greatly advanced over the past several years thanks to large pre-trained neural language models such as BERT and...