Language-Based Pre-training for Drug Discovery

Pretraining has taken the NLP world by storm as ever-larger language models have broken successive benchmarks.

In this talk, I’ll review some recent work applying pretraining to scientific problems, focusing in particular on the challenges of pretraining for molecular machine learning.

I’ll introduce our new architecture, ChemBERTa, which explores the use of BERT-style pretraining for machine learning problems inspired by drug discovery applications.
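The abstract doesn’t spell out the pretraining objective, but BERT-style pretraining conventionally means masked-language modeling: randomly mask tokens and train the model to recover them. A minimal sketch of that corruption step applied to a SMILES string (the `mask_smiles_tokens` helper and character-level tokenization are illustrative assumptions, not the ChemBERTa implementation):

```python
import random

def mask_smiles_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """BERT-style corruption: replace a fraction of tokens with [MASK],
    recording the original tokens as prediction targets."""
    rng = random.Random(seed)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # model must predict this token
            corrupted.append(mask_token)
        else:
            corrupted.append(tok)
    return corrupted, targets

# Character-level tokenization of a SMILES string (aspirin)
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"
tokens = list(smiles)
corrupted, targets = mask_smiles_tokens(tokens, seed=42)
```

During pretraining, the model sees only `corrupted` and is scored on how well it reconstructs the entries in `targets`; no molecular-property labels are needed at this stage.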

Learning to Summarize Visits from Doctor-Patient Conversations

Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes...