The compute required for deep learning research has been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. This trend has led to unprecedented success...
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language...
During this talk we will present Stanford CRFM’s efforts to train foundation models for biomedical NLP, culminating in a 2.7B parameter GPT-style model trained on PubMed abstracts and documents in...
While much work has gone into defining guidelines and policies for Responsible AI, far fewer of them can be applied by data scientists today to build safe, fair, and...
Visual Question Answering is emerging as a valuable tool for NLP practitioners. New “OCR-Free” models deliver better accuracy than ever before for information extraction from forms, reports, receipts, tickets, and...