How To Train BERT 15x Faster

While state-of-the-art NLP models are very powerful, they also require massive computational resources to train.

Access to GPUs is increasingly necessary for modern NLP teams, but it frequently comes with headaches: sharing a GPU cluster is difficult, and porting your code to distributed training is a hassle.

Consequently, many deep learning teams spend more time on DevOps than they do on deep learning.
