the Enterprise
Widely deployed production-grade codebase.
New releases every 2 weeks since 2017.
Growing community.
First production-grade versions of novel deep learning NLP research.
Start from pre-trained models and train them further to fit your data.
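As a minimal sketch of reusing a pre-trained model (Python API; the "explain_document_dl" pipeline name is one illustrative choice from the public model hub):

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start a local SparkSession with Spark NLP loaded.
spark = sparknlp.start()

# Download a pretrained pipeline once and run it on plain text.
pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("Spark NLP ships pretrained models you can reuse or fine-tune.")
print(list(result.keys()))
```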
Spark NLP was 80x faster than spaCy to train locally on 2.6MB of data.
Scale to a Spark cluster with zero code changes.
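The same pipeline code runs unchanged on a laptop or a cluster; only the SparkSession it attaches to differs. A hedged sketch (the example DataFrame is illustrative):

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer
from pyspark.ml import Pipeline

# Local session here; on a cluster the session comes from spark-submit instead.
spark = sparknlp.start()

data = spark.createDataFrame([["Scale out without rewriting the pipeline."]], ["text"])

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

model = Pipeline(stages=[document, tokenizer]).fit(data)
model.transform(data).select("token.result").show(truncate=False)
```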
Spark NLP delivers the highest accuracy on multiple public academic benchmarks.
One example is the F1 score for the Named Entity Recognition task on the CoNLL 2003 dataset.
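A hedged sketch of training an NER model on CoNLL-formatted data (the training file path is a placeholder; the GloVe embeddings model is one illustrative choice):

```python
import sparknlp
from sparknlp.training import CoNLL
from sparknlp.annotator import WordEmbeddingsModel, NerDLApproach
from pyspark.ml import Pipeline

spark = sparknlp.start()

# CoNLL() reads the standard CoNLL 2003 format into document/sentence/token/pos/label columns.
training_data = CoNLL().readDataset(spark, "path/to/eng.train")

embeddings = WordEmbeddingsModel.pretrained("glove_100d") \
    .setInputCols(["sentence", "token"]).setOutputCol("embeddings")

ner = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setLabelColumn("label") \
    .setOutputCol("ner") \
    .setMaxEpochs(5)

ner_model = Pipeline(stages=[embeddings, ner]).fit(training_data)
```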
Zero code changes are needed to scale a pipeline to any Spark cluster.
Optimized builds for the latest chips from Intel (CPU), Nvidia (GPU), Apple (M1/M2), and AWS (Graviton) enable the fastest training & inference of state-of-the-art models.
This benchmark compares image-transformer inference speed on the 34k ImageNet dataset on a single machine: Spark NLP is 34% faster than Hugging Face on a single CPU and 51% faster on a single GPU.
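As a rough sketch of what image-transformer inference looks like with Spark NLP's ViT annotator (the image directory is a placeholder, and the default pretrained model name is assumed):

```python
import sparknlp
from sparknlp.base import ImageAssembler
from sparknlp.annotator import ViTForImageClassification
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Load raw images into a Spark DataFrame.
images = spark.read.format("image").option("dropInvalid", True).load("path/to/images")

image_assembler = ImageAssembler().setInputCol("image").setOutputCol("image_assembler")

classifier = ViTForImageClassification.pretrained() \
    .setInputCols(["image_assembler"]).setOutputCol("class")

predictions = Pipeline(stages=[image_assembler, classifier]).fit(images).transform(images)
predictions.select("image.origin", "class.result").show(truncate=False)
```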
Spark NLP is optimized for training domain-specific NLP models, so you can adapt it to learn the nuances of jargon and documents you must support.