SelectData interprets millions of patient stories with deep-learning OCR and NLP

SelectData Case Study

Many businesses still depend on documents stored as images—from receipts, manifests, invoices, medical reports, and ID cards snapped with mobile phone cameras to contracts, waivers, leases, forms, and audit records digitized with scanners. Extracting high-quality data from these images comes with three challenges.

The first is OCR: reading, say, a crumpled receipt photographed from an angle in a dimly lit room.

The second is NLP: extracting normalized values and entities from the recognized text.

The third is building predictors or recommenders that suggest the best next action, and in particular can handle missing, wrong, or conflicting information produced by the previous steps.
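The three stages above can be sketched as a toy pipeline. Everything here is illustrative rather than SelectData's actual system: the OCR stage is stubbed out (in production a deep-learning OCR engine would convert page images to text), the NLP stage is a simple regex-based date extractor, and the prediction stage is a rule that tolerates missing or conflicting values from the earlier stages.

```python
import re
from typing import Optional

# Stage 1 (OCR), stubbed: pretend the "image" is already text.
def ocr(page_image: str) -> str:
    return page_image

# Stage 2 (NLP): extract a normalized value from the raw text.
DATE_RE = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")

def extract_visit_date(text: str) -> Optional[str]:
    m = DATE_RE.search(text)
    if not m:
        return None  # OCR may have garbled or dropped the field
    month, day, year = m.groups()
    return f"{year}-{month}-{day}"  # normalize to ISO 8601

# Stage 3 (prediction): suggest a next action while tolerating
# missing or conflicting extractions from the earlier stages.
def next_action(dates: list[Optional[str]]) -> str:
    found = {d for d in dates if d is not None}
    if not found:
        return "request-rescan"          # nothing extracted at all
    if len(found) > 1:
        return "flag-for-human-review"   # conflicting extractions
    return "file-record"                 # consistent extraction

pages = ["Visit date: 03/14/2021", "Date of visit 03/14/2021", "illegible"]
dates = [extract_visit_date(ocr(p)) for p in pages]
print(next_action(dates))  # two pages agree, one is missing -> "file-record"
```

The point of the sketch is the last stage: a document pipeline is only as reliable as its handling of upstream errors, so the decision logic must distinguish "no data", "conflicting data", and "consistent data" rather than assume clean input.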


This case study illustrates an AI system that reads millions of pages of patient information, gathered from hundreds of sources, resulting in a great variety of image formats, templates, and quality. It explores the solution architecture and key lessons learned in going from raw images to a deployed predictive workflow based on facts extracted from the scanned documents.


The good news is that state-of-the-art deep learning techniques can now approach human accuracy in these three tasks—and do so at scale.