
SelectData interprets millions of patient stories with deep-learned OCR and NLP

Many businesses still depend on documents stored as images—from receipts, manifests, invoices, medical reports, and ID cards snapped with mobile phone cameras to contracts, waivers, leases, forms, and audit records digitized with scanners. Extracting high-quality data from these images comes with three challenges.

The first is OCR: think of a crumpled receipt photographed at an angle in a dimly lit room.

The second is NLP: extracting normalized values and entities from the natural-language text.

The third is building predictors or recommendations that suggest the best next action and, in particular, can cope with missing, wrong, or conflicting information produced by the previous steps.
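To make the second challenge concrete, here is a minimal, stdlib-only sketch of value normalization. It is not the system described in this case study (which uses trained NLP models); the field names and the `MRN` pattern are purely illustrative. The point is that extraction must tolerate loosely formatted input while emitting one canonical output format:

```python
import re
from datetime import datetime

def extract_facts(ocr_text):
    """Pull a few normalized fields out of noisy OCR text.

    Toy example: production systems use trained NER models,
    but the normalization step looks much like this -- match
    loosely, then emit one canonical format (ISO dates here).
    """
    facts = {}
    # Dates may appear as 04/03/1961, 4-3-1961, etc.
    m = re.search(r"\b(\d{1,2})[/-](\d{1,2})[/-](\d{4})\b", ocr_text)
    if m:
        month, day, year = map(int, m.groups())
        facts["dob"] = datetime(year, month, day).date().isoformat()
    # A made-up medical-record-number pattern, for illustration only.
    m = re.search(r"MRN[:\s]*([A-Z0-9]{6,})", ocr_text)
    if m:
        facts["mrn"] = m.group(1)
    return facts

result = extract_facts("Pt DOB 04/03/1961  MRN: A93K2214")
# result == {'dob': '1961-04-03', 'mrn': 'A93K2214'}
```

Whatever the matcher (regex here, a neural model in practice), funnelling every source through one canonical representation is what makes documents from hundreds of sources comparable downstream.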

This case study illustrates an AI system that reads millions of pages of patient information gathered from hundreds of sources, and therefore arriving in a wide variety of image formats, templates, and quality levels. It explores the solution architecture and key lessons learned in going from raw images to a deployed predictive workflow based on facts extracted from the scanned documents.
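The third challenge, dealing with missing, wrong, or conflicting extractions, can be sketched as a confidence-weighted vote across sources. This is an assumption-laden toy, not the case study's actual method; the field names and confidence scores are invented for illustration:

```python
from collections import defaultdict

def reconcile(extractions):
    """Pick one value per field from conflicting extractions.

    `extractions` is a list of (field, value, confidence) triples
    coming from different documents or pipeline stages; the value
    with the highest total confidence wins.
    """
    scores = defaultdict(float)
    for field, value, conf in extractions:
        if value is not None:          # skip missing values
            scores[(field, value)] += conf

    resolved = {}
    for (field, value), score in scores.items():
        best = resolved.get(field)
        if best is None or score > best[1]:
            resolved[field] = (value, score)
    return {field: value for field, (value, _) in resolved.items()}

facts = reconcile([
    ("dob", "1961-04-03", 0.9),
    ("dob", "1961-04-08", 0.4),   # OCR misread a digit
    ("dob", "1961-04-03", 0.7),
    ("gender", None, 0.0),        # missing from one source
    ("gender", "F", 0.8),
])
# facts == {'dob': '1961-04-03', 'gender': 'F'}
```

A real deployment would also flag fields whose winning score falls below a threshold for human review rather than silently choosing a value.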
