Builders and buyers of AI systems are required to test and show that their systems comply with legislation – on safety, discrimination, privacy, transparency, and accountability. This talk covers recent regulation in this space, limitations of current Generative AI models, and an automated testing framework that mitigates them. We describe the open-source LangTest library, which can automate the generation and execution of more than 100 types of Responsible AI tests. We then introduce Pacific AI, which provides a no-code interface to this capability for domain experts, and which automates many best practices for how these tools should be used.
What is Clinical Data Abstraction? Creating large-scale structured datasets containing precise clinical information on patient itineraries is a vital tool for medical care providers, healthcare insurance companies, hospitals, medical research,...
Healthcare NLP employs advanced filtering techniques to refine entity recognition by excluding irrelevant entities based on specific criteria like whitelists or regular expressions. This approach is essential for ensuring precision...
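The filtering idea above can be sketched in plain Python. This is an illustrative example, not the Healthcare NLP API: the function name `filter_entities` and the `whitelist`/`pattern` parameters are assumptions chosen for clarity, showing how a whitelist of labels and a regular expression can jointly exclude irrelevant entities.

```python
import re

def filter_entities(entities, whitelist=None, pattern=None):
    """Keep entities whose label is in `whitelist` (if given) and whose
    text fully matches the regular expression `pattern` (if given).
    Hypothetical helper illustrating whitelist/regex entity filtering."""
    kept = []
    for ent in entities:
        if whitelist is not None and ent["label"] not in whitelist:
            continue  # label not on the whitelist: drop the entity
        if pattern is not None and not re.fullmatch(pattern, ent["text"]):
            continue  # text does not match the regex: drop the entity
        kept.append(ent)
    return kept

entities = [
    {"text": "aspirin", "label": "DRUG"},
    {"text": "2021-05-04", "label": "DATE"},
    {"text": "headache", "label": "SYMPTOM"},
]

# Keep only DRUG entities whose text is lowercase letters.
print(filter_entities(entities, whitelist={"DRUG"}, pattern=r"[a-z]+"))
# → [{'text': 'aspirin', 'label': 'DRUG'}]
```

In a real pipeline the same criteria would be configured on the entity-filtering stage rather than applied in post-processing, but the precision/recall trade-off is identical: tighter whitelists and patterns raise precision at the cost of recall.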
In this notebook, RoBertaForQuestionAnswering was used for versatile Named Entity Recognition (NER) without extensive domain-specific training. This blog post walks through the ZeroShotNerModel implementation and explores its ability to adapt...
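The core idea of zero-shot NER with a question-answering model can be sketched as follows. This is a minimal plain-Python illustration of the mechanism, not the ZeroShotNerModel implementation: each entity label is mapped to a natural-language question, and the QA model's answer span becomes the extracted entity. The names `zero_shot_ner`, `label_questions`, and `toy_answer` are assumptions, and `toy_answer` stands in for a real RoBERTa QA model.

```python
def zero_shot_ner(text, label_questions, answer_fn):
    """Reframe NER as extractive QA: for each (label, question) pair,
    ask `answer_fn(question, text)` and record the answer span as an
    entity of that label. Illustrative sketch, not a library API."""
    entities = []
    for label, question in label_questions.items():
        span = answer_fn(question, text)
        if span:  # empty answer means the model found no such entity
            entities.append({"label": label, "text": span})
    return entities

def toy_answer(question, context):
    """Stand-in for a QA model; a real model would return the
    highest-scoring answer span from `context`."""
    if "drug" in question:
        return "ibuprofen"
    return ""

text = "The patient was given ibuprofen for pain."
questions = {"DRUG": "Which drug was given?"}
print(zero_shot_ner(text, questions, toy_answer))
# → [{'label': 'DRUG', 'text': 'ibuprofen'}]
```

Because the entity types live in the question templates rather than in the model's output layer, new labels can be added at inference time by writing new questions, with no domain-specific retraining.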