The release of Generative AI Lab 7.4 brought new features to enhance annotation and configuration, as covered in Part 1.
Now, in Part 2, we explore the practical improvements that make your daily work with LLM projects smoother and more efficient. These updates focus on analytics, prompt imports, LLM integrations, and project setup — designed to save time and improve collaboration across teams.
To illustrate, let’s follow a data science team at a large healthcare organization as it tackles a large-scale LLM evaluation project. Their goal is to assess model performance on patient outcome predictions, with multiple team members involved. Here’s how the recent improvements support their work.
Enhanced Analytics for Clearer Insights
The team starts by reviewing their LLM project’s progress. Previously, analytics struggled to show detailed results when evaluations included multiple rating systems or complex labels. With Generative AI Lab 7.4, the Analytics page now supports multiple rating sections, HypertextLabels, and Choices within evaluation blocks. Chart titles are always visible, and if no data is available, a subtitle reads “No data available yet,” avoiding confusion.
For example, when the team analyzes ratings on accuracy and relevance for 300 patient prediction responses, they see clear visualizations of HypertextLabels marking specific medical terms and Choices reflecting reviewer decisions. This helps them identify patterns, such as a model’s tendency to overpredict certain outcomes, enabling faster adjustments to improve accuracy.
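Teams that also want to slice these results outside the UI can reproduce the same kind of summary with a short script over an exported evaluation file. The sketch below is illustrative only: the file name and the “ratings”/“choices” fields are assumptions about the export layout, not the actual Generative AI Lab export schema.

```python
# Hypothetical example: summarizing exported evaluation results outside the UI.
# The export structure assumed here (a JSON list of responses with "ratings"
# and "choices" fields) is for illustration only, not the product's schema.
import json
from collections import Counter, defaultdict

with open("evaluation_export.json") as f:   # assumed export file name
    responses = json.load(f)

rating_totals = defaultdict(list)   # criterion -> list of scores
choice_counts = Counter()           # reviewer decision -> count

for item in responses:
    for criterion, score in item.get("ratings", {}).items():
        rating_totals[criterion].append(score)
    choice_counts.update(item.get("choices", []))

for criterion, scores in rating_totals.items():
    print(f"{criterion}: mean {sum(scores) / len(scores):.2f} over {len(scores)} responses")
print("Reviewer decisions:", dict(choice_counts))
```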
Simplified Prompt Imports for Efficiency
Next, the team needs to import a new set of 500 test prompts to compare model versions. With 7.4, they can use a simple JSON format like
{ "text": "Your Prompt Here" }
or a CSV file with the same data. A downloadable sample JSON from the import page guides them, reducing setup errors.
They upload the CSV in minutes, focusing on analyzing model quality rather than data formatting. This consistency across LLM Evaluation and Comparison projects (for Text and HTML types) lets them scale testing without extra overhead, keeping their project on track.
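If the prompts start out in an internal spreadsheet, a few lines of scripting can produce the import file. This is a sketch under assumptions: the “prompt” column name and the idea that the import file is a JSON array of { "text": ... } objects are illustrative; the downloadable sample from the import page shows the exact layout.

```python
# Illustrative sketch: converting an in-house CSV of prompts into the JSON
# import format shown above. The "prompt" column name and the array-of-objects
# layout are assumptions; check the downloadable sample for the exact format.
import csv
import json

prompts = []
with open("patient_prompts.csv", newline="") as f:   # assumed source file
    for row in csv.DictReader(f):
        prompts.append({"text": row["prompt"]})

with open("prompts_import.json", "w") as f:
    json.dump(prompts, f, indent=2)

print(f"Wrote {len(prompts)} prompts to prompts_import.json")
```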
Streamlined LLM Integration with Admin Approval
Integrating a new language model such as Claude used to be a multi-step process. Now the team simply selects Claude on the Configuration page and submits a request for admin approval. Once approved, Claude becomes available for response generation across the project. Admins can revoke access if needed, maintaining control without disrupting other models. User-created ad-hoc providers are also listed, improving visibility.
In practice, the team requests Claude for synthetic data generation. The admin approves within a day, and they start generating responses immediately. This cuts setup delays, allowing them to meet a tight deadline for a client presentation.
Flexible Project Setup Without Delays
The team often needs to start projects quickly to explore ideas. With Generative AI Lab 7.4, the project configuration wizard lets them skip external LLM setup. They create a custom LLM, customize labels, and view initial analytics without waiting for external service integration.
For instance, they set up a pilot project to test label structures for a new evaluation type. Skipping the LLM config saves them a week of coordination with IT, letting them refine labels and gather early insights before full deployment.
Why These Improvements Matter
The enhancements address real-world challenges in LLM projects today. They save time, improve decision-making, and support collaboration — critical for both the healthcare sector and enterprise teams. Whether it’s faster imports, clearer analytics, or easier integrations, Generative AI Lab helps you focus on results.
Follow for updates on Generative AI Lab and more.