
    Revenue Cycle Optimization with AI: Streamlining Billing Processes


    Preventing the preventable: how smart AI systems can reduce claim denials

    The envelope arrived on a Friday afternoon.

    Elaine Carter had just returned from her second round of physical therapy, her body still aching in familiar ways that were now a constant part of life after breast cancer surgery. The soft blue logo of her insurance carrier was unmistakable. She sat down at the kitchen table, peeled the flap open, and unfolded the single-page letter. Her eyes scanned past the formal greeting, the legal language, and then stopped cold.

    “Your recent treatment with Kadcyla (trastuzumab-emtansine) has been denied. This service was not covered under your plan due to lack of prior authorization.”

    The words didn’t register at first. Denied? She had shown up to her appointment. Her oncologist had prescribed it. The infusion center had administered it without question. How could something be denied after the fact?

    Her fingers tightened around the paper. Beneath the formal phrasing was a quiet but unmistakable suggestion: she might be on the hook for more than $9,000. For a single dose.

    It wasn’t just the number. It was the feeling of falling through the cracks of a system she had trusted. A system that had delivered her diagnosis, her surgery, her chemotherapy, now suddenly pausing, shifting the burden back to her. Elaine had fought to keep her sense of control through the uncertainty of cancer, and this letter was a small but deeply personal rupture. It made her question whether the care she was receiving was truly coordinated, or just a chain of disconnected actions.

    She called the billing office. The woman on the other end of the line was kind, but the answer was frustratingly vague: “It looks like the authorization hadn’t been approved in time. We’re working on an appeal.” Elaine hung up without knowing what that really meant, or whether the next dose would go forward.

    She wasn’t angry. Not yet. But she was shaken. And beneath the unease was a question she couldn’t put into words: If this happened once, could it happen again?

    Meanwhile, just down the hall in the administrative wing of the hospital, the claim had already surfaced in the weekly denial report.

    From the perspective of the revenue cycle team, this wasn’t an unfamiliar pattern. A high-cost oncology drug. A generic diagnosis code. A missed payer message. A denial that, on paper, was technically valid, but operationally preventable.

    Any revenue cycle leader would come to recognize this for what it is: a failure not of intent, but of alignment, between the clinical world and the administrative machinery that supports it. It’s in this liminal space, between prescription and payment, that we must now ask ourselves harder questions.

    Not just how this happened, but why are these avoidable denials still reaching patients like Elaine? And what would it take to build a system smart enough, and sensitive enough, to intercept them before the damage is done?

    That’s where the conversation must shift, from the personal to the systemic, from the emotional to the architectural. And it begins with the tools we use, and the intelligence we embed, deep inside the revenue cycle.

    In the daily grind of revenue cycle management, few frustrations sting more than seeing a meticulously submitted claim denied due to a mismatch between clinical documentation and the codes ultimately billed. It is a needle-in-a-haystack problem with million-dollar consequences. And yet, this exact breakdown, bridging unstructured clinical narratives with billing-compliant coding, continues to drain resources, delay revenue, and erode staff morale across the healthcare landscape. In this article, we focus on this critical point of failure: managing wrongful denials. More specifically, we explore how generative AI can not only address denials after they occur but play an even more powerful role in preventing them in the first place by identifying documentation gaps, detecting mismatches early, and ensuring coding accuracy before a claim is ever submitted.

    From systemic challenge to individual consequence: The Elaine Carter case

    Mrs. Carter is a 58-year-old patient diagnosed with HER2-positive breast cancer. She’s being treated at an academic medical center and has already completed surgery and initial chemotherapy. Her oncologist has now prescribed trastuzumab-emtansine (Kadcyla) as adjuvant therapy, a targeted biologic agent with a cost of over $9,000 per dose.

    Her insurance plan is a commercial PPO that requires prior authorization for all specialty infusions, including Kadcyla. The oncologist places the order in the EHR, and a prior authorization request is submitted electronically. But there is a documentation gap: the pathology note documents the patient’s HER2-positive status, while the diagnosis code used in the submission is a generic “C50.9 – malignant neoplasm of breast, unspecified”. The authorization request pends for more information.

    The clinic, juggling hundreds of authorization requests, misses the payer’s message requesting clarification. The clock runs out. No approval.

    But the infusion team, seeing the order in the system and the appointment on the books, proceeds with the administration. Mrs. Carter receives her first cycle of Kadcyla on a Tuesday morning.

    Two weeks later, the claim is submitted. Three days after that, the response comes back: Denied. No valid prior authorization on file.

    Result: Authorization Denied

    Denial Reason Code: 197 – “Precertification/authorization/notification absent.”

    Payer Notes: “Claim includes HCPCS J9354 which requires prior authorization under policy ONC-2023-37.2. Authorization not valid at time of service. Clinical criteria not met due to unspecified ICD-10 code.”

    The business office routes the denial to our appeals team. They re-initiate the prior authorization, submit additional documentation, and appeal. The payer agrees to authorize the therapy, but only prospectively. The first infusion remains denied.

    The hospital absorbs the loss: $9,100 in unreimbursed drug costs, plus nursing and pharmacy overhead. Worse, Mrs. Carter gets a confusing letter from her insurer and fears she’s financially responsible.

    This is not a rare event. In oncology, this scenario plays out hundreds of times a year in large centers. In fact, a recent cohort study of Medicare claims found that 23.3% of cancer-related next-generation sequencing (NGS) claims were denied, and this rate increased to 27.4% following updated national coverage determinations in 2020 [1]. These denials are often due to documentation issues, coding discrepancies, or payer rule misinterpretation.

    This is the kind of loss that chips away at margins, yes, but also at trust, efficiency, and the provider-patient relationship. It’s the kind of denial that every RCM executive has seen, and dreads. And it’s entirely addressable.

    In fact, a survey of cancer patients revealed that 69% experienced delays in care due to prior authorization, with 73% of those delays lasting two weeks or more, and 22% ultimately not receiving the recommended care due to such delays or denials [2]. In radiation oncology, such delays have also been shown to affect patient anxiety and even clinical trial enrollment, underscoring the downstream consequences of slow or misaligned administrative processes [3].

    This is where automation isn’t just about efficiency. It’s about protecting patient care, strengthening compliance, and ensuring financial sustainability for complex, high-acuity service lines like oncology.

    Rethinking leadership priorities in Revenue Cycle Management

    From an RCM standpoint, commercial PPOs are complex, because each plan has its own authorization rules, coding guidelines, and tiered reimbursement policies; risky, because they do cover out-of-network care, but often with limited reimbursement, higher patient responsibility, and strict documentation requirements; and critical to get right, because they are a major source of revenue, and a major source of denials if the nuances aren’t handled carefully.

    So when we say a patient like Mrs. Carter is covered under a commercial PPO, we’re dealing with a plan that offers flexibility but places significant administrative burden on us to ensure every service is properly authorized, coded, and aligned to the plan’s rules to avoid denials.

    At any flagship hospital, thousands of oncology patients are served each month, with complex care paths and a heavy administrative footprint. One persistent thorn in RCM departments has been eligibility-related denials, claims denied not because the care was unwarranted, but because the payer believed the patient was ineligible for coverage at the time of service.

    We’ve known for years that these denials are often preventable. Front-end staff are tasked with verifying eligibility through payer portals or clearinghouses before the appointment. But in reality, what should be a straightforward check often becomes a game of telephone: details change at the last minute, patients forget to mention secondary insurance, or out-of-network issues are flagged too late. It’s a solvable problem, but when you’re dealing with 10,000+ encounters per week, that manual effort adds up fast. Even a 2% error rate leads to hundreds of denials monthly.

    RCM departments usually have teams of dedicated full-time staff working eligibility appeals: manually re-verifying coverage, resubmitting claims, or contacting patients. This work is repetitive, cognitively low-value, and deeply frustrating for RCM staff and patients alike.

    From missed signals to smart safeguards: Rethinking eligibility and denials with AI

    Here’s where automation, particularly generative AI layered onto real-time data access, promises to change the game.

    Imagine a system that continuously monitors eligibility data, not just at check-in, but in the days leading up to the visit. A generative AI layer could intelligently flag mismatches between scheduled services and coverage policies based on payer documentation, historical denial patterns, and even recent updates from payer feeds. Instead of static “yes/no” eligibility checks, we’d have a nuanced, predictive warning system that helps staff act before the visit, not just after the denial.
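    As a purely illustrative sketch (the appointment fields and the check_eligibility callable below are hypothetical, not a real payer or EHR API), such a pre-visit monitor could start as a scheduled job that re-checks coverage for upcoming high-cost services and surfaces anything that looks inconsistent:

```python
# Hypothetical pre-visit eligibility monitor; appointment fields and the
# check_eligibility callable are illustrative, not a real payer or EHR API.
from datetime import date, timedelta

def flag_upcoming_risks(appointments, check_eligibility, horizon_days=5):
    """Return (appointment, reason) pairs for visits in the next few days
    whose coverage looks inconsistent with what is scheduled."""
    flagged = []
    for appt in appointments:
        if appt["date"] - date.today() > timedelta(days=horizon_days):
            continue  # not yet inside the review window
        coverage = check_eligibility(appt["member_id"], appt["hcpcs_code"])
        if not coverage["active"]:
            flagged.append((appt, "Coverage inactive on date of service"))
        elif coverage["requires_prior_auth"] and not coverage["auth_on_file"]:
            flagged.append((appt, "Prior authorization missing for scheduled service"))
    return flagged
```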

    When denials still occur, AI-driven triage could sort and classify them by likelihood of successful appeal, root cause category, and payer-specific behavior. Rather than wasting staff time on low-probability reversals, our team could focus on high-value recovery, targeting the denials that are both winnable and material to our bottom line.
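    A minimal sketch of that triage idea is shown below; the reason-code weights are placeholders standing in for what would, in practice, be learned from historical appeal outcomes for each payer.

```python
# Illustrative denial triage scoring; the reason-code weights are placeholders
# for what would, in practice, be learned from historical appeal outcomes.
def triage_score(denial):
    """Rough priority: appealability weight times dollars at stake."""
    appealability = {
        "197": 0.7,  # precert/authorization absent: often recoverable
        "16": 0.8,   # claim lacks information: usually correctable
        "50": 0.2,   # not deemed medically necessary: rarely overturned
    }.get(denial["reason_code"], 0.5)
    return appealability * denial["billed_amount"]

denials = [
    {"claim_id": "A102", "reason_code": "197", "billed_amount": 9100.00},
    {"claim_id": "B417", "reason_code": "50", "billed_amount": 450.00},
]
worklist = sorted(denials, key=triage_score, reverse=True)  # work highest-value denials first
```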

    For many revenue cycle leaders, the allure of general-purpose large language models is hard to resist. These platforms promise sweeping, near-magical solutions to nearly every challenge, from documentation to denial management, invoking terms like prompt engineering, retrieval-augmented generation, and AI agents. But in the echo chamber of social media and conference panels, these buzzwords often obscure more than they reveal. The prevailing narrative centers around an ever-expanding monolithic model that, through sheer scale, claims to deliver unparalleled reasoning, capable of understanding and resolving even the most complex reimbursement scenarios. It’s a seductive vision, but one that often underestimates the real-world constraints and specialized demands of healthcare environments.

    In practice, what healthcare professionals are encountering is far removed from the idealized promise of monolithic AI platforms. Once deployed in real-world environments, the reliability of these systems often falls short of what polished demos suggest. The core issue lies in their opacity: these models operate as black boxes, with limited visibility into how decisions are made and no clear path for auditing or validation. Compounding this challenge are significant privacy concerns. The computational demands of large-scale models frequently necessitate cloud-based infrastructures, forcing organizations to transfer sensitive patient data outside their own systems, an arrangement that raises both regulatory and ethical red flags. And even when compliance can be managed, the cost of inference at scale quickly becomes prohibitive, turning initial enthusiasm into long-term financial strain.

    A smarter alternative: Tailored AI that understands healthcare

    Rather than relying solely on generic AI models or fragmented automation tools, some healthcare organizations are exploring alternatives designed specifically for the complexity of clinical and operational workflows. A common approach gaining traction is to move away from a single, massive language model that attempts to do everything, and instead adopt a modular system. In this setup, a healthcare-specific large language model (LLM) is fine-tuned for clinical language and works alongside a suite of proven tools that each serve a distinct, well-defined function.

    For example, while the LLM can help interpret unstructured notes, it is complemented by other components such as named entity recognition models that pinpoint key medical concepts, ontology mappers that assign those concepts to standard vocabularies like ICD-10 or SNOMED, and assertion models that determine whether those conditions are actually present and relevant to the patient’s current treatment. Each of these tools can be audited, tuned, and understood individually, unlike a monolithic model that delivers an answer with little transparency into how it was derived.

    This modular design makes it easier to incorporate rule-based logic, align outputs with payer expectations, and adapt quickly as clinical documentation practices or regulatory requirements evolve. It also allows organizations to focus on high-impact denial scenarios one at a time, making improvements that are both measurable and manageable. The result is a more trustworthy, adaptable, and practical use of AI, one that supports both operational efficiency and patient-centered care.

    Unlike API-based solutions that often require routing sensitive data through third-party cloud environments, these specialized NLP platforms can be deployed on-premise. This gives healthcare organizations greater control over data privacy, system performance, and compliance posture, while also reducing latency by keeping processing close to where the data lives.

    In the sections that follow, we explore how models like those from John Snow Labs (JSL) might be applied in practical scenarios like Mrs. Carter’s case. While no model is perfect, healthcare-focused solutions trained on clinical data and tailored to real-world documentation workflows offer an increasingly compelling alternative to more general-purpose tools.

    Precise extraction of HER2-Positive diagnosis from clinical notes

    In Mrs. Carter’s case, the HER2-positive status was documented in the pathology note, but the prior authorization submission used a generic ICD-10 code (C50.9).

    A general LLM might recognize the mention of HER2, but John Snow Labs’ clinical NER models, like ner_jsl_enriched, can:

    • Precisely extract structured entities like “HER2-positive breast carcinoma”
    • Link that to the appropriate ICD-10-CM code (e.g., C50.911) using models like ChunkEntityResolver or ICD10Mapper
    • Flag any mismatch between the extracted diagnosis and the billing code actually being used

    A general LLM won’t automatically tie “HER2-positive” to a specific billable code; John Snow Labs models are trained to do exactly that, with medical ontologies in mind.
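    As a rough sketch of what such a pipeline can look like, the snippet below chains ner_jsl_enriched with a sentence-level ICD-10-CM resolver using Spark NLP for Healthcare. It assumes a licensed sparknlp_jsl installation; the exact pretrained model names, entity labels, and resolver input columns can vary by library release, so treat it as an outline rather than a drop-in implementation.

```python
# Sketch: clinical NER + ICD-10-CM resolution with Spark NLP for Healthcare.
# Assumes a licensed sparknlp_jsl install; model names/labels may differ by release.
import sparknlp_jsl
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler, Chunk2Doc
from sparknlp.annotator import (SentenceDetectorDLModel, Tokenizer,
                                WordEmbeddingsModel, BertSentenceEmbeddings)
from sparknlp_jsl.annotator import (MedicalNerModel, NerConverterInternal,
                                    SentenceEntityResolverModel)

spark = sparknlp_jsl.start("<your-license-secret>")  # placeholder license secret

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentence = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare", "en", "clinical/models") \
    .setInputCols(["document"]).setOutputCol("sentence")
token = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")
embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
    .setInputCols(["sentence", "token"]).setOutputCol("embeddings")
ner = MedicalNerModel.pretrained("ner_jsl_enriched", "en", "clinical/models") \
    .setInputCols(["sentence", "token", "embeddings"]).setOutputCol("ner")
ner_chunk = NerConverterInternal().setInputCols(["sentence", "token", "ner"]) \
    .setOutputCol("ner_chunk").setWhiteList(["Oncological"])  # label set depends on model version
chunk_doc = Chunk2Doc().setInputCols(["ner_chunk"]).setOutputCol("ner_chunk_doc")
sbert = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models") \
    .setInputCols(["ner_chunk_doc"]).setOutputCol("sbert_embeddings")
icd10 = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm", "en", "clinical/models") \
    .setInputCols(["sbert_embeddings"]).setOutputCol("icd10_code")

pipeline = Pipeline(stages=[document, sentence, token, embeddings, ner,
                            ner_chunk, chunk_doc, sbert, icd10])

note = "Invasive ductal carcinoma, right breast. HER2 (3+) confirmed by IHC. ER/PR negative."
df = spark.createDataFrame([[note]]).toDF("text")
result = pipeline.fit(df).transform(df)
# Exploding icd10_code.result should surface a specific code such as C50.911,
# which can then be compared against the code on the draft claim.
```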

    Context-aware assertion and negation detection

    The pathology report might say something like:

    “HER2 (3+) confirmed by IHC. ER/PR negative. Final diagnosis: Invasive ductal carcinoma, right breast.”

    John Snow Labs’ AssertionDLModel ensures that the system doesn’t just see the word “HER2”; it confirms that:

    • It’s affirmed (not “negative for HER2”)
    • It applies to this patient
    • It’s clinically active at the time of treatment

    This is critical for payer-required documentation, especially in automated pre-auth workflows. A generic model could hallucinate or misinterpret these clinical subtleties.
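    A brief sketch of how that assertion stage could be appended to the pipeline above follows; the pretrained name assertion_dl and its input columns are the commonly documented form and may differ in your release.

```python
# Sketch: add assertion status detection to the earlier NER pipeline.
from sparknlp_jsl.annotator import AssertionDLModel

assertion = AssertionDLModel.pretrained("assertion_dl", "en", "clinical/models") \
    .setInputCols(["sentence", "ner_chunk", "embeddings"]) \
    .setOutputCol("assertion")

# A downstream claim-scrubbing rule could then require the HER2 finding to be
# asserted as "present" (not "absent", "possible", or "family_history") before
# marking the prior-authorization packet as complete.
```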

    Automated prior authorization readiness screening

    JSL models can be embedded into an AI pipeline that:

    • Ingests clinical notes and orders as they’re placed
    • Extracts diagnoses, treatments, prior therapies, and lab/path results
    • Runs rule-based or ML-powered logic to determine whether the documentation matches payer authorization criteria

    For instance, a JSL NLP information extraction pipeline can encode payer-specific rules like:

    “Kadcyla requires HER2+ dx + prior taxane-based chemo + adjuvant setting”

    If something’s missing, a DocumentFiltererByClassifier can trigger a real-time alert to the care team before treatment is delivered.
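    Setting the specific JSL components aside for a moment, the rule layer itself can be very plain. The sketch below (entity labels, assertion values, and criteria names are hypothetical) shows how facts already extracted by the NLP pipeline might be checked against the Kadcyla criteria quoted above:

```python
# Illustrative rule layer over facts already extracted by the NLP pipeline;
# entity labels, assertion values, and criteria names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ExtractedFact:
    label: str       # e.g. "Oncological", "Treatment"
    text: str        # e.g. "HER2-positive invasive ductal carcinoma"
    assertion: str   # e.g. "present", "absent", "past"

KADCYLA_CRITERIA = {
    "her2_positive_dx": lambda f: "her2" in f.text.lower() and f.assertion == "present",
    "prior_taxane": lambda f: any(t in f.text.lower() for t in ("taxane", "paclitaxel", "docetaxel")),
    "adjuvant_setting": lambda f: "adjuvant" in f.text.lower(),
}

def auth_readiness(facts):
    """Return which payer criteria are supported by the documentation on file."""
    return {name: any(check(f) for f in facts) for name, check in KADCYLA_CRITERIA.items()}

facts = [
    ExtractedFact("Oncological", "HER2-positive invasive ductal carcinoma", "present"),
    ExtractedFact("Treatment", "adjuvant trastuzumab emtansine", "planned"),
]
missing = [name for name, ok in auth_readiness(facts).items() if not ok]
print(missing)  # ['prior_taxane'] -> alert the care team before the infusion is given
```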

    Improved claims audit and denial prevention

    Even post-service, John Snow Labs’ technological stack can be used to:

    • Audit documentation before claim submission
    • Flag ICD/CPT mismatches
    • Detect likely denial triggers
    • Automatically suggest code corrections or even write draft appeal justifications using Generative AI layers (e.g. MedicalLLM-14B)

    This moves the process from reactive appeals to proactive compliance.
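    As a simple illustration of that audit step (the function and field names are hypothetical), the check can start as nothing more than a comparison between the codes resolved from the documentation and the codes on the draft claim:

```python
# Illustrative pre-submission audit: compare codes resolved from the clinical
# documentation (e.g. by the ICD-10 resolver sketched earlier) with the codes
# on the draft claim.
def audit_diagnosis_codes(resolved_codes, billed_codes):
    """Flag billed ICD-10 codes with no exact match in the documentation-derived set."""
    return [code for code in billed_codes if code not in set(resolved_codes)]

flags = audit_diagnosis_codes(resolved_codes=["C50.911"], billed_codes=["C50.9"])
print(flags)  # ['C50.9'] -> the unspecified code is flagged for review before submission
```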

    Built atop a scalable Spark cluster, this architecture ensures that even health systems with thousands of concurrent encounters per day can keep pace without performance degradation. Whether processing surgical summaries for CPT codes or cross-validating diagnosis clusters against payer-specific rule sets, the system adjusts in real time to organizational volume and complexity.

    A practical path forward

    In an environment where many AI solutions operate as opaque, one-size-fits-all platforms, healthcare organizations may benefit from exploring more transparent and modular alternatives, systems designed with clinical and operational specificity in mind. A key advantage of such systems is their ability to incorporate well-defined, auditable rule-based logic alongside machine learning, enabling organizations to maintain traceability over how decisions are made. Furthermore, the use of healthcare ontologies and controlled vocabularies enhances the precision, explainability, and regulatory robustness of automated outputs.

    While the promise of a single, monolithic model that solves all denial issues at once is attractive, a more pragmatic and effective path often begins with focusing on high-impact, well-scoped use cases. These targeted applications are not only easier to audit and validate, but they also yield insights that can be generalized and scaled over time. Ultimately, this modular approach, leveraging tailored AI tools that align with healthcare’s complex documentation and compliance environment, offers a more reliable and sustainable path to reducing denials, strengthening financial outcomes, and most importantly, protecting the patient experience. By preventing avoidable billing errors and ensuring timely access to authorized care, such systems can reduce unnecessary stress and confusion for patients, helping preserve the trust and continuity that are foundational to quality healthcare.

     

    Our additional expert:
    Julio Bonis is a data scientist working on Spark NLP for Healthcare at John Snow Labs. Julio has broad experience in software development and design of complex data products within the scope of Real World Evidence (RWE) and Natural Language Processing (NLP). He also has substantial clinical and management experience – including entrepreneurship and Medical Affairs. Julio is a medical doctor specialized in Family Medicine (registered GP), has an Executive MBA – IESE, an MSc in Bioinformatics, and an MSc in Epidemiology.

