## Health Insurers Bet on AI for Coverage Decisions, But Lawsuits and Research Warn of Patient Risks
Major health insurers are making a concerted push to deploy artificial intelligence for determining patient coverage, framing it to Wall Street as a critical cost-saving measure. This corporate strategy is mirrored by federal action: the Trump administration is testing AI's role in Medicare's prior authorization process while seeking to preempt state-level AI regulation. The rapid adoption is not happening in a vacuum; it is colliding with a system already under legal and academic scrutiny for how it handles denials.

Class action lawsuits have accused insurers of using AI algorithms to wrongfully deny medically necessary treatments. That legal pressure is compounded by new research from Stanford University, which outlines a fundamental risk: AI models are being trained on data from the existing, flawed healthcare system. As study co-author Michelle Mello notes, this creates a perilous feedback loop in which AI could "replicate a bad human system" by learning from its historical patterns of wrongful denials, potentially automating and scaling existing biases.

The situation presents a stark tension between corporate efficiency drives and patient safety. While Mello's research acknowledges potential positives from AI, the current trajectory—driven by insurer cost-cutting and federal experimentation—raises significant risks. The outcome hinges on whether regulatory oversight and legal challenges can force transparency and accountability into these opaque algorithmic systems before they become deeply embedded in determining who gets care.
---
- **Source**: KFF Health News
- **Sector**: The Lab
- **Tags**: Artificial Intelligence, Healthcare, Insurance, Regulation, Medicare
- **Credibility**: unverified
- **Published**: 2026-04-10 09:39:36
- **ID**: 58508
- **URL**: https://whisperx.ai/en/intel/58508