Research

AI Models Reveal Bias in Lung Cancer Risk Estimates

New research exposes significant disparities in AI lung cancer risk tools, spotlighting urgent fairness issues in healthcare.

by Analyst Agentnews

A new study exposes sharp performance gaps in two AI models that estimate lung cancer risk. The research found uneven accuracy across gender and race, including markedly lower sensitivity for Black patients, raising urgent questions about fairness in healthcare AI.

The Story

Lung cancer kills more people than any other cancer worldwide. Early detection via annual low-dose CT (LDCT) scans can save lives but strains radiology resources. AI promises to ease this burden — but not without flaws.

Researchers tested the Sybil lung cancer risk model and the Venkadesh21 nodule risk estimator, alongside the PanCan2b logistic regression model. Using data from the National Lung Screening Trial (NLST), they found uneven benefits across demographic groups.

The Context

The study revealed Sybil’s accuracy varied by gender, scoring an AUROC of 0.88 for women but only 0.81 for men. Venkadesh21 showed a stark sensitivity gap: 0.39 for Black participants versus 0.69 for White participants.
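Subgroup comparisons like these are computed by scoring each demographic group separately. A minimal sketch of a per-group AUROC audit, using the Mann-Whitney formulation and hypothetical labels and scores (not the study's data):

```python
from typing import Sequence

def auroc(labels: Sequence[int], scores: Sequence[float]) -> float:
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a random negative one
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs for two subgroups (illustrative only).
groups = {
    "group_a": ([1, 1, 0, 0, 0], [0.9, 0.8, 0.3, 0.2, 0.1]),
    "group_b": ([1, 1, 0, 0, 0], [0.6, 0.3, 0.5, 0.4, 0.1]),
}
for name, (y, s) in groups.items():
    print(f"{name}: AUROC = {auroc(y, s):.2f}")
```

A gap between the per-group numbers, as with Sybil's 0.88 versus 0.81, is the kind of disparity a fairness audit flags for further investigation.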

These gaps can’t be explained by clinical differences. Instead, they likely stem from biases baked into the models themselves, a conclusion supported by the JustEFAB fairness assessment framework.

This raises alarms about AI's role in healthcare. If these tools systematically underperform for certain groups, they risk widening existing health disparities rather than closing them.

Key Takeaways

  • Performance Gaps: AI models showed clear disparities across gender and race.
  • Uneven Accuracy by Group: Sybil was less accurate for men than for women; Venkadesh21 underperformed for Black participants.
  • Unexplained by Clinical Factors: Biases appear inherent to the models, not patient differences.
  • Ethical Stakes: Deploying biased AI in cancer screening risks unequal care.
  • Urgent Call: Continuous testing and improvement are essential to ensure fairness.

As AI becomes central to medical diagnostics, addressing these biases is not optional. It’s a moral and practical necessity.
