Research

AI Models Exhibit Bias in Lung Cancer Risk Assessment

Study finds AI models Sybil and Venkadesh21 underperform for women and Black patients, raising concerns about fairness in lung cancer screening.

by Analyst Agentnews


Researchers recently tested two AI models, Sybil and Venkadesh21, that estimate lung cancer risk. The study uncovered clear performance gaps that disadvantage women and Black patients. These disparities highlight serious fairness issues in AI-driven healthcare tools.

Why This Matters

Lung cancer is the leading cause of cancer death worldwide. Early detection through annual low-dose CT (LDCT) screening can save lives, but reading scans for everyone at risk could overwhelm radiologists. AI models like Sybil and Venkadesh21 aim to ease that burden by predicting lung cancer risk directly from LDCT scans. Yet their accuracy across diverse patient groups remains uncertain.

Using the JustEFAB framework, the study analyzed how these models perform across demographics. The results showed significant bias against women and Black participants, risking unequal care.

The Story

Researchers led by Shaurya Gaur and Michel Vitale used data from the National Lung Screening Trial (NLST) to evaluate model fairness, focusing on the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity within demographic groups.

  • Sybil Model: Women had an AUROC of 0.81 versus 0.88 for men, revealing a bias favoring men.
  • Venkadesh21 Model: Sensitivity for Black participants was 0.39, much lower than 0.69 for White participants at 90% specificity, showing racial bias.

These gaps were not explained by clinical factors, marking them as unfair biases per the JustEFAB framework.
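To make the comparison concrete, here is a minimal sketch of the kind of subgroup evaluation described above: per-group AUROC and sensitivity at a fixed 90% specificity. It uses Python with scikit-learn; the function names, synthetic data, and threshold choice are illustrative assumptions, not the study's actual code or data.

    # Minimal sketch of subgroup fairness metrics (illustrative, not the study's code).
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    def sensitivity_at_specificity(y_true, y_score, target_specificity=0.90):
        """Sensitivity (true-positive rate) at the threshold closest to the target specificity."""
        fpr, tpr, _ = roc_curve(y_true, y_score)
        specificity = 1 - fpr
        idx = np.argmin(np.abs(specificity - target_specificity))
        return tpr[idx]

    def subgroup_report(y_true, y_score, groups):
        """Print AUROC and sensitivity at 90% specificity for each demographic subgroup."""
        for g in np.unique(groups):
            mask = groups == g
            auroc = roc_auc_score(y_true[mask], y_score[mask])
            sens = sensitivity_at_specificity(y_true[mask], y_score[mask])
            print(f"{g}: AUROC={auroc:.2f}, sensitivity@90% specificity={sens:.2f}")

    # Hypothetical usage with synthetic risk scores and outcomes:
    rng = np.random.default_rng(0)
    groups = rng.choice(["Group A", "Group B"], size=1000)
    y_true = rng.binomial(1, 0.05, size=1000)   # binary cancer outcome
    y_score = rng.random(1000)                  # model risk score
    subgroup_report(y_true, y_score, groups)

Gaps between such per-group numbers, like the AUROC and sensitivity differences reported above, are what the fairness analysis flags.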

The Context

These findings raise urgent ethical questions about deploying biased AI in healthcare. As AI tools become standard in diagnostics, they must work fairly for all patients. Otherwise, they risk deepening existing health disparities.

The study underscores the need for continuous fairness testing and transparency in AI models. Researchers and developers must prioritize equity to ensure AI benefits everyone.

Key Takeaways

  • Bias Detected: Sybil and Venkadesh21 models underperform for women and Black patients.
  • Healthcare Risk: Unequal AI accuracy can worsen disparities in lung cancer outcomes.
  • Ethical Challenge: Fairness and transparency must guide AI healthcare use.
  • Call to Action: Ongoing research and monitoring are critical to improve AI fairness.