Methodological Challenges in AI Cancer Screening HEEs

João L. Carapinha, Ph.D.

AI cancer screening HEEs reveal both promise and pitfalls as healthcare leaders assess these tools’ value. Artificial intelligence enhances diagnostic accuracy in oncology, yet economic evaluations must prove their worth amid rising cancer rates—over 20 million new cases yearly. This article examines trends, limitations, and paths forward in AI cancer screening HEEs, drawing from a systematic review referenced below to guide decisions on policy, pricing, and system integration.

Background and Current Landscape

Researchers have published most AI cancer screening HEEs since 2022, coinciding with AI's clinical growth. These model-based studies, mainly from developed nations such as the US and UK, weigh costs against performance gains in screening for cancers such as colorectal or breast cancer. Quality appraisal using the CHEERS-AI and Philips checklists shows a median score of 46 of 62 items (range 36-50), but gaps persist in transparency, particularly around AI-specific elements. The developed-country focus also limits applicability to lower-resource settings, where data shortages hinder adoption.

Modelling Choices and Their Shortcomings

Studies favour Markov cohort models for their efficiency, followed by microsimulations (six studies), decision trees, and discrete event simulations (DES; two studies). Markov models handle long-term pathways well but struggle with real-world complexities, such as clinicians overriding AI advice. Microsimulations suit heterogeneous populations better, yet scarce individual-level data restricts their use. DES offers flexibility without fixed cycles, making it well suited to AI dynamics, though its complexity has curbed wider use.
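To make the dominant structure concrete, here is a minimal three-state Markov cohort model (Well, Cancer, Dead) with fixed annual cycles, the shape most of the reviewed HEEs take. All transition probabilities are illustrative assumptions, not figures from the review.

```python
# Minimal Markov cohort model: the whole cohort moves between states
# according to fixed transition probabilities at each discrete cycle.
# Probabilities below are hypothetical, for illustration only.
P = {
    "Well":   {"Well": 0.97, "Cancer": 0.02, "Dead": 0.01},
    "Cancer": {"Well": 0.00, "Cancer": 0.85, "Dead": 0.15},
    "Dead":   {"Well": 0.00, "Cancer": 0.00, "Dead": 1.00},
}

state = {"Well": 1.0, "Cancer": 0.0, "Dead": 0.0}
for _ in range(5):  # five annual cycles
    state = {s: sum(state[src] * P[src][s] for src in P) for s in P}

print({s: round(v, 4) for s, v in state.items()})
```

The fixed-cycle, cohort-level structure is exactly what makes this model efficient, and also what makes it awkward to represent individual-level events such as a clinician overriding an AI recommendation mid-pathway.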

All evaluations treat AI sensitivity and specificity as fixed, ignoring improvements from real-world learning. This approach understates benefits, as dynamic models could better reflect long-term impacts, similar to vaccine assessments accounting for herd effects.
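The static-versus-dynamic point can be sketched in a few lines. The figures here (baseline sensitivity, learning rate, cap, prevalence, cohort size) are all hypothetical assumptions chosen for illustration; none come from the reviewed studies.

```python
# Static vs. dynamic AI test sensitivity over a 10-cycle screening
# horizon. Treating sensitivity as fixed (as the reviewed HEEs do)
# understates detections relative to a model where performance improves
# with real-world learning. All numbers are illustrative.

def detected_cases(prevalence, sensitivity_by_cycle, cohort=100_000):
    """Expected cancers detected in each cycle for a screened cohort."""
    return [cohort * prevalence * s for s in sensitivity_by_cycle]

cycles = 10
prevalence = 0.005  # hypothetical per-cycle cancer prevalence

static = [0.80] * cycles  # fixed sensitivity assumption
# Dynamic: sensitivity improves by 1.5 points per cycle, capped at 0.92
dynamic = [min(0.80 + 0.015 * t, 0.92) for t in range(cycles)]

extra = (sum(detected_cases(prevalence, dynamic))
         - sum(detected_cases(prevalence, static)))
print(f"Additional detections from dynamic sensitivity: {extra:.0f}")
```

Even this crude sketch shows how a fixed-performance assumption systematically undervalues a learning system over a long horizon.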

Population Equity and Reporting Deficiencies

Equity issues surface in few studies: only three tackle population differences. Training data often skew towards European ancestry, reducing accuracy for other ethnicities. Only six studies match their national populations, calling generalisability into question.

Common omissions include half-cycle corrections, data quality reviews, and subgroup analyses by race, gender, or socioeconomic status. Such lapses compromise findings’ robustness.
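Half-cycle correction is the easiest of these omissions to show. Events in a cohort Markov model occur throughout a cycle, not exactly at its start or end, so counting everyone at cycle boundaries biases totals; averaging adjacent cycle counts corrects this. The survival figures below are illustrative assumptions, not data from the review.

```python
# Half-cycle correction: a trapezoidal adjustment that averages the
# cohort count at the start and end of each cycle, rather than
# attributing a full cycle of life-years to everyone alive at the start.

def half_cycle_corrected(values):
    """Average each pair of adjacent cycle counts."""
    return [0.5 * (a + b) for a, b in zip(values, values[1:])]

# Hypothetical persons alive at the start of each annual cycle
alive = [1000, 950, 900, 860]

uncorrected_life_years = sum(alive[:-1])  # counts everyone at cycle start
corrected_life_years = sum(half_cycle_corrected(alive))

print(uncorrected_life_years, corrected_life_years)
```

The uncorrected total overstates life-years; the gap compounds over the decades-long horizons typical of cancer screening models, which is why skipping the correction matters.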

Cost Estimation and Sensitivity Insights

Costs dominate sensitivity analyses, often flipping conclusions, yet reports rely on assumptions rather than evidence. Direct procurement costs appear, but indirect ones, such as training, storage, and maintenance, lack detail, breaching CHEERS-AI standards. Some studies demonstrated five-year savings from prevented colorectal cancer cases; others flagged NHS setup barriers.

Gap                   | Occurrence  | Consequence
Static AI metrics     | All studies | Undervalues future gains
Cost breakdown        | Frequent    | Skews ratios
Barriers assessment   | Rare        | Delays uptake
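A one-way sensitivity analysis on the AI cost parameter illustrates how cost assumptions alone can flip a conclusion. The willingness-to-pay threshold, QALY gain, downstream savings, and per-screen costs below are all hypothetical assumptions for illustration; none come from the reviewed studies.

```python
# One-way sensitivity analysis on AI procurement cost: varying a single
# cost assumption moves the ICER across the willingness-to-pay
# threshold, flipping the cost-effectiveness verdict.

THRESHOLD = 30_000        # willingness-to-pay per QALY (assumed)
QALY_GAIN = 0.02          # incremental QALYs per person screened (assumed)
DOWNSTREAM_SAVINGS = 450  # treatment costs avoided per person (assumed)

def icer(ai_cost_per_screen):
    """Incremental cost-effectiveness ratio vs. standard screening."""
    incremental_cost = ai_cost_per_screen - DOWNSTREAM_SAVINGS
    return incremental_cost / QALY_GAIN

for cost in (300, 600, 1200):
    ratio = icer(cost)
    verdict = "cost-effective" if ratio <= THRESHOLD else "not cost-effective"
    print(f"AI cost {cost}: ICER {ratio:,.0f}/QALY -> {verdict}")
```

When a single assumed parameter can swing the verdict this far, grounding that parameter in evidence rather than assumption is not optional, which is the review's point about cost reporting.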

Strategic Implications and Actionable Steps

Flawed AI cancer screening HEEs risk misguided policies and uneven market access. Static assumptions may undervalue investments over time. Payers face reimbursement hesitancy; providers, adoption hurdles.

Leaders should act decisively:

  • Prioritise DES and dynamic parameters to capture AI evolution.
  • Enforce CHEERS-AI reporting, with full cost and subgroup details.
  • Build diverse datasets for equitable performance.
  • Blend models with trials for validation.
  • Assess barriers like initial fees alongside outcomes.

These measures promise quality gains, supporting fair pricing and global scalability.

Summing Up the Evidence

AI cancer screening HEEs confirm value through better diagnostics, despite costs, but methodological flaws demand reform. Decision-makers must push for rigorous, transparent evaluations to maximise impact. What steps will you take to strengthen AI cancer screening HEEs in your organisation?

Source

Yuyanzi Zhang, Lei Wang, Yifang Liang, Annushiah Vasan Thakumar, Hongfei Hu, Yan Li, Aixia Ma, Hongchao Li, Luying Wang. A Systematic Review of the Methods and Quality of Economic Evaluations for AI-assisted Cancer Screening or Diagnosis. Value in Health, 2025. ISSN 1098-3015. https://doi.org/10.1016/j.jval.2025.10.014