How well do practicing radiologists interpret the results of CAD technology? A quantitative characterization

Fallon Branch, K. Matthew Williams, Isabella Noel Santana, Jay Hegdé

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Many studies have shown that using a computer-aided detection (CAD) system does not significantly improve diagnostic accuracy in radiology, possibly because radiologists fail to interpret the CAD results properly. We tested this possibility using screening mammography as an illustrative example. We carried out two experiments, one with 28 practicing radiologists and a second with 25 non-professional subjects. During each trial, subjects were shown the four pieces of information necessary for evaluating the actual probability of cancer in a given unseen mammogram: the binary decision of the CAD system as to whether the mammogram was positive for cancer, the true-positive and false-positive rates of the system, and the prevalence of breast cancer in the relevant patient population. Based only on this information, the subjects had to estimate the probability that the unseen mammogram in question was positive for cancer. The non-professional subjects additionally had to decide, based on the same information, whether to recall the patients for additional testing. Both groups of subjects similarly (and significantly) overestimated the cancer probability regardless of the categorical CAD decision, suggesting that this effect is not peculiar to either group. The misestimations were not fully attributable to causes well-known in other contexts, such as base rate neglect or inverse fallacy. Non-professional subjects tended to recall the patients at high rates, even when the actual probability of cancer was at or near zero. Moreover, the recall rates closely reflected the subjects' estimations of cancer probability. Together, our results show that subjects interpret CAD system output poorly when only the probabilistic information about the underlying decision parameters is available to them.
Our results also highlight the need for making the output of CAD systems more readily interpretable, and for providing training and assistance to radiologists in evaluating the output.
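The normative computation the subjects were asked to approximate is Bayes' rule applied to the four quantities above (CAD decision, true-positive rate, false-positive rate, and prevalence). A minimal sketch of that computation follows; the function name and the illustrative numbers are our own choices, not parameters from the study:

```python
def posterior_cancer_probability(prevalence, tpr, fpr, cad_positive):
    """Probability of cancer given the CAD system's binary decision (Bayes' rule).

    prevalence: P(cancer) in the patient population
    tpr: true-positive rate, P(CAD+ | cancer)
    fpr: false-positive rate, P(CAD+ | no cancer)
    """
    if cad_positive:
        # P(cancer | CAD+) = P(CAD+ | cancer) P(cancer) / P(CAD+)
        numerator = tpr * prevalence
        denominator = tpr * prevalence + fpr * (1 - prevalence)
    else:
        # P(cancer | CAD-) = P(CAD- | cancer) P(cancer) / P(CAD-)
        numerator = (1 - tpr) * prevalence
        denominator = (1 - tpr) * prevalence + (1 - fpr) * (1 - prevalence)
    return numerator / denominator

# Illustrative (hypothetical) numbers: 0.5% prevalence, 90% TPR, 10% FPR.
p_pos = posterior_cancer_probability(0.005, 0.90, 0.10, cad_positive=True)
p_neg = posterior_cancer_probability(0.005, 0.90, 0.10, cad_positive=False)
```

With a low-prevalence screening population like this hypothetical one, even a positive CAD decision leaves the posterior cancer probability below 5%, which illustrates why intuitive overestimates of the kind the study reports diverge so sharply from the normative answer.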

Original language: English (US)
Article number: 52
Journal: Cognitive Research: Principles and Implications
Volume: 7
Issue number: 1
DOIs
State: Published - Dec 2022

Keywords

  • Assistive technologies
  • Base rate fallacy
  • Base rate neglect
  • Computer-assisted diagnosis
  • Miss rate neglect

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience

