Making expert decisions easier to fathom: On the explainability of visual object recognition expertise

Jay Hegdé, Evgeniy Bart

Research output: Contribution to journal › Article › peer-review


Abstract

In everyday life, we rely on human experts to make a variety of complex decisions, such as medical diagnoses. These decisions are typically made through some form of weakly guided learning, a form of learning in which decision expertise is gained through labeled examples rather than explicit instructions. Expert decisions can significantly affect people other than the decision-maker (for example, teammates, clients, or patients), but may seem cryptic and mysterious to them. It is therefore desirable for the decision-maker to explain the rationale behind these decisions to others. This, however, can be difficult to do. Often, the expert has a “gut feeling” for what the correct decision is, but may have difficulty giving an objective set of criteria for arriving at it. Explainability of human expert decisions, i.e., the extent to which experts can make their decisions understandable to others, has not been studied systematically. Here, we characterize the explainability of human decision-making, using binary categorical decisions about visual objects as an illustrative example. We trained a group of “expert” subjects to categorize novel, naturalistic 3-D objects called “digital embryos” into one of two hitherto unknown categories, using a weakly guided learning paradigm. We then asked the expert subjects to provide a written explanation for each binary decision they made. These experiments generated several intriguing findings. First, the experts' explanations modestly improved the categorization performance of naïve users (paired t-tests, p < 0.05). Second, this improvement differed significantly across explanations. In particular, explanations that pointed to a spatially localized region of the object improved the users' performance significantly more than explanations that referred to global features. Third, neither experts nor naïve subjects were able to reliably predict the degree of improvement for a given explanation. Finally, significant bias effects were observed: naïve subjects rated an explanation significantly higher when told it came from an expert than when told the same explanation came from another non-expert, suggesting a variant of the Asch conformity effect. Together, our results characterize, for the first time, the various issues, both methodological and conceptual, underlying the explainability of human decisions.
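
For readers unfamiliar with the paired analysis the abstract reports, the following is a minimal sketch of how such a within-subject comparison could be run. The accuracy values are hypothetical placeholders, not data from the study, and the use of Python with SciPy is purely an illustrative assumption; the paper itself does not specify an analysis toolchain.

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical per-subject categorization accuracies (proportion correct).
    # These numbers are placeholders for illustration, not the study's data.
    acc_without_explanation = np.array([0.58, 0.62, 0.55, 0.60, 0.57, 0.63, 0.59, 0.61])
    acc_with_explanation = np.array([0.64, 0.66, 0.60, 0.65, 0.61, 0.70, 0.63, 0.67])

    # Paired t-test: each naive subject serves as their own control,
    # comparing performance with vs. without the expert's written explanation.
    t_stat, p_value = ttest_rel(acc_with_explanation, acc_without_explanation)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A p-value below 0.05 would indicate a reliable (if modest) improvement,
    # in the spirit of the paired t-tests reported in the abstract.

Because the same subjects are measured under both conditions, the paired test accounts for between-subject variability in baseline accuracy, which is why it is the natural choice for this design.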

Original language: English (US)
Article number: 670
Journal: Frontiers in Neuroscience
Volume: 12
Issue number: OCT
DOIs
State: Published - Oct 12, 2018
Externally published: Yes

Keywords

  • Classification
  • Machine learning
  • Objective explainability
  • Perceptual learning
  • Subjective explainability
  • Weakly guided learning

ASJC Scopus subject areas

  • General Neuroscience

