Is Explainable Medical AI a Medical Reversal Waiting to Happen? Exploring the Impact of AI Explanations on Clinical Decision Quality [electronic resource]
Material Type  
 Thesis (Dissertation)
Control Number  
0016933451
International Standard Book Number  
9798379958619
Dewey Decimal Classification Number  
020
Main Entry-Personal Name  
Clement, Jeffrey.
Publication, Distribution, etc. (Imprint)  
[S.l.] : University of Minnesota., 2023
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2023
Physical Description  
1 online resource (275 p.)
General Note  
Source: Dissertations Abstracts International, Volume: 85-02, Section: B.
General Note  
Advisor: Curley, Shawn; Ren, Yuqing.
Dissertation Note  
Thesis (Ph.D.)--University of Minnesota, 2023.
Restrictions on Access Note  
This item must not be sold to any third party vendors.
Summary, Etc.  
Medical AI systems generate personalized recommendations to improve patient care, but to have an impact, a system's recommendation must differ from what the clinician would do on their own. This impact might be beneficial (in the case of a high-quality recommendation), and clinicians should follow the system; on the other hand, AI will occasionally be wrong, and clinicians should override it. Resolving conflict between their own judgment and the system's is crucial to optimal AI-augmented clinical decision-making. To help resolve such conflict, there have been recent calls to design AI systems that explain the reasoning behind their recommendations, but it is unclear how system explanations affect the way clinicians incorporate AI recommendations into care decisions. This dissertation explores this issue against the history of medical reversals: technologies and treatments that were initially thought beneficial but ultimately proved ineffective or even harmful to patients. Ideally, AI explanations are helpful; however, practicing evidence-based medicine requires that we validate this normative claim to ensure that explanations are neither ineffective nor, worse, actively harmful. We employ mixed methods, combining semi-structured interviews with three computer-based experiments, to examine factors posited to support proper system use. To ground the findings, Study 1 interviews with clinicians highlighted that disclosing the confidence level of, and explanations for, AI recommendations might create new conflicts between the system and the clinician. To evaluate this within the clinical context, we conducted a trio of experiments in which clinical experts faced drug dosing decision tasks. In Study 2, participants received AI recommendations for drug dosing and were shown (or not) the confidence level and an explanation.
In Study 3, participants were shown explanations (or not) and received two patient cases, each with either a high- or low-quality AI recommendation. Contrary to theoretical predictions, providing explanations did not uniformly increase the influence of AI or improve clinical decision quality. Instead, explanations increased the influence of low-quality AI recommendations and decreased the influence of high-quality recommendations. In Study 4, participants were shown one of four types of explanations along with either a high- or low-quality recommendation. Again, explanations did not optimally improve the influence of AI or improve decision quality. Two findings help explain why. First, the initial disagreement between the user's a priori judgment and the AI recommendation, along with the quality of the recommendation, was far more influential than any of our explanation formats. Second, the qualitative and mediation analyses indicate that clinicians do carefully consider explanations, but the effects of explanations on overall perceptions of explanation detail and helpfulness, or on decision conflict, do not fully account for the observed effects. We see indications that the individual information cues in an explanation are compared against the user's own knowledge and experience, and that discrepancies between the user's opinion about these cues and the conclusions the system purports to draw from them can influence use of the recommendation. Further work is needed to understand exactly how the information in explanations shapes use of AI advice.
Subject Added Entry-Topical Term  
Information science.
Subject Added Entry-Topical Term  
Health sciences.
Subject Added Entry-Topical Term  
Computer science.
Index Term-Uncontrolled  
Medical AI systems
Index Term-Uncontrolled  
Health care decision
Index Term-Uncontrolled  
Medical reversal
Index Term-Uncontrolled  
Clinical decision quality
Added Entry-Corporate Name  
University of Minnesota Business Administration
Host Item Entry  
Dissertations Abstracts International. 85-02B.
Host Item Entry  
Dissertations Abstracts International
Electronic Location and Access  
This material is available after logging in.
Control Number  
joongbu:643241
Holdings  
Registration No.: TQ0029146 | Call No.: T | Location: Full-text resource | Status: Available for viewing/printing