Incorporating Human Plausibility in Single- and Multi-agent AI Systems.
Material Type  
 Dissertation
Control Number  
0017161635
International Standard Book Number  
9798382806792
Dewey Decimal Classification Number  
004
Main Entry-Personal Name  
Barnett, Samuel A.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Princeton University, 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
103 p.
General Note  
Source: Dissertations Abstracts International, Volume: 85-12, Section: B.
General Note  
Advisor: Adams, Ryan P.; Griffiths, Tom.
Dissertation Note  
Thesis (Ph.D.)--Princeton University, 2024.
Summary, Etc.  
As AI systems play a progressively larger role in human affairs, it becomes increasingly important that these systems are built with insights from human behavior. In particular, models developed on the principle of human plausibility are more likely to yield results that are accountable and interpretable, better ensuring alignment between the behavior of the system and what its stakeholders want from it. In this dissertation, I present three projects that build on the principle of human plausibility for three distinct applications:
(i) Plausible representations: I present the Priority-Adjusted Replay for Successor Representations (PARSR) algorithm, a single-agent reinforcement learning algorithm that brings together prioritization-based replay and successor representation learning. Both ideas lead to a more biologically plausible algorithm that captures human-like capabilities of transferring and generalizing knowledge from previous tasks to novel, unseen ones.
(ii) Plausible inference: I present a pragmatic account of the weak evidence effect, a counterintuitive phenomenon of social cognition that arises when humans must account for persuasive goals while incorporating evidence from other speakers. This leads to a recursive Bayesian model that captures how AI systems and their human stakeholders communicate with and understand one another in a way that accounts for the vested interests each will have.
(iii) Plausible evaluation: I introduce a tractable and generalizable measure of cooperative behavior in multi-agent systems that is counterfactually contrastive, contextual, and customizable with respect to different environmental parameters. This measure is of practical use in disambiguating cases in which collective welfare is achieved through genuine cooperation from those in which each agent acts solely in its own self-interest, since both result in the same outcome.
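The two ingredients named in project (i) can be illustrated with a minimal tabular sketch. This is not the dissertation's PARSR algorithm, only a generic illustration under assumed parameters: a successor representation (SR) learned by temporal-difference updates, with transitions replayed in order of their TD-error priority so that updates propagate backward from the rewarded state.

```python
import numpy as np

# Minimal sketch (not PARSR itself): tabular successor-representation (SR)
# learning with priority-ordered replay on a 5-state left-to-right chain.
# The SR M[s, s'] estimates discounted expected future occupancy of s'
# starting from s; values follow as V(s) = sum_{s'} M[s, s'] * r(s').

n_states, gamma, alpha = 5, 0.9, 0.5    # assumed toy parameters
M = np.eye(n_states)                    # SR initialised to the identity
rewards = np.zeros(n_states)
rewards[-1] = 1.0                       # only the terminal state is rewarded

# Experienced transitions: 0 -> 1 -> 2 -> 3 -> 4
transitions = [(s, s + 1) for s in range(n_states - 1)]

# Score each transition by the magnitude of its SR TD error
buffer = []
for s, s_next in transitions:
    target = np.eye(n_states)[s] + gamma * M[s_next]
    priority = np.abs(target - M[s]).sum()
    buffer.append((priority, s, s_next))

# Prioritised replay: apply the highest-priority updates first, which here
# propagates occupancy information backward from the goal along the chain
for _, s, s_next in sorted(buffer, reverse=True):
    target = np.eye(n_states)[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])

# Values are recovered from the SR; changing `rewards` re-values all
# states immediately without relearning M (the SR's transfer property)
V = M @ rewards
```

After one replay pass, `V` increases monotonically toward the rewarded state, showing how the factored SR supports rapid re-evaluation when rewards change.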
Subject Added Entry-Topical Term  
Computer science.
Subject Added Entry-Topical Term  
Computer engineering.
Index Term-Uncontrolled  
AI systems
Index Term-Uncontrolled  
Human plausibility
Index Term-Uncontrolled  
Priority-Adjusted Replay for Successor Representations
Index Term-Uncontrolled  
Multi-agent systems
Added Entry-Corporate Name  
Princeton University Computer Science
Host Item Entry  
Dissertations Abstracts International. 85-12B.
Electronic Location and Access  
This resource is available after logging in.
Control Number  
joongbu:657820

Holdings

Registration No.  Call No.  Location             Availability         Loan Status
TQ0034138         T         Full-text resource   Viewable/Printable   Viewable/Printable
