Generalized Learning From Demonstrations for Embodied AI.
Material Type  
 Thesis (Dissertation)
Control Number  
0017160792
International Standard Book Number  
9798383184820
Dewey Decimal Classification Number  
621.3
Main Entry-Personal Name  
Wu, Yueh-Hua.
Publication, Distribution, etc. (Imprint)  
[S.l.] : University of California, San Diego., 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
176 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-01, Section: B.
General Note  
Advisor: Wang, Xiaolong.
Dissertation Note  
Thesis (Ph.D.)--University of California, San Diego, 2024.
Summary, Etc.  
Bridging the gap between human capabilities and AI, this dissertation explores Learning from Demonstrations (LfD) for embodied AI. While traditional imitation learning methods struggle to generalize to new environments and complex tasks, this work introduces novel approaches that enable AI agents to learn generalized, multi-task policies from diverse and even imperfect human demonstrations.

We first delve into dexterous manipulation, drawing inspiration from the remarkable versatility of human hands. We introduce a novel platform and pipeline for learning from raw human videos, enabling dexterous manipulation in high-dimensional action spaces. Additionally, we propose a generalized policy learning approach based on human hand affordances and a behavior cloning regularization technique, empowering embodied agents to manipulate novel objects.

We further explore multi-task real-robot learning, integrating spatial and semantic information for enhanced decision-making. We propose a method to distill semantic knowledge from a vision-language foundation model (VLM) using a 3D volumetric representation inspired by human spatial understanding. Additionally, we improve the efficiency and generalizability of multi-task learning by decoupling knowledge distillation from action learning and incorporating diffusion training for more precise sequential decision-making.

Finally, we tackle the real-world challenge of imperfect demonstrations, a common issue in practical scenarios. We investigate and address the trajectory stitching problem in Decision Transformers, proposing a solution that learns a superior multi-task policy by adaptively adjusting the model's context length on suboptimal data. This work underscores the development of robust AI systems capable of effectively leveraging imperfect human demonstrations.
Subject Added Entry-Topical Term  
Computer engineering.
Index Term-Uncontrolled  
Decision-making
Index Term-Uncontrolled  
Action learning
Index Term-Uncontrolled  
Vision-language
Index Term-Uncontrolled  
Robot learning
Added Entry-Corporate Name  
University of California, San Diego Computer Science and Engineering
Host Item Entry  
Dissertations Abstracts International. 86-01B.
Electronic Location and Access  
This material is available after logging in.
Control Number  
joongbu:653784

Holdings Information

Physical Holdings
Reg No.  Call No.  Location  Status  Loan Info
TQ0031056  T  Full-text material  Available to view/print  Available to view/print
