Robust Machine Learning for the Control of Real-world Robotic Systems [electronic resource]
- Material Type
- Dissertation (Thesis)
- Control Number
- 0016932366
- International Standard Book Number
- 9798380380713
- Dewey Decimal Classification Number
- 620
- Main Entry-Personal Name
- Westenbroek, Tyler.
- Publication, Distribution, etc. (Imprint)
- [S.l.] : University of California, Berkeley, 2023
- Publication, Distribution, etc. (Imprint)
- Ann Arbor : ProQuest Dissertations & Theses, 2023
- Physical Description
- 1 online resource (129 p.)
- General Note
- Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
- General Note
- Advisor: Sastry, S. Shankar.
- Dissertation Note
- Thesis (Ph.D.)--University of California, Berkeley, 2023.
- Restrictions on Access Note
- This item must not be sold to any third party vendors.
- Summary, Etc.
- Optimal control is a powerful paradigm for controller design, as it can be used to implicitly encode complex stabilizing behaviors using cost functions that are relatively simple to specify. On the other hand, the curse of dimensionality and the presence of non-convex optimization landscapes can make it challenging to reliably obtain stabilizing controllers for complex high-dimensional systems. Recently, sampling-based reinforcement learning approaches have enabled roboticists to obtain approximately optimal feedback controllers for high-dimensional systems even when the dynamics are unknown. However, these methods remain too unreliable for practical deployment in many application domains. This dissertation argues that the key to reliable optimization-based controller synthesis is obtaining a deeper understanding of how the cost functions we write down and the algorithms we design interact with the underlying feedback geometry of the control system. First, we investigate how to accelerate model-free reinforcement learning by embedding control Lyapunov functions, which are energy-like functions for the system, into the objective. Next, we introduce a novel data-driven policy optimization framework which embeds structural information from an approximate dynamics model and a family of low-level feedback controllers into the update scheme. We then turn to a dynamic programming perspective and investigate how the geometric structure of the system places fundamental limitations on how much computation is required to compute or learn a stabilizing controller. Finally, we turn to derivative-based search algorithms and investigate how to design 'good' cost functions for model predictive control schemes, which ensure these methods stabilize the system even when gradient-based methods are used to search over a non-convex objective. Throughout, an emphasis is placed on how structural insights gleaned from a simple analytical model can guide our design decisions, and we discuss applications to dynamic walking, flight control, and autonomous driving.
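- Illustrative sketch (not taken from the dissertation): one common way to embed a control Lyapunov function into a reinforcement learning objective is to reward decreases of an energy-like function V along each transition. The quadratic form of V, the matrix P, and the shaping weight below are hypothetical choices made only for this example.

```python
import numpy as np

def clf_value(x, P):
    """Hypothetical quadratic Lyapunov candidate V(x) = x^T P x."""
    return float(x @ P @ x)

def shaped_reward(x, x_next, base_reward, P, weight=1.0):
    """Augment a task reward with the decrease in V over one transition,
    so the learner is encouraged toward trajectories along which V shrinks."""
    return base_reward + weight * (clf_value(x, P) - clf_value(x_next, P))

# Toy usage on a two-state transition.
P = np.eye(2)
x, x_next = np.array([1.0, 0.5]), np.array([0.8, 0.3])
print(shaped_reward(x, x_next, base_reward=0.0, P=P))
```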
- Subject Added Entry-Topical Term
- Engineering.
- Subject Added Entry-Topical Term
- Computer engineering.
- Subject Added Entry-Topical Term
- Robotics.
- Index Term-Uncontrolled
- Control theory
- Index Term-Uncontrolled
- Machine learning
- Index Term-Uncontrolled
- Autonomous driving
- Index Term-Uncontrolled
- Reinforcement learning
- Index Term-Uncontrolled
- Dynamic programming
- Added Entry-Corporate Name
- University of California, Berkeley Electrical Engineering & Computer Sciences
- Host Item Entry
- Dissertations Abstracts International. 85-03B.
- Host Item Entry
- Dissertation Abstracts International
- Electronic Location and Access
- This resource is available after logging in.
- Control Number
- joongbu:639601