Planning Under Uncertainty in Safety-Critical Systems.
- Material Type
- Thesis/Dissertation
- Control Number
- 0017162189
- International Standard Book Number
- 9798384198765
- Dewey Decimal Classification Number
- 371.3
- Main Entry-Personal Name
- Jamgochian, Arec Levon.
- Publication, Distribution, etc. (Imprint)
- [S.l.] : Stanford University, 2024
- Publication, Distribution, etc. (Imprint)
- Ann Arbor : ProQuest Dissertations & Theses, 2024
- Physical Description
- 122 p.
- General Note
- Source: Dissertations Abstracts International, Volume: 86-03, Section: B.
- General Note
- Advisor: Kochenderfer, Mykel.
- Dissertation Note
- Thesis (Ph.D.)--Stanford University, 2024.
- Summary, Etc.
- From warehouses and manufacturing lines to homes and offices, and from roads and seas to skies and space, autonomous systems promise to improve efficiency, unlock human potential, and explore new frontiers. Many autonomous systems already make decisions that affect our everyday lives, and as technology develops and the cost of compute decreases, they will continue integrating into society. For safety-critical systems to be deployed autonomously in the real world, however, they must be able to reason about their environments and make good decisions that satisfy their objectives.
For autonomous systems to be deployed successfully, it often does not suffice to plan deterministically, that is, to assume that everything will "go as planned" along a single sequence of outcomes. Rather, agents must reason about the uncertainty that can arise from inexact actuation or sensing, imperfect information, unclear objectives, unknown motives of other participants, or complex environments. These sources of uncertainty can significantly complicate autonomous decision-making and can ultimately lead to catastrophic errors. By explicitly reasoning about these sources of uncertainty, this thesis introduces new methods for planning safely against them.
First, this thesis investigates methods that use data to overcome uncertainty in action outcomes and agent objectives. Specifically, we consider using human driving demonstrations alongside simulators to overcome objective uncertainty for autonomous driving in complex urban environments. Previous approaches that used simulators to help imitate human driving were typically limited to relatively simple scenarios. We introduce Safety-Aware Hierarchical Adversarial Imitation Learning (SHAIL), a method that scales safety-critical data-driven decision-making to complex problems through hierarchical decomposition and safety predictions. After building a simulator to test counterfactuals of real-world driving decisions, we demonstrate empirically that SHAIL can improve safety compared to other data-driven decision-making methods, especially in unseen driving scenarios.
Next, we turn to safe planning under outcome and state uncertainty when models of those uncertainties are known a priori. Here, we impose safety through constraints on agent plans, modeling problems as constrained partially observable Markov decision processes (CPOMDPs). Approximate CPOMDP solutions are typically limited to small, discrete action and observation spaces. We introduce algorithms that extend online search-based planning in CPOMDPs to domains with large or continuous state, action, and observation spaces by artificially limiting the width of the search tree in unpromising areas and satisfying constraints using dual ascent. We empirically compare the effectiveness of our proposed algorithms on continuous CPOMDPs that model both toy and real-world safety-critical problems, demonstrating that CPOMDP planning can be effective in continuous domains.
Unfortunately, the algorithms we introduce for safe online planning in continuous CPOMDPs are still restricted to relatively small problems. Fortunately, as noted for urban driving, many large planning problems can be decomposed hierarchically. In our final contribution, we introduce Constrained Options Belief Tree Search (COBeTS), which scales continuous CPOMDP planning to much larger problems with favorable hierarchical decompositions by planning over macro-actions (i.e., low-level controller options). We demonstrate COBeTS in several large, safety-critical, uncertain domains, showing that it can plan successfully where non-hierarchical baselines cannot. Importantly, we show that with constraint-satisfying macro-actions, COBeTS can guarantee safety regardless of planning time. In summary, our contributions improve planning safety in domains with quantifiable outcome, state, and/or objective uncertainty through novel applications of hierarchies and/or constraints.
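The abstract's reference to imposing safety through constraints and satisfying them with dual ascent can be made concrete with a standard constrained-POMDP formulation. The sketch below uses generic textbook notation, not the thesis's own symbols, and only illustrates the general technique named in the summary.
```latex
% Minimal sketch of a CPOMDP objective and a dual-ascent multiplier update
% (standard notation; assumed, not copied from the thesis).
\[
\max_{\pi}\; V_R^{\pi}(b_0) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right]
\quad \text{s.t.} \quad
V_{C_i}^{\pi}(b_0) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t C_i(s_t, a_t)\right] \le \hat{c}_i .
\]
Introducing multipliers $\lambda_i \ge 0$ gives the Lagrangian
\[
\mathcal{L}(\pi, \lambda) = V_R^{\pi}(b_0) - \sum_i \lambda_i \left( V_{C_i}^{\pi}(b_0) - \hat{c}_i \right),
\]
and dual ascent alternates approximately maximizing $\mathcal{L}$ over $\pi$
(e.g., by online tree search) with the update
$\lambda_i \leftarrow \max\!\left(0,\; \lambda_i + \alpha \big( \hat{V}_{C_i} - \hat{c}_i \big)\right)$,
where $\hat{V}_{C_i}$ is the estimated expected cost of the current plan.
```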
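The hierarchical idea shared by SHAIL and COBeTS, namely choosing among constraint-satisfying macro-actions rather than raw low-level actions, can be illustrated with a minimal sketch. All names here (Option, is_safe, evaluate, select_option) are hypothetical placeholders and do not come from the thesis or its code; the value estimator and safety predictor are assumptions standing in for whatever the planner actually uses.
```python
# Illustrative sketch only: masking unsafe macro-actions before selecting
# the highest-value one. Hypothetical names; not the thesis's implementation.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class Option:
    """A macro-action: a low-level controller plus a safety predictor."""
    name: str
    policy: Callable[[Any], Any]      # maps an observation to a low-level action
    is_safe: Callable[[Any], bool]    # predicts constraint satisfaction from a belief


def select_option(belief: Any, options: List[Option],
                  evaluate: Callable[[Any, Option], float]) -> Option:
    """Return the highest-value option whose safety predictor accepts the belief.

    `evaluate` stands in for whatever estimator the planner uses (a learned
    critic, a rollout, or a search over the option's subtree); options
    predicted to violate a constraint are masked out before comparison.
    """
    safe = [opt for opt in options if opt.is_safe(belief)]
    if not safe:
        # In practice a recovery or fallback controller would be invoked here.
        raise RuntimeError("no safe macro-action available for the current belief")
    return max(safe, key=lambda opt: evaluate(belief, opt))
```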
- Subject Added Entry-Topical Term
- Teaching methods.
- Subject Added Entry-Topical Term
- Carbon sequestration.
- Subject Added Entry-Topical Term
- Planning.
- Subject Added Entry-Topical Term
- Autonomous vehicles.
- Subject Added Entry-Topical Term
- Decision making.
- Subject Added Entry-Topical Term
- Robots.
- Subject Added Entry-Topical Term
- Decomposition.
- Subject Added Entry-Topical Term
- Design.
- Subject Added Entry-Topical Term
- Markov analysis.
- Subject Added Entry-Topical Term
- Robotics.
- Subject Added Entry-Topical Term
- Environmental engineering.
- Subject Added Entry-Topical Term
- Transportation.
- Subject Added Entry-Topical Term
- Pedagogy.
- Added Entry-Corporate Name
- Stanford University.
- Host Item Entry
- Dissertations Abstracts International. 86-03B.
- Electronic Location and Access
- This material can be viewed after logging in.
- Control Number
- joongbu:655232