Eye Gaze for Intelligent Driving.

Detailed Information

Material Type  
 Dissertation/Thesis
Control Number  
0017164153
International Standard Book Number  
9798896073161
Dewey Decimal Classification Number  
629.8
Main Entry-Personal Name  
Biswas, Abhijat.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Carnegie Mellon University., 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
121 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-04, Section: A.
General Note  
Advisor: Admoni, Henny.
Dissertation Note  
Thesis (Ph.D.)--Carnegie Mellon University, 2024.
Summary, Etc.  
Intelligent vehicles have been proposed as one path to increasing traffic safety and reducing on-road crashes. Driving "intelligence" today takes many forms, ranging from simple blind-spot occupancy or forward collision warnings to distance-aware cruise control and all the way to full driving autonomy in certain situations. These methods are primarily outward-facing and operate on information about the state of the vehicle and surrounding traffic elements. However, another, less explored domain of intelligence is cabin-facing: modeling information about the driver's cognitive states. In this thesis, I investigate the utility of a signal that can help us achieve cabin-facing intelligence: driver eye gaze. Eye gaze allows us to infer drivers' internal cognitive states, and we explore how this can improve both autonomous driving methods and intelligent driving assistance. To enable this research, I first contribute DReyeVR, an open-source virtual reality driving simulator designed with behavioral and interaction research priorities in mind but built in the same experimental environments used by vehicular autonomy researchers, effectively bridging the two fields. I show how DReyeVR can be used to conduct psychophysical experiments by designing one to characterize the extent and dynamics of driver peripheral vision. Making good on the promise of bridging behavioral and autonomy research, I show how similar naturalistic driver gaze data can be used to provide additional supervision to autonomous driving agents trained via imitation learning, mitigating causal confusion.
I then turn to the assistive domain. First, I describe a study of false positives in a real-world dataset of forward collision warnings (FCWs) deployed in vehicles during a previous longitudinal study in the wild. Deploying FCWs purely on the basis of scene physics, without accounting for driver attention, overwhelms drivers with redundant alerts. I demonstrate a warning strategy that accounts for driver attention and explicitly models the driver's hypothesis of other vehicles' behavior, improving both true positive and true negative rates. Finally, I propose the shared awareness paradigm, a framework for continuously supporting driver situational awareness (SA) with an intelligent perception system. To build the driver SA model, we first collect data using a novel SA labeling method, obtaining continuous, per-object driver awareness labels along with gaze, driving actions, and the simulated world state. I use these data to train a learned model that predicts drivers' situational awareness of traffic elements given a history of their gaze and the scene context. In parallel, we reason about the importance of objects in a counterfactual fashion by studying the impact of perturbing object actions on the ego vehicle's motion plan. Finally, I put it all together in an offline demonstration on replayed simulated drives, showing how we could alert drivers to important objects they are unaware of.
I conclude by reflecting on how eye gaze can be used to model the internal cognitive states of human drivers, in service of improving both vehicle autonomy and driving assistance.
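As an illustration of the attention-aware warning strategy the summary describes, here is a minimal sketch in Python. It assumes hypothetical inputs (a physics-based time-to-collision estimate and the driver's most recent gaze fixation on each tracked object); the names TrackedObject and should_warn, the thresholds, and the gating rule are illustrative assumptions, not details taken from the thesis.

from dataclasses import dataclass
import time

# Hypothetical sketch of an attention-aware forward collision warning (FCW)
# gate: warn only when scene physics indicates danger AND the driver's gaze
# suggests they are unaware of the hazard. All field names and thresholds
# are illustrative assumptions.

@dataclass
class TrackedObject:
    object_id: int
    time_to_collision_s: float   # from a physics-based threat estimator
    last_fixation_time_s: float  # most recent driver fixation on this object

def should_warn(obj: TrackedObject,
                now_s: float,
                ttc_threshold_s: float = 2.5,
                awareness_window_s: float = 4.0) -> bool:
    """Fire an FCW only if the object is physically threatening and the
    driver has not fixated on it recently (i.e., is likely unaware)."""
    physically_dangerous = obj.time_to_collision_s < ttc_threshold_s
    recently_seen = (now_s - obj.last_fixation_time_s) < awareness_window_s
    return physically_dangerous and not recently_seen

# Example: a lead vehicle with TTC 1.8 s that the driver glanced at 6 s ago
lead = TrackedObject(object_id=7, time_to_collision_s=1.8,
                     last_fixation_time_s=time.time() - 6.0)
print(should_warn(lead, now_s=time.time()))  # True: dangerous and unseen

Suppressing the alert when the hazard was recently fixated is what reduces the redundant-alert false positives described above, while keeping warnings for genuinely unseen threats.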
Subject Added Entry-Topical Term  
Robotics.
Subject Added Entry-Topical Term  
Engineering.
Subject Added Entry-Topical Term  
Computer science.
Subject Added Entry-Topical Term  
Transportation.
Index Term-Uncontrolled  
Autonomous driving
Index Term-Uncontrolled  
Driving assistance
Index Term-Uncontrolled  
Eye gaze
Index Term-Uncontrolled  
Intelligent driving
Added Entry-Corporate Name  
Carnegie Mellon University Robotics Institute
Host Item Entry  
Dissertations Abstracts International. 86-04A.
Electronic Location and Access  
This material is available after logging in.
Control Number  
joongbu:658605

MARC

■008250224s2024        us  ||||||||||||||c||eng  d
■001000017164153
■00520250211152837
■006m          o    d                
■007cr#unu||||||||
■020  ▼a9798896073161
■035  ▼a(MiAaPQ)AAI31561939
■040  ▼aMiAaPQ▼cMiAaPQ
■0820 ▼a629.8
■1001 ▼aBiswas, Abhijat.
■24510▼aEye Gaze for Intelligent Driving.
■260  ▼a[S.l.]▼bCarnegie Mellon University.▼c2024
■260 1▼aAnn Arbor▼bProQuest Dissertations & Theses▼c2024
■300  ▼a121 p.
■500  ▼aSource: Dissertations Abstracts International, Volume: 86-04, Section: A.
■500  ▼aAdvisor: Admoni, Henny.
■5021 ▼aThesis (Ph.D.)--Carnegie Mellon University, 2024.
■520  ▼aIntelligent vehicles have been proposed as one path to increasing traffic safety and reducing on-road crashes. Driving "intelligence" today takes many forms, ranging from simple blind-spot occupancy or forward collision warnings to distance-aware cruise control and all the way to full driving autonomy in certain situations. These methods are primarily outward-facing and operate on information about the state of the vehicle and surrounding traffic elements. However, another, less explored domain of intelligence is cabin-facing: modeling information about the driver's cognitive states. In this thesis, I investigate the utility of a signal that can help us achieve cabin-facing intelligence: driver eye gaze. Eye gaze allows us to infer drivers' internal cognitive states, and we explore how this can improve both autonomous driving methods and intelligent driving assistance. To enable this research, I first contribute DReyeVR, an open-source virtual reality driving simulator designed with behavioral and interaction research priorities in mind but built in the same experimental environments used by vehicular autonomy researchers, effectively bridging the two fields. I show how DReyeVR can be used to conduct psychophysical experiments by designing one to characterize the extent and dynamics of driver peripheral vision. Making good on the promise of bridging behavioral and autonomy research, I show how similar naturalistic driver gaze data can be used to provide additional supervision to autonomous driving agents trained via imitation learning, mitigating causal confusion. I then turn to the assistive domain. First, I describe a study of false positives in a real-world dataset of forward collision warnings (FCWs) deployed in vehicles during a previous longitudinal study in the wild. Deploying FCWs purely on the basis of scene physics, without accounting for driver attention, overwhelms drivers with redundant alerts. I demonstrate a warning strategy that accounts for driver attention and explicitly models the driver's hypothesis of other vehicles' behavior, improving both true positive and true negative rates. Finally, I propose the shared awareness paradigm, a framework for continuously supporting driver situational awareness (SA) with an intelligent perception system. To build the driver SA model, we first collect data using a novel SA labeling method, obtaining continuous, per-object driver awareness labels along with gaze, driving actions, and the simulated world state. I use these data to train a learned model that predicts drivers' situational awareness of traffic elements given a history of their gaze and the scene context. In parallel, we reason about the importance of objects in a counterfactual fashion by studying the impact of perturbing object actions on the ego vehicle's motion plan. Finally, I put it all together in an offline demonstration on replayed simulated drives, showing how we could alert drivers to important objects they are unaware of. I conclude by reflecting on how eye gaze can be used to model the internal cognitive states of human drivers, in service of improving both vehicle autonomy and driving assistance.
■590  ▼aSchool code: 0041.
■650 4▼aRobotics.
■650 4▼aEngineering.
■650 4▼aComputer science.
■650 4▼aTransportation.
■653  ▼aAutonomous driving
■653  ▼aDriving assistance
■653  ▼aEye gaze
■653  ▼aIntelligent driving
■690  ▼a0800
■690  ▼a0771
■690  ▼a0984
■690  ▼a0537
■690  ▼a0709
■71020▼aCarnegie Mellon University▼bRobotics Institute.
■7730 ▼tDissertations Abstracts International▼g86-04A.
■790  ▼a0041
■791  ▼aPh.D.
■792  ▼a2024
■793  ▼aEnglish
■85640▼uhttp://www.riss.kr/pdu/ddodLink.do?id=T17164153▼nKERIS▼zThe full text of this material is provided by KERIS (Korea Education and Research Information Service).
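For readers working with this record programmatically: the lines above use ■ as a field-tag marker and ▼ as a subfield delimiter, which is this catalog's display convention rather than the binary ISO 2709 MARC exchange format. Below is a minimal parsing sketch under that assumption; the regexes and the function name parse_marc_line are illustrative, not part of any library API.

import re

# Parse display-style MARC lines as shown above: each field starts with
# '■' + 3-digit tag (+ optional indicator characters), and subfields are
# delimited by '▼' + a single-letter code.

FIELD_RE = re.compile(r"■(\d{3})([^▼]*)")
SUBFIELD_RE = re.compile(r"▼(\w)([^▼]+)")

def parse_marc_line(line: str) -> dict:
    m = FIELD_RE.match(line)
    if not m:
        raise ValueError(f"not a MARC display line: {line!r}")
    tag, indicators = m.group(1), m.group(2).strip()
    subfields = {code: value.strip() for code, value in SUBFIELD_RE.findall(line)}
    return {"tag": tag, "indicators": indicators, "subfields": subfields}

record = parse_marc_line("■1001 ▼aBiswas, Abhijat.")
print(record["tag"], record["subfields"]["a"])  # prints: 100 Biswas, Abhijat.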

    Holdings

    Reg No.     Call No.    Location             Status
    TQ0034926   T           Full-text material   Available to view/print
