Efficient 3D Vision for Autonomous Driving.
Material Type  
 Dissertation (Thesis)
Control Number  
0017163784
International Standard Book Number  
9798384450580
Dewey Decimal Classification Number  
621.3
Main Entry-Personal Name  
Jacobson, Philip.
Publication, Distribution, etc. (Imprint)  
[S.l.] : University of California, Berkeley., 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
107 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-03, Section: B.
General Note  
Advisor: Wu, Ming C.
Dissertation Note  
Thesis (Ph.D.)--University of California, Berkeley, 2024.
Summary, Etc.  
Self-driving vehicles have long been envisioned as a massive leap forward in transportation technology. Although several efforts to develop fully autonomous vehicles are underway in both industry and academia, none has yet achieved the promise of full self-driving. Among the challenges in building autonomous software for self-driving cars, one of the most prominent is perception, the ability of the vehicle to sense the world around it. To meet the requirements for practical deployment on autonomous vehicles, perception systems must satisfy four key metrics of efficiency: accuracy, low latency, reasonable compute hardware, and training-data efficiency.

In this dissertation, we introduce novel approaches to AV perception that address these four efficiency metrics. Four major new perception schemes are presented.

In Chapter 2, we consider a combined hardware/algorithms approach to perception that accelerates training on limited compute hardware. We introduce a system based on the principles of delayed-feedback reservoir computing, implemented with an optoelectronic delay system. To tailor this approach to computer vision tasks, we combine it with high-speed digital preprocessing through untrained convolutional layers, which generate randomized feature maps that are then circulated through the reservoir. We experimentally validate the approach on the classic MNIST handwritten-digit recognition task and achieve performance on par with a digitally trained convolutional neural network, while achieving a training-time speed-up of up to 10x.

In Chapter 3, we consider 3D object detection in autonomous driving settings, focusing on efficient LiDAR-camera fusion. We introduce a novel sensor fusion approach, dubbed Center Feature Fusion, which fuses camera and LiDAR deep features in the bird's-eye-view space. To enable low-latency fusion, we propose a sparse feature fusion that projects only a set of identified key camera features to the bird's-eye view. As a result, we achieve performance on par with competing sensor fusion approaches while reducing runtime latency by several times.

In Chapter 4, we consider 3D object detection from the data-efficiency angle, aiming to reduce the amount of labeled data needed to train the computer vision models required for autonomous vehicles. We introduce doubly-robust self-training, a novel generalized approach to semi-supervised learning. We provide theoretical analysis demonstrating its superiority over standard self-training regardless of teacher model quality, as well as experimental analysis on both image classification and object detection. For both vision tasks, we achieve performance superior to the self-training baseline with no extra computational cost.

In Chapter 5, we continue exploring semi-supervised 3D object detection by leveraging the motion-forecasting component of the autonomy stack to improve perception models. We introduce our novel algorithm, TrajSSL, which uses a pre-trained prediction model to generate a set of synthetic labels that enhance the training of a student detector model. The generated synthetic labels are used to establish temporal consistency, filtering out low-quality pseudo-labels during training while simultaneously compensating for missing pseudo-labels. TrajSSL outperforms the state of the art for semi-supervised 3D object detection across a wide variety of scenarios.
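As a rough illustration of the sparse fusion idea summarized for Chapter 3, the Python sketch below shows the general pattern of projecting only the top-k key camera features into a bird's-eye-view grid, under assumed inputs (a camera feature map, a center heatmap used to select pixels, per-pixel depth estimates, camera intrinsics, and a camera-to-ego transform). It is a sketch of sparse camera-to-BEV projection, not the dissertation's Center Feature Fusion implementation; all names and parameters are hypothetical.

```python
import numpy as np

def sparse_camera_to_bev(feat, heatmap, depth, K, cam_to_ego,
                         k=100, bev_range=50.0, bev_size=200):
    """Project the top-k camera feature vectors into a sparse BEV grid.

    feat:       (C, H, W) camera feature map
    heatmap:    (H, W) predicted object-center scores (used to pick key pixels)
    depth:      (H, W) estimated metric depth per pixel
    K:          (3, 3) camera intrinsics
    cam_to_ego: (4, 4) camera-to-ego transform
    Returns a (C, bev_size, bev_size) BEV feature map that is zero everywhere
    except at the cells hit by the k projected camera features.
    """
    C, H, W = feat.shape

    # 1. Pick the k highest-scoring pixels in the center heatmap.
    flat_idx = np.argsort(heatmap.ravel())[-k:]
    vs, us = np.unravel_index(flat_idx, (H, W))        # pixel rows / cols

    # 2. Lift those pixels to 3D camera coordinates using their depth.
    d = depth[vs, us]
    pix = np.stack([us * d, vs * d, d], axis=0)        # (3, k)
    cam_pts = np.linalg.inv(K) @ pix                   # (3, k)

    # 3. Transform the lifted points into the ego (BEV) frame.
    cam_pts_h = np.vstack([cam_pts, np.ones((1, k))])  # homogeneous coords
    ego_pts = (cam_to_ego @ cam_pts_h)[:3]             # (3, k)

    # 4. Scatter the selected feature vectors into a sparse BEV grid.
    bev = np.zeros((C, bev_size, bev_size), dtype=feat.dtype)
    cells_per_meter = bev_size / (2.0 * bev_range)
    xs = ((ego_pts[0] + bev_range) * cells_per_meter).astype(int)
    ys = ((ego_pts[1] + bev_range) * cells_per_meter).astype(int)
    valid = (xs >= 0) & (xs < bev_size) & (ys >= 0) & (ys < bev_size)
    bev[:, ys[valid], xs[valid]] = feat[:, vs[valid], us[valid]]
    return bev
```

In a full pipeline, the resulting sparse camera BEV features would typically be concatenated with LiDAR BEV features before the detection head; k, bev_range, and bev_size above are illustrative values only.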
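The Chapter 4 summary refers to a doubly-robust self-training objective. Below is a minimal sketch of one common doubly-robust form, assuming N total training examples with teacher pseudo-labels, of which the subset D_lab (size n) carries ground-truth labels, a student model f_theta, and a per-example loss; this is an illustrative form and may differ from the dissertation's exact formulation.

```latex
% Illustrative doubly-robust self-training objective (assumed form, not
% necessarily the dissertation's exact loss). The first term trains on
% pseudo-labels \hat{y}_i over all N examples; the two correction terms,
% computed on the n labeled examples, subtract the pseudo-label loss and
% add the true-label loss, so the objective remains well behaved even when
% the teacher's pseudo-labels are poor.
\begin{equation*}
\mathcal{L}_{\mathrm{DR}}(\theta)
  = \frac{1}{N}\sum_{i=1}^{N} \ell\!\left(f_\theta(x_i), \hat{y}_i\right)
  - \frac{1}{n}\sum_{i \in \mathcal{D}_{\mathrm{lab}}} \ell\!\left(f_\theta(x_i), \hat{y}_i\right)
  + \frac{1}{n}\sum_{i \in \mathcal{D}_{\mathrm{lab}}} \ell\!\left(f_\theta(x_i), y_i\right)
\end{equation*}
```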
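The Chapter 5 summary describes using forecast-generated synthetic labels to check the temporal consistency of teacher pseudo-labels and to fill in missing ones. The sketch below illustrates that general filtering-and-completion step with a simple center-distance matching rule; the function name, thresholds, and matching rule are assumptions for illustration and are not TrajSSL's actual algorithm.

```python
import numpy as np

def fuse_pseudo_labels(pseudo_boxes, pseudo_scores, forecast_boxes,
                       match_thresh=2.0, keep_thresh=0.3):
    """Combine teacher pseudo-labels with trajectory-forecast boxes.

    pseudo_boxes:   (N, 3) object centers (x, y, z) predicted by the teacher
    pseudo_scores:  (N,)   teacher confidence scores
    forecast_boxes: (M, 3) object centers propagated to this frame by a
                    pre-trained prediction model (synthetic labels)
    Returns the centers kept for student training: pseudo-labels confirmed
    by a forecast (or confident enough on their own), plus forecast boxes
    that have no nearby pseudo-label (to compensate for misses).
    """
    if len(pseudo_boxes) and len(forecast_boxes):
        # Pairwise 2D center distances between pseudo-labels and forecasts.
        dists = np.linalg.norm(
            pseudo_boxes[:, None, :2] - forecast_boxes[None, :, :2], axis=-1)
        matched = dists.min(axis=1) < match_thresh   # per pseudo-label
        covered = dists.min(axis=0) < match_thresh   # per forecast box
    else:
        matched = np.zeros(len(pseudo_boxes), dtype=bool)
        covered = np.zeros(len(forecast_boxes), dtype=bool)

    kept = []
    # Keep pseudo-labels that are temporally consistent or highly confident.
    for i, box in enumerate(pseudo_boxes):
        if matched[i] or pseudo_scores[i] >= keep_thresh:
            kept.append(box)
    # Add forecast boxes where the teacher produced no pseudo-label at all.
    for j, box in enumerate(forecast_boxes):
        if not covered[j]:
            kept.append(box)
    return np.array(kept) if kept else np.zeros((0, 3))
```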
Subject Added Entry-Topical Term  
Electrical engineering.
Subject Added Entry-Topical Term  
Computer engineering.
Subject Added Entry-Topical Term  
Automotive engineering.
Index Term-Uncontrolled  
Self-driving vehicles
Index Term-Uncontrolled  
Transportation technology
Index Term-Uncontrolled  
Autonomous vehicles
Index Term-Uncontrolled  
Autonomous software
Index Term-Uncontrolled  
Novel algorithm
Added Entry-Corporate Name  
University of California, Berkeley Electrical Engineering & Computer Sciences
Host Item Entry  
Dissertations Abstracts International. 86-03B.
Electronic Location and Access  
This resource is available to view after logging in.
Control Number  
joongbu:658327
Holdings  
Reg. No.: TQ0034649
Call No.: T
Location: Full-text resource
Status: Available to view / print
Loan Information: Available to view / print