Accelerating Machine Learning Algorithms with Adaptive Sampling [electronic resource]
Material Type  
 Thesis/Dissertation
Control Number  
0016934539
International Standard Book Number  
9798380485944
Dewey Decimal Classification Number  
510
Main Entry-Personal Name  
Tiwari, Mohit.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Stanford University, 2023
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2023
Physical Description  
1 online resource (114 p.)
General Note  
Source: Dissertations Abstracts International, Volume: 85-04, Section: B.
General Note  
Advisor: Piech, Christopher; Thrun, Sebastian; Valiant, Gregory.
Dissertation Note  
Thesis (Ph.D.)--Stanford University, 2023.
Restrictions on Access Note  
This item must not be sold to any third party vendors.
Summary, Etc.  
The era of huge data necessitates highly efficient machine learning algorithms. Many common machine learning algorithms, however, rely on computationally intensive subroutines that are prohibitively expensive on large datasets. Oftentimes, existing techniques subsample the data or use other methods to improve computational efficiency, at the expense of incurring some approximation error. This thesis demonstrates that it is often sufficient, instead, to substitute computationally intensive subroutines with a special kind of randomized counterpart that results in almost no degradation in quality.

The results in this thesis are based on techniques from the adaptive sampling literature. Chapter 1 begins with an introduction to a specific adaptive sampling problem: that of best-arm identification in multi-armed bandits. We first provide a formal description of the setting and the best-arm identification problem. We then present a general algorithm, called successive elimination, for solving the best-arm identification problem.

The techniques developed in Chapter 1 will be applied to different problems in Chapters 2, 3, and 4. In Chapter 2, we discuss how the k-medoids clustering problem can be reduced to a sequence of best-arm identification problems. We use this observation to present a new algorithm, based on successive elimination, that matches the prior state of the art in clustering quality but reaches the same solutions much faster. Our algorithm achieves an O(n/log n) reduction in sample complexity over the prior state of the art, where n is the size of the dataset, under general assumptions on the data-generating distribution.

In Chapter 3, we analyze the problem of training tree-based models. The majority of the training time for such models is spent splitting each node of the tree, i.e., determining the feature and corresponding threshold at which to split each node. We show that the node-splitting subroutine can be reduced to a best-arm identification problem and present a state-of-the-art algorithm for training trees. Our algorithm depends only on the relative quality of each possible split, rather than explicitly depending on the size of the training dataset, and reduces the explicit dependence on dataset size n from O(n), for the most commonly used prior algorithm, to O(1). Our algorithm applies generally to many tree-based models, such as Random Forests and XGBoost.

In Chapter 4, we study the Maximum Inner Product Search problem. We observe that, as with the k-medoids and node-splitting problems, the Maximum Inner Product Search problem can be reduced to a best-arm identification problem. Armed with this observation, we present a novel algorithm for the Maximum Inner Product Search problem in high dimensions. Our algorithm reduces the explicit scaling with d, the dimensionality of the dataset, from O(√d) to O(1) under reasonable assumptions on the data. Our algorithm has several advantages: it requires no preprocessing of the data, naturally handles the addition or removal of data points, and includes a hyperparameter to trade off accuracy and efficiency.

Chapter 5 concludes this thesis with a summary of its contributions and possible directions for future work.
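The successive-elimination idea that the abstract builds on can be illustrated with a short sketch. This is a minimal, generic version (not the thesis's implementation): arms are modeled as callables returning bounded rewards in [0, 1], the batch size and Hoeffding-style confidence radius are simplifying assumptions, and arms whose upper confidence bound falls below the leader's lower confidence bound are dropped.

```python
import math
import random

def successive_elimination(arms, delta=0.05, batch=32, max_pulls=100_000):
    """Return the index of the best arm with confidence 1 - delta.

    `arms` is a list of zero-argument callables, each returning a noisy
    reward in [0, 1]. Arms whose upper confidence bound falls below the
    current leader's lower confidence bound are eliminated, so sampling
    effort concentrates on the arms that are still plausibly best.
    """
    active = set(range(len(arms)))
    pulls = [0] * len(arms)
    means = [0.0] * len(arms)
    total = 0

    def radius(i):
        # Hoeffding-style confidence radius for rewards bounded in [0, 1].
        return math.sqrt(math.log(2 * len(arms) * pulls[i] / delta) / (2 * pulls[i]))

    while len(active) > 1 and total < max_pulls:
        # Pull every surviving arm a small batch of times.
        for i in active:
            for _ in range(batch):
                r = arms[i]()
                pulls[i] += 1
                total += 1
                means[i] += (r - means[i]) / pulls[i]  # running mean
        # Eliminate arms that are provably worse than the current leader.
        leader = max(active, key=lambda i: means[i])
        lcb_leader = means[leader] - radius(leader)
        active = {i for i in active if means[i] + radius(i) >= lcb_leader}

    return max(active, key=lambda i: means[i])

# Usage: three Bernoulli arms with success probabilities 0.2, 0.5, 0.8.
random.seed(0)
probs = [0.2, 0.5, 0.8]
bandit = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in probs]
print(successive_elimination(bandit))
```

The point of the reductions in Chapters 2-4 is that once a subroutine (choosing a medoid, a split, or a maximizing inner product) is phrased as picking the best of several "arms" from noisy samples, a routine like the one above spends most of its samples only on the few candidates that remain plausible, which is where the sample-complexity savings come from.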
Subject Added Entry-Topical Term  
Sample size.
Subject Added Entry-Topical Term  
Computer science.
Added Entry-Corporate Name  
Stanford University.
Host Item Entry  
Dissertations Abstracts International. 85-04B.
Host Item Entry  
Dissertations Abstracts International
Electronic Location and Access  
This material is available after logging in.
Control Number  
joongbu:642248
Holdings  
Registration No.: TQ0028165; Call No.: T; Location: Online resource; Status: Available for viewing/printing
