Matrix Sketching in Optimization.
Material Type  
Thesis/Dissertation
Control Number  
0017162735
International Standard Book Number  
9798342103176
Dewey Decimal Classification Number  
330
Main Entry-Personal Name  
Dexter, Gregory.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Purdue University., 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
199 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-04, Section: B.
General Note  
Advisor: Drineas, Petros; Khanna, Rajiv.
Dissertation Note  
Thesis (Ph.D.)--Purdue University, 2024.
Summary, Etc.  
Continuous optimization is a fundamental topic both in theoretical computer science and in applications of machine learning. Meanwhile, an important idea in the development of modern algorithms is the use of randomness to achieve empirical speedups and improved theoretical runtimes. Stochastic gradient descent (SGD) and matrix-multiplication-time linear program solvers [1] are two important examples of such achievements. Matrix sketching and related ideas provide a theoretical framework for the behavior of the random matrices and vectors that arise in these algorithms, thereby providing a natural way to better understand such randomized algorithms. In this dissertation, we consider three general problems in this area:

1. Linear Programming: Interior point methods (IPMs) are a common approach for solving linear programs (LPs) with strong theoretical guarantees and solid empirical performance. The time complexity of these methods is dominated by the cost of solving a linear system of equations at each iteration. In common applications of linear programming, particularly in machine learning and scientific computing, the size of this linear system can become prohibitively large, requiring the use of iterative solvers, which provide an approximate solution to the linear system. However, approximately solving the linear system at each iteration of an IPM invalidates the theoretical guarantees of common IPM analyses. To remedy this, we theoretically and empirically analyze (slightly modified) predictor-corrector IPMs when using approximate linear solvers: our approach guarantees that, when certain conditions are satisfied, the number of IPM iterations does not increase and that the final solution remains feasible. We also provide practical instantiations of approximate linear solvers that satisfy these conditions for special classes of constraint matrices using randomized linear algebra.

2. Eigenvalue Estimation: We study the problem of approximating the eigenspectrum of a symmetric matrix A ∈ ℝ^(n×n) with bounded entries (i.e., ‖A‖∞ ≤ 1). We present a simple sublinear time algorithm that approximates all eigenvalues of A up to additive error ±εn using those of a randomly sampled Õ(1/ε⁴) × Õ(1/ε⁴) principal submatrix. Our result can be viewed as a concentration bound on the complete eigenspectrum of a random submatrix, significantly extending known bounds on just the singular values (the magnitudes of the eigenvalues). We give improved error bounds of ±ε√(nnz(A)) and ±ε‖A‖_F when the rows of A can be sampled with probabilities proportional to their sparsities or their squared ℓ2 norms, respectively. Here nnz(A) is the number of non-zero entries in A and ‖A‖_F is its Frobenius norm. Even for the strictly easier problems of approximating the singular values or testing the existence of large negative eigenvalues [2], our results are the first that take advantage of non-uniform sampling to give improved error bounds. From a technical perspective, our results require several new eigenvalue concentration and perturbation bounds for matrices with bounded entries. Our non-uniform sampling bounds require a new algorithmic approach, which judiciously zeroes out entries of a randomly sampled submatrix to reduce variance, before computing the eigenvalues of that submatrix as estimates for those of A. We complement our theoretical results with numerical simulations, which demonstrate the effectiveness of our algorithms in practice.
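A minimal sketch of the setting in item 1 above, not the dissertation's predictor-corrector algorithm: each iteration of a standard primal-dual IPM solves a normal-equations system of the form (A D² Aᵀ) Δy = r, and an iterative method such as conjugate gradient returns only an approximate solution, which is exactly the inexact-solve regime analyzed in the abstract. The function name, the iteration cap, and the toy instance below are assumptions of this illustration.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def approx_newton_step(A, d, r, maxiter=200):
    # Approximately solve (A diag(d)^2 A^T) dy = r with conjugate gradient.
    # Capping the iteration count yields an inexact solve, the setting the
    # dissertation's modified IPM analysis is designed to tolerate.
    m, n = A.shape
    d2 = d ** 2  # squared primal-dual scaling, e.g. d = sqrt(x / s) in an IPM
    op = LinearOperator((m, m), matvec=lambda y: A @ (d2 * (A.T @ y)))
    dy, info = cg(op, r, maxiter=maxiter)  # info == 0 means CG converged
    return dy, info

# Toy usage on a random full-row-rank instance (so A D^2 A^T is positive definite):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
d = rng.random(50) + 0.1
r = rng.standard_normal(20)
dy, info = approx_newton_step(A, d, r)
print(info, np.linalg.norm(A @ (d**2 * (A.T @ dy)) - r))  # residual of the inexact solve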
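Item 2 describes an algorithm concrete enough to sketch directly: sample a uniformly random principal submatrix, compute its eigenvalues, and rescale them by n/s to estimate eigenvalues of A, matching the abstract's ±εn additive-error guarantee for bounded-entry matrices. This is an illustrative sketch under assumed parameters (the submatrix size s and the demo matrix are choices of this example), and it omits the variance-reducing zeroing step used for the non-uniform sampling bounds.

import numpy as np

def sampled_spectrum_estimate(A, s, seed=None):
    # Eigenvalues of a uniformly random s x s principal submatrix of the
    # symmetric matrix A, rescaled by n/s; for bounded-entry A these
    # concentrate around eigenvalues of A up to additive error on the
    # order of eps * n (sketch of the uniform-sampling result above).
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.choice(n, size=s, replace=False)  # uniform row/column sample
    sub = A[np.ix_(idx, idx)]                   # principal submatrix A_S
    return (n / s) * np.linalg.eigvalsh(sub)    # rescaled spectrum, ascending

# Toy usage: a random symmetric sign matrix, entries in {-1, +1}, so ||A||_inf <= 1.
rng = np.random.default_rng(1)
n = 2000
B = np.sign(rng.standard_normal((n, n)))
A = np.triu(B) + np.triu(B, 1).T
est = sampled_spectrum_estimate(A, s=200, seed=2)
true = np.linalg.eigvalsh(A)
print("extreme estimates:", est[0], est[-1])
print("true extremes:    ", true[0], true[-1])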
Subject Added Entry-Topical Term  
Sparsity.
Subject Added Entry-Topical Term  
Computer science.
Subject Added Entry-Topical Term  
Neural networks.
Subject Added Entry-Topical Term  
Probability.
Subject Added Entry-Topical Term  
Linear programming.
Subject Added Entry-Topical Term  
Data science.
Subject Added Entry-Topical Term  
Eigenvalues.
Subject Added Entry-Topical Term  
Optimization algorithms.
Subject Added Entry-Topical Term  
Linear algebra.
Added Entry-Corporate Name  
Purdue University.
Host Item Entry  
Dissertations Abstracts International. 86-04B.
Electronic Location and Access  
This resource can be viewed after logging in.
Control Number  
joongbu:654206

Holdings Information

Holdings
Registration No. | Call No. | Location | Loan Availability | Loan Information
TQ0030128 | T | Full-text resource | Viewable/Printable | Viewable/Printable
