Effective Differentially Private Deep Learning.
Content Information
Material Type  
 Thesis/Dissertation
Control Number  
0017163719
International Standard Book Number  
9798342113472
Dewey Decimal Classification Number  
006.35
Main Entry-Personal Name  
Li, Xuechen.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Stanford University., 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
154 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-04, Section: B.
General Note  
Advisor: Guestrin, Carlos; Hashimoto, Tatsunori.
Dissertation Note  
Thesis (Ph.D.)--Stanford University, 2024.
Summary, Etc.  
Deep learning models trained on sensitive data can leak privacy when deployed. For example, language models trained with standard algorithms can regurgitate training data and reveal membership information of data contributors. Differential Privacy (DP) is a formal guarantee that provably limits privacy leakage and has become the gold standard for privacy-preserving statistical data analysis. However, most approaches for training deep learning models with DP were computationally intensive and incurred substantial task performance penalties on the resulting model. This thesis presents improved techniques for deep learning with DP that are much more efficient and performant. These techniques have seen growing interest in industry and have been used in differentially private machine learning deployments at major technology companies, protecting users' privacy and providing substantial computational savings. We show that Differentially Private Stochastic Gradient Descent (DP-SGD), when properly applied to fine-tune pretrained models of increasing size and quality, produces consistently better privacy-utility tradeoffs. DP-SGD is much more memory-intensive and slower than standard training algorithms. We present algorithmic and implementation modifications of DP-SGD that render it as efficient as standard training for Transformer models. Our empirical findings challenge the prevailing belief that DP-SGD performs poorly when optimizing high-dimensional objectives. To understand and explain these empirical results, we additionally present novel theoretical analyses of toy models that resemble large-scale fine-tuning and show that DP-SGD has dimension-independent bounds for a class of unconstrained convex optimization problems.
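
As a point of reference for the abstract, below is a minimal sketch of a single DP-SGD update step (per-example gradient clipping followed by calibrated Gaussian noise, in the style of Abadi et al., 2016), written in PyTorch. It illustrates why DP-SGD is more memory- and compute-intensive than standard SGD: gradients must be computed and clipped for each example individually before they are aggregated. The model, clipping norm, and noise multiplier are illustrative placeholders, not values from the thesis.

import torch
import torch.nn as nn

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    # Accumulator for the sum of clipped per-example gradients.
    summed = [torch.zeros_like(p) for p in params]

    # The per-example loop is the costly part relative to standard SGD.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip each example's gradient to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    batch_size = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise with std = noise_multiplier * clip_norm,
            # added to the summed gradient before averaging.
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / batch_size) * (s + noise))

# Toy usage on a small linear classifier (hypothetical data shapes):
model = nn.Linear(4, 2)
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))
dp_sgd_step(model, nn.CrossEntropyLoss(), x, y)
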
Subject Added Entry-Topical Term  
Text categorization.
Subject Added Entry-Topical Term  
Deep learning.
Subject Added Entry-Topical Term  
Fines & penalties.
Subject Added Entry-Topical Term  
Information processing.
Subject Added Entry-Topical Term  
Privacy.
Added Entry-Corporate Name  
Stanford University.
Host Item Entry  
Dissertations Abstracts International. 86-04B.
Electronic Location and Access  
This material can be viewed after logging in.
Control Number  
joongbu:657570
Holdings
Reg No.      Call No.   Location             Status
TQ0033788    T          Full-text material   Available for viewing/printing