Compiling Deep Learning Kernels to Locality-Aware Dataflow [electronic resource]
Material Type  
 Thesis/Dissertation
Control Number  
0016931988
International Standard Book Number  
9798379651602
Dewey Decimal Classification Number  
005
Main Entry-Personal Name  
Zhao, Tian.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Stanford University, 2023
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2023
Physical Description  
1 online resource (110 p.)
General Note  
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
General Note  
Advisor: Raina, Priyanka; Re, Christopher; Olukotun, Oyekunle.
Dissertation Note  
Thesis (Ph.D.)--Stanford University, 2023.
Restrictions on Access Note  
This item must not be sold to any third party vendors.
Summary, Etc.  
Emerging deep learning applications require unprecedented computation and memory capacity. To accelerate these applications, novel processing systems such as dataflow accelerators strive to exploit multiple dimensions of parallelism within deep learning models, e.g., tensor and pipeline parallelism. Although these systems provide ultrahigh performance when fully utilized, compiling deep learning applications to harness their computation capability remains a challenging problem. With recent advances in domain-specific programming languages, accelerator design, and machine learning, we now have the potential to better serve the needs of training and evaluating large deep learning applications on dataflow accelerators through algorithm, software, and hardware co-design. In this dissertation, I present the design and development of efficient deep learning optimizations and programming frameworks. I present two frameworks: SpatialRNN, for accelerating recurrent neural network language models on spatial accelerators, and Sigma, for expressing and accelerating high-data-reuse deep learning kernels using reconfigurable dataflow accelerators. Our end-to-end evaluation using Sigma demonstrates a 5.4x speedup over an Nvidia V100 GPU accelerator on kernels spanning financial applications, traditional machine learning, language modeling, and computer vision tasks.
Subject Added Entry-Topical Term  
Programming languages.
Subject Added Entry-Topical Term  
Deep learning.
Subject Added Entry-Topical Term  
Bandwidths.
Subject Added Entry-Topical Term  
Optimization techniques.
Subject Added Entry-Topical Term  
Neural networks.
Subject Added Entry-Topical Term  
Design.
Subject Added Entry-Topical Term  
Keyboards.
Subject Added Entry-Topical Term  
Linear algebra.
Subject Added Entry-Topical Term  
Computer science.
Subject Added Entry-Topical Term  
Mathematics.
Added Entry-Corporate Name  
Stanford University.
Host Item Entry  
Dissertations Abstracts International. 84-12B.
Host Item Entry  
Dissertations Abstracts International
Electronic Location and Access  
This material is available after login.
Control Number  
joongbu:643217

Holdings

Registration No.: TQ0029123
Call No.: T
Collection: Full-text material
Status: Available for viewing/printing
