Theoretical Foundations and Applications of Integrated Learning Architectures for Graphs.
Material Type  
 Thesis/Dissertation
Control Number  
0017162972
International Standard Book Number  
9798384342045
Dewey Decimal Classification Number  
658
Main Entry-Personal Name  
Dax, Victoria Magdalena.
Publication, Distribution, etc. (Imprint)  
[S.l.] : Stanford University, 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
116 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-03, Section: A.
General Note  
Advisor: Kochenderfer, Mykel.
Dissertation Note  
Thesis (Ph.D.)--Stanford University, 2024.
Summary, Etc.  
Graph Neural Networks (GNNs) have become important in the machine learning landscape because of their ability to model complex, structured data. This thesis presents approaches for blending GNNs with other deep learning methods, such as the decision-making capabilities of reinforcement learning (RL) or the generative abilities of variational auto-encoders (VAEs), to enhance the practical functionality of GNNs and to expand their applicability across domains.

The fundamental challenge addressed in this thesis is overcoming the inherent difficulties of integrating GNNs with other methodologies. GNNs excel at processing structured data but face issues such as oversmoothing, particularly as network depth increases. VAEs, on the other hand, offer flexible generative abilities but have their own training and scalability challenges, and while recurrent neural networks (RNNs) excel at processing temporal patterns, they introduce concerns of catastrophic forgetting and vanishing gradients. The solutions explored here involve novel combinations of these diverse techniques, aiming to leverage their strengths while mitigating their weaknesses.

We start by exploring the theoretical properties of GNNs, especially their generalization behavior, before transitioning to practical applications. First, we demonstrate an enhancement of GNNs by integrating RNNs for advanced time-series predictions in interconnected systems. Then, we combine GNNs with VAEs to improve out-of-distribution generalizability and model interpretability through disentanglement of the embedding space in motion prediction tasks. We conclude by discussing the use of GNNs in deep RL techniques, specifically for combinatorial optimization tasks.
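The abstract's first application chapter describes integrating GNNs with RNNs for time-series prediction on interconnected systems. As a rough illustration of that general pattern only (not the architecture developed in the thesis), the following PyTorch sketch encodes each time step's node features with a simple graph convolution and then runs a GRU over the per-node embedding sequence; all class and parameter names here are hypothetical.

```python
# Illustrative sketch of a generic GNN + RNN pattern, not the thesis's model.
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    """One dense graph-convolution step: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj_norm: normalized adjacency (num_nodes, num_nodes)
        return torch.relu(adj_norm @ self.linear(x))


class GraphSequenceModel(nn.Module):
    """Encode each graph snapshot with a GNN, then run a GRU over time."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.gnn = SimpleGraphConv(in_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x_seq: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x_seq: (num_steps, num_nodes, in_dim)
        embeddings = torch.stack([self.gnn(x_t, adj_norm) for x_t in x_seq])
        # GRU expects (batch, seq, feature); treat each node as a batch element.
        out, _ = self.gru(embeddings.transpose(0, 1))
        return self.head(out[:, -1])  # per-node prediction from the last time step


if __name__ == "__main__":
    num_nodes, num_steps, in_dim = 5, 8, 3
    adj = torch.eye(num_nodes)  # toy graph: self-loops only
    x_seq = torch.randn(num_steps, num_nodes, in_dim)
    model = GraphSequenceModel(in_dim, hidden_dim=16, out_dim=1)
    print(model(x_seq, adj).shape)  # torch.Size([5, 1])
```

In this toy pattern the graph layer captures spatial structure at each step while the GRU captures temporal dependencies, which mirrors, at a very high level, the GNN-plus-RNN combination mentioned in the abstract.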
Subject Added Entry-Topical Term  
Decision making.
Subject Added Entry-Topical Term  
Neural networks.
Subject Added Entry-Topical Term  
Web studies.
Added Entry-Corporate Name  
Stanford University.
Host Item Entry  
Dissertations Abstracts International. 86-03A.
Electronic Location and Access  
This material can be viewed after logging in.
Control Number  
joongbu:657992

Holdings
Reg No.  Call No.  Location  Status  Lend Info
TQ0034310  T  Full-text material  Available to view/print  Available to view/print
