Tackling Bias Within Computer Vision Models [electronic resource]
- Material Type
- Thesis (Dissertation)
- Control Number
- 0016932236
- International Standard Book Number
- 9798379718015
- Dewey Decimal Classification Number
- 004
- Main Entry-Personal Name
- Ramaswamy, Vikram V.
- Publication, Distribution, etc. (Imprint)
- [S.l.] : Princeton University, 2023
- Publication, Distribution, etc. (Imprint)
- Ann Arbor : ProQuest Dissertations & Theses, 2023
- Physical Description
- 1 online resource (198 p.)
- General Note
- Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
- General Note
- Advisor: Russakovsky, Olga.
- Dissertation Note
- Thesis (Ph.D.)--Princeton University, 2023.
- Restrictions on Access Note
- This item must not be sold to any third-party vendors.
- Summary, Etc.
- Over the past decade, the rapid increase in the capabilities of computer vision models has led to their use in a variety of real-world applications, from self-driving cars to medical diagnosis. However, there is increasing concern about the fairness and transparency of these models. In this thesis, we tackle the issue of bias within these models along two different axes. First, we consider the datasets that these models are trained on. We use two different methods to create a more balanced training dataset. First, we create a synthetic balanced dataset by sampling strategically from the latent space of a generative network. Next, we explore the potential of creating a dataset through a method other than scraping the internet: we solicit images from workers around the world, creating a dataset that is balanced across different geographical regions. Both techniques are shown to help create models with less bias. Second, we consider methods to improve the interpretability of these models, which can in turn reveal potential biases within the model. We investigate a class of interpretability methods, called concept-based methods, that output explanations for models in terms of human-understandable semantic concepts. We demonstrate the need for more careful development of the datasets used to learn the explanations, as well as of the concepts used within these explanations. We construct a new method that allows users to select a trade-off between the understandability and faithfulness of the explanation. Finally, we discuss how methods that completely explain a model can be developed, and provide heuristics for doing so.
- Subject Added Entry-Topical Term
- Computer science.
- Index Term-Uncontrolled
- Concept-based explanations
- Index Term-Uncontrolled
- ML systems
- Index Term-Uncontrolled
- Computer vision
- Index Term-Uncontrolled
- Medical diagnoses
- Index Term-Uncontrolled
- Real-world applications
- Added Entry-Corporate Name
- Princeton University Computer Science
- Host Item Entry
- Dissertations Abstracts International. 84-12B.
- Host Item Entry
- Dissertations Abstracts International
- Electronic Location and Access
- This material is available after logging in.
- Control Number
- joongbu:639128