Prompt and Circumstance: Investigating the Relationship Between College Writing and Postsecondary Policy.

Detailed Information

Material Type  
 Thesis (Dissertation)
Control Number  
0017164387
International Standard Book Number  
9798384043393
Dewey Decimal Classification Number  
401
Main Entry-Personal Name  
Godfrey, Jason Michael.
Publication, Distribution, etc. (Imprint)  
[S.l.] : University of Michigan, 2024
Publication, Distribution, etc. (Imprint)  
Ann Arbor : ProQuest Dissertations & Theses, 2024
Physical Description  
279 p.
General Note  
Source: Dissertations Abstracts International, Volume: 86-03, Section: B.
General Note  
Advisor: Aull, Laura.
Dissertation Note  
Thesis (Ph.D.)--University of Michigan, 2024.
Summary, Etc.  
In US-based postsecondary education, first-year students commonly have their compositional ability consequentially assessed on the basis of standardized tests. As a result, students who score above certain thresholds on ACT, SAT, or AP exams are often placed into honors or remedial courses, receive credit remissions, and/or test out of general education classes such as first-year composition. While the thresholds and applicable tests vary from institution to institution, over 2,000 Title IV schools implement policies based on such tests. However, there is little evidence that the linguistic patterns that correlate with success on timed, high-stakes tests carry forward to college-level writing tasks. Consequently, contemporary composition scholars call for research that centers examinations of student writing itself rather than assessments of writing quality such as standardized tests. This dissertation responds to that call by answering two questions: How do linguistic features observed in college-level writing relate to institutionally sanctioned measures of writing quality? And what are the implications for policy levers based on those measures?

To answer these questions, I leverage a longitudinal corpus (2009-2019) of approximately 47,000 student essays, matched with data on test scores. Together, these data allow me to investigate whether the test scores, implemented as Boolean policy levers, meaningfully distinguish between students who write using measurably distinct linguistic patterns. To measure such distinctions, this study employs natural language processing, incorporating large language models designed for text classification tasks: BERT, RoBERTa, and XLNet. These methods yield a quadratic weighted kappa of 0.43, which indicates that the models classified student essays better than random assignment; nevertheless, the relationship between student writing and test scores remains minimal. Ideally, educational policy that consequentially sorts students into different educational tracks at the most vulnerable point of their college career would bear more than a weak relationship to their college-level performance.

To uncover which linguistic features are most correlated with higher scores, I employ OLS, multiple, and logistic regression. These models find significant differences between the essays of students with high and low test scores. Across most models, students with higher test scores write, on average, fewer clauses per sentence; more prepositions, adverbs, colons, and adjectives; and the same number of personal pronouns. While these findings are statistically significant, they only weakly describe the differences between high- and low-scoring students, such that distinguishing between the essays of students who are near common policy thresholds would be an error-prone task for any human or algorithm. Additionally, while the logistic regression based on the existing policy threshold at the University of Michigan had the greatest explanatory power (pseudo-R² = 0.09), linear regressions based on a normalized ACT-SAT score had more explanatory power (R² = 0.161). While these metrics cannot be directly compared, the difference in their relative strength nonetheless reveals a disparity in goodness of fit, demonstrating that educational policy based on a Boolean threshold from one test is functionally less discriminating than a metric based on multiple measures.

Significance notwithstanding, the overall weak correlation between standardized test scores and college-level writing shows that a timed, high-stakes writing test bears little relationship to writing in other circumstances, including college-level writing tasks. These results evidence the brittleness of these test scores as measures of writing quality and cast doubt on their utility as policy levers.
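The abstract names BERT, RoBERTa, and XLNet as the text-classification models applied to the essay corpus. For orientation only, the sketch below shows how one essay can be scored with a pretrained RoBERTa checkpoint via the Hugging Face transformers library; the checkpoint name, the four score bands, and the sample text are illustrative assumptions, and the dissertation's actual fine-tuning and data are not reproduced here.

```python
# Illustrative sketch only: classify one essay into an ordinal score band
# with a RoBERTa sequence-classification head. The checkpoint "roberta-base"
# and num_labels=4 are assumptions; without fine-tuning on real essays the
# prediction is meaningless and serves only to show the API shape.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=4  # e.g., four hypothetical test-score bands
)
model.eval()

essay = "First-year writing asks students to build an argument from sources."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
predicted_band = int(logits.argmax(dim=-1))
print("predicted score band:", predicted_band)
```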
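The quadratic weighted kappa of 0.43 reported above is an agreement statistic between predicted and actual ordinal labels, where 0 is roughly chance-level agreement and 1 is perfect agreement. A minimal sketch of computing it, assuming scikit-learn and placeholder score bands rather than the dissertation's data:

```python
# Quadratic weighted kappa (QWK) between "true" and predicted score bands.
# The two label lists are placeholders, not results from the dissertation.
from sklearn.metrics import cohen_kappa_score

true_bands = [0, 1, 2, 2, 3, 1, 0, 3]
predicted_bands = [0, 2, 2, 1, 3, 1, 1, 3]

# weights="quadratic" penalizes a disagreement more the further apart the
# two ordinal labels are, which is why QWK is standard for essay scoring.
qwk = cohen_kappa_score(true_bands, predicted_bands, weights="quadratic")
print(f"QWK = {qwk:.2f}")
```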
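The abstract also contrasts a logistic regression on a Boolean placement threshold (pseudo-R² = 0.09) with linear regression on a normalized ACT-SAT score (R² = 0.161). The following sketch shows, on simulated placeholder data, where those two goodness-of-fit figures come from in statsmodels; the feature names and numbers are assumptions and do not reproduce the dissertation's variables or results.

```python
# Sketch: fit (1) a logistic regression of a Boolean threshold and (2) an OLS
# regression of a continuous normalized score on a few linguistic features,
# then read off pseudo-R² and R². All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "clauses_per_sentence": rng.normal(1.8, 0.4, n),
    "prepositions_per_100w": rng.normal(12.0, 2.5, n),
    "adverbs_per_100w": rng.normal(5.0, 1.5, n),
    "norm_test_score": rng.normal(0.0, 1.0, n),  # normalized ACT/SAT scale
})
# Boolean policy lever: above/below an illustrative placement threshold.
df["above_threshold"] = (df["norm_test_score"] > 0.5).astype(int)

X = sm.add_constant(df[["clauses_per_sentence",
                        "prepositions_per_100w",
                        "adverbs_per_100w"]])

logit_res = sm.Logit(df["above_threshold"], X).fit(disp=0)
print("pseudo-R² =", round(logit_res.prsquared, 3))   # McFadden's pseudo-R²

ols_res = sm.OLS(df["norm_test_score"], X).fit()
print("R² =", round(ols_res.rsquared, 3))
```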
Subject Added Entry-Topical Term  
Linguistics.
Subject Added Entry-Topical Term  
Education policy.
Subject Added Entry-Topical Term  
Computer science.
Subject Added Entry-Topical Term  
Educational evaluation.
Index Term-Uncontrolled  
College writing
Index Term-Uncontrolled  
Standardized tests
Index Term-Uncontrolled  
Advanced Placement
Index Term-Uncontrolled  
Scholastic Aptitude Test
Index Term-Uncontrolled  
Natural language processing
Added Entry-Corporate Name  
University of Michigan English & Education
Host Item Entry  
Dissertations Abstracts International. 86-03B.
Electronic Location and Access  
This material can be viewed after logging in.
Control Number  
joongbu:655267

MARC

■008250224s2024        us  ||||||||||||||c||eng  d
■001000017164387
■00520250211152956
■006m          o    d
■007cr#unu||||||||
■020    ▼a9798384043393
■035    ▼a(MiAaPQ)AAI31631201
■035    ▼a(MiAaPQ)umichrackham005695
■040    ▼aMiAaPQ▼cMiAaPQ
■0820  ▼a401
■1001  ▼aGodfrey, Jason Michael.
■24510▼aPrompt and Circumstance: Investigating the Relationship Between College Writing and Postsecondary Policy.
■260    ▼a[S.l.]▼bUniversity of Michigan▼c2024
■260  1▼aAnn Arbor▼bProQuest Dissertations & Theses▼c2024
■300    ▼a279 p.
■500    ▼aSource: Dissertations Abstracts International, Volume: 86-03, Section: B.
■500    ▼aAdvisor: Aull, Laura.
■5021  ▼aThesis (Ph.D.)--University of Michigan, 2024.
■520    ▼aIn US-based postsecondary education, first-year students commonly have their compositional ability consequentially assessed on the basis of standardized tests. As a result, students who score above certain thresholds on ACT, SAT, or AP exams are often placed into honors or remedial courses, receive credit remissions, and/or test out of general education classes such as first-year composition. While the thresholds and applicable tests vary from institution to institution, over 2,000 Title IV schools implement policies based on such tests. However, there is little evidence that the linguistic patterns that correlate with success on timed, high-stakes tests carry forward to college-level writing tasks. Consequently, contemporary composition scholars call for research that centers examinations of student writing itself rather than assessments of writing quality such as standardized tests. This dissertation responds to that call by answering two questions: How do linguistic features observed in college-level writing relate to institutionally sanctioned measures of writing quality? And what are the implications for policy levers based on those measures? To answer these questions, I leverage a longitudinal corpus (2009-2019) of approximately 47,000 student essays, matched with data on test scores. Together, these data allow me to investigate whether the test scores, implemented as Boolean policy levers, meaningfully distinguish between students who write using measurably distinct linguistic patterns. To measure such distinctions, this study employs natural language processing, incorporating large language models designed for text classification tasks: BERT, RoBERTa, and XLNet. These methods yield a quadratic weighted kappa of 0.43, which indicates that the models classified student essays better than random assignment; nevertheless, the relationship between student writing and test scores remains minimal. Ideally, educational policy that consequentially sorts students into different educational tracks at the most vulnerable point of their college career would bear more than a weak relationship to their college-level performance. To uncover which linguistic features are most correlated with higher scores, I employ OLS, multiple, and logistic regression. These models find significant differences between the essays of students with high and low test scores. Across most models, students with higher test scores write, on average, fewer clauses per sentence; more prepositions, adverbs, colons, and adjectives; and the same number of personal pronouns. While these findings are statistically significant, they only weakly describe the differences between high- and low-scoring students, such that distinguishing between the essays of students who are near common policy thresholds would be an error-prone task for any human or algorithm. Additionally, while the logistic regression based on the existing policy threshold at the University of Michigan had the greatest explanatory power (pseudo-R² = 0.09), linear regressions based on a normalized ACT-SAT score had more explanatory power (R² = 0.161). While these metrics cannot be directly compared, the difference in their relative strength nonetheless reveals a disparity in goodness of fit, demonstrating that educational policy based on a Boolean threshold from one test is functionally less discriminating than a metric based on multiple measures. Significance notwithstanding, the overall weak correlation between standardized test scores and college-level writing shows that a timed, high-stakes writing test bears little relationship to writing in other circumstances, including college-level writing tasks. These results evidence the brittleness of these test scores as measures of writing quality and cast doubt on their utility as policy levers.
■590    ▼aSchool code: 0127.
■650  4▼aLinguistics.
■650  4▼aEducation policy.
■650  4▼aComputer science.
■650  4▼aEducational evaluation.
■653    ▼aCollege writing
■653    ▼aStandardized tests
■653    ▼aAdvanced Placement
■653    ▼aScholastic Aptitude Test
■653    ▼aNatural language processing
■690    ▼a0458
■690    ▼a0290
■690    ▼a0984
■690    ▼a0443
■71020▼aUniversity of Michigan▼bEnglish & Education.
■7730  ▼tDissertations Abstracts International▼g86-03B.
■790    ▼a0127
■791    ▼aPh.D.
■792    ▼a2024
■793    ▼aEnglish
■85640▼uhttp://www.riss.kr/pdu/ddodLink.do?id=T17164387▼nKERIS▼zThe full text of this material is provided by the Korea Education and Research Information Service (KERIS).

Holdings

Registration No.  Call No.  Location  Status
TQ0031289  T  Full-text material (online)  Viewing/printing available
