Testing Committee - Meeting Minutes

ILR TESTING COMMITTEE PRESENTATIONS 2018

January 19, 2018

Title: Impact of FLPB Policy Changes on US Army Special Forces Soldiers' Testing and Proficiency

Presenters: Dr. Eric A. Surface and Dr. Reanna P. Harman, ALPS Insights

Abstract: US Army Special Forces (SF) personnel train to develop organic language, regional expertise, and culture (LREC) capability to meet LREC mission requirements. The Foreign Language Proficiency Bonus (FLPB) program is designed to create foreign language capability at the organizational level by influencing individual motivation to learn a foreign language and to demonstrate that learning via officially recognized foreign language proficiency assessments (e.g., the DLPT and OPI). As part of its mission, the Special Operations Forces Language Office (SOFLO) sponsored a study to investigate the efficacy and impact of changes to the FLPB policy for SOF operators, including SF. The study focuses on two primary changes to the Army's FLPB, language, and testing policy: (1) the lowering of the eligibility criterion for FLPB payment from ILR level 2/2 to ILR level 1/1, and (2) the acceptance of the OPI as an explicitly recognized verification test for FLPB payment. In addition to empirical analysis of archival FLPB and language proficiency data spanning 1998-2015, interviews with leaders and operators, as well as surveys, were used to provide a full picture of the effectiveness of the FLPB program.

February 16, 2018

Title: Randomly Distributed Comparative Judgment: A Cost-Benefit Analysis in Rating ESL Writing

Presenter: Troy Cox

Abstract: Comparative judgment (CJ) of direct writing assessment asks raters to select the better of two essays instead of rating each one individually with a rubric. In this study, experienced and novice raters used both approaches, and CJ proved a viable option when used in conjunction with anchored essays. Traditional rubric rating is a standard approach to direct writing assessment (Park, 2004), yet it is a highly resource-intensive rating method. An alternative, potentially less expensive rating method, CJ, may offer a more practical approach with comparable results. The CJ model was first proposed in 1927 by L. L. Thurstone but required extensive paper shuffling and onerous mathematical procedures. Randomly Distributed Comparative Judgment (RDCJ), made possible by computer algorithms, requires fewer resources than traditional rubric rating and shows promising results in terms of reliability and validity. This session presents the results of a comparative study. Using 60 essays that had been previously double rubric-rated, analyzed with many-facet Rasch modeling (MFRM), and selected using stratified random sampling, we employed two groups of raters: novice (n=8) and experienced (n=8). We compared the results of these two groups against the initial ratings while also collecting and comparing data on the time required to complete each rating session. RDCJ produced highly reliable results comparable to traditional methods (MFRM r=.97; RDCJ r=.92) and significant correlations with the initial ratings (MFRM rs=.95; RDCJ rs=.93), with fewer training requirements and shorter rating times. Additionally, the RDCJ approach is more transferable to novel assessment tasks while still providing context-specific scores. Results from this study can guide writing programs in identifying the rating models best suited to their specific needs. This presentation briefly reviews relevant research, reports the study results, outlines potential uses for RDCJ, and provides practical hands-on experience using RDCJ to rate ESL writing. In addition to relevant research and study findings, attendees will receive an introduction to RDCJ, instruction in its use, and access to free online RDCJ software.
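
To make the scoring idea concrete, the sketch below fits a Bradley-Terry model, a close relative of Thurstone's 1927 comparative judgment model, to a handful of pairwise rater decisions. It is an illustrative toy, not the presenters' RDCJ software or its exact model (the study used many-facet Rasch modeling), and all essay IDs and judgments are hypothetical.

```python
from collections import defaultdict

def bradley_terry(judgments, n_iter=200):
    """Estimate a latent quality score per essay from pairwise judgments.

    judgments: list of (winner, loser) essay-ID tuples, one per decision.
    Returns {essay_id: strength}, normalized to sum to 1 (higher = better),
    using the classic minorization-maximization update for Bradley-Terry.
    """
    essays = {e for pair in judgments for e in pair}
    wins = defaultdict(int)   # comparisons each essay won
    met = defaultdict(int)    # times each unordered pair was judged
    for winner, loser in judgments:
        wins[winner] += 1
        met[frozenset((winner, loser))] += 1

    strength = {e: 1.0 for e in essays}   # start with equal strengths
    for _ in range(n_iter):
        updated = {}
        for i in essays:
            denom = 0.0
            for j in essays:
                pair = frozenset((i, j))
                if i != j and pair in met:
                    denom += met[pair] / (strength[i] + strength[j])
            updated[i] = wins[i] / denom if denom else strength[i]
        total = sum(updated.values())
        strength = {e: s / total for e, s in updated.items()}
    return strength

# Hypothetical rater decisions: (preferred essay, non-preferred essay).
decisions = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("B", "A"), ("C", "B"), ("A", "B"),
]
for essay, score in sorted(bradley_terry(decisions).items(),
                           key=lambda kv: -kv[1]):
    print(f"essay {essay}: strength {score:.3f}")
```

The iterative update adjusts each essay's estimated strength until it is consistent with its wins against the specific essays it faced, which is how CJ turns binary better/worse choices into a continuous quality scale.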

March 16, 2018

Title: Vocabulary Size as a Screener for Reading Proficiency

Presenter: Erwin Tschirner, University of Leipzig

Abstract: Vocabulary size may be the most important predictor of second language reading proficiency, commonly explaining about 50% of the variation in reading proficiency (Tschirner, Hacking, & Rubio, in press). A handful of studies focusing on the CEFR suggested that receptive knowledge of the most frequent 3,000 lexemes of a language is related to the CEFR B1 level, whereas knowledge of the most frequent 5,000 lexemes is related to the C1 level (Milton, 2010). This paper looks at the vocabulary sizes associated with ILR reading proficiency levels in German, Russian, and Spanish. Standard Vocabulary Levels Tests (VLT) and official ACTFL Reading Proficiency Tests (RPT) were administered to a total of 184 college students at all levels of instruction. Reading proficiency ranged from ILR 0 to 3, and vocabulary sizes ranged from fewer than 1,000 to 5,000 words. Linear regression analyses predicting ILR levels on the basis of vocabulary size suggested that ILR 1/1+ is associated with receptive mastery of 2,000 words, ILR 2/2+ with 3,000 to 4,000 words, and ILR 3 with 5,000 words for all three languages. The discussion will focus on the predictive power of vocabulary size tests and their potential to screen students for low-stakes purposes such as determining preliminary levels of reading proficiency.
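
As a toy illustration of the regression approach described above (not the study's data or model), the Python sketch below fits an ordinary least-squares line predicting a numeric ILR reading level from a VLT vocabulary size. All data points are invented, and treating ILR base and plus levels as equally spaced numbers is a simplifying assumption.

```python
import numpy as np

# Hypothetical (vocabulary size, ILR reading level) pairs, loosely
# echoing the pattern reported in the abstract (2,000 words ~ ILR 1,
# 3,000-4,000 ~ ILR 2, 5,000 ~ ILR 3). Not the study's n=184 dataset.
vocab_size = np.array([800, 1200, 2000, 2400, 3000, 3600, 4000, 4500, 5000])
ilr_level  = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.0, 2.5, 2.5, 3.0])

# Ordinary least-squares fit: ILR ~ slope * vocab + intercept.
slope, intercept = np.polyfit(vocab_size, ilr_level, deg=1)
r = np.corrcoef(vocab_size, ilr_level)[0, 1]
print(f"ILR ~= {slope:.5f} * words + {intercept:.2f}  (r^2 = {r**2:.2f})")

def screen(words: int) -> float:
    """Predict a preliminary ILR reading level from a vocabulary score."""
    return slope * words + intercept

print(f"Predicted ILR at 3,000 words: {screen(3000):.1f}")
```

In a screening application, a cut score on the vocabulary test (for example, 2,000 words for a preliminary ILR 1) would be derived from a fit of this kind.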

May 18, 2018

Title: Approaches to Low-Density Language Training

Presenters: Sandro Alisic, Johnathan Gajdos, Scott McGinnis, Inna Kerlin, and Joemer Ta-ala, DLI-Washington

Abstract: The presentation begins with an overview of DLI-Washington's instructional mission, a consideration of the meaning of 'low density', and a discussion of some of the challenges of low-density language training. It then examines DLI-Washington's current actions and future plans to address these challenges, with a focus on improving proficiency outcomes.

June 8, 2018

Title: How LTAU Rates Translation Exams

Presenters: Maria Manfre and Don Smith

Abstract: An FBI presentation on the holistic method used to score translation exams.

Committee Meeting Topics for 2017

June 2017

Title: Curriculum, instruction, and assessment in task-based backward design

Presenter: Deborah Kennedy, consultant, Center for Applied Linguistics and George Mason University

Abstract: This presentation will describe the development and implementation of a fully online asynchronous writing course focused on a specific professional task: summary writing. Course developers used a backward design approach to inform participants about the goals of instruction and to teach them how to evaluate their writing and their progress as writers using a rubric. Participants thus knew from the outset how they would be assessed, and worked with the instructors to assess their own work throughout the course. Course materials, evaluation rubrics, and program outcomes will be shared.

December 8, 2017

Title: Design and Preliminary Results of STARTALK: Understanding the ACTFL Guidelines

Presenters: Margaret E. Malone, Todd McKay, and Amy Kim, Assessment and Evaluation Language Resource Center

Abstract: In U.S. higher education, the ACTFL Guidelines have shaped language instruction (Chalhoub-Deville & Fulcher, 2003), yet few professional development (PD) opportunities exist to provide in-service instructors with training on the Guidelines. We present the long-term impacts of participation in a hybrid PD program, held at Georgetown University, on instructors' understanding of the ACTFL Guidelines and their teaching practice. Sixteen instructors of less commonly taught languages (LCTLs) in higher education took part in the program. To capture program impacts, several data-collection methods were used, including quiz scores, questionnaires, and telephone interviews. Results suggest that participation had a largely positive influence on instructors' understanding of the ACTFL Guidelines and on their capacity to translate program content into meaningful teaching practice, curriculum design, and student feedback.

Committee Meeting Topics for 2014-2015

Committee Meeting Topics for 2013-2014

  • November 15, 2013 - Planning Session

  • December 13, 2013 - Formative Assessment

  • January 24, 2014 - On-line Testing

  • February 21, 2014 - On-line Testing and Immersion Assessment

  • March 21, 2014 - Needs Assessment

  • April 25, 2014 - Proficiency and Performance Assessment

  • May 16, 2014 - Diagnostic Assessment

  • June 6, 2014 - Assessment - Native American Language and Culture

Committee Meeting Topics for 2012-2013