Using a logic model to evaluate rater training for EAP writing assessment

Jeanne O'Connell

    Research output: Journal Publication › Article › peer-review

    Abstract

    Assessment by written exams and coursework is common practice in pre-sessional and preliminary year EAP programmes, but the allocation of marks for written assessment is complex, as is training raters to apply specified assessment standards. This practitioner research uses a Logic Model, a visual diagram commonly used in programme evaluation, to evaluate the rater training procedure for writing assessment in an English-medium university department. This study integrates data from surveys, interviews and workshops with the stakeholders involved in the rater training procedure to develop a Logic Model as part of an ongoing theory of change evaluation. The final product is a Model that reveals the guiding principles of rater training in the department, text that describes the evaluation process, and a measurement plan. This paper showcases how practitioner research can enhance EAP practice by demonstrating how an essential component of EAP assessment, rater training, and the rationale behind it, can be made cogent to the various stakeholders involved in the procedure. This paper offers considerations for EAP practitioners, managers, and testing staff when developing or working with rater training, bridging the gap between EAP and language testing and assessment communities.
    Original language: English
    Article number: 101160
    Journal: Journal of English for Academic Purposes
    Volume: 60
    DOIs
    Publication status: Published - Nov 2022

    Keywords

    • Logic model
    • Rater training
    • EAP writing assessment
    • Language testing and assessment
