Abstract

Feedback is an undeniably important aspect of the language learning process. It helps students recognize their strengths and weaknesses and identify ways they can improve. Over the years, feedback has been provided by teachers, peers, and Automated Writing Evaluation (AWE) tools. In recent years, however, artificial intelligence applications have proliferated significantly. With their ability to analyze and generate virtually any kind of content, these models are now being used to generate scores and feedback on written assignments to help lighten teachers’ load. ChatGPT has been called “the world’s most advanced chatbot” and “a potential chance to improve second language learning and instruction” (Shabara et al., 2024). The present study investigates the quality of AI-generated scores and feedback on writing in comparison to teacher scores and feedback. Using a mixed methods design, the study compared ChatGPT-generated and regenerated scores and qualitative comments to those assigned by experienced university instructors. A total of 89 argumentative essays were collected from the archives of a private university in Egypt. ChatGPT-4o and two human raters scored them using a rubric that evaluates writing on four criteria: content and development, organization and connection of ideas, linguistic range and control, and communicative effect. All scores were statistically analyzed to examine the consistency and accuracy of ChatGPT’s scoring, and the written feedback was thematically analyzed and compared to teacher feedback. Themes identified from the data included the tone of the feedback, adherence to the rubric, prioritization of certain writing features, and whether the feedback was judgmental or improvement-oriented. The quantitative data revealed a moderate correlation between AI-generated and teacher scores, with the only strong relationship found in the linguistic range and control criterion. The results also showed weak consistency between ChatGPT’s generated and regenerated scores. The qualitative feedback, in contrast, was found to be considerably close in quality to teacher feedback. Additionally, the study examined the effect of writing proficiency on the nature of the feedback, and the data showed that ChatGPT did not differentiate between students based on ability, whereas the teachers did, especially in terms of tone. This lack of differentiation suggests that ChatGPT’s feedback may not be as personalized to students’ needs as teacher feedback. Implications of the study include using ChatGPT to score language areas and generate feedback, provided that teachers review this evaluation. Study limitations, such as not evaluating the effectiveness of the feedback, are also discussed.

School

School of Humanities and Social Sciences

Department

Applied Linguistics Department

Degree Name

MA in Teaching English to Speakers of Other Languages

Graduation Date

Winter 2-19-2025

Submission Date

1-27-2025

First Advisor

Atta Gebril

Committee Member 1

Maha Bali

Committee Member 2

Mariah Fairley

Extent

105 p.

Document Type

Master's Thesis

Institutional Review Board (IRB) Approval

Approval has been obtained for this item
