Publication Type

Journal Article

Version

publishedVersion

Publication Date

12-2024

Abstract

Semi-open-ended multipart questions consist of multiple sub-questions within a single question, requiring students to provide certain factual information while allowing them to express their opinions within a defined context. Human grading of such questions can be tedious, constrained by the marking scheme, and susceptible to the subjective judgement of instructors. The emergence of large language models (LLMs) such as ChatGPT has significantly advanced the prospect of automatic grading in educational settings. This paper introduces a topic-based grading approach that harnesses LLM capabilities alongside a refined marking scheme to ensure a fair and explainable assessment process. The proposed approach involves segmenting student responses by sub-question, extracting topics using an LLM, and refining the marking scheme in consultation with instructors. The refined marking scheme is derived from the LLM-extracted topics and validated by instructors to augment the original grading criteria. Using the LLM, we match student responses against the refined marking-scheme topics and employ a Python program to assign marks based on the matches. Several prompt versions are compared using relevant metrics to determine the most effective prompts. We evaluate the LLM's grading proficiency through three approaches: zero-shot prompting, few-shot prompting, and our proposed method. Results indicate that while zero-shot and few-shot prompting fall short of human grading, the proposed approach achieves the best performance (highest percentage of exact-match marks, lowest mean absolute error, highest Spearman correlation, highest Cohen's weighted kappa) and most closely mirrors the distribution observed in human grading.
Specifically, the collaborative approach enhances the grading process by aligning the marking scheme with student responses, improving transparency and explainability through topic-based matching, and significantly increasing the effectiveness of LLMs when combined with instructor input rather than deployed as standalone automated grading systems.
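The final step described above, assigning marks from topic matches with a Python program, can be sketched as follows. This is a minimal illustrative sketch only: the function name, the dictionary structure of the marking scheme, and the per-topic mark values are assumptions for illustration, not the authors' actual program.

```python
# Hypothetical sketch of mark assignment: given the marking-scheme topics
# that the LLM matched in a student's response, sum the marks attached to
# those topics. Scheme structure and topic names are illustrative.

def assign_marks(scheme: dict[str, float], matched_topics: set[str]) -> float:
    """Sum the marks of the refined marking-scheme topics found in the response."""
    return sum(marks for topic, marks in scheme.items() if topic in matched_topics)

# Example: a refined scheme for one sub-question, with marks per topic.
scheme = {
    "states the key fact": 1.0,
    "justifies with evidence": 1.0,
    "offers own opinion in context": 0.5,
}
matched = {"states the key fact", "offers own opinion in context"}
print(assign_marks(scheme, matched))  # 1.5
```

Because the matched topics are explicit, each awarded mark can be traced back to a named criterion, which is the source of the transparency the abstract claims for topic-based matching.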

Keywords

Large Language Model, Human-AI Collaboration, Semi Open-Ended Multipart Questions, AI-Assisted Grading

Discipline

Artificial Intelligence and Robotics | Educational Assessment, Evaluation, and Research

Research Areas

Data Science and Engineering

Areas of Excellence

Digital transformation

Publication

Computers and Education: Artificial Intelligence

Volume

7

First Page

1

Last Page

18

ISSN

2666-920X

Identifier

10.1016/j.caeai.2024.100339

Publisher

Elsevier

Copyright Owner and License

Publisher-CC-NC

Additional URL

https://doi.org/10.1016/j.caeai.2024.100339
