Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

4-2025

Abstract

We discuss our methodology and implementation of ChatGPT-permitted assessments for a university-level spreadsheet modelling module. In our quantitative analysis, students rated ChatGPT's answers as incorrect on average, judging that it would not directly help them generate better answers, which corresponds to low "Perceived usefulness" (PU); at the same time, they rated ChatGPT 3.5 relatively high on "Perceived ease of use" (PE). They also gave a good "Behavioural intention" (BI) rating, indicating that they were motivated to use ChatGPT 3.5 in future because they could still learn more about the module through it. We found that both PU and PE affected BI positively, with PU the stronger predictor, suggesting that developers should focus on improving ChatGPT's accuracy to raise PU, which in turn has the greater positive impact on BI. In our qualitative analysis, students indicated that they could learn from ChatGPT 3.5 in several ways: getting an initial idea of how to approach a problem, obtaining a first-cut solution, learning the execution steps for complex Excel functions, engaging in active learning by identifying and correcting its mistakes, and becoming aware of how to avoid such mistakes in future.

Keywords

ChatGPT 3.5, Permitted Use, Assessment, Higher Education, Empirical Study

Discipline

Artificial Intelligence and Robotics | Higher Education

Research Areas

Information Systems and Management

Publication

Proceedings of the 17th International Conference on Computer Supported Education, Porto, Portugal, April 1-3, 2025

Volume

2

First Page

189

Last Page

197

ISBN

9789897587467

Identifier

10.5220/0013095500003932

Publisher

Science and Technology Publications

City or Country

Portugal

Additional URL

http://doi.org/10.5220/0013095500003932
