Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
3-2022
Abstract
Intelligent Tutoring Systems are becoming critically important in future learning environments. Knowledge Tracing (KT) is a crucial part of such systems: it infers the skill mastery of students and predicts their performance so that the curriculum can be adjusted accordingly. Deep learning based models like Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Network (DKVMN) have shown significant predictive performance compared with traditional models like Bayesian Knowledge Tracing (BKT) and Performance Factors Analysis (PFA). However, it is difficult to extract psychologically meaningful explanations, of the kind that relate to cognitive theory, from the tens of thousands of parameters in neural networks. There are several ways to achieve high accuracy in student performance prediction, but diagnostic and prognostic reasoning is more critical in learning science. In this work, we present Interpretable Knowledge Tracing (IKT), a simple model that relies on three meaningful features extracted with data mining techniques: individual skill mastery, ability profile (learning transfer across skills), and problem difficulty. IKT predicts future student performance with a Tree Augmented Naive Bayes (TAN) classifier, so its predictions are easier to explain than those of deep learning based student models. IKT also achieves better student performance prediction than deep learning based student models without requiring a huge number of parameters. We conduct ablation studies on each feature to examine its contribution to student performance prediction. Thus, IKT has great potential for providing adaptive and personalized instruction with causal reasoning in real-world educational systems.
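The abstract describes predicting whether a student answers correctly from three discretized features using a Tree Augmented Naive Bayes classifier. As a rough illustration only, the sketch below fits a plain Naive Bayes model (TAN's simpler ancestor, without the tree of dependencies among features) over toy discretized values of the three features; all data, value encodings, and names here are invented for the example and are not taken from the paper.

```python
import numpy as np

# Hypothetical toy data: each row is one student interaction with three
# discretized features named after the abstract -- skill mastery, ability
# profile, problem difficulty (each coded 0-2) -- and a binary outcome
# y (1 = answered correctly). These values are illustrative only.
X = np.array([
    [2, 1, 0],
    [2, 2, 1],
    [0, 0, 2],
    [1, 1, 1],
    [0, 1, 2],
    [2, 2, 0],
    [1, 0, 2],
    [2, 1, 1],
])
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])

def fit_nb(X, y, n_vals, alpha=1.0):
    """Fit class priors and per-feature conditional tables (Laplace smoothing)."""
    classes = np.unique(y)
    priors = {c: (np.sum(y == c) + alpha) / (len(y) + alpha * len(classes))
              for c in classes}
    cond = {}  # cond[(j, c)][v] = P(feature j takes value v | class c)
    for c in classes:
        Xc = X[y == c]
        for j in range(X.shape[1]):
            counts = np.bincount(Xc[:, j], minlength=n_vals[j]) + alpha
            cond[(j, c)] = counts / counts.sum()
    return priors, cond

def predict_proba(x, priors, cond):
    """Posterior over classes for one feature vector x, via log-space scoring."""
    scores = {c: np.log(priors[c]) +
                 sum(np.log(cond[(j, c)][v]) for j, v in enumerate(x))
              for c in priors}
    m = max(scores.values())
    exp = {c: np.exp(s - m) for c, s in scores.items()}
    z = sum(exp.values())
    return {c: e / z for c, e in exp.items()}

priors, cond = fit_nb(X, y, n_vals=[3, 3, 3])
# High mastery, high ability, easy problem -> model should predict "correct".
post = predict_proba([2, 2, 0], priors, cond)
print(round(post[1], 3))
```

A full TAN model would additionally learn a maximum-spanning tree over the features (weighted by class-conditional mutual information), so that each feature may condition on one other feature besides the class; with only three features, the Naive Bayes factorization above keeps the sketch short.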
Keywords
Student model, Bayesian Knowledge Tracing, Causal relation, HMM, TAN
Discipline
Artificial Intelligence and Robotics | Databases and Information Systems
Research Areas
Data Science and Engineering
Publication
Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, February 22 - March 1, 2022
Volume
36
First Page
12810
Last Page
12818
Identifier
https://doi.org/10.1609/aaai.v36i11.21560
Publisher
AAAI Press
City or Country
Washington, DC, USA
Citation
MINN, Sein; VIE, Jill-Jênn; TAKEUCHI, Koh; and ZHU, Feida.
Interpretable knowledge tracing: Simple and efficient student modeling with causal relations. (2022). Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, February 22 - March 1. 36, 12810-12818.
Available at: https://ink.library.smu.edu.sg/sis_research/7749
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.