VanLehn, K., & Martin, J. (1997). Evaluation of an assessment system based on Bayesian student modeling. International Journal of Artificial Intelligence and Education, 8(2), 179-221.

Schools need assessments of students in order to make informed decisions. The most common assessments are tests consisting of questions or problems that can be answered in under a minute each. When schools change their instruction to maximize performance on short-item tests, the students' learning can suffer. To prevent this, assessments are being developed such that teaching to the test will actually improve instruction. Such performance assessments, as they are called, have students work on complex, intrinsically valuable, authentic tasks. Olae is a performance assessment for Newtonian physics. It is based on student modeling, a technology developed for intelligent tutoring systems. Students solve traditional problems as well as tasks developed by cognitive psychologists for measuring expertise. Students work on a computer, which records all their work as well as their answers. This record is analyzed to form a model of the student's physics knowledge that accounts for the student's actions. The model is fine-grained, in that it can report the probability of mastery of each of 290 pieces of physics knowledge. These features make Olae a rather unusual assessment instrument, so it is not immediately obvious how to evaluate it, because standard evaluation methods assume the assessment is a short-item test. This paper describes Olae (focusing on parts of it that have not been described previously), several methods for evaluating complex assessments based on student modeling such as Olae, and some preliminary results of applying these methods to Olae with a small sample of physics students. In many cases, more data would be required in order to adequately assess Olae, so this paper should be viewed more as a methodological contribution than as a definitive evaluation.
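The kind of per-element mastery probability the abstract describes can be illustrated with a minimal sketch. This is not Olae's actual inference (the paper describes Bayesian networks over many knowledge elements); it is a single-element Bayesian update with hypothetical slip and guess parameters, shown only to make the idea of "probability of mastery" concrete:

```python
def update_mastery(p_mastery, correct, p_slip=0.1, p_guess=0.2):
    """Posterior P(mastered) after observing one student action.

    p_mastery: prior probability the student has mastered this knowledge element.
    correct:   whether the observed action was correct.
    p_slip:    P(incorrect action | mastered)    -- hypothetical value.
    p_guess:   P(correct action | not mastered)  -- hypothetical value.
    """
    if correct:
        numerator = p_mastery * (1 - p_slip)
        denominator = numerator + (1 - p_mastery) * p_guess
    else:
        numerator = p_mastery * p_slip
        denominator = numerator + (1 - p_mastery) * (1 - p_guess)
    return numerator / denominator

# Starting from an uninformative prior, one correct action raises mastery:
p = update_mastery(0.5, correct=True)
print(round(p, 3))  # 0.818
```

In a full system such an update would be applied across a network relating all 290 knowledge elements to the observed actions, rather than element by element.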
