Mini-Report on Assessment of Teaching Innovations

A one-hour workshop/round-table discussion on the Assessment of Teaching Innovations was held at the Research Corporation Cottrell Scholar Conference, July 9-10, 2004.

The participants were:
Ian Gould, moderator (Chemistry and Biochemistry, Arizona State University)
Kenneth Heller (Physics and Astronomy, University of Minnesota)
Stephen Hill (Physics, University of Florida)
Lyle Isaacs (Chemistry and Biochemistry, University of Maryland)
Christopher Mihos (Astronomy, Case Western Reserve University)
Michael Morrison (Physics and Astronomy, University of Oklahoma)
David Rabson (Physics, University of South Florida)

The following is a brief summary:

Classroom assessment comes in various forms for university instructors. These include:
1) Evaluation of students' performance for internal and external agencies (such as the degree-granting institution or potential employers)
2) The closely related evaluation of performance in class for grading purposes
3) "Real-time" evaluation of teaching and student learning, providing immediate feedback on instruction and learning.

The group agreed that statistically meaningful assessment of novel teaching methods was extremely difficult, especially for scientists!
Assessment has to be closely tied to the goals of the particular instructional method and the goals of the particular project.
The commonly employed method of assessment via surveys may sometimes be useful, but in many cases provides no more than anecdotal evidence.
The use of standardized tests is potentially more rigorous, although difficult to implement. The major problem is the establishment of a proper control group and/or the maintenance of a consistent instructional style over the testing period.
The most convincing method of assessment involves the use of "outside observers". In this regard, collaboration with colleagues from an education college at the home or a nearby institution is the most useful approach.
It was suggested that when writing grant proposals, a "preventative attack" strategy be employed. Here, the problems with statistical assessment are clearly delineated "up front", and what can and cannot be done is clearly stated, to "ward off" reviewer complaints!

The group spent most of the time discussing "active feedback" methods.

Active feedback methods, or "monitor-and-adjust", involve continuous monitoring of student progress in class.
Active feedback allows determination of the effectiveness of instruction, and also allows students to assess their learning before being formally tested. Students need to be aware of how well they are learning; they may not realize that they do not "get it".
Assessment at the end of the course is too late for the current group of students.
The results of the active feedback are not made public; the process is initiated and used by the instructor.
Active feedback is obtained by continual (usually small-scale) questioning of the students. This questioning is usually not graded or evaluated, although participation may be rewarded.
Active feedback is more effective with increased frequency, within reason.
The feedback loop must be "closed" by the instructor, i.e. the students should be informed of the results of the questioning and what you as the instructor have learned. Students appreciate that you are using their feedback.

Student questioning may be in the form of on-line questions. These questions are reviewed by the instructor, and the next instructional session is modified according to the responses. The questions can include material-specific "test" questions and/or general written-response questions related to student concerns and understanding. This is basically the approach used in the "Just-In-Time Teaching" method.
In large classes where review of each student's input is impossible, automated quizzing can be used with statistical analysis of the results, or the responses can be "sampled".
Feedback related to specific issues is more informative than, for example, "what don't you like?".

In very large classes in which questioning can be impossible, the use of in-class "clicker" systems, such as that from eInstruction, has been found to be useful. Students can be asked questions in class, perhaps being allowed to talk amongst themselves, and their responses are obtained immediately via the electronic data-gathering system.
Members of the panel who had used this system report increased attendance, especially when participation is rewarded with points! One advantage of the clicker system is that each student can respond anonymously, which dramatically increases participation. Because this is an "active" learning method, attentiveness also usually increases.
Discovering student misunderstanding early allows the instructor to expand discussion of, or immediately review, a topic in class. "Lost" class time due to review or increased discussion may be recovered by discovering that the students' understanding of another subject area is greater than anticipated, allowing some discussion to be skipped.

Finally, the use of an anonymous bulletin board was suggested as a method for obtaining student feedback and monitoring for student misunderstandings.