Monday, January 17, 2011

A report on the piloting of a novel computer-based medical case simulation for teaching and formative assessment of diagnostic laboratory testing

Clarence D. Kreiter, Thomas Haugen, Timothy Leaven, Christopher Goerdt, Nancy Rosenthal, William C. McGaghie and Fred Dee

Department of Pathology, University of Iowa Carver College of Medicine, Iowa City, IA, USA;

Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Citation: Medical Education Online 2011, 16: 5646 - DOI: 10.3402/meo.v16i0.5646

Objectives: Insufficient attention has been given to how information from computer-based clinical case simulations is presented, collected, and scored. Research is needed on how best to design such simulations to acquire valid performance assessment data that can act as useful feedback for educational applications. This report describes a study of a new simulation format with design features aimed at improving both its formative assessment feedback and educational function.

Methods: Case simulation software (LabCAPS) was developed to target a highly focused and well-defined measurement goal with a response format that allowed objective scoring. Data from an eight-case computer-based performance assessment administered in a pilot study to 13 second-year medical students were analyzed using classical test theory and generalizability analysis. In addition, a similar analysis was conducted on an administration in a less controlled setting, but with a much larger sample (n = 143), within a clinical course that utilized two random case subsets from a library of 18 cases.
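To make the classical test theory analysis concrete, here is a minimal sketch (not the authors' code, and using illustrative random data in place of real LabCAPS scores) of the two statistics reported in the Results: per-case discrimination as a corrected case-total correlation, and coefficient alpha computed from a students-by-cases score matrix.

```python
# Illustrative sketch of case-level item analysis under classical test theory.
# The score matrix here is random placeholder data, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((13, 8))  # 13 students x 8 cases (hypothetical scores)

n_cases = scores.shape[1]
total = scores.sum(axis=1)

# Case discrimination: correlation of each case score with the total of
# the remaining cases (corrected item-total correlation).
discrimination = np.array([
    np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
    for j in range(n_cases)
])

# Coefficient alpha: (k / (k - 1)) * (1 - sum of case variances / total variance).
case_var = scores.var(axis=0, ddof=1)
alpha = (n_cases / (n_cases - 1)) * (1 - case_var.sum() / total.var(ddof=1))

print("case discriminations:", np.round(discrimination, 2))
print("coefficient alpha:", round(alpha, 2))
```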

Results: Classical test theory case-level item analysis of the pilot assessment yielded an average case discrimination of 0.37, and all eight cases were positively discriminating (range 0.11–0.56). Classical test theory coefficient alpha and the decision study showed the eight-case performance assessment to have an observed reliability of G = 0.70. The decision study further demonstrated that a G of 0.80 could be attained with approximately 3 h and 15 min of testing. The less controlled educational application within a large medical class produced a somewhat lower reliability for eight cases (G = 0.53). Students gave high ratings to the logic of the simulation interface, its educational value, and the fidelity of the tasks.
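The decision-study projection above can be reproduced with the Spearman-Brown relation, assuming a simple one-facet (persons x cases) design. The sketch below is not the authors' analysis; it inverts the formula at G = 0.70 for eight cases to get the single-case generalizability, then solves for the number of cases needed to reach G = 0.80. The per-case testing time (~14 min) is an assumed figure chosen only to convert cases into minutes, and it lands near the abstract's 3 h 15 min estimate.

```python
# Decision-study projection via Spearman-Brown (one-facet design assumed).

def single_case_g(g_obs: float, n_cases: int) -> float:
    """Invert Spearman-Brown: generalizability of a single case."""
    return g_obs / (n_cases - (n_cases - 1) * g_obs)

def cases_needed(g_target: float, g1: float) -> float:
    """Spearman-Brown: number of cases required to reach the target G."""
    return g_target * (1 - g1) / (g1 * (1 - g_target))

g1 = single_case_g(0.70, 8)   # ~0.226 for one case
n = cases_needed(0.80, g1)    # ~13.7, i.e., about 14 cases
minutes = n * 14              # assumed ~14 min per case (not from the paper)
print(f"single-case G ~ {g1:.3f}; ~{n:.1f} cases (~{minutes:.0f} min) for G = 0.80")
```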

Conclusions: LabCAPS software shows the potential to provide formative assessment of medical students’ skill at diagnostic test ordering and to provide valid feedback to learners. The perceived fidelity of the performance tasks and the statistical reliability findings support the validity of using the automated scores for formative assessment and learning. LabCAPS cases appear well designed for use as a scored assignment, for stimulating discussions in small group educational settings, for self-assessment, and for independent learning. Extension of the more highly controlled pilot assessment study with a larger sample will be needed to confirm its reliability in other assessment applications.

Keywords: computer-based simulation; clinical skills assessment; formative assessment; laboratory medicine; performance assessment
