
Clinical Trial Summary

Clinical reasoning abilities can be enhanced by repeated formative testing with key feature questions. An analysis of wrong answers to key feature questions facilitates the identification of common misconceptions. This prospective, randomised, cross-over study assessed whether an elaboration task and individualised mailed feedback further improve students' performance in clinical reasoning.


Clinical Trial Description

Repeated formative (i.e., non-graded) testing enhances student learning outcomes in clinical reasoning. Several trials investigating the so-called testing effect have already been conducted at University Medical Centre Göttingen (UMG). Among other findings, they showed that working on videotaped clinical cases, compared with written cases, increases short-term outcomes but not long-term retention. More recently, one study addressed the question of whether clinical reasoning skills can be fostered by an elaboration of incorrect answers. Results of a previous trial had suggested that a considerable number of students were not sufficiently motivated to provide thorough answers to elaboration questions. This impression remained even after introducing financial incentives for students, although a small but significant effect of the intervention was noted (percent score in the exit exam: 65.7 +/- 19.6% vs. 62.3 +/- 22.9%; p = 0.022). Yet, student performance remained moderate at best. Thus, the intervention will now be extended by including automated feedback provided by email. All students participating in an electronic case-based seminar (e-seminar) will receive an individual email after the event, displaying their raw point score as well as their written answers to elaboration questions and expert comments reflecting current medical knowledge. This trial addresses the following research question: What is the effect of elaboration and subsequent automated, individualised feedback following e-seminars on medical students' clinical reasoning skills?

2. Background and previous work

According to recent findings, retrieval of knowledge is not a passive process. Instead, long-term retention is facilitated by the act of retrieval itself ('retrieval hypothesis'). Potentially, this effect, which has also been called the 'direct testing effect', could also be due to additional exposure to the content during an assessment. However, studies in which exposure was experimentally controlled did not lend support to this 'total time hypothesis'. The effectiveness of examinations as memory boosters in medical education has been shown in a number of studies. However, many of these used short follow-up periods (e.g., 7 days) or implemented reproduction tests on a low taxonomic level. Still, these studies suggest that formative examinations may promote learning processes. According to a review of the topic, such exams should contain production tests and be repeated with appropriate spacing. In addition, students should receive feedback shortly after the exam.

Given these recommendations, longitudinal key feature examinations were implemented in three consecutive teaching modules at our institution in 2013. These case-based examinations lend themselves to fostering complex cognitive skills. A key feature is defined as a critical step in solving a clinical problem. According to this definition, a key feature case consists of a case vignette and approximately five consecutive questions relating to the diagnostic and therapeutic approach. In contrast to single-best-answer multiple choice questions, students cannot choose from a list of answer options but must produce a written answer. Thus, rather than recognizing the correct answer, students must actively produce it. To save students from making follow-on mistakes, they are informed about the correct answers to preceding questions whenever attempting to answer the next question. At this point, students also receive static feedback on their previous answer.
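To make the question format concrete, the following minimal Python sketch models a key feature case and the sequential reveal of solutions described above; all class and field names are illustrative and are not part of the trial's actual examination software.

    from dataclasses import dataclass

    @dataclass
    class KeyFeatureQuestion:
        prompt: str            # question on the diagnostic or therapeutic approach
        correct_answer: str    # solution revealed before the next question is shown
        static_feedback: str   # expert comment students can open after answering

    @dataclass
    class KeyFeatureCase:
        vignette: str                        # short clinical case description
        questions: list[KeyFeatureQuestion]  # approximately five consecutive questions

    def run_case(case: KeyFeatureCase) -> list[str]:
        """Present the questions in order; reveal each solution before the
        next question so that follow-on mistakes are avoided."""
        answers = []
        print(case.vignette)
        for q in case.questions:
            # Free-text production rather than recognition of an answer option.
            answers.append(input(q.prompt + " "))
            print("Correct answer:", q.correct_answer)
            print("Feedback:", q.static_feedback)
        return answers

The point of the production format is visible in the sketch: the student must type an answer before the solution and the static feedback are revealed.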
Recently, the results of a randomized cross-over trial comparing active retrieval using key feature questions with repeated study of the same material were published. The data showed that working on key feature cases with static feedback elicited a larger medium-term learning outcome than passively restudying the same content. The specific role of feedback in this process, however, remained unclear. Current findings from educational psychology suggest that diagnostic errors made in a protected learning environment can serve as starting points for further elaboration, which may eventually lead to a reduction in diagnostic errors in clinical practice. This trial aims to implement and evaluate this concept. To this end, existing data obtained in previous trials at UMG were analysed with regard to common clinical reasoning errors (CCRE). On this basis, e-seminars running in parallel to curricular teaching in the three aforementioned modules were modified so that, upon answering specific questions, students were prompted to comment on frequent CCREs ('elaboration'). The analyses of student entries revealed that, despite all the content having been covered in preceding teaching sessions, a considerable proportion of entries were slack answers (e.g., 'don't know' or 'no idea'), suggesting that students might not have taken the exercise seriously enough. This notion was corroborated by student comments during focus group discussions following the main study. As a consequence, the study was repeated in the following year, and this time complete answers to elaboration questions were incentivised with book vouchers. In this setting, a significant effect of the intervention was noted, but student performance was still moderate at best.

Given the importance of feedback for the learning processes elicited by formative examinations, this aspect will be strengthened in the trial described here. Students can already open a text box containing static feedback after each question, but so far they have not received personal feedback after each exam. In winter term 2018/19, all students participating in the trial will receive individual emails containing (a) the raw point score achieved in each e-seminar, (b) static expert feedback on the elaboration questions, and (c) their own entries for these elaboration questions. Thus, students will be able to compare their own answers with the instructor feedback.
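The registry entry does not describe the mailing software itself. Purely as an illustration, an automated feedback email with the three components (a)-(c) could be assembled along the following lines; every name, field, and parameter here is hypothetical.

    from email.message import EmailMessage

    def build_feedback_mail(student_email: str, raw_score: int, max_score: int,
                            entries: list[dict]) -> EmailMessage:
        """Assemble one student's feedback email: (a) raw point score,
        (b) the expert comment per elaboration question, and (c) the
        student's own entry, so the two can be compared side by side."""
        lines = [f"Your score in this e-seminar: {raw_score}/{max_score} points", ""]
        for e in entries:  # one dict per elaboration question
            lines += [
                f"Elaboration question: {e['question']}",
                f"Your answer:          {e['student_answer']}",
                f"Expert comment:       {e['expert_comment']}",
                "",
            ]
        msg = EmailMessage()
        msg["To"] = student_email
        msg["Subject"] = "Your individual e-seminar feedback"
        msg.set_content("\n".join(lines))
        return msg  # actual sending (e.g. via smtplib.SMTP) is omitted here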
3. Design and Conduct of the Study

This is a randomised controlled cross-over educational trial. Participating students will be stratified according to sex and summative exam scores in the previous term and then randomized to one of two study groups in a 1:1 fashion. During weekly e-seminars, they will work on clinical cases addressing the diagnostic and therapeutic strategies needed to manage patients presenting with prevalent symptoms of general medical disorders. Cases will be presented as key feature cases with five questions per case. For some of these questions, elaboration questions will be written; these will focus on common misconceptions and clinical reasoning errors. When used as 'intervention items', elaboration questions will be shown after the original key feature question, and students will be prompted to enter a free-text answer. Upon completing both the original item and the elaboration question, they will be able to access static feedback ('expert comment'). This feedback will be included in an email sent to all students on the day after the e-seminar, which will also contain individual performance data as well as the student's free-text answer to the elaboration question. When used as a 'control item', the same key feature question is displayed, and students can access the expert comment directly after answering the question. Information on control items will not be included in the mailed feedback.

Every student will be exposed to 15 intervention items and 15 control items, each shown twice over the course of 10 weeks. Items shown as intervention items in one randomized group will be shown as control items in the other group, and vice versa, thus making each student their own control. At the end of the study, individual 'intervention item' and 'control item' scores will be computed for each student, and these two scores will be compared using a paired t-test. This primary analysis will test the following hypothesis: "Long-term retention will be better for content that has been repeatedly tested with additional elaboration questions and subsequent mailed individual feedback than for content that has been repeatedly tested alone." Long-term retention will be assessed in a formative electronic key feature assessment in summer term 2019, which will be identical to the entry and exit exams held in winter term 2018/19. Secondary analyses will include unadjusted and adjusted linear regressions with percent scores in the exit exam and retention test as dependent variables and student characteristics as well as their engagement with key feature questions as independent variables.
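The primary analysis amounts to a within-student comparison of two paired score vectors. A minimal sketch using SciPy, with invented placeholder scores rather than trial data:

    import numpy as np
    from scipy import stats

    # Placeholder percent scores for five hypothetical students; in the trial,
    # each participant contributes one pair of scores (own control).
    intervention = np.array([72.0, 65.5, 80.0, 58.0, 69.5])  # elaboration + mailed feedback items
    control      = np.array([66.0, 63.0, 74.5, 55.0, 70.0])  # repeated testing alone

    # Primary analysis: paired t-test on the within-student score differences.
    t_stat, p_value = stats.ttest_rel(intervention, control)
    print(f"mean difference = {np.mean(intervention - control):.1f} percentage points, "
          f"t = {t_stat:.2f}, p = {p_value:.3f}")

The pairing matters because the cross-over design makes the two scores of each student correlated; a paired test analyses within-student differences rather than treating the two item sets as independent samples.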


Study Design


Related Conditions & MeSH terms


NCT number NCT05585892
Study type Interventional
Source University Medical Center Goettingen
Contact
Status Completed
Phase N/A
Start date October 1, 2018
Completion date June 14, 2019

See also
Status Clinical Trial Phase
Completed NCT05081635 - Consenso2_F1 Delphi Consensus Study on Post-graduate Medical Education Success and Failure and Its Influencing Factors
Recruiting NCT06092320 - Does Teaching Before or After Simulation Improve Learning? N/A
Recruiting NCT05436899 - A Pilot Study on Training Simulator Efficacy N/A
Completed NCT03758391 - Comparison of Learning in Traditional Versus "Flipped" Classrooms N/A
Completed NCT05078762 - Immersive Virtual Reality in Simulation-based Bronchoscopy Training N/A
Completed NCT05526365 - Idea Density in Exam Performance N/A
Completed NCT05043909 - The Effects of Virtual Reality and Augmented Reality Training System on Elderly Oral Care Skill for Oral Hygiene and Nursing Students N/A
Completed NCT05191589 - Haptic Devices Impact on Laparoscopic Simulators N/A
Completed NCT05596305 - Outcomes of Anti Stigma Educational Intervention of Ungraduated Medical Students N/A
Active, not recruiting NCT06276049 - ChatGPT Helping Advance Training for Medical Students: A Study on Self-Directed Learning Enhancement N/A
Completed NCT02971735 - Cognitive Style and Mobile Technology in E-learning in Undergraduate Medical Education N/A
Completed NCT02168192 - Breaking Bad News in Obstetrics: A Trial of Simulation-Debrief Based Education N/A
Completed NCT00466453 - Adapting Web-based Instruction to Baseline Knowledge of Physicians-in-training Phase 2/Phase 3
Recruiting NCT05169073 - Virtual Reality Training for Laparoscopic Cholecystectomy N/A
Completed NCT05393219 - Cardiac Biofeedback, Mindfulness, and Inner Resources Mobilization Interventions on Performances of Medical Students N/A
Recruiting NCT04375254 - Neuroscience-based Nomenclature (NbN) as a Teaching Tool
Completed NCT05834374 - Training for Transfer by Contextual Variation N/A
Completed NCT03863028 - Development and Validation of a Simulator-based Test in Transurethral Resection of Bladder Tumors
Completed NCT03471975 - Learning Direct Laryngoscopy Using a McGrath Video Laryngoscope as Direct Versus Indirect Laryngoscope N/A
Recruiting NCT06114433 - Three-dimensional Upper Gastrointestinal Tract Model N/A