
Clinical Trial Details — Status: Completed

Administrative data

NCT number NCT05585892
Other study ID # 17/9/18
Secondary ID
Status Completed
Phase N/A
First received
Last updated
Start date October 1, 2018
Est. completion date June 14, 2019

Study information

Verified date October 2022
Source University Medical Center Goettingen
Contact n/a
Is FDA regulated No
Health authority
Study type Interventional

Clinical Trial Summary

Clinical reasoning abilities can be enhanced by repeated formative testing with key feature questions. An analysis of wrong answers to key feature questions facilitates the identification of common misconceptions. This prospective, randomised, cross-over study assessed whether an elaboration task and individualised mailed feedback further improve students' clinical reasoning performance.


Description:

Repeated formative (i.e., non-graded) testing enhances students' learning outcomes in clinical reasoning. At University Medical Centre Göttingen (UMG), a number of trials investigating the so-called testing effect have been conducted. Among other findings, they showed that working on videotaped clinical cases, compared with written cases, improves short-term outcomes but not long-term retention. More recently, one study addressed the question of whether clinical reasoning skills can be fostered by elaboration on incorrect answers. Results of a previous trial had suggested that a considerable number of students were not sufficiently motivated to provide thorough answers to elaboration questions. This impression persisted even after financial incentives were introduced, although a small but significant effect of the intervention was noted (percent score in the exit exam: 65.7 +/- 19.6% vs. 62.3 +/- 22.9%; p = 0.022). Still, student performance remained moderate at best. Thus, the intervention will now be extended by automated feedback provided by email: all students participating in an electronic case-based seminar (e-seminar) will receive an individual email after the event, displaying their raw point score, their written answers to elaboration questions, and expert comments reflecting current medical knowledge. This trial addresses the following research question: What is the effect of elaboration and subsequent automated, individual feedback following e-seminars on medical students' clinical reasoning skills?

2. Background and previous work

According to recent findings, retrieval of knowledge is not a passive process; instead, long-term retention is facilitated by the act of retrieval itself ('retrieval hypothesis'). Alternatively, this effect, which has also been called the 'direct testing effect', could be due to additional exposure to the content during an assessment.
However, complex studies in which exposure was experimentally controlled did not support this 'total time hypothesis'. The effectiveness of examinations as memory boosters in medical education has been shown in a number of studies. However, many of these used short follow-up periods (e.g., 7 days) or implemented reproduction tests at a low taxonomic level. Still, these studies suggest that formative examinations may promote learning processes. According to a review of the topic, such exams should contain production tests and be repeated with appropriate spacing; in addition, students should receive feedback shortly after the exam. Given these recommendations, longitudinal key feature examinations were implemented in three consecutive teaching modules at our institution in 2013. These case-based examinations lend themselves to fostering complex cognitive skills. A key feature is defined as a critical step in solving a clinical problem. Accordingly, a key feature case consists of a case vignette and approximately five consecutive questions relating to the diagnostic and therapeutic approach. In contrast to single-best-answer multiple choice questions, students cannot choose from a list of five answer options but must produce a written answer. Thus, rather than recognising the correct answer, the aim of a key feature examination is to actively produce it. To prevent follow-on mistakes, students are informed about the correct answers to preceding questions before attempting to answer the next question. At this point, students also receive static feedback on their previous answer. Recently, the results of a randomised cross-over trial comparing active retrieval using key feature questions with repeated study of the same material were published.
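The key feature format described above — a vignette followed by a sequence of free-text questions, with the correct answer and static feedback revealed after each question to prevent follow-on mistakes — can be sketched as a small data model. This is an illustrative sketch only; all class and function names are hypothetical and do not reflect the trial's actual examination software:

```python
from dataclasses import dataclass


@dataclass
class KeyFeatureQuestion:
    prompt: str
    correct_answer: str
    expert_comment: str  # static feedback shown after answering


@dataclass
class KeyFeatureCase:
    vignette: str
    questions: list  # approximately five KeyFeatureQuestion per case

    def administer(self, answer_fn):
        """Present questions in sequence. After each answer, the correct
        answer (with its expert comment) is revealed before the next
        question is shown. Returns the raw point score."""
        score = 0
        revealed = []  # correct answers disclosed so far
        for q in self.questions:
            answer = answer_fn(self.vignette, revealed, q.prompt)
            if answer.strip().lower() == q.correct_answer.lower():
                score += 1
            revealed.append((q.correct_answer, q.expert_comment))
        return score
```

The `answer_fn` callback stands in for the student's free-text entry; in the real exam, scoring written answers is of course more tolerant than exact string comparison.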
The data showed that working on key feature cases with static feedback elicited a larger medium-term learning outcome than passive restudying of the same content. The specific role of the feedback in this process, however, remained unclear. Current findings from educational psychology suggest that diagnostic errors made in a protected learning environment can serve as starting points for further elaboration, which may eventually lead to a reduction in diagnostic errors in clinical practice. This trial aims to implement and evaluate this concept. To this end, existing data obtained in previous trials at UMG were analysed with regard to common clinical reasoning errors (CCRE). On this basis, e-seminars running in parallel to curricular teaching in the three aforementioned modules were modified in that, upon answering specific questions, students were prompted to comment on frequent CCREs ('elaboration'). Analyses of student entries revealed that, although all the content had been covered in preceding teaching sessions, a considerable proportion of entries were perfunctory answers (e.g., 'don't know' or 'no idea'), suggesting that students might not have taken the exercise seriously enough. This notion was corroborated by student comments during focus group discussions following the main study. As a consequence, the study was repeated in the following year, and this time complete answers to elaboration questions were incentivised with book vouchers. In this setting, a significant effect of the intervention was noted, but student performance was still moderate at best. Given the importance of feedback for learning processes elicited by formative examinations, this aspect will be strengthened in the trial described here. Students can already open a text box containing static feedback after each question, but so far they have not received personal feedback after each exam.
In winter term 2018/19, all students participating in the trial will receive individual emails containing (a) the raw point score achieved in each e-seminar, (b) static expert feedback on elaboration questions, and (c) their own entries to these elaboration questions. Thus, students will be able to compare their own answers with the instructor feedback.

3. Design and Conduct of the Study

This is a randomised controlled cross-over educational trial. Participating students will be stratified according to sex and summative exam scores in the previous term and subsequently randomised to one of two study groups in a 1:1 fashion. During weekly e-seminars, they work on clinical cases addressing the diagnostic and therapeutic strategies needed to manage patients with prevalent symptoms of general medical disorders. Cases will be presented as key feature cases with five questions per case. For some of these questions, elaboration questions will be written, focusing on common misconceptions and clinical reasoning errors. When used as 'intervention items', elaboration questions will be shown after the original key feature question, and students will be prompted to enter a free-text answer. Upon completing both the original item and the elaboration question, they will be able to access static feedback ('expert comment'). This feedback will be included in an email sent to all students on the day after the e-seminar, which will also contain individual performance data as well as the student's free-text answer to the elaboration question. When used as a 'control item', the same key feature question is displayed, and students can access the expert comment directly after answering the question. Information on control items will not be included in the mailed feedback. Every student will be exposed to 15 intervention items and 15 control items, each of which will be shown twice over the course of 10 weeks.
Items shown as intervention items in one randomised group will be shown as control items in the other group and vice versa, thus making each student their own control. At the end of the study, individual 'intervention item' and 'control item' scores will be computed for each student, and the two scores will be compared using a paired t-test. This primary analysis will test the following hypothesis: "Long-term retention will be better for content that has been repeatedly tested with additional elaboration questions and subsequent mailed individual feedback than for content that has been repeatedly tested alone." Long-term retention will be assessed in a formative electronic key feature assessment in summer term 2019, identical to the entry and exit exams held in winter term 2018/19. Secondary analyses will include unadjusted and adjusted linear regressions with percent scores in the exit exam and retention test as dependent variables and student characteristics as well as their engagement with key feature questions as independent variables.
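As a rough illustration of the design and primary analysis, the 1:1 stratified group assignment and the within-subject paired t statistic can be sketched with the Python standard library. This is a minimal sketch under stated assumptions: the function names, the stratum key, and the example scores are invented for illustration, and the trial's actual randomisation and statistics software is not specified in the record:

```python
import random
import statistics
from math import sqrt


def stratified_randomise(students, stratum_of, seed=None):
    """Assign students 1:1 to groups 'A' and 'B' within strata
    (in this trial: sex and prior summative exam score)."""
    rng = random.Random(seed)
    strata = {}
    for s in students:
        strata.setdefault(stratum_of(s), []).append(s)
    groups = {"A": [], "B": []}
    for members in strata.values():
        rng.shuffle(members)  # random order within each stratum
        for i, s in enumerate(members):
            # alternate assignment, so group sizes within a stratum
            # differ by at most one
            groups["A" if i % 2 == 0 else "B"].append(s)
    return groups


def paired_t(intervention_scores, control_scores):
    """Paired t statistic for within-subject differences between
    intervention-item and control-item percent scores."""
    diffs = [a - b for a, b in zip(intervention_scores, control_scores)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the differences
    return mean_d / (sd_d / sqrt(n))
```

In the real analysis, the t statistic would be referred to a t distribution with n − 1 degrees of freedom to obtain the p value; a statistics package would normally handle this step.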


Recruitment information / eligibility

Status Completed
Enrollment 143
Est. completion date June 14, 2019
Est. primary completion date June 14, 2019
Accepts healthy volunteers Yes
Gender All
Age group 18 Years and older
Eligibility Inclusion Criteria:
- Enrolment in three consecutive undergraduate teaching modules in Year 4 at Goettingen Medical School in winter term 2018/19
Exclusion Criteria:
- No informed consent

Study Design


Related Conditions & MeSH terms


Intervention

Other:
Elaboration and individual mailed feedback
see above

Locations

Country Name City State
Germany University Medical Centre Göttingen Göttingen

Sponsors (1)

Lead Sponsor Collaborator
University Medical Center Goettingen

Country where clinical trial is conducted

Germany

References & Publications (15)

Baghdady M, Carnahan H, Lam EW, Woods NN. Test-enhanced learning and its effect on comprehension and diagnostic accuracy. Med Educ. 2014 Feb;48(2):181-8. doi: 10.1111/medu.12302.

Dobson JL, Linderholm T. Self-testing promotes superior retention of anatomy and physiology information. Adv Health Sci Educ Theory Pract. 2015 Mar;20(1):149-61. doi: 10.1007/s10459-014-9514-8. Epub 2014 May 17.

Goldmann M, Middeke AC, Schuelper N, Dehl T, Raupach T. [Choosing Wisely in medical education]. Z Evid Fortbild Qual Gesundhwes. 2017 Dec;129:22-26. doi: 10.1016/j.zefq.2017.10.014. Epub 2017 Nov 16. German.

Kromann CB, Bohnstedt C, Jensen ML, Ringsted C. The testing effect on skills learning might last 6 months. Adv Health Sci Educ Theory Pract. 2010 Aug;15(3):395-401. doi: 10.1007/s10459-009-9207-x. Epub 2009 Oct 17.

Kromann CB, Jensen ML, Ringsted C. The effect of testing on skills learning. Med Educ. 2009 Jan;43(1):21-7. doi: 10.1111/j.1365-2923.2008.03245.x.

Larsen DP, Butler AC, Roediger HL 3rd. Test-enhanced learning in medical education. Med Educ. 2008 Oct;42(10):959-66. doi: 10.1111/j.1365-2923.2008.03124.x.

Logan JM, Thompson AJ, Marshak DW. Testing to enhance retention in human anatomy. Anat Sci Educ. 2011 Sep-Oct;4(5):243-8. doi: 10.1002/ase.250. Epub 2011 Jul 29.

Ludwig S, Schuelper N, Brown J, Anders S, Raupach T. How can we teach medical students to choose wisely? A randomised controlled cross-over study of video- versus text-based case scenarios. BMC Med. 2018 Jul 6;16(1):107. doi: 10.1186/s12916-018-1090-y.

Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010 Jan;44(1):94-100. doi: 10.1111/j.1365-2923.2009.03507.x. Review.

Page G, Bordage G, Allen T. Developing key-feature problems and examinations to assess clinical decision-making skills. Acad Med. 1995 Mar;70(3):194-201.

Raupach T, Andresen JC, Meyer K, Strobel L, Koziolek M, Jung W, Brown J, Anders S. Test-enhanced learning of clinical reasoning: a crossover randomised trial. Med Educ. 2016 Jul;50(7):711-20. doi: 10.1111/medu.13069.

Raupach T, Schuelper N. Reconsidering the role of assessments in undergraduate medical education. Med Educ. 2018 May;52(5):464-466. doi: 10.1111/medu.13543.

Roediger HL 3rd, Karpicke JD. The Power of Testing Memory: Basic Research and Implications for Educational Practice. Perspect Psychol Sci. 2006 Sep;1(3):181-210. doi: 10.1111/j.1745-6916.2006.00012.x.

Watling C, Driessen E, van der Vleuten CP, Lingard L. Learning from clinical work: the roles of learning cues and credibility judgements. Med Educ. 2012 Feb;46(2):192-200. doi: 10.1111/j.1365-2923.2011.04126.x.

Wing EA, Marsh EJ, Cabeza R. Neural correlates of retrieval-based memory enhancement: an fMRI study of the testing effect. Neuropsychologia. 2013 Oct;51(12):2360-70. doi: 10.1016/j.neuropsychologia.2013.04.004. Epub 2013 Apr 19.


Outcome

Type Measure Description Time frame Safety issue
Primary Clinical reasoning performance Within-subject difference in percent scores in intervention versus control items in the retention test six months after the last e-seminar Nine months after the start of the study
Secondary Predictors of exam performance Unadjusted and adjusted linear regressions with percent scores in the exit exam and retention test as dependent variables and student characteristics as well as their engagement with key feature questions as independent variables Three (exit exam) and nine (retention test) months after the start of the study
See also
  Status Clinical Trial Phase
Completed NCT05081635 - Consenso2_F1 Delphi Consensus Study on Post-graduate Medical Education Success and Failure and Its Influencing Factors
Recruiting NCT06092320 - Does Teaching Before or After Simulation Improve Learning? N/A
Recruiting NCT05436899 - A Pilot Study on Training Simulator Efficacy N/A
Completed NCT03758391 - Comparison of Learning in Traditional Versus "Flipped" Classrooms N/A
Completed NCT05078762 - Immersive Virtual Reality in Simulation-based Bronchoscopy Training N/A
Completed NCT05526365 - Idea Density in Exam Performance N/A
Completed NCT05043909 - The Effects of Virtual Reality and Augmented Reality Training System on Elderly Oral Care Skill for Oral Hygiene and Nursing Students N/A
Completed NCT05191589 - Haptic Devices Impact on Laparoscopic Simulators N/A
Completed NCT05596305 - Outcomes of Anti Stigma Educational Intervention of Ungraduated Medical Students N/A
Completed NCT06276049 - ChatGPT Helping Advance Training for Medical Students: A Study on Self-Directed Learning Enhancement N/A
Completed NCT02971735 - Cognitive Style and Mobile Technology in E-learning in Undergraduate Medical Education N/A
Completed NCT02168192 - Breaking Bad News in Obstetrics: A Trial of Simulation-Debrief Based Education N/A
Completed NCT00466453 - Adapting Web-based Instruction to Baseline Knowledge of Physicians-in-training Phase 2/Phase 3
Recruiting NCT05169073 - Virtual Reality Training for Laparoscopic Cholecystectomy N/A
Recruiting NCT06259734 - Transfusion Camp for Medical Students in Rwanda N/A
Completed NCT05393219 - Cardiac Biofeedback, Mindfulness, and Inner Resources Mobilization Interventions on Performances of Medical Students N/A
Recruiting NCT04375254 - Neuroscience-based Nomenclature (NbN) as a Teaching Tool
Completed NCT05834374 - Training for Transfer by Contextual Variation N/A
Completed NCT03863028 - Development and Validation of a Simulator-based Test in Transurethral Resection of Bladder Tumors
Completed NCT03471975 - Learning Direct Laryngoscopy Using a McGrath Video Laryngoscope as Direct Versus Indirect Laryngoscope N/A