Clinical Trial Details
— Status: Completed
Administrative data
NCT number | NCT06234085
Other study ID # | STUDY00015707
Secondary ID |
Status | Completed
Phase | N/A
First received |
Last updated |
Start date | July 1, 2022
Est. completion date | June 30, 2023
Study information
Verified date | January 2024
Source | University of Washington
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Interventional
Clinical Trial Summary
The goal of this randomized, single-blinded, educational study is to test the effect of
providing crowdsourced ratings and feedback to second-year (PGY2) internal medicine (IM) and
family medicine (FM) resident physicians about their adverse event communication skills. The
main question it aims to answer is:
- Is the intervention of providing reports with personal performance feedback and
recommendations for effective error disclosure associated with higher ratings of resident
error disclosure skills?
Participants will perform simulated error disclosure with a software tool called the
Video-based Communication Assessment (VCA). Participants will be randomized to receive
feedback reports (intervention) or not (control). Participants receiving the intervention
will be asked to review their feedback and all participants will use the VCA again
approximately 4 weeks later with different patient cases.
Description:
Participating residency programs assigned all eligible post-graduate year 2 residents (PGY2s) to attend
a 75-minute teaching session at Time 1, consisting of 50 minutes of lecture about
communication with patients after medical harm, 20 minutes of VCA practice with two cases
(containing 4 and 3 sequenced vignettes, respectively), and 5 minutes of debrief. At Time 2,
residents attended a session consisting of 25 minutes of lecture about institutional programs
to support clinicians with error disclosure and 20 minutes of VCA practice with two
additional cases (3 sequenced vignettes each). The recommended duration between Time 1 and
Time 2 was four weeks, although the conference schedule at two residencies required an
interval of 5 to 8 weeks for some residents.
Residents who completed the VCA at Time 1 were randomized in 1:1 fashion to either receive
feedback before Time 2 (intervention) or after Time 2 (control). Intervention residents
received emails when their feedback was available, instructing them to review it in the app
before the next teaching session and VCA practice. Feedback was typically provided two weeks
after VCA use to allow for completion of rating and data quality checks. Reports presented an
interactive feedback display within the VCA app for each vignette.
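As an illustration of the 1:1 allocation described above, the sketch below shows one way such an assignment could be generated. The function name, the use of Python's random module, and the shuffle-then-alternate scheme are assumptions for illustration only, not the study's actual allocation procedure.

    import random

    def randomize_one_to_one(resident_ids, seed=2022):
        # Shuffle the residents who completed the VCA at Time 1, then alternate
        # assignment so the two arms end up balanced 1:1. Illustrative only.
        rng = random.Random(seed)
        ids = list(resident_ids)
        rng.shuffle(ids)
        return {rid: ("intervention" if i % 2 == 0 else "control")
                for i, rid in enumerate(ids)}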
Residents provided audio responses to each vignette through the VCA software. Audio responses
were bundled into rating tasks on MTurk for raters who were US residents over 18 years old
and able to speak and read English. Raters answered demographic questions, read a vignette
description in lay language, viewed the patient video, and listened to resident responses.
They rated each response on six items covering domains of error disclosure. We averaged
ratings across items and raters to create an overall rating of each response. We then
averaged response ratings across all 7 vignettes at Time 1 to create an overall Time 1 score,
and across all 6 vignettes at Time 2 to create a Time 2 score.
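To make the scoring arithmetic concrete, here is a minimal sketch of the two averaging steps, assuming a long-format pandas table. The data layout and column names (resident, time, vignette, rater, rating) are hypothetical, not the study's actual variables.

    import pandas as pd

    def overall_scores(ratings: pd.DataFrame) -> pd.DataFrame:
        # 'ratings' is assumed to hold one row per (resident, time, vignette,
        # rater, item) with a numeric 'rating' column; names are illustrative.
        # Average across the six items for each rater, then across raters, to
        # get one overall rating per response (resident x vignette).
        per_rater = (ratings
                     .groupby(["resident", "time", "vignette", "rater"])["rating"]
                     .mean())
        per_response = per_rater.groupby(level=["resident", "time", "vignette"]).mean()
        # Average response ratings across the 7 Time 1 vignettes and the 6
        # Time 2 vignettes to get one overall score per resident per time point.
        return (per_response
                .groupby(level=["resident", "time"])
                .mean()
                .reset_index(name="overall_score"))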
Residents completed questionnaires in the VCA application before proceeding to cases. The
survey at Time 1 asked about age, gender, race, the number of times the resident had
personally participated in disclosure of a harmful error to a patient or family, and the
highest level of involvement they had had during disclosure of a harmful medical error. Before
Time 2, residents who had received feedback were asked "approximately how many minutes did
you spend reviewing your feedback" (response options in 5-minute ranges), "how many of your
own responses did you replay", and "how many of the exemplar (highly rated peer) responses did
you play" (response options of none, 1-2, 3-4, or 5 or more). Residents responded to four
additional items (Table 2) about the usefulness of each feedback component (scores, personal
recordings, exemplar recordings, learning points) using a 5-point scale with labels from "not
at all" to "extremely".
To address our primary study question about the effect of the intervention, i.e., access to
VCA feedback, we conducted a factorial analysis of covariance (ANCOVA) examining the impact
that the intervention and prior disclosure exposure had on Time 2 scores, while adjusting for
Time 1 scores. We used logistic regression to investigate whether Time 1 scores predicted
the likelihood that participants returned for Time 2.
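As a sketch, the two analyses could be run with Python's statsmodels formula interface as shown below. The data file name, column names, and the inclusion of an arm-by-exposure interaction term are assumptions made for illustration, not details confirmed by the study.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per resident; file and column names are hypothetical.
    df = pd.read_csv("vca_scores.csv")

    # Factorial ANCOVA: Time 2 score modeled on intervention arm and prior
    # disclosure exposure (with their interaction), adjusting for Time 1 score.
    ancova = smf.ols(
        "time2_score ~ C(arm) * C(prior_disclosure) + time1_score", data=df
    ).fit()
    print(ancova.summary())

    # Logistic regression: do Time 1 scores predict returning for Time 2?
    attrition = smf.logit("returned_time2 ~ time1_score", data=df).fit()
    print(attrition.summary())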