Clinical Trial Details
— Status: Completed
Administrative data
NCT number | NCT05607485
Other study ID # | P-2020-701
Secondary ID |
Status | Completed
Phase | N/A
First received |
Last updated |
Start date | April 1, 2021
Est. completion date | May 1, 2022
Study information
Verified date | November 2022
Source | Copenhagen Academy for Medical Education and Simulation
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Interventional
Clinical Trial Summary
The goal of this comparative blinded assessment study is to compare the ratings of crowd workers with expert ratings in simulated robot-assisted radical prostatectomies.
The main questions it aims to answer are:
- to examine the use of crowd-sourced assessment of performance in robot-assisted radical prostatectomy (RARP) compared with assessment by experienced surgeons
- to explore whether some crowd workers (CW) are better raters than others.
Participants will assess edited videos of simulated robot-assisted radical prostatectomies using a standardized assessment tool. The laypersons will be asked to answer yes/no to the question 'Would you trust this doctor to perform robot-assisted surgery on you?' after each surgery. All participants will be blinded to the identity of the surgeon performing in the videos of the robot-assisted radical prostatectomy. Researchers will compare the laypersons with expert raters to see if there is any difference between their ratings.
Description:
3. Trial design

3.1 Content This study will evaluate global robotic skills for the three modules performed on the RobotiX, Simbionix: bladder neck dissection, nerve-sparing dissection, and ureterovesical anastomosis, all recorded in the previous study: 'Validation of a novel simulation-based test in robot-assisted radical prostatectomy.'

3.2 Response process Experienced surgeons and crowd workers will first be presented with a short, written instruction describing the trial. Before enrolment, all participants will have signed an informed consent form (appendix 2) and completed a demographic questionnaire covering baseline characteristics of the crowd and the surgical experience of the experienced surgeons (appendix 3). After completion of the informed consent and the demographic questionnaire, the survey links will be sent to the participants. Afterwards, both crowd raters and experts will be trained in how to assess the videos using the assessment tool, mGEARS. mGEARS is composed of 5 domains: depth perception, bimanual dexterity, efficiency, force sensitivity, and robotic control. Performance in each domain is measured on a 5-point Likert scale. A rating of 1 corresponds to the lowest level of performance, whereas a rating of 5 corresponds to the highest level of performance. An overall performance score is derived by summing the scores of the five domains (maximum 25 points). The raters will have time to read and understand the assessment tool before rating the videos. An elaborate explanation of the chosen domains, including how to rate each video, will be given to the raters.
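As an illustration of the scoring just described, the minimal sketch below sums the five mGEARS domain ratings into the overall score; the function name and data structure are assumptions for illustration and are not part of the protocol.

```python
# Minimal sketch (not part of the protocol): deriving the overall mGEARS score
# by summing the five 1-5 Likert domain ratings described above.

MGEARS_DOMAINS = [
    "depth_perception",
    "bimanual_dexterity",
    "efficiency",
    "force_sensitivity",
    "robotic_control",
]

def overall_mgears_score(ratings: dict) -> int:
    """Return the overall performance score (5-25) as the sum of the five domain ratings."""
    for domain in MGEARS_DOMAINS:
        value = ratings[domain]
        if not 1 <= value <= 5:
            raise ValueError(f"{domain} must be rated 1-5, got {value}")
    return sum(ratings[domain] for domain in MGEARS_DOMAINS)

# Example: rating every domain 3 (mid-level performance) gives 15 out of 25.
print(overall_mgears_score({d: 3 for d in MGEARS_DOMAINS}))  # 15
```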
3.4 Video material The participants will assess the videos using the assessment tool in a survey sent by E-boks. The surveys will be sent using a URL link from REDCap. All videos are stored in the 23video system, and a link to the videos will be included in the survey. The survey has been successfully tested on different devices.
The investigators will randomly choose videos from the third repetition from 5 novice surgeons, 5 experienced robotic surgeons, and 5 robotic surgeons experienced in RARP. The investigators will use videos edited to a maximum length of 5 minutes. The videos will be edited from the start (0 minutes) to the 5-minute mark, where the video will be stopped. Therefore, the videos will show how far the surgeon has progressed after 5 minutes of simulated
operation. A total of 45 edited videos will be used for crowd-sourced assessment.
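A minimal sketch of how the edited-video pool described above breaks down, assuming one third-repetition video per surgeon per module (group sizes from the paragraph above, module list from section 3.1); the names used here are illustrative only.

```python
# Minimal sketch: the edited-video pool implied by sections 3.1 and 3.4.
# One third-repetition video per surgeon per module is an assumption for illustration.
groups = {"novice": 5, "experienced_robotic": 5, "experienced_rarp": 5}
modules = [
    "bladder_neck_dissection",
    "nerve_sparing_dissection",
    "ureterovesical_anastomosis",
]

video_pool = [
    (group, surgeon_idx, module)
    for group, n_surgeons in groups.items()
    for surgeon_idx in range(n_surgeons)
    for module in modules
]
print(len(video_pool))  # 45 edited videos, each cut to the first 5 minutes
```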
To secure the response process of Messick's framework, all participants will be blinded to the identity and skill level of the surgeon on the recorded video. The experienced surgeons could potentially rate their own videos, which could be a threat to validity of the response process, but as the videos are blinded, they will not know which videos are their own. In addition, there will be a significant time delay between performing the task and rating the videos. Thus, it is unlikely that they will be able to identify their own videos. All videos will be given a randomly allocated identification ID.
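A minimal sketch of how randomly allocated, non-identifying video IDs could be generated for blinding; the 6-character ID format and the function name are assumptions, as the protocol only states that IDs are randomly allocated.

```python
# Minimal sketch: allocating a random, non-identifying ID to each video for blinding.
# The 6-character ID format is an assumption; the protocol does not specify a format.
import random
import string

def allocate_blinded_ids(video_files, seed=None):
    """Map each video file to a unique, randomly generated identification ID."""
    rng = random.Random(seed)
    alphabet = string.ascii_uppercase + string.digits
    mapping = {}
    used = set()
    for video in video_files:
        new_id = "".join(rng.choices(alphabet, k=6))
        while new_id in used:  # re-draw in the unlikely event of a collision
            new_id = "".join(rng.choices(alphabet, k=6))
        used.add(new_id)
        mapping[video] = new_id
    return mapping

# Example: blind three video files; only the key holder can link IDs back to surgeons.
print(allocate_blinded_ids(["novice_03.mp4", "expert_01.mp4", "rarp_05.mp4"], seed=42))
```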
3.5 Video-rating Each participant will rate ten randomly chosen videos using mGEARS. The participants will be given a randomized ID number, which is used to match the ten videos to the participant. They will be asked to evaluate each video on the five domains of mGEARS, each on a scale from one to five. After rating a video, the participant will be asked to answer 'yes' or 'no' to the question: 'Would you trust this doctor to operate on you, if you were to have your prostate removed using robot-assisted surgery?'. The participants will fill in their answers after the video-rating in REDCap.
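A minimal sketch of matching ten randomly chosen videos to a rater via their randomized ID number; seeding the draw with the participant ID so that the match is reproducible is an assumption for illustration, not the protocol's stated mechanism.

```python
# Minimal sketch: drawing ten videos for a rater based on their randomized ID number.
# Seeding with the participant ID (so the same ID always maps to the same ten videos)
# is an assumption; the protocol does not specify how the matching is implemented.
import random

def videos_for_participant(participant_id, video_ids, n=10):
    """Return the n videos matched to this participant's randomized ID."""
    rng = random.Random(participant_id)
    return rng.sample(video_ids, k=n)

video_ids = [f"VID{i:03d}" for i in range(45)]
print(videos_for_participant(participant_id=1017, video_ids=video_ids))
```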
3.6 Evaluation questions After the crowd-raters finish the video-ratings, they will receive a final questionnaire in REDCap, in which they are asked for their opinion on a possible future role as crowd-raters, including time use and a possible payment level (appendix 4).
3.7 Data-collection All data will be collected and stored in REDCap, a platform designed for storing research data. All data will be pseudo-anonymized, as each participant will get a unique link known only to the participant and the principal investigator (RGO). The participants can only rate the videos once. The data will be blinded by RGO prior to statistical analysis.
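A minimal sketch of the pseudo-anonymization idea described above: each participant is identified by a random token embedded in their unique survey link, and only a key file held by the principal investigator links tokens back to identities. The token scheme, file layout, and URL are assumptions for illustration; REDCap's actual survey-link mechanism is not reproduced here.

```python
# Minimal sketch of pseudo-anonymization via unique links; the token scheme and the
# CSV key file are assumptions for illustration and do not reproduce REDCap internals.
import csv
import secrets

def make_key_file(participant_emails, path="pseudonym_key.csv"):
    """Create a key file mapping identities to random tokens; only the PI keeps this file."""
    rows = [(email, secrets.token_urlsafe(12)) for email in participant_emails]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["identity", "token"])
        writer.writerows(rows)
    # Analysts only ever see the tokens, never the identities.
    return [token for _, token in rows]

tokens = make_key_file(["rater01@example.org", "rater02@example.org"])
print([f"https://survey.example.org/rate?token={t}" for t in tokens])  # hypothetical URLs
```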
4. Selection of participants The crowd workers will be recruited through Forskningspanelet, a Danish association for volunteer patients who would like to contribute to research, as well as via e-mail, Facebook, the website of the Danish prostate cancer association (PROPA), and the monthly PROPA membership magazine.
The expert panel will be invited by e-mail.