
Clinical Trial Details — Status: Enrolling by invitation

Administrative data

NCT number NCT05382455
Other study ID # UP-22-00370
Secondary ID
Status Enrolling by invitation
Phase
First received
Last updated
Start date June 15, 2022
Est. completion date July 15, 2022

Study information

Verified date May 2022
Source University of Southern California
Contact n/a
Is FDA regulated No
Health authority
Study type Observational

Clinical Trial Summary

The investigators aim to develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Artificial Intelligence Extension (PRISMA-AI) guideline as a stand-alone extension of the PRISMA statement, modified to reflect the particular requirements for the reporting of AI and its related topics (namely machine learning, deep learning, and neural networks) in systematic reviews.


Description:

With advances in artificial intelligence (AI) over the last two decades, enthusiasm for and adoption of this technology in medicine have steadily increased. Yet despite the greater adoption of AI in medicine, the way such methodologies and results are reported varies widely, and clinical studies utilizing AI can be challenging for the general clinician to read. Systematic reviews of AI applications are an important area for which specific guidance is needed. An ongoing systematic review led by our team has shown that the number of systematic reviews on AI applications (with or without meta-analysis) is increasing dramatically over time, yet the quality of reporting remains poor and heterogeneous, leading to inconsistencies in the informational details reported among individual studies. Consequently, the lack of these details may pose problems for primary research and evidence synthesis, and potentially limits their usefulness for stakeholders interested in implementing AI or using the information in systematic reviews. The criteria will derive from consensus among multi-specialty experts (in each medical specialty) who have already published on AI applications in leading medical journals, together with the lead authors of PRISMA, STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI and DECIDE-AI, to ensure that the criteria have global applicability across disciplines and for every type of study involving AI. The proposed PRISMA-AI extension criteria focus on standardizing the reporting of methods and results for clinical studies utilizing AI. These criteria will reflect the most relevant technical details a data scientist requires for future reproducibility, while remaining focused on the clinician reader's ability to critically follow and ascertain the relevant outcomes of such studies. The resultant PRISMA-AI extension will:

1. help stakeholders interested in implementing AI or using AI-related information in systematic reviews;
2. create a framework for reviewers who assess publications;
3. provide a tool for training researchers in AI systematic review (SR) methodology;
4. help end-users of SRs, such as physicians and policymakers, better evaluate an SR's validity and applicability in their decision-making process.

The success of the criteria will be seen in how manuscripts are written, how peer reviewers assess them, and, finally, how the general readership is able to read and digest the published studies.


Recruitment information / eligibility

Status Enrolling by invitation
Enrollment 150
Est. completion date July 15, 2022
Est. primary completion date June 30, 2022
Accepts healthy volunteers No
Gender All
Age group 18 Years and older
Eligibility Inclusion Criteria:
- experts in the use of AI technology in medicine
- experts in PRISMA
- lead authors of STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI and DECIDE-AI
Exclusion Criteria:
- Panelists who are not able to commit to all rounds of the modified Delphi process will be excluded

Study Design


Related Conditions & MeSH terms


Intervention

Other:
Delphi Questionnaire
An invitation email, including a link to the survey, will be sent to the panel of experts in AI in healthcare. The Delphi questionnaire will be administered via Welphi.com. In the first survey, panel members will outline the AI reporting standards in systematic reviews and objectively identify critical aspects of reporting methodology and results. In subsequent surveys, the expert panel will evaluate the modified criteria using a 1 to 5-point Likert scale, with space provided for suggested edits and comments. Multiple rounds will be conducted until consensus is reached. After each round of Likert responses, the study team will calculate the agreement and distribution of responses. Likert responses will be dichotomized, with positive values indicating agreement and neutral or negative values indicating disagreement. For questions that do not reach a consensus of more than 80% in the first round, or that need further explanation, additional rounds of the survey may be performed.
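The per-statement consensus calculation described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis code; the mapping of "positive" Likert values to ratings 4 and 5 (with 3 treated as neutral) and the function names are assumptions based on the description.

```python
# Sketch of the Delphi consensus check: Likert responses (1-5) are
# dichotomized -- ratings of 4 or 5 count as agreement, 3 or below as
# neutral/disagreement (assumed mapping) -- and a statement reaches
# consensus when at least 80% of panelists agree.

CONSENSUS_THRESHOLD = 0.80  # predefined >= 80% agreement

def agreement_rate(responses):
    """Fraction of panelists whose Likert rating indicates agreement."""
    agree = sum(1 for r in responses if r >= 4)  # positive values only
    return agree / len(responses)

def reaches_consensus(responses, threshold=CONSENSUS_THRESHOLD):
    """True if the statement meets the predefined consensus threshold."""
    return agreement_rate(responses) >= threshold

# One round of responses for a single statement from a 10-member panel:
round_one = [5, 4, 4, 5, 3, 4, 5, 4, 4, 2]
print(agreement_rate(round_one))     # 0.8
print(reaches_consensus(round_one))  # True -> no further round needed
```

A statement falling below the threshold would be carried into the next survey round, as the protocol describes.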

Locations

Country Name City State
United States University of Southern California Los Angeles California

Sponsors (1)

Lead Sponsor Collaborator
University of Southern California

Country where clinical trial is conducted

United States

References & Publications (5)

Collins GS, Dhiman P, Andaur Navarro CL, Ma J, Hooft L, Reitsma JB, Logullo P, Beam AL, Peng L, Van Calster B, van Smeden M, Riley RD, Moons KG. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021 Jul 9;11(7):e048008. doi: 10.1136/bmjopen-2020-048008. — View Citation

Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ; SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Lancet Digit Health. 2020 Oct;2(10):e549-e560. doi: 10.1016/S2589-7500(20)30219-3. Epub 2020 Sep 9. Review. — View Citation

Ibrahim H, Liu X, Rivera SC, Moher D, Chan AW, Sydes MR, Calvert MJ, Denniston AK. Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines. Trials. 2021 Jan 6;22(1):11. doi: 10.1186/s13063-020-04951-6. — View Citation

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021 Mar 29;372:n71. doi: 10.1136/bmj.n71. — View Citation

Sounderajah V, Ashrafian H, Aggarwal R, De Fauw J, Denniston AK, Greaves F, Karthikesalingam A, King D, Liu X, Markar SR, McInnes MDF, Panch T, Pearson-Stuttard J, Ting DSW, Golub RM, Moher D, Bossuyt PM, Darzi A. Developing specific reporting guidelines for diagnostic accuracy studies assessing AI interventions: The STARD-AI Steering Group. Nat Med. 2020 Jun;26(6):807-808. doi: 10.1038/s41591-020-0941-1. — View Citation

Outcome

Type Measure Description Time frame Safety issue
Primary Degree of consensus The level of agreement for all statements achieving consensus from the expert panel; consensus is predefined as ≥ 80% of the panel rating a given statement 3 months
See also
  Status Clinical Trial Phase
Completed NCT05386082 - Anesthesia Core Quality Metrics Consensus Delphi Study
Completed NCT05595018 - The Opinions of Multiple Stakeholders Towards Gerontechnology Evaluation Framework: Four Studies Using Delphi Techniques
Enrolling by invitation NCT05388786 - Complications and Adverse Events in Lymphadenectomy in the Inguinal Area
Completed NCT05373966 - Robotic Radical Nephroureterectomy Delphi Consensus
Completed NCT05668156 - Finding Consensus in Fasting Terminology
Recruiting NCT04471103 - Comparison of Multi-Round and Real-Time Delphi Survey Methods N/A
Recruiting NCT06370338 - Cardiothoracic Critical Care as Subspecialty and Its Core Competencies