Clinical Trial Details
— Status: Completed
Administrative data
NCT number | NCT05495438
Other study ID # | 22CX7592
Secondary ID |
Status | Completed
Phase |
First received |
Last updated |
Start date | July 22, 2022
Est. completion date | October 31, 2022
Study information
Verified date | February 2023
Source | Imperial College London
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Observational
Clinical Trial Summary
The impact of deploying artificial intelligence (AI) in healthcare settings is unclear, in
particular with regard to how it will influence human decision makers. Previous research
demonstrated that AI alerts were frequently ignored (Kamal et al., 2020) or could lead to
unexpected behaviour with worsening of patient outcomes (Wilson et al., 2021). On the other
hand, excessive confidence and trust placed in the AI could have several adverse consequences,
including a reduced ability to detect harmful AI decisions, leading to patient harm as well as
human deskilling. Some of these aspects relate to automation bias.
In this simulation study, the investigators intend to measure whether medical decisions in
areas of high clinical uncertainty are modified by the use of an AI-based clinical decision
support tool. Specifically, they will measure how the doses of intravenous fluids (IVF) and
vasopressors administered by doctors to adult patients with sepsis (severe infection with
organ failure) in the ICU change when the doses suggested by a hypothetical AI are disclosed.
The area of sepsis resuscitation is poorly codified, with high uncertainty leading to high
variability in practice. This study will not specifically mention the AI Clinician
(Komorowski et al., 2018). Instead, the investigators will describe a hypothetical AI for
which there is some evidence of effectiveness on retrospective data in another clinical
setting (e.g. a model that was retrospectively validated using data from a different country
than the source data used for model training) but no prospective evidence of effectiveness or
safety. As such, it is possible for this hypothetical AI to provide unsafe suggestions. The
investigators will intentionally introduce unsafe AI suggestions (in random order) to measure
participants' sensitivity at detecting these.
Description:
In addition to the aims described in the summary above, the investigators will examine which
participant characteristics are associated with an increased likelihood of being influenced
by the AI, and will conduct a number of pre-specified subgroup analyses, e.g. junior versus
senior ICU doctors, and participants with a positive versus a negative attitude towards AI.
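The sensitivity measure described above lends itself to a simple computation: the proportion
of intentionally unsafe AI suggestions that a participant rejects (i.e. does not follow). The
registry entry does not include the study's analysis code, so the following is a minimal
illustrative sketch in Python, assuming a hypothetical data layout in which each row records
one suggestion shown to one participant; all column and variable names are assumptions.

```python
# Illustrative sketch only: hypothetical data layout, not the study's actual analysis code.
import pandas as pd

# Each row is one AI dose suggestion shown to one participant.
# "suggestion_unsafe": True if the suggestion was one of the intentionally
#   unsafe ones (presented in random order per the protocol).
# "accepted": True if the participant followed the AI suggestion.
responses = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2", "p2"],
    "seniority":   ["junior", "junior", "junior", "senior", "senior", "senior"],
    "suggestion_unsafe": [True, True, False, True, True, False],
    "accepted":          [True, False, True, False, False, True],
})

# Sensitivity at detecting unsafe suggestions = proportion of unsafe
# suggestions the participant rejected (did not follow).
unsafe = responses[responses["suggestion_unsafe"]]
sensitivity = (
    unsafe.assign(detected=~unsafe["accepted"])
          .groupby(["participant", "seniority"])["detected"]
          .mean()
          .reset_index(name="detection_sensitivity")
)
print(sensitivity)

# One pre-specified subgroup contrast, e.g. junior versus senior ICU doctors:
print(sensitivity.groupby("seniority")["detection_sensitivity"].mean())
```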