View clinical trials related to Artificial Intelligence.
A test-retest study on the stability and repeatability of healthy skin features on OCT.
This is a study to validate the effect of an intelligent diagnostic evidence-based analytic system for acute abdominal pain (IDEAS-AAP) in augmenting clinical diagnosis. Included physicians were randomly assigned to a control group or an AI-assisted group. In this experiment, the whole electronic health record of each acute abdominal pain patient was divided into two parts: the signs and symptoms recording (including chief complaint, present history, physical examination, past medical history, trauma and surgery history, personal history, family history, obstetrical history, menstrual history, blood transfusion history, and drug allergy history) and the auxiliary examination recording (including laboratory examinations and radiology reports). For each case, the control group readers will first read the signs and symptoms recording of the electronic health record and make a clinical diagnosis. The readers then decide either to order a list of auxiliary examinations or to confirm the clinical diagnosis without further examination. If the readers choose to order examinations, the corresponding examination results will be fed back to them, and they can then decide either to continue ordering auxiliary examinations or to make a final diagnosis. This cycle continues until the reader makes a final diagnosis. The AI-assisted readers are additionally provided with the features extracted by IDEAS-AAP, a list of suspicious diagnoses predicted by IDEAS-AAP, and the corresponding diagnostic criteria according to guidelines. After the readers receive the examination results, IDEAS-AAP renews its diagnosis prediction.
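A minimal sketch of the reading cycle described above, in Python. The `record`, `reader`, and `ai_assistant` objects and their methods (`review`, `predict`, `lookup`) are hypothetical placeholders for illustration, not part of the actual IDEAS-AAP software.

```python
def reading_cycle(record, reader, ai_assistant=None):
    """Simulate one reader working through a single acute abdominal pain case."""
    context = {"signs_and_symptoms": record["signs_and_symptoms"]}

    if ai_assistant is not None:
        # AI-assisted arm: extracted features, suspicious diagnoses and guideline
        # criteria are shown alongside the signs-and-symptoms recording.
        context["ai_suggestions"] = ai_assistant.predict(context)

    while True:
        decision = reader.review(context)  # hypothetical reader interface
        if decision["confirm"]:
            # The reader confirms a final diagnosis without further examinations.
            return decision["diagnosis"]

        # Otherwise the ordered auxiliary examination results are fed back
        # before the next pass through the record.
        results = record["auxiliary_examinations"].lookup(decision["ordered_exams"])
        context.setdefault("exam_results", []).extend(results)

        if ai_assistant is not None:
            # IDEAS-AAP renews its diagnosis predictions after new results arrive.
            context["ai_suggestions"] = ai_assistant.predict(context)
```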
The study has an initial short retrospective component but is predominantly a prospective study with two main parts. Initially, during a 1-month period while reporters familiarise themselves with the software, two local databases will be reviewed by the AI software:

- A training set of 100 chest X-rays (CXR), some of which contain nodules, used as a training tool with previously documented radiologist performance.
- A set of previously reported radiographs in patients referred by the reporter for CT, with ground truth created from the prior CT report and review by two radiologists if required. This will allow comparison of stand-alone radiologist and AI performance.

This is followed by a 6-month period involving multiple groups of reporters and approximately 20,000 cases, looking at the impact of an AI system that assesses 10 abnormalities on chest X-ray, and reporting on the sensitivity for detection of lesions and the impact on reporter confidence. Specifically, the investigators will look at the following (a tallying sketch follows this entry):

- Findings missed by AI but detected by the reporter
- Findings correctly detected by AI
- Findings missed by the reporter but detected by AI
- Findings detected by AI but disputed by the reporter

and at the AI's impact on:

- The radiological report
- Further recommended imaging
- Alterations to patient management
- Improvement in report confidence as perceived by the reporter

A subsequent 3-month period will look at the impact of AI-produced worklists on report turnaround times and the patient pathway from chest X-ray to CT. The investigators will specifically look at:

- The number of nodules detected
- The number of CXRs recommended for follow-up CT
- The time taken from CXR to CT
- The number of lung cancers detected after CT[1]
- The time to report, measured as previously from PACS and reporting software data

The population to be studied will be all patients over 16 years of age referred by their General Practitioner to Hull University Hospitals NHS Trust for a chest radiograph, and any chest radiograph performed in Hull Royal Infirmary ED radiology for patients over 16 years of age during the 6-month study period. The ED department images patients from the emergency department and in-patients within the hospital. All radiographs will be reviewed initially without the AI information and then again using the additional images. Reporters will mark the effect of the AI on their decision. All disagreements between the reporter and the AI will be reviewed by senior reporters and a consensus decision made.
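A hedged sketch of how the reader/AI agreement categories above might be tallied per ground-truth finding, and how reporter and AI sensitivity could be derived from the tallies. The field names (`detected_by_ai`, `detected_by_reporter`) are illustrative only, not the study's actual data model.

```python
from collections import Counter

def tally_agreement(findings):
    """Tally reader/AI agreement categories over a list of ground-truth findings.

    Each finding is a dict with boolean flags 'detected_by_ai' and
    'detected_by_reporter' (illustrative field names only).
    """
    counts = Counter()
    for f in findings:
        ai, reporter = f["detected_by_ai"], f["detected_by_reporter"]
        if ai and reporter:
            counts["detected_by_both"] += 1
        elif reporter:
            counts["missed_by_ai_detected_by_reporter"] += 1
        elif ai:
            counts["missed_by_reporter_detected_by_ai"] += 1
        else:
            counts["missed_by_both"] += 1

    total = sum(counts.values())
    ai_sensitivity = ((counts["detected_by_both"]
                       + counts["missed_by_reporter_detected_by_ai"]) / total) if total else 0.0
    reporter_sensitivity = ((counts["detected_by_both"]
                             + counts["missed_by_ai_detected_by_reporter"]) / total) if total else 0.0
    return counts, ai_sensitivity, reporter_sensitivity
```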
The investigators aim to build an artificial intelligence tool, based on CT pulmonary angiography, to predict adverse outcomes of acute pulmonary embolism.
This is a prospective study of the effectiveness of an artificial intelligence system for endoscopy report quality among endoscopists. The subjects will be divided into two groups. For the collected endoscopic videos, group A will complete the endoscopy report with the assistance of the artificial intelligence system. The artificial intelligence assistant system can automatically capture images, prompt abnormal lesions, and indicate the parts covered by the examination (the upper gastrointestinal tract is divided into 26 parts). Group B will complete the endoscopy report without special prompts. After a washout period, the two groups switch: group A completes the endoscopy report without AI assistance and group B with AI assistance. Then the completeness of the reported lesions, the accuracy of the lesion locations, the completeness of the lesions and standard parts in the captured images, and other measures will be compared with and without AI assistance.
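A minimal sketch of the coverage check implied above, assuming the examination yields a set of captured anatomical sites. The 26-part subdivision comes from the study description, but the site labels here are placeholders.

```python
# Placeholder labels for the 26 predefined upper-GI parts (site_01 .. site_26);
# the study's actual anatomical labels are not given here.
UPPER_GI_SITES = [f"site_{i:02d}" for i in range(1, 27)]

def coverage_report(captured_sites):
    """Report which of the 26 predefined upper-GI parts were photographed."""
    captured = set(captured_sites)
    missing = [s for s in UPPER_GI_SITES if s not in captured]
    return {
        "covered": len(UPPER_GI_SITES) - len(missing),
        "missing": missing,
        "complete": not missing,
    }

# Example: an examination that captured only the first 24 sites.
print(coverage_report([f"site_{i:02d}" for i in range(1, 25)]))
```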
Patients' subjective complaints about pain intensity are difficult to objectively evaluate, and may lead to inadequate pain management, especially in patients with communication difficulties.
The Operative Link on Gastric Intestinal Metaplasia assessment (OLGIM) staging system, based on biopsy specimens, is commonly used for histological assessment of gastric cancer risk, but its clinical application is limited by the requirement for multiple biopsy samples. The endoscopic grading of gastric intestinal metaplasia (EGGIM) has been shown to correlate significantly with OLGIM. The investigators designed a computer-aided diagnosis program using a deep neural network to automatically evaluate the extent of intestinal metaplasia (IM) and calculate EGGIM scores during endoscopy. This study aims to explore the correlation between EGGIM scores automatically evaluated by artificial intelligence and OLGIM scores.
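As a rough illustration of the score the program automates: EGGIM sums a 0-2 endoscopic grade of intestinal metaplasia over five gastric areas, giving a total of 0-10. The sketch below assumes per-area grades have already been produced (for example, by the deep neural network); the area names follow the published EGGIM scheme, while the function and input format are assumptions for illustration.

```python
# The five areas graded in the published EGGIM scheme.
EGGIM_AREAS = (
    "antrum_lesser_curvature",
    "antrum_greater_curvature",
    "incisura",
    "corpus_lesser_curvature",
    "corpus_greater_curvature",
)

def eggim_score(grades):
    """Sum per-area grades (each 0, 1 or 2) into an EGGIM total (0-10)."""
    total = 0
    for area in EGGIM_AREAS:
        grade = grades[area]
        if grade not in (0, 1, 2):
            raise ValueError(f"grade for {area} must be 0, 1 or 2")
        total += grade
    return total

# Example: focal intestinal metaplasia confined to the antrum.
print(eggim_score({
    "antrum_lesser_curvature": 1,
    "antrum_greater_curvature": 1,
    "incisura": 0,
    "corpus_lesser_curvature": 0,
    "corpus_greater_curvature": 0,
}))  # -> 2
```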
In response to clinical needs, infrared multi-spectral images are combined with traditional clinical images and other multi-modal data to build a more efficient intelligent auxiliary diagnosis system and intelligent equipment for skin health and disease, including automatic segmentation of skin lesions in dermatological images and automatic design and planning of surgical margins for skin tumor surgery.
Currently, the Correa cascade is a widely accepted model of gastric carcinogenesis, and intestinal metaplasia is a high-risk factor for gastric cancer. According to the Sydney criteria, mild intestinal metaplasia is not associated with gastric cancer, while moderate to severe intestinal metaplasia is strongly associated with its development. Because intestinal metaplasia is distributed in various forms, white light endoscopy lacks specificity and shows poor consistency with histopathological diagnosis; pathological biopsy is still needed to make a diagnosis. At present, national guidelines suggest that the OLGIM score should be used to evaluate the risk of gastric cancer, and that patients with OLGIM stage III/IV should undergo close gastroscopic surveillance. However, this requires at least four biopsies, which is clinically infeasible. Confocal laser endomicroscopy allows real-time observation of living tissue, with findings comparable to pathology.
Gastric intestinal metaplasia (GIM) is an important stage in the development of gastric cancer (GC). With technical advances in image-enhanced endoscopy (IEE), studies have demonstrated that IEE has high accuracy for the diagnosis of GIM. The endoscopic grading system (EGGIM), a new endoscopic risk scoring system for GC, has been shown to accurately identify a wide range of patients with GIM. However, achieving high diagnostic accuracy for GIM with IEE and performing EGGIM assessment both require considerable experience, which limits the application of EGGIM. The investigators aim to design a computer-aided diagnosis program using a deep neural network to automatically evaluate the extent of IM and calculate EGGIM scores.
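A hedged sketch of one way per-frame network outputs for a single gastric area could be turned into the 0-2 endoscopic grade that feeds the EGGIM total. The 30% cut-off mirrors the published EGGIM definition of extensive involvement, but the aggregation rule (taking the maximum per-frame estimate) and everything else here are assumptions for illustration, not the investigators' method.

```python
def area_grade(frame_im_fractions, extensive_threshold=0.30):
    """Map per-frame IM-extent estimates (0.0-1.0) for one gastric area to a grade.

    0 = no IM visible, 1 = focal IM (<= 30% of the area), 2 = extensive IM (> 30%).
    Taking the maximum per-frame estimate as the area's extent is an
    illustrative assumption, not the study's actual aggregation rule.
    """
    extent = max(frame_im_fractions, default=0.0)
    if extent > extensive_threshold:
        return 2
    if extent > 0.0:
        return 1
    return 0

# Example: one area whose frames show up to 40% IM involvement -> grade 2.
print(area_grade([0.05, 0.40, 0.12]))
```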