
Clinical Trial Summary

Currently, the domestic emergency medical system suffers from a disconnect in the flow of information among its participants, namely hospitals, emergency sites, and control agencies, and there are limitations in collecting and using integrated data in emergency situations [1]. In addition, because of the shortage of emergency service personnel at the scene and the absence of a real-time patient information delivery system, records rarely capture the situation at the emergency site in sufficient detail, and pre-hospital emergency patient information is not delivered to the receiving hospital [1]. Pre-hospital patient records are currently often written by hand, relying on paramedics' memory after the transfer is completed, so the data are highly inaccurate and their reliability cannot be guaranteed [2]. In particular, for the four major serious emergency conditions, namely cardiac arrest, severe trauma, cardiovascular emergency, and cerebrovascular emergency, patient information identified in the pre-hospital stage is critical to determining severity; collecting patient information in the field in real time therefore enables severity assessment, and on the basis of that assessment a medical institution suited to the required treatment can be selected [3,4]. Moreover, because targeted treatment for these conditions must be performed within a defined time window, making the patient's information available to hospital medical staff before arrival allows emergency treatment to be prepared in advance, increasing the rate at which treatment is delivered within the appropriate time [5,6,7].


Clinical Trial Description

We develop an AI-based algorithm that predicts the four major serious emergency diseases requiring immediate emergency treatment and evaluates their severity, using the following input data and development technologies.

1. Algorithm input data

1) Video/image data
- Video acquired from a 360-degree camera installed inside the ambulance, connected to a Mobile Hot Spot (MHS) device over a wired RJ-45 connection
- Video data collected from a neckband camera (wearable)
- Video collected from smart glasses (wearable) devices and first-aid terminals

2) Sound signal data
- Pre-hospital paramedic and patient voice data collected through bone conduction microphones worn by paramedics

3) Bio-signal data
- Vital signs collected from the patient monitoring device installed in the ambulance and transmitted to the Mobile Hot Spot (MHS) device over a TCP/IP connection (a minimal transmission sketch follows this list)
- Vital signs collected from the defibrillators and emergency terminals (5G-capable) used by field crews and transmitted over a TCP/IP connection

2. Development technology

1) Development of speech recognition AI technology for the emergency environment
- Collection of spoken text, emergency-related sentences, and voice data from the scene-to-transport phase
- Collection of speech text from on-site transfer service scenarios to build a voice database for speech recognition training
- Paraphrasing of the collected text, in which paramedics generate sentences with similar meanings that they might utter

2) Establishment of a natural language processing system for emergency environment speech transcription data
- Optimization of natural language processing modules, such as morphological (stemming) analysis and named entity recognition, for the emergency medical domain
- Collection of language data from emergency environments, such as emergency activities and first aid
- Processing of the collected emergency environment language data for domain optimization and training of a machine learning-based natural language processing model

3) Design of a noise removal and speaker separation model for paramedic voice information (see the filtering sketch after this list)

4) Development of AI-based image recognition technology for monitoring bio-signal information in ambulances (see the OCR sketch after this list)
- Development of an image-based character recognition algorithm for vital signs displayed by PMS (patient monitoring system) equipment
- Implementation of automatic recognition of PMS equipment (location, type, brand, etc.) from AI learning-based 5G 360° camera video
- Development of automatic character-region detection and OCR (optical character recognition)-based reading algorithms for each type of vital sign
- Implementation of NLP (natural language processing)-based correction of special or distorted characters
- Development of image pre-processing technology that minimizes the effects of background, noise, vibration, lighting, etc.

5) Development of an object detection module for emergency activity image information

6) AI behavior detection video analysis modeling
- Behavior detection using deep learning-based image analysis modeling
- Analysis of general behavior detection techniques
- Testing of classes similar to emergency medical behavior within general behavior detection
- Detection of classes similar to the rescue activities of paramedics and the movement of patients
- General behavior detection modeling

7) From the input variables obtained from this multifaceted data, the main determinants of the model output variable are extracted through the Ji-an Lee deep network method, and the predictive power for the final output variable is calculated (see the feature-importance sketch after this list).
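As an illustration of the bio-signal path above, the following is a minimal sketch of pushing one set of vital-sign readings to the in-ambulance MHS device over TCP/IP. The host address, port, field names, and JSON payload format are assumptions for illustration only; the protocol specifies just that vital signs are collected and transmitted over a TCP/IP connection.

```python
import json
import socket
import time

# Hypothetical MHS endpoint; the real device address, port, and payload
# format are not specified in the protocol.
MHS_HOST = "192.168.0.10"
MHS_PORT = 5000

def send_vital_signs(heart_rate, spo2, systolic_bp, diastolic_bp, resp_rate):
    """Package one set of vital-sign readings as JSON and push it to the
    in-ambulance Mobile Hot Spot (MHS) device over a plain TCP connection."""
    payload = {
        "timestamp": time.time(),
        "heart_rate": heart_rate,        # beats per minute
        "spo2": spo2,                    # peripheral oxygen saturation, %
        "bp_systolic": systolic_bp,      # mmHg
        "bp_diastolic": diastolic_bp,    # mmHg
        "respiratory_rate": resp_rate,   # breaths per minute
    }
    with socket.create_connection((MHS_HOST, MHS_PORT), timeout=5) as conn:
        conn.sendall(json.dumps(payload).encode("utf-8") + b"\n")

if __name__ == "__main__":
    # Example reading taken from a patient monitoring device.
    send_vital_signs(heart_rate=92, spo2=96, systolic_bp=128, diastolic_bp=84, resp_rate=18)
```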
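The design of the noise removal and speaker separation model (item 3 above) is not detailed in the protocol. The sketch below shows only a simple illustrative front-end step, band-pass filtering bone-conduction microphone audio to suppress vehicle rumble and hiss, assuming SciPy is available; it is not the study's separation model itself.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_voice(audio, sample_rate, low_hz=300.0, high_hz=3400.0):
    """Keep the speech band and suppress low-frequency vehicle rumble and
    high-frequency hiss picked up inside the ambulance."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)

if __name__ == "__main__":
    # Synthetic example: a voice-band tone mixed with 50 Hz engine hum.
    sr = 16000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
    clean = bandpass_voice(noisy, sr)
```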
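For the image-based reading of vital signs from the PMS display (item 4 above), the following sketch crops assumed screen regions from a camera frame, applies simple pre-processing, and reads the digits with Tesseract OCR via OpenCV and pytesseract. The region coordinates, file name, and character whitelist are hypothetical; in the study, the equipment-recognition and character-region detection steps would supply the regions.

```python
import cv2
import pytesseract

# Hypothetical screen regions (x, y, width, height) for one PMS model; in the
# study these would come from the automatic equipment-recognition step.
VITAL_REGIONS = {
    "heart_rate": (50, 40, 160, 80),
    "spo2": (50, 140, 160, 80),
}

def read_vitals_from_frame(frame):
    """Crop each vital-sign region from a camera frame, apply simple
    pre-processing to reduce lighting and noise effects, and OCR the digits."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    readings = {}
    for name, (x, y, w, h) in VITAL_REGIONS.items():
        roi = gray[y:y + h, x:x + w]
        roi = cv2.GaussianBlur(roi, (3, 3), 0)  # suppress sensor noise
        _, roi = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            roi, config="--psm 7 -c tessedit_char_whitelist=0123456789")
        readings[name] = text.strip()
    return readings

if __name__ == "__main__":
    frame = cv2.imread("pms_frame.jpg")  # a single captured frame (hypothetical file)
    print(read_vitals_from_frame(frame))
```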
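The protocol does not specify the internals of the deep network method named in item 7, so the sketch below substitutes a generic small feed-forward classifier with permutation importance to show the general idea: rank input variables by how much each contributes to the output and report the predictive power on held-out data. The synthetic features, labels, and metric are illustrative only and are not the study's model.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in features: in the study these would be variables extracted
# from the video, voice, and bio-signal pipelines described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # 8 hypothetical multimodal input variables
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # severity label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A small feed-forward network as a generic stand-in for the deep network model.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Predictive power for the final output variable (here, ROC AUC on held-out data).
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")

# Main determinants: permutation importance ranks input variables by how much
# shuffling each one degrades the model's score.
imp = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for idx in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {idx}: importance {imp.importances_mean[idx]:.3f}")
```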


Study Design


Related Conditions & MeSH terms


NCT number NCT05939258
Study type Observational
Source Yonsei University
Contact
Status Completed
Phase
Start date April 19, 2021
Completion date December 31, 2021