Clinical Trial Details
— Status: Active, not recruiting
Administrative data
NCT number | NCT06036394
Other study ID # | 4147
Secondary ID |
Status | Active, not recruiting
Phase |
First received |
Last updated |
Start date | September 13, 2023
Est. completion date | September 2028
Study information
Verified date | November 2023
Source | Tata Memorial Centre
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Observational
Clinical Trial Summary
Radiotherapy uses high-energy X-rays to stop the growth of tumor cells and is an essential
component of the treatment of brain tumors. Modern radiotherapy relies on computer-guided
planning techniques that aim to deliver high doses of radiation to the tumor-bearing areas of
the brain while limiting the doses to surrounding normal structures. Artificial intelligence
applies advanced computational analysis that can be undertaken on medical images and on the
radiation planning process. We plan to use artificial intelligence techniques to automatically
delineate tumor-bearing areas of the brain and normal structures identified on these images.
We will also apply artificial intelligence to the radiation dose distributions and other
images acquired for radiation treatment to classify tumors with good or poor prognosis,
identify patients developing radiation complications, and detect responses after treatment.
Description:
In the proposed retrospective study, patients treated with Radiotherapy (RT) for Central
Nervous System (CNS) tumors will be included. The database of the TMC (Tata Memorial Centre)
neuro-oncology Disease Management Group (DMG), which maintains records of patients registered
and treated at TMC, will be screened to identify patients eligible for the study. With
approximately 500-600 patients with CNS tumors treated with RT at TMC every year, we expect a
ceiling of 6000 patients for the period 2010-2022, which will be the maximum number of
patients used for the analysis. The images (CT, MRI, PET) used for RT planning, mid-treatment
imaging acquired as part of IGRT (image-guided radiation therapy) or disease evaluation, and
response assessment/surveillance imaging post-RT will be analyzed. The radiation plans and
dose-volume histograms will be obtained from the TPS (Treatment Planning System). All images
and radiation-related data will be downloaded from the PACS (Picture Archiving and
Communication System) and the TPS with anonymization filters applied. Clinical features
(patient, disease, and treatment-related characteristics, and outcomes) will be extracted by
review of the electronic medical records.
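
As an illustrative sketch of the anonymized export step, the snippet below blanks direct
identifiers in a DICOM file before storage; pydicom and the particular tags shown are
assumptions, as the protocol does not specify a toolkit or tag list.

    # Hedged sketch: anonymizing a DICOM file exported from PACS/TPS.
    # pydicom and the chosen tag set are illustrative assumptions.
    import pydicom

    def anonymize_dicom(in_path: str, out_path: str, study_key: str) -> None:
        """Replace direct identifiers with a study-specific key and save a copy."""
        ds = pydicom.dcmread(in_path)
        ds.PatientName = study_key       # replace the patient name
        ds.PatientID = study_key         # replace the hospital ID
        ds.PatientBirthDate = ""         # blank the birth date
        ds.remove_private_tags()         # drop vendor-specific private tags
        ds.save_as(out_path)
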
Image pre-processing will include skull stripping and registration across different
modalities (e.g., MRI and CT) or different sequences (e.g., T1C, T2W, ADC), using rigid or
deformable algorithms as best suited for the modality. The target volumes (TVs), i.e., gross
tumor volume (GTV), clinical target volume (CTV), and planning target volume (PTV), and the
organs at risk (OARs) will be individually reviewed by radiation oncologists, with
modifications applied as appropriate (e.g., exclusion of OARs distorted by disease or
surgery), and will be used to train the machine learning models for supervised learning. The
contours and images will be resampled to a uniform resolution across sequences or modalities
(e.g., T2W/ADC/PET), matching either the 3D sequence (e.g., FSPGR sequence) or the available
images with the least slice thickness. Subsequently, normalization techniques (e.g.,
histogram normalization or Z-score normalization) will be applied within individual images
and across the entire dataset to account for image heterogeneity, including MRI field
strength and differing image acquisition parameters.
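
The resampling and normalization steps could, for instance, be implemented along the lines of
the sketch below, which assumes SimpleITK (not named in the protocol) for resampling to a
uniform voxel spacing and applies per-image Z-score normalization.

    # Hedged sketch: uniform resampling and Z-score normalization of a volume.
    import SimpleITK as sitk
    import numpy as np

    def resample_to_spacing(image: sitk.Image, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
        """Resample an image to a uniform voxel spacing."""
        orig_spacing = image.GetSpacing()
        orig_size = image.GetSize()
        new_size = [int(round(osz * ospc / nspc))
                    for osz, ospc, nspc in zip(orig_size, orig_spacing, spacing)]
        return sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                             image.GetOrigin(), spacing, image.GetDirection(), 0.0,
                             image.GetPixelID())

    def zscore_normalize(image: sitk.Image) -> sitk.Image:
        """Normalize intensities within a single image to zero mean, unit variance."""
        arr = sitk.GetArrayFromImage(image).astype(np.float32)
        arr = (arr - arr.mean()) / (arr.std() + 1e-8)
        out = sitk.GetImageFromArray(arr)
        out.CopyInformation(image)   # keep the original geometry (spacing, origin)
        return out
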
For autosegmentation, both supervised and unsupervised machine learning algorithms will be
applied. For the supervised model, the entire database will be split into training and test
cohorts for model development and application, respectively. Since the OARs are uniformly
applicable across histologies and tumor sites, autosegmentation training for OARs will use
the entire dataset. However, given the variations in target volume delineation (e.g., for
circumscribed vs. diffuse tumors, low grade vs. high grade), training and testing for TVs
will be performed per disease entity. The effectiveness of the automated models will be
tested using the Dice similarity coefficient between manually segmented regions and AI-based
segments.
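
The Dice similarity coefficient for two binary masks can be computed as in the minimal sketch
below.

    # Minimal sketch: Dice similarity coefficient between manual and AI masks.
    import numpy as np

    def dice_coefficient(manual_mask: np.ndarray, auto_mask: np.ndarray) -> float:
        """Dice = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
        manual = manual_mask.astype(bool)
        auto = auto_mask.astype(bool)
        intersection = np.logical_and(manual, auto).sum()
        denominator = manual.sum() + auto.sum()
        return 2.0 * intersection / denominator if denominator > 0 else 1.0
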
For outcome prediction (e.g., survival and toxicities), the next step will be feature
extraction from the images (CT, MRI, PET) corresponding to the different TVs and OARs, and
from the RT dose distribution converted to volumetric image/numeric data (dosiomics).
Extracted features will include first-order features (including shape and histogram
features) and second- or higher-order features (e.g., texture features such as GLCM, GLDM,
GLSZM), or deep learning techniques will be employed. Delta-radiomics will capture temporal
changes in the radiomic features across different time points for the same patient, within
the entire volume and within individual regions.
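
As a simplified illustration of first-order feature extraction and delta-radiomics, the
sketch below computes histogram-based features within a masked region and their relative
change between two time points; in the actual analysis, dedicated radiomics packages or deep
learning pipelines may replace this.

    # Hedged sketch: first-order features in a region and delta-radiomics change.
    import numpy as np
    from scipy import stats

    def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
        """Histogram-based first-order features within a target volume or OAR."""
        voxels = image[mask.astype(bool)]
        return {
            "mean": float(voxels.mean()),
            "std": float(voxels.std()),
            "skewness": float(stats.skew(voxels)),
            "kurtosis": float(stats.kurtosis(voxels)),
            "p10": float(np.percentile(voxels, 10)),
            "p90": float(np.percentile(voxels, 90)),
        }

    def delta_features(features_t1: dict, features_t2: dict) -> dict:
        """Relative change of each feature between two imaging time points."""
        return {name: (features_t2[name] - features_t1[name]) / (abs(features_t1[name]) + 1e-8)
                for name in features_t1}
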
Subsequently, feature reduction and selection techniques such as LASSO or recursive feature
elimination will be used to shortlist the number of features according to the sample size.
The outputs will be decided by the modeling defined for specific classification problems
(e.g., tumor vs. edema, recurrence vs. pseudoprogression, outcomes, tumor region of interest
vs. non-tumoral area) as obtained from the clinical information. Any class imbalance will be
addressed using methods such as random subset sampling or SMOTE for data augmentation of the
minority class.
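
A minimal sketch of the feature-shortlisting and class-balancing steps is given below,
assuming scikit-learn and imbalanced-learn; the cross-validation setting and random seeds are
illustrative rather than protocol-specified.

    # Hedged sketch: LASSO-based feature shortlisting and SMOTE oversampling.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler
    from imblearn.over_sampling import SMOTE

    def shortlist_features(X: np.ndarray, y: np.ndarray, feature_names: list) -> list:
        """Keep features with non-zero LASSO coefficients after standardization."""
        X_scaled = StandardScaler().fit_transform(X)
        lasso = LassoCV(cv=5, random_state=0).fit(X_scaled, y)
        return [name for name, coef in zip(feature_names, lasso.coef_) if coef != 0.0]

    def balance_classes(X: np.ndarray, y: np.ndarray):
        """Oversample the minority class with SMOTE before classifier training."""
        return SMOTE(random_state=0).fit_resample(X, y)
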
Machine learning algorithms such as LDA, k-NN, SVM, random forest, and AdaBoost will be
applied singly or in combination as an ensembled classifier to find the model with the best
performance. Deep learning classifiers will be used alongside feature-based modeling and
compared to test the classifiers' applicability. Validation techniques such as leave-one-out
validation, k-fold validation, and a split into training and test cohorts will be used to
assess the stability of the machine learning models.
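
The ensembled-classifier and k-fold validation steps could look like the sketch below; the
base learners and hyperparameters shown are placeholders rather than the study's final
models.

    # Hedged sketch: soft-voting ensemble of standard classifiers with k-fold validation.
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
    from sklearn.model_selection import cross_val_score

    def build_ensemble() -> VotingClassifier:
        """Combine several base learners into a soft-voting ensemble."""
        return VotingClassifier(
            estimators=[
                ("lda", LinearDiscriminantAnalysis()),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0)),
            ],
            voting="soft",
        )

    def kfold_auc(model, X, y, folds: int = 5):
        """Assess model stability with k-fold cross-validation (ROC AUC per fold)."""
        return cross_val_score(model, X, y, cv=folds, scoring="roc_auc")
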
Radiomic analysis will be done by data scientists/study investigators with expertise in data
analytics. All segmentations will be done with open-source software such as ITK-SNAP
(itksnap.org) or 3D Slicer (slicer.org). Feature extraction and modeling will be done with
open-source software such as Python (python.org).

As a tertiary objective of the project, we will develop a protocol for anonymized data
storage (clinical information, radiation planning and response assessment images, radiation
planning data, and intra-treatment images such as cone-beam computed tomography) in a secured
image biobank repository with protected cloud space. Natural language processing (NLP) models
will also be developed, trained, and validated for the extraction and documentation of the
clinical variables collected for the study.
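
As a placeholder for the NLP models to be developed, the sketch below shows a simple
rule-based extraction of one clinical variable (a radiotherapy dose/fractionation mention)
from free text; the variable and pattern are illustrative assumptions, not the study's
method.

    # Hedged sketch: rule-based extraction of dose/fractionation from a clinical note.
    import re

    DOSE_PATTERN = re.compile(
        r"(\d+(?:\.\d+)?)\s*Gy\s*(?:in|/)\s*(\d+)\s*(?:fractions|#|fr)", re.IGNORECASE)

    def extract_rt_dose(note_text: str):
        """Return (total dose in Gy, number of fractions) if documented, else None."""
        match = DOSE_PATTERN.search(note_text)
        if match is None:
            return None
        return float(match.group(1)), int(match.group(2))

    # Example: extract_rt_dose("Received 60 Gy in 30 fractions to the tumor bed.") -> (60.0, 30)
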
With continuous advancements in computational science, newer analytical techniques and
platforms will be applied as appropriate by collaborators from the Bhabha Atomic Research
Centre, Mumbai, through the sharing of anonymized data.