BACKGROUND: The radiological differential diagnosis between tumor recurrence and radiation-induced necrosis (ie, pseudoprogression) is of paramount importance in the management of glioma patients. OBJECTIVE: This research aims to develop a deep learning methodology for automated differentiation of tumor recurrence from radiation necrosis based on routine magnetic resonance imaging (MRI) scans. METHODS: In this retrospective study, 146 patients who underwent radiation therapy after glioma resection and presented with suspected recurrent lesions at the follow-up MRI examination were selected for analysis. Routine MRI scans were acquired from each patient, including T1, T2, and gadolinium-contrast-enhanced T1 sequences. Of those cases, 96 (65.8%) were confirmed as glioma recurrence on postsurgical pathological examination, while 50 (34.2%) were diagnosed as necrosis. A lightweight deep neural network (DNN) (ie, the efficient radionecrosis neural network [ERN-Net]) was proposed to learn radiological features of gliomas and necrosis from MRI scans. Sensitivity, specificity, accuracy, and area under the curve (AUC) were used to evaluate the performance of the model in both image-wise and subject-wise classifications. Preoperative diagnostic performance of the model was also compared with that of state-of-the-art DNN models and five experienced neurosurgeons. RESULTS: DNN models based on multimodal MRI outperformed single-modal models. ERN-Net achieved the highest AUC in both image-wise (0.915) and subject-wise (0.958) classification tasks. The evaluated DNN models achieved an average sensitivity of 0.947 (SD 0.033), specificity of 0.817 (SD 0.075), and accuracy of 0.903 (SD 0.026), which were significantly better than the tested neurosurgeons (P=.02 in sensitivity and P<.001 in specificity and accuracy). CONCLUSIONS: Deep learning offers a useful computational tool for the differential diagnosis between recurrent gliomas and necrosis.
The proposed ERN-Net model, a simple and effective DNN model, achieved excellent performance on routine MRI scans and showed high clinical applicability.
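As a minimal sketch of how per-image outputs might be turned into a subject-wise call (the abstract does not state ERN-Net's aggregation rule; mean-probability voting and the 0.5 threshold below are assumptions for illustration only):

```python
from statistics import mean

def subject_prediction(slice_probs, threshold=0.5):
    """Aggregate per-slice recurrence probabilities into one subject-wise
    decision by averaging (assumed rule, not the authors' published method)."""
    return mean(slice_probs) >= threshold

# toy usage: three MRI slices of one suspected lesion
is_recurrence = subject_prediction([0.9, 0.8, 0.4])
```

Subject-wise aggregation of this kind is one plausible reason the abstract reports a higher subject-wise AUC (0.958) than image-wise AUC (0.915): averaging over slices suppresses single-slice errors.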
We propose and demonstrate a novel machine learning algorithm that assesses pulmonary edema severity from chest radiographs. While large publicly available datasets of chest radiographs and free-text radiology reports exist, only limited numerical edema severity labels can be extracted from radiology reports. This is a significant challenge in learning such models for image classification. To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time. Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment compared to a supervised model trained on images only. We also show the use of the text for explaining the image classification by the joint model. To the best of our knowledge, our approach is the first to leverage free-text radiology reports for improving the image model performance in this application. Our code is available at: https://github.com/RayRuizhiLiao/joint_chestxray.
Using medical images to evaluate disease severity and change over time is a routine and important task in clinical decision making. Grading systems are often used, but are unreliable as domain experts disagree on disease severity category thresholds. These discrete categories also do not reflect the underlying continuous spectrum of disease severity. To address these issues, we developed a convolutional Siamese neural network approach to evaluate disease severity at single time points and change between longitudinal patient visits on a continuous spectrum. We demonstrate this in two medical imaging domains: retinopathy of prematurity (ROP) in retinal photographs and osteoarthritis in knee radiographs. Our patient cohorts consist of 4861 images from 870 patients in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) cohort study and 10,012 images from 3021 patients in the Multicenter Osteoarthritis Study (MOST), both of which feature longitudinal imaging data. Multiple expert clinician raters ranked 100 retinal images and 100 knee radiographs from excluded test sets for severity of ROP and osteoarthritis, respectively. The Siamese neural network output for each image in comparison to a pool of normal reference images correlates with disease severity rank (ρ = 0.87 for ROP and ρ = 0.89 for osteoarthritis), both within and between the clinical grading categories. Thus, this output can represent the continuous spectrum of disease severity at any single time point. The difference in these outputs can be used to show change over time. Alternatively, paired images from the same patient at two time points can be directly compared using the Siamese neural network, resulting in an additional continuous measure of change between images. Importantly, our approach does not require manual localization of the pathology of interest and requires only a binary label for training (same versus different). 
The location of disease and site of change detected by the algorithm can be visualized using an occlusion sensitivity map-based approach. For a longitudinal binary change detection task, our Siamese neural networks achieve test set receiver operating characteristic areas under the curve (AUCs) of up to 0.90 in evaluating ROP or knee osteoarthritis change, depending on the change detection strategy. The overall performance on this binary task is similar to that of a conventional deep convolutional neural network trained for multi-class classification. Our results demonstrate that convolutional Siamese neural networks can be a powerful tool for evaluating the continuous spectrum of disease severity and change in medical imaging.
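The two scoring strategies described above can be sketched as follows. This is an illustrative sketch, not the authors' released code: a trained Siamese CNN branch would map each image to an embedding, and the median-distance rule over the normal reference pool is one plausible pooling choice.

```python
from statistics import median

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def severity_score(image_embedding, normal_pool):
    """Continuous severity at a single time point: median distance from the
    image's embedding to a pool of normal reference embeddings."""
    return median(euclidean(image_embedding, ref) for ref in normal_pool)

def change_score(embedding_t0, embedding_t1):
    """Continuous change: direct comparison of two longitudinal visits."""
    return euclidean(embedding_t0, embedding_t1)

# toy 2-D embeddings standing in for CNN outputs
normal_pool = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05]]
mild = severity_score([0.2, 0.2], normal_pool)
severe = severity_score([1.0, 1.0], normal_pool)
```

Note that only a binary same/different label is needed to train the embedding itself, which is the key practical advantage the abstract highlights.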
Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to clinical quality medical images. We develop a novel parameterization of conditional generative adversarial networks that achieves high image fidelity when trained to transform MRIs conditioned on a patient's age and disease severity. The spatial-intensity transform generative adversarial network (SIT-GAN) constrains the generator to a smooth spatial transform composed with sparse intensity changes. This technique improves image quality and robustness to artifacts, and generalizes to different scanners. We demonstrate SIT-GAN on a large clinical image dataset of stroke patients, where it captures associations between ventricle expansion and aging, as well as between white matter hyperintensities and stroke severity. Additionally, SIT-GAN provides a disentangled view of the variation in shape and appearance across subjects.
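A minimal 1-D sketch of the spatial-intensity decomposition follows. The actual SIT-GAN operates on 3-D MRI with a learned smooth warp field and a sparsity-constrained intensity map; here both fields are hand-set to show only the generator's constrained output structure.

```python
import numpy as np

def spatial_intensity_transform(image, displacement, intensity_map):
    """Resample `image` through a (smooth) displacement field, then add a
    sparse intensity map -- the two components the generator is limited to."""
    coords = np.arange(image.size) + displacement
    warped = np.interp(coords, np.arange(image.size), image)
    return warped + intensity_map

image = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
displacement = np.full(image.size, -1.0)   # shift the pattern one voxel right
intensity = np.zeros(image.size)
intensity[5] = 0.5                         # sparse, lesion-like appearance change
out = spatial_intensity_transform(image, displacement, intensity)
```

Separating the warp from the additive map is what yields the disentangled shape-versus-appearance view mentioned in the abstract: the displacement field captures shape change (e.g., ventricle expansion) while the sparse map captures intensity change (e.g., hyperintensities).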
Evolution provides an important window into how cortical organization shapes function and vice versa. The complex mosaic of changes in brain morphology and functional organization that have shaped the mammalian cortex during evolution complicates attempts to chart cortical differences across species. It limits our ability to fully appreciate how evolution has shaped our brain, especially in systems associated with unique human cognitive capabilities that lack anatomical homologues in other species. Here, we develop a function-based method for cross-species alignment that enables the quantification of homologous regions between humans and rhesus macaques, even when their location is decoupled from anatomical landmarks. Critically, we find that cross-species similarity in functional organization reflects a gradient of evolutionary change that decreases from unimodal systems and culminates in the most pronounced changes in posterior regions of the default mode network (angular gyrus, posterior cingulate, and middle temporal cortices). Our findings suggest that the establishment of the default mode network, as the apex of a cognitive hierarchy, has changed in a complex manner during human evolution - even within subnetworks.
Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.
Diffusion MRI (dMRI) tractography has been successfully used to study the trigeminal nerves (TGNs) in many clinical and research applications. Currently, identification of the TGN in tractography data requires expert nerve selection using manually drawn regions of interest (ROIs), which is prone to inter-observer variability, is time-consuming, and carries high clinical and labor costs. To overcome these issues, we propose to create a novel anatomically curated TGN tractography atlas that enables automated identification of the TGN from dMRI tractography. In this paper, we first illustrate the creation of a trigeminal tractography atlas. Leveraging a well-established computational pipeline and expert neuroanatomical knowledge, we generate a data-driven TGN fiber clustering atlas using tractography data from 50 subjects from the Human Connectome Project. Then, we demonstrate the application of the proposed atlas for automated TGN identification in new subjects, without relying on expert ROI placement. Quantitative and visual experiments are performed with comparison to expert TGN identification using dMRI data from two different acquisition sites. We show highly comparable results between the automatically and manually identified TGNs in terms of spatial overlap and visualization, while our proposed method has several advantages. First, our method performs automated TGN identification, and thus it provides an efficient tool to reduce expert labor costs and inter-operator bias relative to expert manual selection. Second, our method is robust to potential imaging artifacts and/or noise that can prevent successful manual ROI placement for TGN selection and hence yields a higher successful TGN identification rate.
Fiber tracking produces large tractography datasets, tens of gigabytes in size and consisting of millions of streamlines. Such vast amounts of data require formats that allow for efficient storage, transfer, and visualization. We present TRAKO, a new data format based on the Graphics Layer Transmission Format (glTF) that enables immediate graphical and hardware-accelerated processing. We integrate a state-of-the-art compression technique for vertices, streamlines, and attached scalar and property data. We then compare TRAKO to existing tractography storage methods and provide a detailed evaluation on eight datasets. TRAKO can achieve data reductions of over 28x without loss of statistical significance when used to replicate analysis from previously published studies.
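One ingredient of such lossy compression can be illustrated with fixed-point quantization of streamline vertex coordinates. This is a hypothetical sketch with assumed parameters; TRAKO itself combines a glTF container with a state-of-the-art compressor and achieves far greater reductions than this toy example.

```python
def quantize(values, bits=16):
    """Map float coordinates onto `bits`-bit integers spanning their range,
    halving storage relative to 32-bit floats at a bounded precision loss."""
    lo, hi = min(values), max(values)
    scale = (2 ** bits - 1) / (hi - lo)
    return [round((v - lo) * scale) for v in values], lo, scale

def dequantize(quantized, lo, scale):
    """Recover approximate float coordinates from quantized integers."""
    return [q / scale + lo for q in quantized]

verts = [0.0, 12.5, 99.9, 42.0]            # toy per-axis vertex coordinates (mm)
q, lo, scale = quantize(verts)
restored = dequantize(q, lo, scale)
max_err = max(abs(a - b) for a, b in zip(verts, restored))
```

The "without loss of statistical significance" claim in the abstract is exactly about this kind of trade-off: the per-vertex error is bounded well below the spatial scales that downstream tract statistics depend on.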
Glioblastoma may have widespread effects on neural organization and cognitive function, and even focal lesions may be associated with distributed functional alterations. However, functional changes do not necessarily follow obvious anatomical patterns, and the current understanding of this interrelation is limited. In this study, we used resting-state functional magnetic resonance imaging to evaluate changes in global functional connectivity patterns in 15 patients with glioblastoma. For six patients, we followed longitudinal trajectories of their functional connectome and structural tumour evolution using bi-monthly follow-up scans throughout treatment and disease progression. In all patients, unilateral tumour lesions were associated with inter-hemispherically symmetric network alterations, and functional proximity to the tumour location was more strongly linked to distributed network deterioration than anatomical distance. In the longitudinal subcohort of six patients, we observed patterns of network alteration with initial transient deterioration followed by recovery at the first follow-up, and found local network deterioration to precede structural tumour recurrence by two months. In summary, the impact of focal glioblastoma lesions on the functional connectome is global and linked to functional proximity rather than anatomical distance to tumour regions. Our findings further suggest that functional network trajectories may serve as a means to support early detection of tumour recurrence.
BACKGROUND: Post-traumatic stress disorder (PTSD) is a psychiatric disorder that afflicts many individuals, yet the neuropathological mechanisms that contribute to this disorder remain to be fully determined. Moreover, it is unclear how exposure to mild traumatic brain injury (mTBI), a condition that is often comorbid with PTSD, particularly among military personnel, affects the clinical and neurological presentation of PTSD. To address these issues, the present study explores relationships between PTSD symptom severity and the microstructure of limbic and paralimbic gray matter brain regions, as well as the impact of mTBI comorbidity on these relationships. METHODS: Structural and diffusion MRI data were acquired from 102 male veterans who were diagnosed with current PTSD. Diffusion data were analyzed with free-water imaging to quantify average CSF-corrected fractional anisotropy (FA) and mean diffusivity (MD) in 18 limbic and paralimbic gray matter regions. Associations between PTSD symptom severity and regional average dMRI measures were examined with repeated measures linear mixed models. Associations were studied separately in veterans with PTSD only, and in veterans with PTSD and a history of military mTBI. RESULTS: Analyses revealed that in the PTSD only cohort, more severe symptoms were associated with higher FA in the right amygdala-hippocampus complex, lower FA in the right cingulate cortex, and lower MD in the left medial orbitofrontal cortex. In the PTSD and mTBI cohort, more severe PTSD symptoms were associated with higher FA bilaterally in the amygdala-hippocampus complex, with higher FA bilaterally in the nucleus accumbens, with lower FA bilaterally in the cingulate cortex, and with higher MD in the right amygdala-hippocampus complex. CONCLUSIONS: These findings suggest that the microstructure of limbic and paralimbic brain regions may influence PTSD symptomatology. 
Further, given the additional associations observed between microstructure and symptom severity in veterans with head trauma, we speculate that mTBI may exacerbate the impact of brain microstructure on PTSD symptoms, especially within regions of the brain known to be vulnerable to chronic stress. A heightened sensitivity to the microstructural environment of the brain could partially explain why individuals with PTSD and mTBI comorbidity experience more severe symptoms and poorer illness prognoses than those without a history of brain injury. The relevance of these microstructural findings to the conceptualization of PTSD as being a disorder of stress-induced neuronal connectivity loss is discussed.
The brainstem, a structure of vital importance in mammals, is currently becoming a principal focus in cognitive, affective, and clinical neuroscience. Midbrain, pontine, and medullary structures serve as the conduit for signals between the forebrain and spinal cord, are the epicenter of cranial nerve circuits and systems, and subserve such integrative functions as consciousness, emotional processing, pain, and motivation. In this study, we parcellated the nuclear masses and the principal fiber pathways that were visible in a high-resolution T2-weighted MRI dataset of 50-micron isotropic voxels of a postmortem human brainstem. Based on this analysis, we generated a detailed map of the human brainstem. To assess the validity of our maps, we compared our observations with histological maps of traditional human brainstem atlases. Given the unique capability of MRI-based morphometric analysis in generating and preserving the morphology of 3D objects from individual 2D sections, we reconstructed the motor, sensory, and integrative neural systems of the brainstem and rendered them in 3D representations. We anticipate the utilization of these maps by the neuroimaging community for applications in basic neuroscience as well as in neurology, psychiatry, and neurosurgery, given their versatile computational nature in 2D and 3D representations and their public availability.
BACKGROUND: Intraoperative magnetic resonance imaging (IO-MRI) provides real-time assessment of the extent of brain tumor resection. Development of new enhancement during IO-MRI can confound interpretation of residual enhancing tumor, although the incidence of this finding is unknown. OBJECTIVE: To determine the frequency of new enhancement during brain tumor resection on intraoperative 3 Tesla (3T) MRI, and to optimize the postoperative imaging window after brain tumor resection using 1.5 and 3T MRI. METHODS: We retrospectively evaluated 64 IO-MRI performed for patients with enhancing brain lesions referred for biopsy or resection, as well as a subset with an early postoperative MRI (EP-MRI) within 72 h of surgery (N = 42), and a subset with a late postoperative MRI (LP-MRI) performed between 120 h and 8 wk postsurgery (N = 34). Three radiologists assessed for new enhancement on IO-MRI, and change in enhancement on available EP-MRI and LP-MRI. Consensus was determined by majority response. Inter-rater agreement was assessed using percentage agreement. RESULTS: A total of 10 out of 64 (16%) of the IO-MRI demonstrated new enhancement. Seven of 10 patients with available EP-MRI demonstrated decreased/resolved enhancement. One out of 42 (2%) of the EP-MRI demonstrated new enhancement, which decreased on LP-MRI. Agreement was 74% for the assessment of new enhancement on IO-MRI and 81% for the assessment of new enhancement on the EP-MRI. CONCLUSION: New enhancement occurs on intraoperative 3T MRI in 16% of patients after brain tumor resection, and decreases or resolves on subsequent MRI within 72 h of surgery. Our findings indicate the opportunity for further study to optimize the postoperative imaging window.
PURPOSE: Neurosurgeons can have a better understanding of surgical procedures by comparing ultrasound images obtained at different phases of the tumor resection. However, establishing a direct mapping between subsequent acquisitions is challenging due to the anatomical changes happening during surgery. We propose here a method to improve the registration of ultrasound volumes by excluding the resection cavity from the registration process. METHODS: The first step of our approach is the automatic segmentation of the resection cavities in ultrasound volumes acquired during and after resection. We used a convolutional neural network inspired by the 3D U-Net. Then, subsequent ultrasound volumes are registered by excluding the contribution of the resection cavity. RESULTS: Regarding the segmentation of the resection cavity, the proposed method achieved a mean Dice score of 0.84 on 27 volumes. Concerning the registration of the subsequent ultrasound acquisitions, we reduced the mTRE of the volumes acquired before and during resection from 3.49 to 1.22 mm. For the set of volumes acquired before and after removal, the mTRE improved from 3.55 to 1.21 mm. CONCLUSIONS: We proposed an innovative registration algorithm to compensate for the brain shift affecting ultrasound volumes obtained at subsequent phases of neurosurgical procedures. To the best of our knowledge, our method is the first to exclude automatically segmented resection cavities in the registration of ultrasound volumes in neurosurgery.
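The cavity-exclusion idea can be sketched with a masked similarity metric. This is an assumed formulation for illustration; the authors' registration pipeline and choice of metric may differ.

```python
def masked_ssd(fixed, moving, cavity_mask):
    """Mean sum-of-squared-differences computed only outside the segmented
    resection cavity, so missing tissue cannot drive the alignment."""
    vals = [(f - m) ** 2
            for f, m, in_cavity in zip(fixed, moving, cavity_mask)
            if not in_cavity]
    return sum(vals) / len(vals)

# toy 1-D "volumes": the voxel at index 2 lies inside the segmented cavity,
# where the post-resection image no longer matches the pre-resection one
fixed = [1.0, 2.0, 3.0, 4.0]
moving = [1.0, 2.0, 9.0, 4.0]
cavity = [False, False, True, False]
```

Without the mask, the large mismatch inside the cavity would dominate the metric and pull the optimizer toward a spurious alignment; with the mask, the two volumes are judged identical where tissue actually corresponds.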
PURPOSE: The dataset contains annotations for lung nodules collected by the Lung Imaging Data Consortium and Image Database Resource Initiative (LIDC) stored as standard DICOM objects. The annotations accompany a collection of computed tomography (CT) scans for over 1000 subjects annotated by multiple expert readers, and correspond to "nodules ≥ 3 mm", defined as any lesion considered to be a nodule with greatest in-plane dimension in the range 3-30 mm regardless of presumed histology. The present dataset aims to simplify reuse of the data with readily available tools, and is targeted towards researchers interested in the analysis of lung CT images. ACQUISITION AND VALIDATION METHODS: Open source tools were utilized to parse the project-specific XML representation of LIDC-IDRI annotations and save the result as standard DICOM objects. Validation procedures focused on establishing compliance of the resulting objects with the standard, consistency of the data between the DICOM and project-specific representations, and evaluating interoperability with existing tools. DATA FORMAT AND USAGE NOTES: The dataset utilizes DICOM Segmentation objects for storing annotations of the lung nodules, and DICOM Structured Reporting objects for communicating qualitative evaluations (nine attributes) and quantitative measurements (three attributes) associated with the nodules. In total, 875 subjects have 6859 nodule annotations. Clustering of the neighboring annotations resulted in 2651 distinct nodules. The data are available in TCIA at https://doi.org/10.7937/TCIA.2018.h7umfurq. POTENTIAL APPLICATIONS: The standardized dataset maintains the content of the original contribution of the LIDC-IDRI consortium, and should be helpful in developing automated tools for characterization of lung lesions and image phenotyping.
In addition to those properties, the representation of the present dataset makes it more FAIR (Findable, Accessible, Interoperable, Reusable) for the research community, and enables its integration with other standardized data collections.
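The step of clustering neighboring annotations into distinct nodules can be sketched greedily. The distance criterion and the 10 mm threshold below are assumptions for illustration only; the dataset's actual clustering procedure is defined in its accompanying documentation.

```python
def cluster_annotations(centroids, max_dist=10.0):
    """Greedily group annotation centroids: join the first cluster that has a
    member within `max_dist` (mm); otherwise start a new cluster. Annotations
    of the same physical nodule by different readers end up together."""
    clusters = []
    for c in centroids:
        for cluster in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(c, m)) ** 0.5 <= max_dist
                   for m in cluster):
                cluster.append(c)
                break
        else:
            clusters.append([c])
    return clusters

# toy centroids (mm): two readers marked the same nodule, one marked another
points = [(0.0, 0.0, 0.0), (2.0, 1.0, 0.0), (50.0, 50.0, 50.0)]
groups = cluster_annotations(points)
```

A rule of this kind explains how 6859 per-reader annotations collapse to 2651 distinct nodules: each distinct nodule typically carries multiple overlapping reader annotations.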
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing a programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community.
Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.
White matter tract segmentation, i.e. identifying tractography fibers (streamline trajectories) belonging to anatomically meaningful fiber tracts, is an essential step to enable tract quantification and visualization. In this study, we present a deep learning tractography segmentation method (DeepWMA) that allows fast and consistent identification of 54 major deep white matter fiber tracts from the whole brain. We create a large-scale training tractography dataset of 1 million labeled fiber samples, and we propose a novel 2D multi-channel feature descriptor (FiberMap) that encodes spatial coordinates of points along each fiber. We learn a convolutional neural network (CNN) fiber classification model based on FiberMap and obtain a high fiber classification accuracy of 90.99% on the training tractography data with ground truth fiber labels. Then, the method is evaluated on a test dataset of 597 diffusion MRI scans from six independently acquired populations across genders, the lifespan (1 day - 82 years), and different health conditions (healthy control, neuropsychiatric disorders, and brain tumor patients). We perform comparisons with two state-of-the-art tract segmentation methods. Experimental results show that our method obtains a highly consistent tract segmentation result, where, on average, over 99% of the fiber tracts are successfully identified across all subjects under study, including, most importantly, neonates and patients with space-occupying brain tumors. We also demonstrate good generalization of the method to tractography data from multiple different fiber tracking methods. The proposed method leverages deep learning techniques and provides a fast and efficient tool for brain white matter segmentation in large diffusion MRI tractography datasets.
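The core encoding idea behind such a descriptor can be sketched as follows. This is a hedged illustration only: FiberMap's actual tiling/repetition layout differs, and the resampling length here is an assumption.

```python
import numpy as np

def fiber_descriptor(points, n_points=15):
    """Resample a streamline to `n_points` and arrange its x/y/z coordinates
    as the channels of a 2-D map suitable for CNN input. The reversed copy is
    included because streamlines carry no intrinsic start/end direction."""
    points = np.asarray(points, dtype=float)            # (N, 3) streamline
    t = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, n_points)
    resampled = np.stack(
        [np.interp(t_new, t, points[:, c]) for c in range(3)], axis=-1)
    return np.stack([resampled, resampled[::-1]])       # (2, n_points, 3)

# toy straight streamline along the x axis
fiber = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
desc = fiber_descriptor(fiber)
```

Mapping variable-length streamlines to a fixed-size multi-channel array is what lets a standard image-style CNN classify millions of fibers efficiently.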
The etiology of bipolar disorder (BD) is unknown and the neurobiological underpinnings are not fully understood. Both genetic and environmental factors contribute to the risk of BD, which may be linked through epigenetic mechanisms, including those regulated by histone deacetylase (HDAC) enzymes. This study measures in vivo HDAC expression in individuals with BD for the first time using the HDAC-specific radiotracer [11C]Martinostat. Eleven participants with BD and 11 age- and sex-matched control participants (CON) completed a simultaneous magnetic resonance - positron emission tomography (MR-PET) scan with [11C]Martinostat. Lower [11C]Martinostat uptake was found in the right amygdala of BD compared to CON. We assessed uptake in the dorsolateral prefrontal cortex (DLPFC) to compare previous findings of lower uptake in the DLPFC in schizophrenia and found no group differences in BD. Exploratory whole-brain voxelwise analysis showed lower [11C]Martinostat uptake in the bilateral thalamus, orbitofrontal cortex, right hippocampus, and right amygdala in BD compared to CON. Furthermore, regional [11C]Martinostat uptake was associated with emotion regulation in BD in fronto-limbic areas, which aligns with findings from previous structural, functional, and molecular neuroimaging studies in BD. Regional [11C]Martinostat uptake was associated with attention in BD in fronto-parietal and temporal regions. These findings indicate a potential role of HDACs in BD pathophysiology. In particular, HDAC expression levels may modulate attention and emotion regulation, which represent two core clinical features of BD.
OBJECTIVE: Delineation of malformations of cortical development (MCD) is central in presurgical evaluation of drug-resistant epilepsy. Delineation using magnetic resonance imaging (MRI) can be ambiguous, however, because the conventional T1- and T2-weighted contrasts depend strongly on myelin for differentiation of cortical tissue and white matter. Variations in myelin content within both cortex and white matter may cause MCD findings on MRI to change size, become undetectable, or disagree with histopathology. The novel tensor-valued diffusion MRI (dMRI) technique maps microscopic diffusion anisotropy, which is sensitive to axons rather than myelin. This work investigated whether tensor-valued dMRI may improve differentiation of cortex and white matter in the delineation of MCD. METHODS: Tensor-valued dMRI was performed on a 7 T MRI scanner in 13 MCD patients (age = 32 ± 13 years) featuring periventricular heterotopia, subcortical heterotopia, focal cortical dysplasia, and polymicrogyria. Data analysis yielded maps of microscopic anisotropy that were compared with T1-weighted and T2-fluid-attenuated inversion recovery images and with the fractional anisotropy from diffusion tensor imaging. RESULTS: Maps of microscopic anisotropy revealed large white matter-like regions within MCD that were uniformly cortex-like in the conventional MRI contrasts. These regions were seen particularly in the deep white matter parts of subcortical heterotopias and near the gray-white boundaries of focal cortical dysplasias and polymicrogyrias. SIGNIFICANCE: By being sensitive to axons rather than myelin, mapping of microscopic anisotropy may yield a more robust differentiation of cortex and white matter and improve MCD delineation in presurgical evaluation of epilepsy.