We propose and demonstrate a novel machine learning algorithm that assesses pulmonary edema severity from chest radiographs. Although large publicly available datasets of chest radiographs and free-text radiology reports exist, only limited numerical edema severity labels can be extracted from the reports, which poses a significant challenge for learning image classification models. To take advantage of the rich information in the radiology reports, we develop a neural network model that is trained jointly on images and free-text, but that assesses pulmonary edema severity from chest radiographs alone at inference time. Our experimental results suggest that joint image-text representation learning improves pulmonary edema assessment compared with a supervised model trained on images only. We also show how the text can be used to explain the image classifications made by the joint model. To the best of our knowledge, ours is the first approach to leverage free-text radiology reports to improve image model performance in this application. Our code is available at: https://github.com/RayRuizhiLiao/joint_chestxray.
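The joint image-text idea above can be illustrated with a minimal sketch (hypothetical, not the authors' implementation): image and report feature vectors are projected into a shared embedding space, and an InfoNCE-style contrastive loss encourages each image to score higher with its own report than with mismatched reports. All names, dimensions, and the specific loss are illustrative assumptions.

```python
import numpy as np

def project(features, weights):
    # Linearly project features into a shared embedding space and L2-normalize.
    z = features @ weights
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.1):
    # InfoNCE-style loss: matched image-text pairs (same row index) should
    # score higher than all mismatched pairs in the batch.
    logits = img_emb @ txt_emb.T / temperature           # batch similarity matrix
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(probs)).mean()

# Toy batch: 8 image feature vectors (dim 64) and 8 report vectors (dim 32),
# projected into a shared 16-dimensional space by random (untrained) weights.
rng = np.random.default_rng(0)
W_img, W_txt = rng.normal(size=(64, 16)), rng.normal(size=(32, 16))
img_feat = rng.normal(size=(8, 64))
txt_feat = rng.normal(size=(8, 32))
loss = contrastive_loss(project(img_feat, W_img), project(txt_feat, W_txt))
```

Training would update `W_img` and `W_txt` to lower this loss; a classification head on the shared embedding then needs only the image branch at inference time, matching the abstract's setting.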
Image Features for Brain Phenotypes Core Publications
The corticospinal tract (CST) is one of the most well-studied tracts in human neuroanatomy. Its clinical significance is evident in many notable traumatic conditions and diseases, such as stroke, spinal cord injury (SCI), and amyotrophic lateral sclerosis (ALS). With the advent of diffusion MRI and tractography, a computational 3D representation of the human CST became available. However, representing the entire CST, and specifically its hand motor area, has remained elusive. In this paper we propose a novel method that uses manually drawn ROIs based on robustly identifiable neuroanatomic structures to delineate the entire CST, isolate its hand motor representation, estimate their variability, and generate a database of their volume, length, and biophysical parameters. Using 37 healthy human subjects, we performed a qualitative and quantitative analysis of the CST and the hand-related motor fiber tracts (HMFTs). Finally, we created variability heat maps of both tracts from the 37 subjects, which could serve as a reference for future clinically focused studies exploring neuropathology in both trauma and disease.
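The core selection step in multi-ROI tract delineation can be sketched as follows (a simplified illustration, not the authors' pipeline): streamlines from whole-brain tractography are kept only if they pass through every ROI, here idealized as axis-aligned boxes. ROI shapes, names, and the AND-combination rule are assumptions for illustration.

```python
import numpy as np

def passes_through(streamline, roi_min, roi_max):
    # True if any point of the streamline (an N x 3 array of coordinates)
    # lies inside the axis-aligned ROI box [roi_min, roi_max].
    inside = np.all((streamline >= roi_min) & (streamline <= roi_max), axis=1)
    return bool(inside.any())

def select_tract(streamlines, rois):
    # Keep only streamlines that pass through every ROI (logical AND),
    # mimicking multi-ROI tract delineation.
    return [s for s in streamlines
            if all(passes_through(s, lo, hi) for lo, hi in rois)]

# Toy example: one streamline running along the z-axis, one offset in x.
t = np.linspace(0.0, 10.0, 50)
s1 = np.stack([np.zeros_like(t), np.zeros_like(t), t], axis=1)
s2 = s1 + np.array([5.0, 0.0, 0.0])
rois = [(np.array([-1.0, -1.0, 0.5]), np.array([1.0, 1.0, 1.5])),
        (np.array([-1.0, -1.0, 8.5]), np.array([1.0, 1.0, 9.5]))]
selected = select_tract([s1, s2], rois)
```

In practice the ROIs would be hand-drawn on anatomical landmarks (e.g., the posterior limb of the internal capsule and the precentral gyrus for the CST), but the filtering logic is the same.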
Segmentation of brain tissue types from diffusion MRI (dMRI) is an important task, required for quantification of brain microstructure and for improving tractography. Current dMRI segmentation is mostly based on anatomical MRI (e.g., T1- and T2-weighted) segmentation that is registered to the dMRI space. However, such inter-modality registration is challenging due to the greater image distortion and lower image resolution of dMRI compared with anatomical MRI. In this study, we present a deep learning method for diffusion MRI segmentation, which we refer to as DDSeg. Our proposed method learns tissue segmentation from high-quality imaging data from the Human Connectome Project (HCP), where registration of anatomical MRI to dMRI is more precise. The method is then able to predict a tissue segmentation directly from new dMRI data, including data collected with different acquisition protocols, without requiring anatomical data and inter-modality registration. We train a convolutional neural network (CNN) to learn a tissue segmentation model using a novel augmented target loss function designed to improve accuracy in regions of tissue boundary. To further improve accuracy, our method adds diffusion kurtosis imaging (DKI) parameters that characterize non-Gaussian water molecule diffusion to the conventional diffusion tensor imaging parameters. The DKI parameters are calculated from the recently proposed mean-kurtosis-curve method that corrects implausible DKI parameter values and provides additional features that discriminate between tissue types. We demonstrate high tissue segmentation accuracy on HCP data, and also when applying the HCP-trained model on dMRI data from other acquisitions with lower resolution and fewer gradient directions.
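The idea of emphasizing tissue-boundary regions in the loss can be sketched as a boundary-weighted cross-entropy (a minimal 2D illustration under assumed design choices; the paper's actual augmented target loss differs in its details): voxels whose neighborhood contains a different label receive a higher weight.

```python
import numpy as np

def boundary_weights(labels, w_boundary=5.0):
    # Weight map for a 2D label image: pixels whose 4-neighborhood contains a
    # different label get weight w_boundary, all others get weight 1.
    diff = np.zeros_like(labels, dtype=bool)
    diff[:-1, :] |= labels[:-1, :] != labels[1:, :]
    diff[1:, :]  |= labels[1:, :]  != labels[:-1, :]
    diff[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    diff[:, 1:]  |= labels[:, 1:]  != labels[:, :-1]
    w = np.ones_like(labels, dtype=float)
    w[diff] = w_boundary
    return w

def weighted_cross_entropy(probs, labels, weights):
    # Mean cross-entropy of predicted class probabilities (H x W x C),
    # with a per-pixel weight map emphasizing boundary regions.
    i, j = np.indices(labels.shape)
    nll = -np.log(probs[i, j, labels] + 1e-12)
    return float((weights * nll).sum() / weights.sum())
```

During CNN training, a loss like this pushes gradient effort toward tissue interfaces, where plain voxel-wise cross-entropy tends to under-penalize errors.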
Introduction: Neuronavigation greatly improves the surgeon's ability to approach, assess, and operate on brain tumors, but it tends to lose accuracy as the surgery progresses and substantial brain shift and deformation occur. Intraoperative MRI (iMRI) can partially address this problem but is resource-intensive and disruptive to workflow. Intraoperative ultrasound (iUS) provides real-time information that can be used to update neuronavigation and to monitor resection progress. We describe the intraoperative use of 3D iUS in relation to iMRI, and discuss the challenges and opportunities of its use in neurosurgical practice. Methods: We performed a retrospective evaluation of patients who underwent image-guided brain tumor resection in which both 3D iUS and iMRI were used. The study was conducted between June 2020 and December 2020, when an extension of a commercially available navigation software was introduced in our practice, enabling 3D iUS volumes to be reconstructed from tracked 2D iUS images. For each patient, three or more 3D iUS images were acquired during the procedure, and one iMRI was acquired towards the end. The iUS images included an extradural ultrasound sweep acquired before dural incision (iUS-1), a post-dural-opening iUS (iUS-2), and a third iUS acquired immediately before the iMRI acquisition (iUS-3). iUS-1 and preoperative MRI were compared to evaluate the ability of iUS to visualize tumor boundaries and critical anatomic landmarks; iUS-3 and iMRI were compared to evaluate the ability of iUS to predict residual tumor. Results: Twenty-three patients were included in this study. Fifteen had tumors located in eloquent or near-eloquent brain regions; the majority (11) had low-grade gliomas; gross total resection was achieved in 12 patients; and postoperative temporary deficits were observed in five.
In twenty-two patients, iUS was able to define tumor location and margins, and to indicate relevant landmarks for orientation and guidance. In sixteen cases, white matter fiber tracts computed from preoperative dMRI were overlaid on the iUS images. In nineteen patients, the extent of resection (gross total or subtotal) was predicted by iUS and confirmed by iMRI. The remaining four patients, in whom iUS was unable to evaluate the presence or absence of residual tumor, were recurrent cases with a previous surgical cavity that hindered good contact between the US probe and the brain surface. Conclusion: This recent experience at our institution illustrates the practical benefits, challenges, and opportunities of 3D iUS in relation to iMRI.
In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. Through an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form and by coordinate ascent, i.e., iterative model refinement. We also describe a feature-based registration method in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We show further that this approach can perform maximum profile likelihood registration via iterative model refinement, eliminating the need for well-registered training data. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach performs poorly.
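The connection between registration and joint entropy can be made concrete with a toy sketch (illustrative only, not the paper's method): two 1D "images" related by an unknown integer shift and an intensity remapping are aligned by exhaustively searching for the shift that minimizes the entropy of their joint intensity histogram.

```python
import numpy as np

def joint_entropy(a, b, bins=8):
    # Entropy of the joint histogram of two equally sized intensity arrays.
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def register_shift(fixed, moving, max_shift=5):
    # Exhaustively search integer shifts, choosing the one that minimizes the
    # joint entropy of the overlapping samples (entropy-based registration).
    crop = slice(max_shift, -max_shift)  # ignore wrap-around borders
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: joint_entropy(fixed[crop], np.roll(moving, s)[crop]))

# The moving "image" is a shifted, affinely remapped copy of the fixed one,
# emulating a multi-modal pair with no direct intensity correspondence.
rng = np.random.default_rng(0)
fixed = rng.normal(size=200)
moving = 2.0 * np.roll(fixed, -3) + 1.0
shift = register_shift(fixed, moving)
```

At the correct shift the joint histogram collapses onto a thin curve (low entropy); at wrong shifts the intensities decorrelate and the histogram spreads out, which is exactly the behavior the entropy bound in the framework above formalizes.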
Little is known about how mild traumatic brain injury affects white matter as a function of age at injury, sex, cerebral microbleeds, and time since injury. Here, we use the fractional anisotropy of white matter to study these effects in 109 participants aged 18-77 (46 females, age μ ± σ = 40 ± 17 years) imaged within ∼1 week and ∼6 months post-injury. Age is found to be linearly associated with white matter degradation, likely due not only to injury but also to the cumulative effects of other pathologies and their interactions with injury. Age is associated with mean anisotropy decreases in the corpus callosum, middle longitudinal fasciculi, inferior longitudinal and occipitofrontal fasciculi, and superficial frontal and temporal fasciculi. Over ∼6 months, the mean anisotropies of the corpus callosum, left superficial frontal fasciculi, and left corticospinal tract decrease significantly. Independently of other predictors, age and cerebral microbleeds contribute to anisotropy decrease in the callosal genu. Chronically, the white matter of commissural tracts, the left superficial frontal fasciculi, and the left corticospinal tract degrades appreciably, independently of other predictors. Our findings suggest that large commissural and intra-hemispheric structures are at high risk of post-traumatic degradation. This study identifies detailed neuroanatomic substrates consistent with brain injury patients' age-dependent deficits in information processing speed, interhemispheric communication, motor coordination, visual acuity, sensory integration, reading speed/comprehension, executive function, personality, and memory. We also identify neuroanatomic features of white matter degradation whose severity is associated with male sex. Future studies should compare our findings to functional measures and other neurodegenerative processes.
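The fractional anisotropy (FA) measure used throughout this study is a standard scalar derived from the three eigenvalues of the diffusion tensor; a minimal sketch of its computation:

```python
import numpy as np

def fractional_anisotropy(evals):
    # FA from the three diffusion tensor eigenvalues (l1, l2, l3):
    # FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
    #               / sqrt(l1^2 + l2^2 + l3^2)
    l1, l2, l3 = evals
    num = np.sqrt((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2)
    den = np.sqrt(l1**2 + l2**2 + l3**2)
    return float(np.sqrt(0.5) * num / den)
```

FA ranges from 0 (isotropic diffusion, e.g. CSF) to 1 (diffusion along a single axis, as in tightly packed fiber bundles), which is why its decrease is read as white matter degradation.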
We propose a novel pairwise distance measure between image keypoint sets for the purpose of large-scale medical image indexing. Our measure generalizes the Jaccard index to account for soft set equivalence (SSE) between keypoint elements, via an adaptive kernel framework modeling uncertainty in keypoint appearance and geometry. A new kernel is proposed to quantify the variability of keypoint geometry in location and scale. Our distance measure may be estimated between O(N²) image pairs in O(N log N) operations via keypoint indexing. Experiments report the first results for the task of predicting family relationships from medical images, using 1010 T1-weighted MRI brain volumes of 434 families, including monozygotic and dizygotic twins, siblings, and half-siblings sharing 100%-25% of their polymorphic genes. Soft set equivalence and the keypoint geometry kernel improve upon standard hard set equivalence (HSE) and appearance kernels alone in predicting family relationships. Monozygotic twin identification is near 100%, and three subjects with uncertain genotyping are automatically paired with their self-reported families, the first reported practical application of image-based family identification. Our distance measure can also be used to predict group categories; for example, sex is predicted with an AUC of 0.97. Software is provided for efficient fine-grained curation of large, generic image datasets.
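The soft-set generalization of the Jaccard index can be sketched as follows (a simplified illustration under assumed choices, not the published measure: a single Gaussian kernel on descriptor distance stands in for the adaptive appearance and geometry kernels): hard membership in the intersection is relaxed to each keypoint's best kernel match in the other set.

```python
import numpy as np

def soft_jaccard(A, B, sigma=1.0):
    # Soft Jaccard between two keypoint sets (rows = descriptor vectors).
    # Hard |A intersect B| is relaxed to the sum over A of each element's best
    # Gaussian-kernel match in B (soft set equivalence).
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(axis=2)  # pairwise sq. distances
    K = np.exp(-d2 / (2.0 * sigma**2))                     # similarities in [0, 1]
    soft_inter = K.max(axis=1).sum()                       # soft |A intersect B|
    # Inclusion-exclusion gives the soft union in the denominator.
    return soft_inter / (len(A) + len(B) - soft_inter)
```

With exact duplicates the kernel matches are all 1 and the measure reduces to the ordinary Jaccard index of identical sets (1.0); as keypoints drift apart in appearance or geometry, matches decay smoothly instead of dropping out, which is what makes the measure robust for indexing.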