Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (iUS) images has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms on multi-site clinical data. To improve registration accuracy, we use high-dimensional texture attributes instead of image intensities and propose replacing the standard difference-based attribute matching with correlation-based attribute matching. We also present a strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images. We optimize key parameters across independent MR-iUS brain tumor datasets acquired at three institutions, with a total of 43 tumor patients and 758 corresponding landmarks, to validate the registration algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, our algorithm reduced the pre-registration landmark errors in the three datasets (5.37 ± 4.27, 4.18 ± 1.97 and 6.18 ± 3.38 mm, respectively) to consistently low levels (2.28 ± 0.71, 2.08 ± 0.37 and 2.24 ± 0.78 mm, respectively). We compare our algorithm against 15 other algorithms previously tested on MR-iUS registration and find it competitive with the state of the art on multiple datasets. Our algorithm achieves among the lowest errors in all datasets (accuracy) while using a single fixed set of parameters across multi-site data (generality).
In contrast, other algorithms and tools of similar performance require per-dataset parameter tuning (high accuracy but lower generality), while those that retain fixed parameters show larger errors or inconsistent performance (generality but not top accuracy). We further characterize landmark errors by brain region and tumor type, a topic so far missing from the literature. We found that landmark errors were higher in high-grade than in low-grade glioma patients, and higher in tumor regions than in other brain regions.
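The shift from difference-based to correlation-based attribute matching can be illustrated with a minimal sketch. The function names below are hypothetical; the actual algorithm matches high-dimensional texture attribute vectors at each voxel.

```python
import numpy as np

def ssd_match(a, b):
    # Standard difference-based matching: sum of squared differences
    # between two attribute vectors (lower = more similar).
    return float(np.sum((a - b) ** 2))

def correlation_match(a, b):
    # Correlation-based matching: Pearson correlation between the two
    # attribute vectors (higher = more similar). Unlike SSD, this is
    # invariant to a linear rescaling (gain/offset) of the attributes.
    ac, bc = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(ac) * np.linalg.norm(bc)
    return float(np.dot(ac, bc) / denom) if denom > 0 else 0.0
```

The invariance to gain and offset is one plausible motivation for correlation-based matching between modalities as different as MR and iUS, where corresponding structures rarely share the same attribute scale.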
Estimating the uncertainty in (probabilistic) image registration enables, for example, surgeons to assess the operative risk based on the trustworthiness of the registered image data. If surgeons receive inaccurately calculated registration uncertainty and place unwarranted confidence in the alignment solutions, severe consequences may result. For probabilistic image registration (PIR), the predominant way to quantify registration uncertainty is through summary statistics of the distribution of transformation parameters. The majority of existing research focuses on trying out different summary statistics and on means of exploiting them. Distinctively, in this paper we study two rarely examined questions: (1) whether those summary statistics of the transformation distribution represent the registration uncertainty most informatively; (2) whether utilizing the registration uncertainty is always beneficial. We show that there are two types of uncertainty: the transformation uncertainty, Ut, and the label uncertainty, Ul. The conventional practice of using Ut to quantify Ul is inappropriate and can be misleading. Through a real-data experiment, we also share a potentially critical finding that making use of the registration uncertainty may not always be an improvement.
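The distinction between Ut and Ul can be seen in a toy 1D example (hypothetical helper names, integer-shift transformations only): the spread of sampled transformations says little about whether the label at a point actually changes under those transformations.

```python
import numpy as np

def transformation_uncertainty(shift_samples):
    # Ut: a summary statistic (here, std) of the sampled transformation
    # parameters, as conventionally reported in PIR.
    return float(np.std(shift_samples))

def label_uncertainty(shift_samples, label_map, x):
    # Ul: Shannon entropy (bits) of the label assigned to location x
    # under each sampled integer shift of a 1D label map.
    n = len(label_map)
    labels = [label_map[int(np.clip(x + s, 0, n - 1))] for s in shift_samples]
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

Deep inside a homogeneous region, Ul is zero no matter how large Ut is; near a label boundary, the same transformation samples yield high Ul. This is the sense in which Ut can misrepresent Ul.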
This paper presents an efficient approach to quantifying image registration uncertainty based on a low-dimensional representation of geometric deformations. In contrast to previous methods, we develop a Bayesian diffeomorphic registration framework in a bandlimited space, rather than in a high-dimensional image space. We show that a dense posterior distribution on deformation fields can be fully characterized by far fewer parameters, which dramatically reduces the computational complexity of model inference. To further avoid the heavy computational load introduced by random sampling algorithms, we approximate the marginal posterior using Laplace's method at the optimum of the log-posterior distribution. Experimental results on both 2D synthetic data and real 3D brain magnetic resonance imaging (MRI) scans demonstrate that our method is significantly faster than state-of-the-art diffeomorphic registration uncertainty quantification algorithms, while producing comparable results.
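Laplace's method itself is generic: near the MAP estimate, the posterior is approximated by a Gaussian whose covariance is the inverse Hessian of the negative log-posterior. A minimal numeric sketch (not the paper's bandlimited parameterization):

```python
import numpy as np

def laplace_covariance(neg_log_post, theta_map, eps=1e-4):
    # Laplace's method: near the MAP estimate theta_map, approximate the
    # posterior by N(theta_map, H^{-1}), where H is the Hessian of the
    # negative log-posterior, estimated here by central differences.
    theta_map = np.asarray(theta_map, float)
    d = theta_map.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            def f(di, dj):
                t = theta_map.copy()
                t[i] += di
                t[j] += dj
                return neg_log_post(t)
            H[i, j] = (f(eps, eps) - f(eps, -eps)
                       - f(-eps, eps) + f(-eps, -eps)) / (4 * eps ** 2)
    return np.linalg.inv(H)
```

The appeal over sampling is that only one optimization plus one Hessian evaluation is needed; in a bandlimited space, d is small enough for the d×d Hessian to be tractable.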
The Human Placenta Project has focused attention on the need for noninvasive magnetic resonance imaging (MRI)-based techniques to diagnose and monitor placental function throughout pregnancy. The hope is that the management of placenta-related pathologies would be improved if physicians had more direct, real-time measures of placental health to guide clinical decision making. As oxygen alters signal intensity on MRI and oxygen transport is a key function of the placenta, many of the MRI methods under development are focused on quantifying oxygen transport or oxygen content of the placenta. For example, measurements from blood oxygen level-dependent imaging of the placenta during maternal hyperoxia correspond to outcomes in twin pregnancies, suggesting that some aspects of placental oxygen transport can be monitored by MRI. Additional methods are being developed to accurately quantify baseline placental oxygenation by MRI relaxometry. However, direct validation of placental MRI methods is challenging, and therefore animal studies and ex vivo studies of human placentas are needed. Here we provide an overview of the current state of the art in quantifying placental oxygen transport and oxygen content with MRI. We suggest that as these techniques are being developed, increased focus be placed on ensuring they are robust and reliable across individuals and standardized to enable predictive diagnostic models to be generated from the data. The field is still several years away from establishing the clinical benefit of monitoring placental function in real time with MRI, but the promise of individual personalized diagnosis and monitoring of placental disease in real time continues to motivate this effort.
Schizophrenia has been characterized as a neurodevelopmental disorder, with structural brain abnormalities reported at all stages. However, at present, it remains unclear whether gray and white matter abnormalities represent related or independent pathologies in schizophrenia. In this study, we present findings from an integrative analysis exploring the morphological relationship between gray and white matter in 45 schizophrenia participants and 49 healthy controls. We utilized mutual information (MI), a measure of how much information two variables share, to assess the morphological dependence between gray and white matter in three segments of the corpus callosum, and the gray matter regions these segments connect: (1) the genu and the left and right rostral middle frontal gyrus (rMFG), (2) the isthmus and the left and right superior temporal gyrus (STG), (3) the splenium and the left and right lateral occipital gyrus (LOG). We report significantly reduced MI between white matter tract dispersion of the right hemispheric callosal connections to the STG and both cortical thickness and area in the right STG in schizophrenia patients, despite a lack of group differences in cortical thickness, surface area, or dispersion. We believe that this reduction in morphological dependence between gray and white matter may reflect a possible decoupling of the developmental processes that shape morphological features of white and gray matter early in life. The present study also demonstrates the importance of studying the relationship between gray and white matter measures, as opposed to restricting analyses to gray and white matter measures independently.
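The core quantity here is MI between two scalar morphological measures sampled across subjects. A minimal histogram-based estimator (the study's actual estimator and bin choices may differ):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Histogram-based estimate of MI (in bits) between two scalar
    # measures, e.g., tract dispersion vs. cortical thickness sampled
    # across subjects. MI >= 0, and MI == 0 for independent variables.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Because MI captures any form of statistical dependence, not just linear correlation, it is suited to asking whether gray and white matter measures carry shared developmental information.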
Cerebral microbleeds (CMBs), a common manifestation of mild traumatic brain injury (mTBI), have been sporadically implicated in the neurocognitive deficits of mTBI victims, but their clinical significance has not been established adequately. Here we investigate the longitudinal effects of post-mTBI CMBs upon the fractional anisotropy (FA) of white matter (WM) in 21 older mTBI patients across the first 6 months post-injury. CMBs were segmented automatically from susceptibility-weighted imaging (SWI) by leveraging the intensity gradient properties of SWI to identify CMB-related hypointensities using gradient-based edge detection. A detailed diffusion magnetic resonance imaging (dMRI) atlas of WM was used to segment and cluster tractography streamlines, whose prototypes were then identified. The correlation coefficient was calculated between (A) FA values at vertices along streamline prototypes and (B) topological (along-streamline) distances between these vertices and the nearest CMB. Across subjects, the CMB identification approach achieved a sensitivity of 97.1% ± 4.7% and a precision of 72.4% ± 11.0%. The correlation coefficient was found to be negative and statistically significant for 12.3% ± 3.5% of WM clusters (p < 0.05, corrected), whose FA was found to decrease, on average, by 11.8% ± 5.3% across the first 6 months post-injury. These results suggest that CMBs can be associated with deleterious effects upon peri-lesional WM and highlight the vulnerability of older mTBI patients to neurovascular injury.
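The (A)-versus-(B) correlation analysis reduces to a Pearson coefficient per WM cluster plus a significance test. A sketch, with a simple permutation test standing in for the paper's corrected significance procedure:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation between FA at prototype vertices (x) and the
    # along-streamline distance from each vertex to the nearest CMB (y).
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    # Two-sided permutation p-value for the observed correlation. The
    # paper additionally corrects for multiple comparisons across
    # clusters, which is omitted here.
    rng = np.random.default_rng(seed)
    r0 = abs(pearson_r(x, y))
    hits = sum(abs(pearson_r(x, rng.permutation(y))) >= r0
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```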
Probabilistic atlas priors have been commonly used to derive adaptive and robust brain MRI segmentation algorithms. Widely used neuroimage analysis pipelines rely heavily on these techniques, which are often computationally expensive. In contrast, there has been a recent surge of approaches that leverage deep learning to implement segmentation tools that are computationally efficient at test time. However, most of these strategies rely on learning from manually annotated images. These supervised deep learning methods are therefore sensitive to the intensity profiles in the training dataset. To develop a deep learning-based segmentation model for a new image dataset (e.g., of a different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal adaptation or augmentation approaches. In this paper, we propose an alternative strategy that combines conventional probabilistic atlas-based segmentation with deep learning, enabling one to train a segmentation model for new MRI scans without the need for any manually segmented images. Our experiments include thousands of brain MRI scans and demonstrate that the proposed method achieves good accuracy on brain MRI segmentation across different MRI contrasts, requiring only approximately 15 seconds at test time on a GPU.
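The atlas-based component can be summarized as a voxelwise Bayes rule; the paper's actual contribution, training a network on segmentations produced this way so that no manual labels are needed, is not reproduced here. A minimal sketch with made-up probabilities:

```python
import numpy as np

def atlas_posterior(prior, likelihood):
    # Voxelwise Bayes rule: combine a probabilistic atlas prior p(l)
    # with an intensity likelihood p(x | l) to obtain p(l | x).
    # prior, likelihood: arrays of shape (..., n_labels).
    post = prior * likelihood
    return post / post.sum(axis=-1, keepdims=True)

def segment(prior, likelihood):
    # Hard segmentation: the most probable label per voxel.
    return atlas_posterior(prior, likelihood).argmax(axis=-1)
```

The prior makes the segmentation adaptive to a new intensity profile: only the likelihood term depends on the image contrast, which is why atlas-derived training labels can be generated for scans of any contrast.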
We present a deep learning tractography segmentation method that allows fast and consistent white matter fiber tract identification across healthy and disease populations and across multiple diffusion MRI (dMRI) acquisitions. We create a large-scale training tractography dataset of 1 million labeled fiber samples spanning 54 anatomical tracts. To discriminate between fibers from different tracts, we propose a novel 2D multi-channel feature descriptor (FiberMap) that encodes the spatial coordinates of points along each fiber. We learn a CNN tract classification model based on FiberMap and obtain a high tract classification accuracy of 90.99%. The method is evaluated on a test dataset of 374 dMRI scans from three independently acquired populations across health conditions (healthy controls, neuropsychiatric disorders, and brain tumor patients). We perform comparisons with two state-of-the-art white matter tract segmentation methods. Experimental results show that our method obtains highly consistent segmentations, with over 99% of the fiber tracts successfully detected across all subjects under study, including, most importantly, patients with space-occupying brain tumors. The proposed method leverages deep learning techniques and provides a much faster and more efficient tool for large data analysis than methods using traditional machine learning techniques.
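The general idea behind a coordinate-valued, fixed-size fiber descriptor can be sketched as follows. This is illustrative only, not the exact FiberMap construction: each fiber is resampled to a fixed number of points, and the point sequence and its reversal are stacked (fibers have no canonical direction), with x/y/z coordinates as the channels.

```python
import numpy as np

def fiber_descriptor(points, n_points=15):
    # Sketch of a fixed-size, multi-channel fiber descriptor (hypothetical
    # layout, not the published FiberMap). Assumes a fiber with at least
    # two points and positive arc length.
    pts = np.asarray(points, float)                       # (M, 3)
    # Arc-length resampling via linear interpolation.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    ti = np.linspace(0.0, 1.0, n_points)
    res = np.stack([np.interp(ti, t, pts[:, k]) for k in range(3)], axis=1)
    # Stack forward and reversed orderings -> shape (2, n_points, 3).
    return np.stack([res, res[::-1]], axis=0)
```

A fixed-size array of this kind is what allows ordinary CNN classifiers to consume streamlines of arbitrary length and orientation.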
The performance and diagnostic utility of magnetic resonance imaging (MRI) in pregnancy is fundamentally constrained by fetal motion. Motion of the fetus, which is unpredictable and rapid on the scale of conventional imaging times, limits the set of viable acquisition techniques to single-shot imaging with severe compromises in signal-to-noise ratio and diagnostic contrast, and frequently results in unacceptable image quality. Surprisingly little is known about the characteristics of fetal motion during MRI, and here we propose and demonstrate methods that exploit a growing repository of MRI observations of the gravid abdomen that are acquired at low spatial resolution but relatively high temporal resolution and over long durations (10-30 minutes). We estimate fetal pose per frame in MRI volumes of the pregnant abdomen via deep learning algorithms that detect key fetal landmarks. Evaluation shows that our framework achieves an average landmark error of 4.47 mm and an accuracy of 96.4% (landmark error less than 10 mm). Fetal pose estimation in MRI time series yields novel means of quantifying fetal movements in health and disease, and enables the learning of kinematic models that may enhance prospective mitigation of fetal motion artifacts during MRI acquisition.
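The two reported numbers correspond to simple landmark-error metrics, sketched below (the 10 mm threshold follows the accuracy criterion stated above; the function name is hypothetical):

```python
import numpy as np

def pose_error_metrics(pred, gt, threshold=10.0):
    # Mean Euclidean landmark error (mm) and the fraction of landmarks
    # localized within `threshold` mm of ground truth.
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    errors = np.linalg.norm(pred - gt, axis=-1)
    return float(errors.mean()), float((errors < threshold).mean())
```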
We propose and demonstrate a joint model of anatomical shapes, image features and clinical indicators for statistical shape modeling and medical image analysis. The key idea is to employ a copula model to separate the joint dependency structure from the marginal distributions of the variables of interest. This separation provides flexibility in the assumptions made during the modeling process. The proposed method can handle binary, discrete, ordinal and continuous variables, and we demonstrate a simple and efficient way to include binary, discrete and ordinal variables in the modeling. Using Gaussian processes to capture the dependency structure, we build Bayesian conditional models from partially observed clinical indicators, features or shape. We apply the proposed method to a stroke dataset to jointly model the shape of the lateral ventricles, the spatial distribution of the white matter hyperintensity associated with periventricular white matter disease, and clinical indicators. The proposed method yields interpretable joint models for data exploration and patient-specific statistical shape models for medical image analysis.
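The copula idea of separating dependence from marginals can be sketched with a minimal Gaussian copula: rank-transform each variable to (0,1), map through the standard normal quantile function, and estimate the correlation of the resulting Gaussian scores. The paper's model additionally handles binary/discrete/ordinal variables and Gaussian-process conditionals, which this sketch omits.

```python
import numpy as np
from statistics import NormalDist

def gaussian_copula_correlation(data):
    # Estimate the dependence structure of `data` (n samples x d variables)
    # independently of the marginal distributions. No tie handling, so
    # this sketch assumes continuous-valued columns.
    data = np.asarray(data, float)
    n, d = data.shape
    nd = NormalDist()
    z = np.empty_like(data)
    for j in range(d):
        ranks = data[:, j].argsort().argsort() + 1.0    # ranks 1..n
        u = ranks / (n + 1.0)                           # pseudo-observations in (0,1)
        z[:, j] = [nd.inv_cdf(v) for v in u]            # normal scores
    return np.corrcoef(z, rowvar=False)
```

Because only ranks enter the estimate, any monotone transformation of a marginal (e.g., a heavily skewed clinical score) leaves the estimated dependence unchanged, which is exactly the flexibility the separation provides.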