PURPOSE: Matching points that are derived from features or landmarks in image data is a key step in some medical imaging applications. Since most robust point matching algorithms claim to be able to deal with outliers, users may place high confidence in the matching result and use it without further examination. However, for tasks such as feature-based registration in image-guided neurosurgery, even a few mismatches, in the form of invalid displacement vectors, could have serious consequences. As a result, an effective tool by which operators can manually screen all matches for outliers could substantially benefit the outcome of those applications. METHODS: We introduce a novel variogram-based outlier screening method for vectors. The variogram is a powerful geostatistical tool for characterizing the spatial dependence of stochastic processes. Since the spatial correlation of invalid displacement vectors, which we treat as vector outliers, tends to differ from that of valid displacement vectors, outliers can be efficiently identified on the variogram. RESULTS: We validate the proposed method on 9 sets of clinically acquired ultrasound data. In the experiment, potential outliers are flagged on the variogram by one operator and further evaluated by 8 experienced medical imaging researchers. The matching quality of those potential outliers is rated approximately 1.5 points lower, on a scale from 1 (bad) to 5 (good), than that of valid displacement vectors. CONCLUSION: The variogram is a simple yet informative tool. While used extensively in geostatistical analysis, it has not received much attention in the medical imaging field. We believe there is considerable potential for clinically applying the proposed outlier screening method. Through this paper, we also hope researchers will find the variogram useful in other medical applications that involve motion vector analysis.
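To make the screening idea concrete, the following is a minimal sketch of how an empirical variogram of a displacement vector field might be computed; the function, binning scheme, and outlier heuristic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def empirical_variogram(points, vectors, n_bins=15):
    """Empirical variogram of a vector field sampled at scattered points.

    gamma(h) ~ 0.5 * mean ||v_i - v_j||^2 over pairs whose separation
    |p_i - p_j| falls into the distance bin centered at lag h.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    g = 0.5 * np.sum((vectors[:, None, :] - vectors[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(points), k=1)      # unique point pairs only
    d, g = d[iu], g[iu]
    edges = np.linspace(0.0, d.max(), n_bins + 1)
    lags = 0.5 * (edges[:-1] + edges[1:])
    gamma = np.array([g[(d >= lo) & (d < hi)].mean()
                      if ((d >= lo) & (d < hi)).any() else np.nan
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return lags, gamma
```

Pairs at small lags whose semivariance sits far above the fitted variogram curve indicate displacement vectors that disagree with their spatial neighbors, which is exactly the behavior an operator would flag as a potential outlier.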
Statistical Inference for Imaging and Disease: Core Publications
We present an algorithm for creating high resolution anatomically plausible images consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large data sets of clinical images contain a wealth of information, time constraints during acquisition result in sparse scans that fail to capture much of the anatomy. These characteristics often render computational analysis impractical as many image analysis algorithms tend to fail when applied to such images. Highly specialized algorithms that explicitly handle sparse slice spacing do not generalize well across problem domains. In contrast, we aim to enable application of existing algorithms that were originally developed for high resolution research scans to significantly undersampled scans. We introduce a generative model that captures fine-scale anatomical structure across subjects in clinical image collections and derive an algorithm for filling in the missing data in scans with large inter-slice spacing. Our experimental results demonstrate that the resulting method outperforms state-of-the-art upsampling super-resolution techniques, and promises to facilitate subsequent analysis not previously possible with scans of this quality. Our implementation is freely available at https://github.com/adalca/papago.
This paper presents a novel approach to modeling the posterior distribution in image registration that is computationally efficient for large deformation diffeomorphic metric mapping (LDDMM). We develop a Laplace approximation of Bayesian registration models entirely in a bandlimited space that fully describes the properties of diffeomorphic transformations. In contrast to current methods, we compute the inverse Hessian at the mode of the posterior distribution of diffeomorphisms directly in the low-dimensional frequency domain. This dramatically reduces the computational complexity of approximating posterior marginals in the high-dimensional imaging space. Experimental results show that our method is significantly faster than state-of-the-art diffeomorphic image registration uncertainty quantification algorithms, while producing comparable results. The efficiency of our method strengthens its feasibility for prospective clinical applications, e.g., real-time image-guided navigation for brain surgery.
We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.
A reliable Ultrasound (US)-to-US registration method to compensate for brain shift would substantially improve Image-Guided Neurological Surgery. Developing such a registration method is very challenging, due to factors such as the tumor resection, the complexity of brain pathology and the demand for fast computation. We propose a novel feature-driven active registration framework. Here, landmarks and their displacements are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, the user can actively add new landmarks, based on the image context and a visualization of the uncertainty measure provided by the GP, to further improve the result. On retrospective clinical data, we demonstrate that our registration framework provides a robust and accurate brain shift compensation solution.
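As a rough illustration of the interpolation step, the sketch below fits one Gaussian process per displacement component using scikit-learn; the RBF kernel and its length scale stand in for the variogram-estimated kernel described in the paper, and all names are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def interpolate_field(landmarks, displacements, query_points, length_scale=10.0):
    """Interpolate a dense deformation field from sparse landmark matches.

    landmarks:     (N, 3) positions in the fixed ultrasound image
    displacements: (N, 3) estimated displacement vectors
    query_points:  (M, 3) voxel locations where the field is needed
    """
    kernel = RBF(length_scale=length_scale) + WhiteKernel(noise_level=0.5)
    field = np.zeros((len(query_points), 3))
    std = np.zeros((len(query_points), 3))
    for c in range(3):  # independent GP per displacement component
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(landmarks, displacements[:, c])
        field[:, c], std[:, c] = gp.predict(query_points, return_std=True)
    return field, std
```

The predictive standard deviation is what drives the "active" part of the framework: regions of high uncertainty are where the user is invited to add new landmarks.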
White matter hyperintensity (WMH) burden is a critically important cerebrovascular phenotype linked to prediction of diagnosis and prognosis of diseases, such as acute ischemic stroke (AIS). However, current approaches to its quantification on clinical MRI often rely on time-intensive manual delineation of the disease on T2 fluid-attenuated inversion recovery (FLAIR) images, which hinders high-throughput analyses such as genetic discovery. In this work, we present a fully automated pipeline for quantification of WMH in clinical large-scale studies of AIS. The pipeline incorporates automated brain extraction, intensity normalization and WMH segmentation using spatial priors. We first propose a brain extraction algorithm based on a fully convolutional deep learning architecture, specifically designed for clinical FLAIR images. We demonstrate that our method for brain extraction outperforms two commonly used and publicly available methods on clinical quality images in a set of 144 subject scans across 12 acquisition centers, based on the Dice coefficient (median 0.95; inter-quartile range 0.94-0.95; p < 0.01) and Pearson correlation of total brain volume (r = 0.90). Subsequently, we apply it to the large-scale clinical multi-site MRI-GENIE study (N = 2783) and identify a decrease in total brain volume of 2.4 cc/year. Additionally, we show that the resulting total brain volumes can successfully be used for quality control of image preprocessing. Finally, we obtain WMH volumes by building on an existing automatic WMH segmentation algorithm that delineates and distinguishes between different cerebrovascular pathologies. The learning method mimics expert knowledge of the spatial distribution of the WMH burden using a convolutional auto-encoder. This enables successful computation of WMH volumes of 2533 clinical AIS patients. We utilize these results to demonstrate the increase of WMH burden with age (0.950 cc/year) and show that single site estimates can be biased by the number of subjects recruited.
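For reference, the two evaluation metrics reported above are straightforward to compute; this is a minimal sketch with hypothetical variable names.

```python
import numpy as np
from scipy.stats import pearsonr

def dice(a, b):
    """Dice coefficient between two binary brain masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# agreement of total brain volumes across subjects (e.g., in cc):
# r, _ = pearsonr(automatic_volumes, manual_volumes)
```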
The Human Placenta Project has focused attention on the need for noninvasive magnetic resonance imaging (MRI)-based techniques to diagnose and monitor placental function throughout pregnancy. The hope is that the management of placenta-related pathologies would be improved if physicians had more direct, real-time measures of placental health to guide clinical decision making. As oxygen alters signal intensity on MRI and oxygen transport is a key function of the placenta, many of the MRI methods under development are focused on quantifying oxygen transport or oxygen content of the placenta. For example, measurements from blood oxygen level-dependent imaging of the placenta during maternal hyperoxia correspond to outcomes in twin pregnancies, suggesting that some aspects of placental oxygen transport can be monitored by MRI. Additional methods are being developed to accurately quantify baseline placental oxygenation by MRI relaxometry. However, direct validation of placental MRI methods is challenging and therefore animal studies and ex vivo studies of human placentas are needed. Here we provide an overview of the current state of the art of oxygen transport and quantification with MRI. We suggest that as these techniques are being developed, increased focus be placed on ensuring they are robust and reliable across individuals and standardized to enable predictive diagnostic models to be generated from the data. The field is still several years away from establishing the clinical benefit of monitoring placental function in real time with MRI, but the promise of individual personalized diagnosis and monitoring of placental disease in real time continues to motivate this effort.
This paper presents an efficient approach to quantifying image registration uncertainty based on a low-dimensional representation of geometric deformations. In contrast to previous methods, we develop a Bayesian diffeomorphic registration framework in a bandlimited space, rather than a high-dimensional image space. We show that a dense posterior distribution on deformation fields can be fully characterized by much fewer parameters, which dramatically reduces the computational complexity of model inference. To further avoid the heavy computational load introduced by random sampling algorithms, we approximate the marginal posterior using Laplace’s method at the mode of the log-posterior distribution. Experimental results on both 2D synthetic data and real 3D brain magnetic resonance imaging (MRI) scans demonstrate that our method is significantly faster than state-of-the-art diffeomorphic registration uncertainty quantification algorithms, while producing comparable results.
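A generic sketch of the Laplace step is shown below, assuming the negative log posterior is available as a differentiable function of the low-dimensional bandlimited coefficients; because the parameterization is bandlimited, the dense Hessian is small enough to invert directly. This is an illustration, not the authors' implementation.

```python
import torch
from torch.autograd.functional import hessian

def laplace_marginals(neg_log_post, theta_map):
    """Laplace approximation of the posterior at its mode theta_map.

    neg_log_post: callable mapping a (d,) tensor of bandlimited
                  coefficients to the scalar negative log posterior
    """
    H = hessian(neg_log_post, theta_map)   # (d, d) Hessian at the mode
    cov = torch.inverse(H)                 # approximate posterior covariance
    marginal_std = torch.sqrt(torch.diagonal(cov))
    return cov, marginal_std
```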
Probabilistic atlas priors have been commonly used to derive adaptive and robust brain MRI segmentation algorithms. Widely-used neuroimage analysis pipelines rely heavily on these techniques, which are often computationally expensive. In contrast, there has been a recent surge of approaches that leverage deep learning to implement segmentation tools that are computationally efficient at test time. However, most of these strategies rely on learning from manually annotated images. These supervised deep learning methods are therefore sensitive to the intensity profiles in the training dataset. To develop a deep learning-based segmentation model for a new image dataset (e.g., of different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal adaptation or augmentation approaches. In this paper, we propose an alternative strategy that combines a conventional probabilistic atlas-based segmentation with deep learning, enabling one to train a segmentation model for new MRI scans without the need for any manually segmented images. Our experiments include thousands of brain MRI scans and demonstrate that the proposed method achieves good accuracy for a brain MRI segmentation task for different MRI contrasts, requiring only approximately 15 seconds at test time on a GPU.
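One way to phrase such label-free training, sketched here under loose assumptions (flattened voxels, a Gaussian intensity model per label, hypothetical tensor shapes), is a variational objective that scores the network's soft segmentation against an atlas prior and per-label intensity likelihoods.

```python
import torch

def atlas_guided_loss(post, image, atlas_prior, mu, log_var, eps=1e-7):
    """Unsupervised segmentation loss: no manual labels required.

    post:        (K, V) network softmax over K labels for V voxels
    image:       (V,)   voxel intensities
    atlas_prior: (K, V) probabilistic atlas warped to the image
    mu, log_var: (K,)   per-label Gaussian intensity parameters
    """
    # per-label Gaussian log-likelihood of each intensity (up to a constant)
    ll = -0.5 * (log_var[:, None]
                 + (image[None, :] - mu[:, None]) ** 2 / log_var[:, None].exp())
    recon = -(post * ll).sum(0).mean()                         # expected data term
    kl = (post * ((post + eps).log()
                  - (atlas_prior + eps).log())).sum(0).mean()  # stay near the atlas
    return recon + kl
```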
The Alzheimer’s Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge compares the performance of algorithms at predicting the future evolution of individuals at risk of Alzheimer’s disease. TADPOLE Challenge participants train their models and algorithms on historical data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. Participants are then required to make forecasts of three key outcomes for ADNI-3 rollover participants: clinical diagnosis, Alzheimer’s Disease Assessment Scale Cognitive Subdomain (ADAS-Cog 13), and total volume of the ventricles, which are then compared with future measurements. Strong points of the challenge are that the test data did not exist at the time of forecasting (it was acquired afterwards), and that it focuses on the challenging problem of cohort selection for clinical trials by identifying fast progressors. The submission phase of TADPOLE was open until 15 November 2017; since then, data was acquired until April 2019 from 219 subjects with 223 clinical visits and 150 Magnetic Resonance Imaging (MRI) scans, which was used for the evaluation of the participants’ predictions. Thirty-three teams participated with a total of 92 submissions. No single submission was best at predicting all three outcomes. For diagnosis prediction, the best forecast (team Frog), which was based on gradient boosting, obtained a multiclass area under the receiver operating characteristic curve (MAUC) of 0.931, while for ventricle prediction the best forecast (team ), which was based on disease progression modelling and spline regression, obtained a mean absolute error of 0.41% of total intracranial volume (ICV). For ADAS-Cog 13, no forecast was considerably better than the benchmark mixed effects model ( ), provided to participants before the submission deadline. Further analysis can help understand which input features and algorithms are most suitable for Alzheimer’s disease prediction and for aiding patient stratification in clinical trials. The submission system remains open via the website: https://tadpole.grand-challenge.org/.
We present a volumetric mesh-based algorithm for flattening the placenta to a canonical template to enable effective visualization of local anatomy and function. Monitoring placental function promises to support pregnancy assessment and to improve care outcomes. We aim to alleviate visualization and interpretation challenges presented by the shape of the placenta when it is attached to the curved uterine wall. To do so, we flatten the volumetric mesh that captures placental shape to resemble the well-studied ex vivo shape. We formulate our method as a map from the placental shape to a flattened template that minimizes the symmetric Dirichlet energy to control distortion throughout the volume. Local injectivity is enforced via a constrained line search during gradient descent. We evaluate the proposed method on 28 placenta shapes extracted from MRI in a clinical study of placental function. We achieve sub-voxel accuracy in mapping the boundary of the placenta to the template while successfully controlling distortion throughout the volume. We illustrate how the resulting mapping of the placenta enhances visualization of placental anatomy and function. Our implementation is freely available at https://github.com/mabulnaga/placenta-flattening.
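The distortion measure at the heart of the optimization is easy to state: for each tetrahedron with map Jacobian J, the symmetric Dirichlet energy is ||J||_F^2 + ||J^{-1}||_F^2, volume-weighted over the mesh. A minimal sketch follows (array shapes are assumed; this is not the released code).

```python
import numpy as np

def symmetric_dirichlet(J, volumes):
    """Symmetric Dirichlet energy of a volumetric map.

    J:       (T, 3, 3) per-tetrahedron Jacobians of the flattening map
    volumes: (T,)      rest volumes of the tetrahedra, used as weights
    """
    Jinv = np.linalg.inv(J)  # batched inverse over all tetrahedra
    e = (J ** 2).sum(axis=(1, 2)) + (Jinv ** 2).sum(axis=(1, 2))
    return (volumes * e).sum()
```

Because the energy diverges as a tetrahedron degenerates, keeping every step inside the finite-energy region (the constrained line search above) is what preserves local injectivity.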
We propose and demonstrate a joint model of anatomical shapes, image features and clinical indicators for statistical shape modeling and medical image analysis. The key idea is to employ a copula model to separate the joint dependency structure from the marginal distributions of the variables of interest. This separation provides flexibility in the assumptions made during the modeling process. The proposed method can handle binary, discrete, ordinal and continuous variables, and we demonstrate a simple and efficient way to include binary, discrete and ordinal variables in the modeling. Given partially observed clinical indicators, features or shape, we build Bayesian conditional models using Gaussian processes to capture the dependency structure. We apply the proposed method to a stroke dataset to jointly model the shape of the lateral ventricles, the spatial distribution of the white matter hyperintensity associated with periventricular white matter disease, and clinical indicators. The proposed method yields interpretable joint models for data exploration and patient-specific statistical shape models for medical image analysis.
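As a small illustration of the copula idea, the sketch below fits a Gaussian copula by pushing each variable through its empirical CDF to latent normal scores and estimating their correlation; variable names and the rank-based treatment of discrete variables are simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_fit(X):
    """Separate dependency structure from marginals via a Gaussian copula.

    X: (n_samples, n_vars) mixed continuous/ordinal observations
    """
    n = X.shape[0]
    u = rankdata(X, axis=0) / (n + 1)    # empirical CDF values in (0, 1)
    z = norm.ppf(u)                      # latent Gaussian scores
    corr = np.corrcoef(z, rowvar=False)  # the copula's dependency structure
    return corr, z
```

Conditioning on partially observed variables then reduces to standard conditional Gaussian formulas on z, after which predictions are mapped back through the inverse marginal CDFs.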
The performance and diagnostic utility of magnetic resonance imaging (MRI) in pregnancy is fundamentally constrained by fetal motion. Motion of the fetus, which is unpredictable and rapid on the scale of conventional imaging times, limits the set of viable acquisition techniques to single-shot imaging with severe compromises in signal-to-noise ratio and diagnostic contrast, and frequently results in unacceptable image quality. Surprisingly little is known about the characteristics of fetal motion during MRI, and here we propose and demonstrate methods that exploit a growing repository of MRI observations of the gravid abdomen that are acquired at low spatial resolution but relatively high temporal resolution and over long durations (10-30 minutes). We estimate fetal pose per frame in MRI volumes of the pregnant abdomen via deep learning algorithms that detect key fetal landmarks. Quantitative evaluation shows that our framework achieves an average error of 4.47 mm and 96.4% accuracy (landmark error less than 10 mm). Fetal pose estimation in MRI time series yields novel means of quantifying fetal movements in health and disease, and enables the learning of kinematic models that may enhance prospective mitigation of fetal motion artifacts during MRI acquisition.
We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require the registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with highly variable field-of-view.
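A compressed sketch of the three steps follows; the keypoint and image containers (.desc, .loc, .organ, .label_map) are hypothetical stand-ins for whatever data structures an implementation would use.

```python
import numpy as np

def keypoint_transfer(test_kps, train_imgs):
    """Sketch of keypoint-transfer segmentation (matching, voting, transfer)."""
    transfers = []
    for kp in test_kps:
        # (i) matching: nearest training keypoint by descriptor distance
        cands = [(img, tkp) for img in train_imgs for tkp in img.keypoints]
        img, tkp = min(cands, key=lambda c: np.linalg.norm(kp.desc - c[1].desc))
        # (ii) voting: the matched keypoint proposes an organ label
        # (iii) transfer: shift that organ's label map by the location offset
        transfers.append((tkp.organ,
                          img.label_map == tkp.organ,
                          kp.loc - tkp.loc))
    # per-voxel probabilistic fusion of the shifted masks gives the segmentation
    return transfers
```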
We introduce Disease Knowledge Transfer (DKT), a novel technique for transferring biomarker information between related neurodegenerative diseases. DKT infers robust multimodal biomarker trajectories in rare neurodegenerative diseases, even when only limited, unimodal data is available, by transferring information from larger multimodal datasets of common neurodegenerative diseases. DKT is a joint-disease generative model of biomarker progressions, which exploits biomarker relationships that are shared across diseases. Our proposed method allows, for the first time, the estimation of plausible biomarker trajectories in Posterior Cortical Atrophy (PCA), a rare neurodegenerative disease where only unimodal MRI data is available. To this end, we train DKT on a combined dataset containing subjects with two distinct diseases and different amounts of available data: 1) a larger, multimodal typical AD (tAD) dataset from the TADPOLE Challenge, and 2) a smaller, unimodal PCA dataset from the Dementia Research Centre (DRC), for which only a limited number of MRI scans are available. Although validation is challenging due to the lack of data in PCA, we validate DKT on synthetic data and two patient datasets (the TADPOLE and PCA cohorts), showing it can estimate the ground truth parameters in the simulation and predict unseen biomarkers on the two patient datasets. While we demonstrated DKT on Alzheimer’s variants, we note DKT is generalisable to other related neurodegenerative diseases. Source code for DKT is available online: https://github.com/mrazvan22/dkt.
OBJECTIVE: Posterior circulation ischemic stroke (PCiS) constitutes 20-30% of ischemic stroke cases. Detailed information about differences between PCiS and anterior circulation ischemic stroke (ACiS) remains scarce. Such information might guide clinical decision making and prevention strategies. We studied risk factors and ischemic stroke subtypes in PCiS vs. ACiS and lesion location on magnetic resonance imaging (MRI) in PCiS. METHODS: Out of 3,301 MRIs from 12 sites in the National Institute of Neurological Disorders and Stroke (NINDS) Stroke Genetics Network (SiGN), we included 2,381 cases with acute DWI lesions. The definition of ACiS or PCiS was based on lesion location. We compared the groups using Chi-squared and logistic regression. RESULTS: PCiS occurred in 718 (30%) patients and ACiS in 1,663 (70%). Diabetes and male sex were more common in PCiS vs. ACiS (diabetes 27% vs. 23%, p < 0.05; male sex 68% vs. 58%, p < 0.001). Both were independently associated with PCiS (diabetes, OR = 1.29; 95% CI 1.04-1.61; male sex, OR = 1.46; 95% CI 1.21-1.78). ACiS more commonly had large artery atherosclerosis (25% vs. 20%, p < 0.01) and cardioembolic mechanisms (17% vs. 11%, p < 0.001) compared to PCiS. Small artery occlusion was more common in PCiS vs. ACiS (20% vs. 14%, p < 0.001). Small artery occlusion accounted for 47% of solitary brainstem infarctions. CONCLUSION: Ischemic stroke subtypes differ between the two phenotypes. Diabetes and male sex have a stronger association with PCiS than ACiS. Definitive MRI-based PCiS diagnosis aids etiological investigation and contributes additional insights into specific risk factors and mechanisms of injury in PCiS.
INTRODUCTION: Before using blood-oxygen-level-dependent magnetic resonance imaging (BOLD MRI) during maternal hyperoxia as a method to detect individual placental dysfunction, it is necessary to understand spatiotemporal variations that represent normal placental function. We investigated the effect of maternal position and Braxton-Hicks contractions on estimates obtained from BOLD MRI of the placenta during maternal hyperoxia. METHODS: For 24 uncomplicated singleton pregnancies (gestational age 27-36 weeks), two separate BOLD MRI datasets were acquired, one in the supine and one in the left lateral maternal position. The maternal oxygenation was adjusted as 5 min of room air (21% O2), followed by 5 min of 100% FiO2. After datasets were corrected for signal non-uniformities and motion, global and regional BOLD signal changes in R2* and voxel-wise Time-To-Plateau (TTP) in the placenta were measured. The overall placental and uterine volume changes were determined across time to detect contractions. RESULTS: In mothers without contractions, increases in global placental R2* in the supine position were larger compared to the left lateral position with maternal hyperoxia. Maternal position did not alter global TTP but did result in regional changes in TTP. 57% of the subjects had Braxton-Hicks contractions and 58% of these had global placental R2* decreases during the contraction. CONCLUSION: Both maternal position and Braxton-Hicks contractions significantly affect global and regional changes in placental R2* and regional TTP. This suggests that both factors must be taken into account in analyses when comparing placental BOLD signals over time within and between individuals.
Glioblastoma may have widespread effects on neural organization and cognitive function, and even focal lesions may be associated with distributed functional alterations. However, functional changes do not necessarily follow obvious anatomical patterns, and the current understanding of this interrelation is limited. In this study, we used resting-state functional magnetic resonance imaging to evaluate changes in global functional connectivity patterns in 15 patients with glioblastoma. For six patients, we followed longitudinal trajectories of their functional connectome and structural tumour evolution using bi-monthly follow-up scans throughout treatment and disease progression. In all patients, unilateral tumour lesions were associated with inter-hemispherically symmetric network alterations, and functional proximity to the tumour location was more strongly linked to distributed network deterioration than anatomical distance. In the longitudinal subcohort of six patients, network alterations showed initial transient deterioration followed by recovery at the first follow-up, and local network deterioration preceded structural tumour recurrence by two months. In summary, the impact of focal glioblastoma lesions on the functional connectome is global and linked to functional proximity rather than anatomical distance to tumour regions. Our findings further suggest that functional network trajectories may support early detection of tumour recurrence.
Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to clinical quality medical images. We develop a novel parameterization of conditional generative adversarial networks that achieves high image fidelity when trained to transform MRIs conditioned on a patient’s age and disease severity. The spatial-intensity transform generative adversarial network (SIT-GAN) constrains the generator to a smooth spatial transform composed with sparse intensity changes. This technique improves image quality and robustness to artifacts, and generalizes to different scanners. We demonstrate SIT-GAN on a large clinical image dataset of stroke patients, where it captures associations between ventricle expansion and aging, as well as between white matter hyperintensities and stroke severity. Additionally, SIT-GAN provides a disentangled view of the variation in shape and appearance across subjects.
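To show the core constraint concretely, here is a minimal sketch of applying a spatial-intensity transform in 2D with PyTorch; the tensor layouts and the sparsity mechanism (an L1 penalty on the intensity component during training) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def spatial_intensity_transform(img, flow, intensity):
    """Warp an image with a smooth flow, then add a sparse intensity change.

    img:       (B, 1, H, W) input image
    flow:      (B, 2, H, W) displacements in normalized [-1, 1] coordinates
    intensity: (B, 1, H, W) additive intensity component
    """
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)  # identity grid
    grid = base + flow.permute(0, 2, 3, 1)                   # sampling locations
    warped = F.grid_sample(img, grid, align_corners=True)
    return warped + intensity
```

Factoring the output this way is what disentangles shape change (the flow) from appearance change (the intensity map) in the learned model.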
We propose and demonstrate a novel machine learning algorithm that assesses pulmonary edema severity from chest radiographs. While large publicly available datasets of chest radiographs and free-text radiology reports exist, only limited numerical edema severity labels can be extracted from radiology reports. This is a significant challenge in learning such models for image classification. To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free text to assess pulmonary edema severity from chest radiographs at inference time. Our experimental results suggest that joint image-text representation learning improves the performance of pulmonary edema assessment compared to a supervised model trained on images only. We also show how the text can be used to explain the image classification made by the joint model. To the best of our knowledge, our approach is the first to leverage free-text radiology reports for improving the image model performance in this application. Our code is available at: https://github.com/RayRuizhiLiao/joint_chestxray.
Purpose: To develop a machine learning model to classify the severity grades of pulmonary edema on chest radiographs. Materials and Methods: In this retrospective study, 369 071 chest radiographs and associated radiology reports from 64 581 patients (mean age, 51.71 years; 54.51% women) from the MIMIC-CXR chest radiograph dataset were included. This dataset was split into patients with and without congestive heart failure (CHF). Pulmonary edema severity labels from the associated radiology reports were extracted from patients with CHF as four different ordinal levels: 0, no edema; 1, vascular congestion; 2, interstitial edema; and 3, alveolar edema. Deep learning models were developed using two approaches: a semisupervised model using a variational autoencoder and a pretrained supervised learning model using a dense neural network. Receiver operating characteristic curve analysis was performed on both models. Results: The area under the receiver operating characteristic curve (AUC) for differentiating alveolar edema from no edema was 0.99 for the semisupervised model and 0.87 for the pretrained model. Performance of the algorithm was inversely related to the difficulty in categorizing milder states of pulmonary edema (shown as AUCs for the semisupervised model and pretrained model, respectively): 2 versus 0, 0.88 and 0.81; 1 versus 0, 0.79 and 0.66; 3 versus 1, 0.93 and 0.82; 2 versus 1, 0.69 and 0.73; and 3 versus 2, 0.88 and 0.63. Conclusion: Deep learning models were trained on a large chest radiograph dataset and could grade the severity of pulmonary edema on chest radiographs with high performance.
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
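The synthesis idea can be sketched in a few lines: sample random per-label intensities from a segmentation, add noise, blur, and discard slices to mimic a thick-slice clinical acquisition. All parameters below are illustrative, not SynthSR's actual generative model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_input(label_map, slice_spacing=6, blur=1.0, rng=np.random):
    """Generate one synthetic training input from a 3D integer segmentation."""
    lut = np.zeros(int(label_map.max()) + 1, dtype=np.float32)
    for l in np.unique(label_map):
        lut[l] = rng.uniform(0.0, 1.0)       # random contrast per label
    img = lut[label_map]
    img += rng.normal(0.0, 0.05, img.shape)  # scanner-like noise
    img = gaussian_filter(img, blur)         # partial volume blur
    return img[::slice_spacing]              # simulate large inter-slice spacing
```

Pairing such inputs with the original high-resolution scan as the regression target is what lets a single CNN cover arbitrary combinations of contrast, resolution and orientation.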
PURPOSE: Fetal brain Magnetic Resonance Imaging suffers from unpredictable and unconstrained fetal motion that causes severe image artifacts even with half-Fourier single-shot fast spin echo (HASTE) readouts. This work presents the implementation of a closed-loop pipeline that automatically detects and reacquires HASTE images that were degraded by fetal motion, without any human interaction. METHODS: A convolutional neural network that performs automatic image quality assessment (IQA) was run on an external GPU-equipped computer connected to the internal network of the MRI scanner. The modified HASTE pulse sequence sent each image to the external computer, where the IQA convolutional neural network evaluated it, and the IQA score was then sent back to the sequence. At the end of the HASTE stack, the IQA scores from all the slices were sorted, and only the slices with the lowest scores (corresponding to the worst image quality) were reacquired. RESULTS: The closed-loop HASTE acquisition framework was tested on 10 pregnant mothers, for a total of 73 acquisitions of our modified HASTE sequence. The IQA convolutional neural network, which was successfully employed by our modified sequence in real time, achieved an accuracy of 85.2% and an area under the receiver operating characteristic curve of 0.899. CONCLUSION: The proposed acquisition/reconstruction pipeline successfully identified and automatically reacquired only the motion-degraded fetal brain HASTE slices in the prescribed stack. This minimizes the overall time spent on HASTE acquisitions by avoiding the need to repeat the entire stack if only a few slices in the stack are motion-degraded.
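The decision logic at the end of each stack is simple; here is a plain-Python sketch (the threshold and cap are hypothetical parameters, not values from the paper).

```python
def slices_to_reacquire(iqa_scores, threshold=0.5, max_reacquire=None):
    """Pick the worst-scoring HASTE slices for reacquisition.

    iqa_scores: list of (slice_index, score) pairs from the IQA network,
                where lower scores indicate motion degradation
    """
    bad = sorted((s for s in iqa_scores if s[1] < threshold),
                 key=lambda s: s[1])
    if max_reacquire is not None:
        bad = bad[:max_reacquire]       # bound the extra scan time
    return [idx for idx, _ in bad]      # appended to the end of the stack
```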
We present a volumetric mesh-based algorithm for parameterizing the placenta to a flattened template to enable effective visualization of local anatomy and function. MRI shows potential as a research tool because it provides signals directly related to placental function. However, due to the curved and highly variable in vivo shape of the placenta, interpreting and visualizing these images is difficult. We address interpretation challenges by mapping the placenta so that it resembles the familiar ex vivo shape. We formulate the parameterization as an optimization problem for mapping the placental shape, represented by a volumetric mesh, to a flattened template. We employ the symmetric Dirichlet energy to control local distortion throughout the volume. Local injectivity in the mapping is enforced by a constrained line search during the gradient descent optimization. We validate our method using a research study of 111 placental shapes extracted from BOLD MRI. Our mapping achieves sub-voxel accuracy in matching the template while maintaining low distortion throughout the volume. We demonstrate how the resulting flattening of the placenta improves visualization of anatomy and function. Our code is freely available at https://github.com/mabulnaga/placenta-flattening.
Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
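A minimal sketch of such a model and its sequence-level loss is shown below; the architecture and step count are placeholders, not the paper's exact network.

```python
import torch
import torch.nn as nn

class IterativeSegmenter(nn.Module):
    """Recurrently evolve a segmentation over several steps."""

    def __init__(self, ch=16):
        super().__init__()
        self.step = nn.Sequential(              # one evolution step
            nn.Conv3d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1))

    def forward(self, image, seg0, n_steps=5):
        segs, seg = [], seg0                    # seg0: e.g., a user click map
        for _ in range(n_steps):
            seg = torch.sigmoid(self.step(torch.cat([image, seg], dim=1)))
            segs.append(seg)
        return segs                             # the whole trajectory

def sequence_loss(segs, targets):
    """Supervise every intermediate step, not only the final segmentation."""
    bce = nn.functional.binary_cross_entropy
    return sum(bce(s, t) for s, t in zip(segs, targets)) / len(segs)
```

Returning the whole trajectory is also what lets a user pick a segmentation earlier or later in the output sequence as the final result.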