Statistical Inference for Imaging and Disease Core Publications

Pace DF, Dalca AV, Brosch T, Geva T, Powell AJ, Weese J, Moghari MH, Golland P. Learned Iterative Segmentation of Highly Variable Anatomy From Limited Data: Applications to Whole Heart Segmentation for Congenital Heart Disease. Med Image Anal. 2022;80:102469.
Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
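As an illustration of the iterative scheme described in this abstract, here is a minimal sketch of a recurrent segmentation loop driven by a single seed click; the module, stopping rule and threshold are hypothetical stand-ins, not the authors' released code.

```python
# Minimal sketch (hypothetical architecture): evolve a segmentation over several
# steps with a recurrent update, starting from a single user click.
import torch
import torch.nn as nn

class IterativeSegmenter(nn.Module):
    """Each step refines the current segmentation given the image and the previous mask."""
    def __init__(self, channels=16):
        super().__init__()
        # input: image (1 channel) + current segmentation (1 channel)
        self.step = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )
        # assumed stopping head: a scalar score thresholded at 0.5
        self.stop_head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(1, 1))

    def forward(self, image, seed, max_steps=10):
        seg, outputs = seed, []
        for _ in range(max_steps):
            seg = torch.sigmoid(self.step(torch.cat([image, seg], dim=1)))
            outputs.append(seg)
            if torch.sigmoid(self.stop_head(seg)).item() > 0.5:  # automatic stopping point
                break
        return outputs  # the training loss would score the entire output sequence

# toy usage on a random 3D volume
image = torch.rand(1, 1, 32, 32, 32)
seed = torch.zeros_like(image); seed[..., 16, 16, 16] = 1.0  # single user click
sequence = IterativeSegmenter()(image, seed)
print(len(sequence), sequence[-1].shape)
```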
Abulnaga MS, Abaci Turk E, Bessmeltsev M, Grant EP, Solomon J, Golland P. Volumetric Parameterization of the Placenta to a Flattened Template. IEEE Trans Med Imaging. 2022;41(4):925-36.
We present a volumetric mesh-based algorithm for parameterizing the placenta to a flattened template to enable effective visualization of local anatomy and function. MRI shows potential as a research tool as it provides signals directly related to placental function. However, due to the curved and highly variable in vivo shape of the placenta, interpreting and visualizing these images is difficult. We address interpretation challenges by mapping the placenta so that it resembles the familiar ex vivo shape. We formulate the parameterization as an optimization problem for mapping the placental shape represented by a volumetric mesh to a flattened template. We employ the symmetric Dirichlet energy to control local distortion throughout the volume. Local injectivity in the mapping is enforced by a constrained line search during the gradient descent optimization. We validate our method using a research study of 111 placental shapes extracted from BOLD MRI images. Our mapping achieves sub-voxel accuracy in matching the template while maintaining low distortion throughout the volume. We demonstrate how the resulting flattening of the placenta improves visualization of anatomy and function. Our code is freely available at https://github.com/mabulnaga/placenta-flattening.
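A minimal sketch of the symmetric Dirichlet energy referenced above, assuming per-tetrahedron Jacobians of the map are available; the formulation is the standard one and function names are illustrative only.

```python
# Symmetric Dirichlet energy of a volumetric map, evaluated from per-tetrahedron
# Jacobians J of the map from the original mesh to the flattened template.
import numpy as np

def symmetric_dirichlet(jacobians, volumes):
    """E = sum_t vol_t * (||J_t||_F^2 + ||J_t^{-1}||_F^2); finite only for locally injective maps."""
    energy = 0.0
    for J, v in zip(jacobians, volumes):
        if np.linalg.det(J) <= 0:          # flipped element: map not locally injective
            return np.inf
        Jinv = np.linalg.inv(J)
        energy += v * ((J ** 2).sum() + (Jinv ** 2).sum())
    return energy

# toy example: two tetrahedra; a near-identity map has energy close to 6 per unit volume
jacobians = [np.eye(3), np.eye(3) + 0.01 * np.random.randn(3, 3)]
volumes = [1.0, 1.0]
print(symmetric_dirichlet(jacobians, volumes))
```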
Gagoski B, Xu J, Wighton P, Tisdall DM, Frost R, Lo W-C, Golland P, van der Kouwe A, Adalsteinsson E, Grant EP. Automated Detection and Reacquisition of Motion-degraded Images in Fetal HASTE Imaging at 3T. Magn Reson Med. 2022;87(4):1914-22.
PURPOSE: Fetal brain magnetic resonance imaging suffers from unpredictable and unconstrained fetal motion that causes severe image artifacts even with half-Fourier single-shot fast spin echo (HASTE) readouts. This work presents the implementation of a closed-loop pipeline that automatically detects and reacquires HASTE images that were degraded by fetal motion without any human interaction. METHODS: A convolutional neural network that performs automatic image quality assessment (IQA) was run on an external GPU-equipped computer that was connected to the internal network of the MRI scanner. The modified HASTE pulse sequence sent each image to the external computer, where the IQA convolutional neural network evaluated it, and then the IQA score was sent back to the sequence. At the end of the HASTE stack, the IQA scores from all the slices were sorted, and only slices with the lowest scores (corresponding to the slices with worst image quality) were reacquired. RESULTS: The closed-loop HASTE acquisition framework was tested on 10 pregnant mothers, for a total of 73 acquisitions of our modified HASTE sequence. The IQA convolutional neural network, which was successfully employed by our modified sequence in real time, achieved an accuracy of 85.2% and an area under the receiver operating characteristic curve of 0.899. CONCLUSION: The proposed acquisition/reconstruction pipeline was shown to successfully identify and automatically reacquire only the motion-degraded fetal brain HASTE slices in the prescribed stack. This minimizes the overall time spent on HASTE acquisitions by avoiding the need to repeat the entire stack if only a few slices in the stack are motion-degraded.
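The end-of-stack reacquisition decision can be illustrated with a small sketch; the scores, threshold and reacquisition budget below are made up for illustration and do not reproduce the paper's classifier.

```python
# At the end of a HASTE stack, select only the slices with the lowest IQA scores
# for reacquisition (hypothetical scores and threshold).
import numpy as np

def slices_to_reacquire(iqa_scores, quality_threshold=0.5, max_reacquisitions=5):
    """Return indices of the worst-scoring slices, limited to a small budget."""
    scores = np.asarray(iqa_scores)
    bad = np.where(scores < quality_threshold)[0]        # motion-degraded slices
    worst_first = bad[np.argsort(scores[bad])]           # lowest score = worst quality
    return worst_first[:max_reacquisitions].tolist()

# toy usage: a 10-slice stack where slices 2 and 7 were corrupted by fetal motion
scores = [0.9, 0.8, 0.2, 0.95, 0.7, 0.85, 0.9, 0.1, 0.8, 0.75]
print(slices_to_reacquire(scores))   # -> [7, 2]
```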
Iglesias JE, Billot B, Balbastre Y, Tabari A, Conklin J, Gilberto González R, Alexander DC, Golland P, Edlow BL, Fischl B, et al. Joint Super-Resolution and Synthesis of 1 mm Isotropic MP-RAGE Volumes From Clinical MRI Exams With Scans of Different Orientation, Resolution and Contrast. Neuroimage. 2021;237:118206.
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing, beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
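A minimal sketch of the general idea of training on synthetic inputs generated from label maps, assuming random per-label intensities and simulated slice spacing; this is not the SynthSR generator itself (see the linked repository for the actual code).

```python
# Generate a synthetic training input from a 3D label map: sample random per-label
# intensities (random "contrast") and simulate large slice spacing; the original
# high-resolution volume would serve as the regression target.
import numpy as np
from scipy.ndimage import zoom

def synthesize_input(label_map, slice_spacing=5, rng=np.random.default_rng(0)):
    labels = np.unique(label_map)
    means = {l: rng.uniform(0.0, 1.0) for l in labels}        # random contrast per label
    image = np.vectorize(means.get)(label_map) + 0.02 * rng.standard_normal(label_map.shape)
    thick = image[::slice_spacing]                             # keep every k-th slice only
    return zoom(thick, (slice_spacing, 1, 1), order=1)         # naive upsampling back to the grid

labels = np.zeros((40, 32, 32), dtype=int)
labels[10:30, 8:24, 8:24] = 1                                  # a toy "structure"
print(synthesize_input(labels).shape)
```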
Horng S, Liao R, Wang X, Dalal S, Golland P, Berkowitz SJ. Deep Learning to Quantify Pulmonary Edema in Chest Radiographs. Radiol Artif Intell. 2021;3(2):e190228.
Purpose: To develop a machine learning model to classify the severity grades of pulmonary edema on chest radiographs. Materials and Methods: In this retrospective study, 369,071 chest radiographs and associated radiology reports from 64,581 patients (mean age, 51.71 years; 54.51% women) from the MIMIC-CXR chest radiograph dataset were included. This dataset was split into patients with and without congestive heart failure (CHF). Pulmonary edema severity labels from the associated radiology reports were extracted from patients with CHF as four different ordinal levels: 0, no edema; 1, vascular congestion; 2, interstitial edema; and 3, alveolar edema. Deep learning models were developed using two approaches: a semisupervised model using a variational autoencoder and a pretrained supervised learning model using a dense neural network. Receiver operating characteristic curve analysis was performed on both models. Results: The area under the receiver operating characteristic curve (AUC) for differentiating alveolar edema from no edema was 0.99 for the semisupervised model and 0.87 for the pretrained model. Performance of the algorithm was inversely related to the difficulty in categorizing milder states of pulmonary edema (shown as AUCs for semisupervised model and pretrained model, respectively): 2 versus 0, 0.88 and 0.81; 1 versus 0, 0.79 and 0.66; 3 versus 1, 0.93 and 0.82; 2 versus 1, 0.69 and 0.73; and 3 versus 2, 0.88 and 0.63. Conclusion: Deep learning models were trained on a large chest radiograph dataset and could grade the severity of pulmonary edema on chest radiographs with high performance.
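The pairwise comparisons between ordinal severity levels reported above can be computed as below; the labels and scores are made up for illustration.

```python
# Evaluate an ordinal edema grading model by pairwise AUCs (e.g., level 3 vs. level 0).
import numpy as np
from sklearn.metrics import roc_auc_score

def pairwise_auc(labels, scores, level_a, level_b):
    """AUC for separating severity level_b from level_a using a continuous score."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    mask = np.isin(labels, [level_a, level_b])
    return roc_auc_score((labels[mask] == level_b).astype(int), scores[mask])

# toy example: 0 = no edema ... 3 = alveolar edema, scores from a hypothetical model
labels = [0, 0, 1, 1, 2, 2, 3, 3]
scores = [0.05, 0.2, 0.3, 0.45, 0.5, 0.7, 0.8, 0.95]
print(pairwise_auc(labels, scores, 0, 3))   # 3 vs. 0
print(pairwise_auc(labels, scores, 1, 2))   # 2 vs. 1
```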
Chauhan G, Liao R, Wells W, Andreas J, Wang X, Berkowitz S, Horng S, Szolovits P, Golland P. Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary Edema Assessment. Med Image Comput Comput Assist Interv. 2020;12262:529-39.
We propose and demonstrate a novel machine learning algorithm that assesses pulmonary edema severity from chest radiographs. While large publicly available datasets of chest radiographs and free-text radiology reports exist, only limited numerical edema severity labels can be extracted from radiology reports. This is a significant challenge in learning such models for image classification. To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time. Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment compared to a supervised model trained on images only. We also show how the text can be used to explain the image classification produced by the joint model. To the best of our knowledge, our approach is the first to leverage free-text radiology reports for improving the image model performance in this application. Our code is available at: https://github.com/RayRuizhiLiao/joint_chestxray.
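One common way to realize joint image-text representation learning is a paired-embedding (contrastive) objective, sketched below with random embeddings; the paper's exact loss and architecture may differ.

```python
# Pull paired image/report embeddings together so the image encoder benefits from
# the text at training time, while inference can use images alone.
import torch
import torch.nn.functional as F

def paired_embedding_loss(image_emb, text_emb, temperature=0.1):
    """Contrastive loss over a batch of paired image and report embeddings."""
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# toy usage with random 32-dimensional embeddings for 8 image-report pairs
loss = paired_embedding_loss(torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```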
Wang CJ, Rost NS, Golland P. Spatial-Intensity Transform GANs for High Fidelity Medical Image-to-Image Translation. Med Image Comput Comput Assist Interv. 2020;12262:749-59.
Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to clinical quality medical images. We develop a novel parameterization of conditional generative adversarial networks that achieves high image fidelity when trained to transform MRIs conditioned on a patient's age and disease severity. The spatial-intensity transform generative adversarial network (SIT-GAN) constrains the generator to a smooth spatial transform composed with sparse intensity changes. This technique improves image quality and robustness to artifacts, and generalizes to different scanners. We demonstrate SIT-GAN on a large clinical image dataset of stroke patients, where it captures associations between ventricle expansion and aging, as well as between white matter hyperintensities and stroke severity. Additionally, SIT-GAN provides a disentangled view of the variation in shape and appearance across subjects.
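A minimal sketch of the assumed output decomposition, a smooth spatial warp composed with a sparse intensity change, using a toy 2D example; this is illustrative only and not the SIT-GAN generator.

```python
# The generator output is constrained to a spatial warp of the input plus a sparse
# additive intensity change, rather than a free-form synthetic image.
import torch
import torch.nn.functional as F

def spatial_intensity_transform(image, displacement, intensity_delta):
    """Warp a 2D image by a dense displacement field, then add a sparse intensity map."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    warped = F.grid_sample(image, base_grid + displacement, align_corners=True)
    return warped + intensity_delta

image = torch.rand(1, 1, 64, 64)
displacement = 0.01 * torch.randn(1, 64, 64, 2)   # small, smooth-ish warp (toy)
intensity_delta = torch.zeros(1, 1, 64, 64)
intensity_delta[..., 30:34, 30:34] = 0.2          # sparse change, e.g. a toy "lesion"
print(spatial_intensity_transform(image, displacement, intensity_delta).shape)
```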
Nenning K-H, Furtner J, Kiesel B, Schwartz E, Roetzer T, Fortelny N, Bock C, Grisold A, Marko M, Leutmezer F, et al. Distributed Changes of the Functional Connectome in Patients with Glioblastoma. Sci Rep. 2020;10(1):18312.
Glioblastoma might have widespread effects on neural organization and cognitive function, and even focal lesions may be associated with distributed functional alterations. However, functional changes do not necessarily follow obvious anatomical patterns and the current understanding of this interrelation is limited. In this study, we used resting-state functional magnetic resonance imaging to evaluate changes in global functional connectivity patterns in 15 patients with glioblastoma. For six patients we followed longitudinal trajectories of their functional connectome and structural tumour evolution using bi-monthly follow-up scans throughout treatment and disease progression. In all patients, unilateral tumour lesions were associated with inter-hemispherically symmetric network alterations, and functional proximity to the tumour location was more strongly linked to distributed network deterioration than anatomical distance. In the longitudinal subcohort of six patients, we observed patterns of network alterations with initial transient deterioration followed by recovery at first follow-up, and found that local network deterioration preceded structural tumour recurrence by two months. In summary, the impact of focal glioblastoma lesions on the functional connectome is global and linked to functional proximity rather than anatomical distance to tumour regions. Our findings further suggest that functional network trajectories may support early detection of tumour recurrence.
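A minimal sketch of a functional connectome computed as regional time-course correlations, with a toy per-region change index between two sessions; the deterioration measures used in the study may differ.

```python
# Whole-brain functional connectome as pairwise correlation of regional resting-state
# BOLD time courses, from which longitudinal network changes can be tracked.
import numpy as np

def functional_connectome(timeseries):
    """timeseries: (n_timepoints, n_regions) -> (n_regions, n_regions) correlation matrix."""
    return np.corrcoef(timeseries, rowvar=False)

rng = np.random.default_rng(0)
baseline = functional_connectome(rng.normal(size=(200, 90)))    # toy session 1
follow_up = functional_connectome(rng.normal(size=(200, 90)))   # toy session 2
# a simple per-region change index: difference in mean connectivity strength
delta = np.abs(follow_up).mean(axis=0) - np.abs(baseline).mean(axis=0)
print(delta.shape, delta[:3])
```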
Abaci Turk E, Abulnaga MS, Luo J, Stout JN, Feldman HA, Turk A, Gagoski B, Wald LL, Adalsteinsson E, Roberts DJ, et al. Placental MRI: Effect of Maternal Position and Uterine Contractions on Placental BOLD MRI Measurements. Placenta. 2020;95:69-77.
INTRODUCTION: Before using blood-oxygen-level-dependent magnetic resonance imaging (BOLD MRI) during maternal hyperoxia as a method to detect individual placental dysfunction, it is necessary to understand spatiotemporal variations that represent normal placental function. We investigated the effect of maternal position and Braxton-Hicks contractions on estimates obtained from BOLD MRI of the placenta during maternal hyperoxia. METHODS: For 24 uncomplicated singleton pregnancies (gestational age 27-36 weeks), two separate BOLD MRI datasets were acquired, one in the supine and one in the left lateral maternal position. The maternal oxygenation was adjusted as 5 min of room air (21% O2), followed by 5 min of 100% FiO2. After datasets were corrected for signal non-uniformities and motion, global and regional BOLD signal changes in R2* and voxel-wise Time-To-Plateau (TTP) in the placenta were measured. The overall placental and uterine volume changes were determined across time to detect contractions. RESULTS: In mothers without contractions, increases in global placental R2* in the supine position were larger compared to the left lateral position with maternal hyperoxia. Maternal position did not alter global TTP but did result in regional changes in TTP. 57% of the subjects had Braxton-Hicks contractions and 58% of these had global placental R2* decreases during the contraction. CONCLUSION: Both maternal position and Braxton-Hicks contractions significantly affect global and regional changes in placental R2* and regional TTP. This suggests that both factors must be taken into account in analyses when comparing placental BOLD signals over time within and between individuals.
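A simplified illustration of a voxel-wise time-to-plateau estimate, assuming TTP is defined as the time to reach 90% of the plateau increase after the oxygen switch; the paper's estimator may differ.

```python
# Time-to-plateau (TTP) taken as the first time point after the switch to 100% O2
# at which a voxel's BOLD signal reaches 90% of its eventual plateau increase.
import numpy as np

def time_to_plateau(signal, times, switch_time, plateau_fraction=0.9):
    baseline = signal[times < switch_time].mean()
    plateau = signal[times >= times[-1] - 60].mean()          # last minute taken as plateau
    target = baseline + plateau_fraction * (plateau - baseline)
    post = np.where((times >= switch_time) & (signal >= target))[0]
    return times[post[0]] - switch_time if post.size else np.nan

# toy voxel: 10 min acquisition, oxygen switched at 300 s, exponential rise afterwards
times = np.arange(0, 600, 6.0)
signal = np.where(times < 300, 1.0, 1.0 + 0.05 * (1 - np.exp(-(times - 300) / 60.0)))
print(time_to_plateau(signal, times, switch_time=300))
```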
Frid P, Drake M, Giese A-K, Wasselius J, Schirmer MD, Donahue KL, Cloonan L, Irie R, McIntosh EC, Golland P. Detailed Phenotyping of Posterior vs. Anterior Circulation Ischemic Stroke: A Multi-center MRI Study. J Neurol. 2020;267(3):649-58.
OBJECTIVE: Posterior circulation ischemic stroke (PCiS) constitutes 20-30% of ischemic stroke cases. Detailed information about differences between PCiS and anterior circulation ischemic stroke (ACiS) remains scarce. Such information might guide clinical decision making and prevention strategies. We studied risk factors and ischemic stroke subtypes in PCiS vs. ACiS and lesion location on magnetic resonance imaging (MRI) in PCiS. METHODS: Out of 3,301 MRIs from 12 sites in the National Institute of Neurological Disorders and Stroke (NINDS) Stroke Genetics Network (SiGN), we included 2,381 cases with acute DWI lesions. The definition of ACiS or PCiS was based on lesion location. We compared the groups using Chi-squared and logistic regression. RESULTS: PCiS occurred in 718 (30%) patients and ACiS in 1,663 (70%). Diabetes and male sex were more common in PCiS vs. ACiS (diabetes 27% vs. 23%, p < 0.05; male sex 68% vs. 58%, p < 0.001). Both were independently associated with PCiS (diabetes, OR = 1.29; 95% CI 1.04-1.61; male sex, OR = 1.46; 95% CI 1.21-1.78). ACiS more commonly had large artery atherosclerosis (25% vs. 20%, p < 0.01) and cardioembolic mechanisms (17% vs. 11%, p < 0.001) compared to PCiS. Small artery occlusion was more common in PCiS vs. ACiS (20% vs. 14%, p < 0.001). Small artery occlusion accounted for 47% of solitary brainstem infarctions. CONCLUSION: Ischemic stroke subtypes differ between the two phenotypes. Diabetes and male sex have a stronger association with PCiS than ACiS. Definitive MRI-based PCiS diagnosis aids etiological investigation and contributes additional insights into specific risk factors and mechanisms of injury in PCiS.
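The reported adjusted odds ratios come from a logistic regression of PCiS status on covariates; a minimal sketch with simulated data (not the study data) follows.

```python
# Estimate adjusted odds ratios of posterior-circulation stroke (PCiS) for diabetes
# and male sex with a logistic regression on simulated (toy) data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2381
diabetes = rng.binomial(1, 0.24, n)
male = rng.binomial(1, 0.61, n)
# simulate PCiS status with modest positive effects of both covariates (toy values)
logit = -1.1 + 0.25 * diabetes + 0.38 * male
pcis = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# large C approximates an unpenalized fit; odds ratios are exp(coefficients)
model = LogisticRegression(C=1e6, max_iter=1000).fit(np.column_stack([diabetes, male]), pcis)
print("OR diabetes:", np.exp(model.coef_[0][0]))
print("OR male sex:", np.exp(model.coef_[0][1]))
```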

Wachinger C, Toews M, Langs G, Wells W, Golland P. Keypoint Transfer for Fast Whole-Body Segmentation. IEEE Trans Med Imaging. 2020;39(2):273-82.
We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require the registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with highly variable field-of-view.
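A minimal sketch of the matching and voting steps on toy descriptors; the actual method operates on 3D image keypoints and adds a probabilistic label-map transfer step that is not shown here.

```python
# Match test-image keypoints to training keypoints by descriptor distance, then
# label each test keypoint by voting over the organ labels of its matches.
import numpy as np
from collections import Counter

def label_keypoints(test_desc, train_desc, train_labels, k=5):
    labels = []
    for d in test_desc:
        dists = np.linalg.norm(train_desc - d, axis=1)
        nearest = np.argsort(dists)[:k]
        votes = Counter(train_labels[i] for i in nearest)
        labels.append(votes.most_common(1)[0][0])       # majority organ label
    return labels

rng = np.random.default_rng(1)
train_desc = rng.normal(size=(100, 16))                  # toy descriptors
train_labels = rng.choice(["liver", "spleen", "kidney"], size=100)
test_desc = rng.normal(size=(3, 16))
print(label_keypoints(test_desc, train_desc, train_labels, k=7))
```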
Abulnaga MS, Abaci Turk E, Bessmeltsev M, Grant EP, Solomon J, Golland P. Placental Flattening via Volumetric Parameterization. Med Image Comput Comput Assist Interv. 2019;11767:39-47.
We present a volumetric mesh-based algorithm for flattening the placenta to a canonical template to enable effective visualization of local anatomy and function. Monitoring placental function promises to support pregnancy assessment and to improve care outcomes. We aim to alleviate visualization and interpretation challenges presented by the shape of the placenta when it is attached to the curved uterine wall. To do so, we flatten the volumetric mesh that captures placental shape to resemble the well-studied ex vivo shape. We formulate our method as a map from the shape to a flattened template that minimizes the symmetric Dirichlet energy to control distortion throughout the volume. Local injectivity is enforced via constrained line search during gradient descent. We evaluate the proposed method on 28 placenta shapes extracted from MRI images in a clinical study of placental function. We achieve sub-voxel accuracy in mapping the boundary of the placenta to the template while successfully controlling distortion throughout the volume. We illustrate how the resulting mapping of the placenta enhances visualization of placental anatomy and function. Our implementation is freely available at https://github.com/mabulnaga/placenta-flattening.
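A minimal sketch of a backtracking line search that preserves local injectivity by rejecting steps that flip any tetrahedron; the step schedule and Jacobian computation below are illustrative assumptions, not the released implementation.

```python
# Backtracking line search: shrink the gradient-descent step until every tetrahedron
# of the mapped mesh keeps positive orientation, so the map stays locally injective.
import numpy as np

def injective_step(vertices, step_direction, element_jacobian, step=1.0, shrink=0.5, max_tries=30):
    """Return the largest tried step along step_direction keeping all det(J) > 0."""
    for _ in range(max_tries):
        candidate = vertices + step * step_direction
        if all(np.linalg.det(J) > 0 for J in element_jacobian(candidate)):
            return candidate, step
        step *= shrink
    return vertices, 0.0   # no admissible step found; keep the current iterate

# toy "mesh": one tetrahedron; its Jacobian is the edge matrix relative to vertex 0
def element_jacobian(verts):
    return [np.column_stack([verts[1] - verts[0], verts[2] - verts[0], verts[3] - verts[0]])]

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
direction = np.array([[0, 0, 0], [-2, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)  # full step would flip the tet
new_verts, taken = injective_step(verts, direction, element_jacobian)
print("accepted step:", taken)
```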
Marinescu RV, Lorenzi M, Blumberg SB, Young AL, Planell-Morell P, Oxtoby NP, Eshaghi A, Yong KX, Crutch SJ, Golland P, et al. Disease Knowledge Transfer across Neurodegenerative Diseases. Med Image Comput Comput Assist Interv. 2019;11765:860-8.
We introduce Disease Knowledge Transfer (DKT), a novel technique for transferring biomarker information between related neurodegenerative diseases. DKT infers robust multimodal biomarker trajectories in rare neurodegenerative diseases even when only limited, unimodal data is available, by transferring information from larger multimodal datasets from common neurodegenerative diseases. DKT is a joint-disease generative model of biomarker progressions, which exploits biomarker relationships that are shared across diseases. Our proposed method allows, for the first time, the estimation of plausible biomarker trajectories in Posterior Cortical Atrophy (PCA), a rare neurodegenerative disease where only unimodal MRI data is available. For this we train DKT on a combined dataset containing subjects with two distinct diseases and different amounts of data available: 1) a larger, multimodal typical Alzheimer's disease (tAD) dataset from the TADPOLE Challenge, and 2) a smaller unimodal PCA dataset from the Dementia Research Centre (DRC), for which only a limited number of MRI scans are available. Although validation is challenging due to lack of data in PCA, we validate DKT on synthetic data and two patient datasets (TADPOLE and PCA cohorts), showing it can estimate the ground truth parameters in the simulation and predict unseen biomarkers on the two patient datasets. While we demonstrated DKT on Alzheimer's variants, we note DKT is generalisable to other forms of related neurodegenerative diseases. Source code for DKT is available online: https://github.com/mrazvan22/dkt.
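A much-simplified stand-in for the transfer idea: fit a shared biomarker trajectory on the data-rich disease and reuse it for the data-poor one. DKT's actual joint-disease generative model is considerably richer; the sigmoid form and stage values below are toy assumptions.

```python
# Fit a shared sigmoidal biomarker trajectory on the data-rich disease, then reuse it
# to predict that biomarker for subjects of the data-poor disease from their stage alone.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(stage, a, b, c, d):
    return a / (1 + np.exp(-b * (stage - c))) + d

rng = np.random.default_rng(0)
# data-rich disease (e.g. typical AD): stage and a multimodal biomarker are both observed
stage_ad = rng.uniform(0, 10, 200)
biomarker_ad = sigmoid(stage_ad, 2.0, 1.2, 5.0, 1.0) + rng.normal(0, 0.1, 200)
params, _ = curve_fit(sigmoid, stage_ad, biomarker_ad, p0=[1, 1, 5, 0])

# data-poor disease (e.g. PCA): only stages are available; the shared trajectory
# transfers the biomarker estimate
stage_pca = np.array([2.0, 5.0, 8.0])
print(sigmoid(stage_pca, *params))
```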
Marinescu RV, Oxtoby NP, Young AL, Bron EE, Toga AW, Weiner MW, Barkhof F, Fox NC, Golland P, Klein S, et al. TADPOLE Challenge: Accurate Alzheimer's Disease Prediction Through Crowdsourced Forecasting of Future Data. Predict Intell Med. 2019;11843:1-10.
The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge compares the performance of algorithms at predicting the future evolution of individuals at risk of Alzheimer's disease. TADPOLE Challenge participants train their models and algorithms on historical data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. Participants are then required to make forecasts of three key outcomes for ADNI-3 rollover participants: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog 13), and total volume of the ventricles, which are then compared with future measurements. Strong points of the challenge are that the test data did not exist at the time of forecasting (it was acquired afterwards), and that it focuses on the challenging problem of cohort selection for clinical trials by identifying fast progressors. The submission phase of TADPOLE was open until 15 November 2017; since then, data has been acquired until April 2019 from 219 subjects with 223 clinical visits and 150 Magnetic Resonance Imaging (MRI) scans, which was used for the evaluation of the participants' predictions. Thirty-three teams participated with a total of 92 submissions. No single submission was best at predicting all three outcomes. For diagnosis prediction, the best forecast (team Frog), which was based on gradient boosting, obtained a multiclass area under the receiver-operating curve (MAUC) of 0.931, while for ventricle prediction the best forecast, which was based on disease progression modelling and spline regression, obtained a mean absolute error of 0.41% of total intracranial volume (ICV). For ADAS-Cog 13, no forecast was considerably better than the benchmark mixed effects model provided to participants before the submission deadline. Further analysis can help understand which input features and algorithms are most suitable for Alzheimer's disease prediction and for aiding patient stratification in clinical trials. The submission system remains open via the website: https://tadpole.grand-challenge.org/.
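The two headline metrics can be computed as sketched below with made-up forecasts; the multiclass AUC here uses the pairwise-averaging definition available in scikit-learn, which may differ slightly from the challenge's official implementation.

```python
# Multiclass AUC over diagnosis probabilities, and ventricle-volume error expressed
# as a percentage of intracranial volume (ICV), on toy forecasts.
import numpy as np
from sklearn.metrics import roc_auc_score

true_dx = np.array([0, 0, 1, 2, 1, 2, 0, 2])                 # CN / MCI / AD (toy)
dx_probs = np.array([[.8, .1, .1], [.6, .3, .1], [.2, .6, .2], [.1, .2, .7],
                     [.3, .5, .2], [.1, .3, .6], [.7, .2, .1], [.2, .2, .6]])
mauc = roc_auc_score(true_dx, dx_probs, multi_class="ovo", average="macro")

true_vent = np.array([30e3, 45e3])       # mm^3, toy values
pred_vent = np.array([31e3, 44e3])
icv = np.array([1.4e6, 1.5e6])
mae_pct_icv = np.mean(np.abs(pred_vent - true_vent) / icv) * 100

print(f"MAUC={mauc:.3f}, ventricle MAE={mae_pct_icv:.2f}% of ICV")
```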
Egger B, Schirmer MD, Dubost F, Nardin MJ, Rost NS, Golland P. Patient-specific Conditional Joint Models of Shape, Image Features and Clinical Indicators. Med Image Comput Comput Assist Interv. 2019;11767:93-101.
We propose and demonstrate a joint model of anatomical shapes, image features and clinical indicators for statistical shape modeling and medical image analysis. The key idea is to employ a copula model to separate the joint dependency structure from the marginal distributions of variables of interest. This separation provides flexibility on the assumptions made during the modeling process. The proposed method can handle binary, discrete, ordinal and continuous variables. We demonstrate a simple and efficient way to include binary, discrete and ordinal variables into the modeling. We build Bayesian conditional models, based on Gaussian processes capturing the dependency structure, that condition on partially observed clinical indicators, image features or shape. We apply the proposed method on a stroke dataset to jointly model the shape of the lateral ventricles, the spatial distribution of the white matter hyperintensity associated with periventricular white matter disease, and clinical indicators. The proposed method yields interpretable joint models for data exploration and patient-specific statistical shape models for medical image analysis.
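A minimal sketch of the copula idea, separating marginals from dependency with a plain Gaussian copula on two toy variables; the paper's model (Gaussian processes, mixed variable types) is more general.

```python
# Map each variable to latent normal scores via its empirical marginal, estimate the
# latent dependency (correlation), and condition on a partially observed variable.
import numpy as np
from scipy.stats import norm, rankdata

def to_normal_scores(x):
    """Empirical-CDF transform to standard normal scores (separates marginals from dependency)."""
    u = rankdata(x) / (len(x) + 1)
    return norm.ppf(u)

rng = np.random.default_rng(0)
n = 500
shape_feature = rng.gamma(2.0, 1.0, n)                          # toy surrogate for a shape measure
clinical_score = 0.6 * shape_feature + rng.normal(0, 1.0, n)    # correlated clinical indicator (toy)

z = np.column_stack([to_normal_scores(shape_feature), to_normal_scores(clinical_score)])
corr = np.corrcoef(z, rowvar=False)

# conditional latent distribution of the clinical score given an observed shape feature
z_shape_obs = norm.ppf(0.9)                                     # subject at the 90th percentile
cond_mean = corr[1, 0] * z_shape_obs
cond_sd = np.sqrt(1 - corr[1, 0] ** 2)
print(f"latent conditional mean {cond_mean:.2f} +/- {cond_sd:.2f}")
```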
