We propose a unified Bayesian framework for detecting genetic variants associated with a disease while exploiting image-based features as an intermediate phenotype. Traditionally, imaging genetics methods comprise two separate steps. First, image features are selected based on their relevance to the disease phenotype. Second, a set of genetic variants is identified to explain the selected features. In contrast, our method performs these tasks simultaneously in order to ultimately assign probabilistic measures of relevance to both genetic and imaging markers. We derive an efficient approximate inference algorithm that handles the high dimensionality of imaging genetics data. We evaluate the algorithm on synthetic data and show that it outperforms traditional models. We also illustrate the application of the method to the ADNI dataset.
We present an analysis framework for large studies of multimodal, clinical-quality brain image collections. Processing and analysis of such datasets is challenging due to low resolution, poor contrast, misaligned images, and restricted fields of view. We adapt existing registration and segmentation methods and build a computational pipeline for spatial normalization and feature extraction. The resulting aligned dataset enables clinically meaningful analysis of the spatial distributions of relevant anatomical features and of their evolution with age and disease progression. We demonstrate the approach on a neuroimaging study of stroke with more than 800 patients. We show that by combining data from several modalities, we can automatically segment important biomarkers such as white matter hyperintensity and characterize pathology evolution in this heterogeneous cohort. Specifically, we examine two sub-populations with different dynamics of white matter hyperintensity changes as a function of patients' age. The pipeline and analysis code are available at http://groups.csail.mit.edu/vision/medical-vision/stroke/.
Manifold learning has been successfully applied to a variety of medical imaging problems. Its use in real-time applications requires fast projection onto the low-dimensional space. To this end, out-of-sample extensions are applied by constructing an interpolation function that maps from the input space to the low-dimensional manifold. Commonly used approaches such as the Nyström extension and kernel ridge regression require using all training points. We propose an interpolation function that depends on only a small subset of the input training data. Consequently, in the testing phase each new point needs to be compared against only a small number of training points in order to project it onto the low-dimensional space. We interpret our method as an out-of-sample extension that approximates kernel ridge regression. Our method involves solving a simple convex optimization problem and has the attractive property of guaranteeing an upper bound on the approximation error, which is crucial for medical applications. Tuning this error bound controls the sparsity of the resulting interpolation function. We illustrate our method in two clinical applications that require fast mapping of input images onto a low-dimensional space.
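To make the baseline concrete: kernel ridge regression interpolates the embedding through dual coefficients over all training points, so projecting a new point costs one kernel evaluation per training sample. The minimal sketch below illustrates the cost structure with a naive anchor-subset refit (not the paper's convex program with its error bound); the RBF kernel, the synthetic 10-D features, and the "embedding" defined as the first two coordinates are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    # pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fit_krr(X, Y, lam=1e-3, gamma=0.05):
    # dual coefficients of kernel ridge regression: (K + lam I)^{-1} Y
    return np.linalg.solve(rbf_kernel(X, X, gamma) + lam * np.eye(len(X)), Y)

def project(Xnew, X, alpha, gamma=0.05):
    # out-of-sample mapping: k(x_new, X) @ alpha -- cost grows with len(X)
    return rbf_kernel(Xnew, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # stand-in for training features
Y = X[:, :2]                         # stand-in for a 2-D embedding
alpha = fit_krr(X, Y)                # full KRR: all 200 points at test time

idx = rng.choice(200, size=40, replace=False)         # small anchor subset
alpha_s = fit_krr(X[idx], project(X[idx], X, alpha))  # refit on anchors only
Xt = rng.normal(size=(5, 10))
err = np.abs(project(Xt, X[idx], alpha_s) - project(Xt, X, alpha)).max()
```

With the subset interpolant, each test projection touches 40 kernel evaluations instead of 200; the paper's contribution is choosing that sparse support via convex optimization so that `err` is provably bounded, which the greedy refit above does not guarantee.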
We present a method to detect epileptic regions based on functional connectivity differences between individual epilepsy patients and a healthy population. Our model assumes that the global functional characteristics of these differences are shared across patients, but it allows for the epileptic regions to vary between individuals. We evaluate the detection performance against intracranial EEG observations and compare our approach with two baseline methods that use standard statistics. The baseline techniques are sensitive to the choice of thresholds, whereas our algorithm automatically estimates the appropriate model parameters and compares favorably with the best baseline results. This suggests the promise of our approach for pre-surgical planning in epilepsy.
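The two baselines in this setting typically threshold standard statistics on edgewise connectivity differences. A minimal sketch of that idea, assuming Pearson correlation connectivity, synthetic time courses, and a simple z-score aggregation (the paper's model is a learned generative model, not this heuristic):

```python
import numpy as np

def connectivity(ts):
    # ts: (time, regions) fMRI time courses -> region-by-region Pearson matrix
    return np.corrcoef(ts, rowvar=False)

rng = np.random.default_rng(1)
controls = [connectivity(rng.normal(size=(120, 8))) for _ in range(20)]
patient = connectivity(rng.normal(size=(120, 8)))

C = np.stack(controls)
mu, sd = C.mean(0), C.std(0) + 1e-8   # healthy-population edge statistics
z = (patient - mu) / sd               # edgewise deviation from the population
region_score = np.abs(z).mean(1)      # aggregate edges into per-region scores
```

Ranking `region_score` and thresholding it is exactly the kind of cutoff-sensitive procedure the abstract refers to; the proposed model instead estimates the relevant parameters from the data.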
We present a novel method for inferring tissue labels in atlas-based image segmentation using Gaussian process regression. Atlas-based segmentation results in probabilistic label maps that serve as input to our method. We introduce a contour-driven prior distribution over label maps to incorporate image features of the input scan into the label inference problem. The mean function of the Gaussian process posterior distribution yields the MAP estimate of the label map and is used in the subsequent voting. We demonstrate improved segmentation accuracy when our approach is combined with two different patch-based segmentation techniques. We focus on the segmentation of parotid glands in CT scans of patients with head and neck cancer, which is important for radiation therapy planning.
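The key computation is the Gaussian process posterior mean, m(x*) = k(x*, X)(K + σ²I)⁻¹y, evaluated on the probabilistic label values. A minimal 1-D sketch, assuming a squared-exponential covariance and a synthetic soft-label profile across a boundary (the contour-driven prior of the actual method is not modeled here):

```python
import numpy as np

def gp_posterior_mean(Xtr, ytr, Xte, ell=1.0, noise=0.1):
    # Squared-exponential covariance; GP regression posterior mean:
    #   m(x*) = k(x*, X) (K + noise^2 I)^{-1} y
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)
    K = k(Xtr, Xtr) + noise**2 * np.eye(len(Xtr))
    return k(Xte, Xtr) @ np.linalg.solve(K, ytr)

# noisy probabilistic labels along a 1-D profile through a tissue boundary
x = np.linspace(0, 10, 50)
p = 1 / (1 + np.exp(-(x - 5)))                       # true soft-label profile
p_noisy = p + 0.1 * np.random.default_rng(2).normal(size=50)
p_smooth = gp_posterior_mean(x, p_noisy, x)          # MAP label-map estimate
```

In the actual method the posterior mean plays this role over the full label map and then feeds into patch-based label voting.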
We propose a novel approach to identify the foci of a neurological disorder based on anatomical and functional connectivity information. Specifically, we formulate a generative model that characterizes the network of abnormal functional connectivity emanating from the affected foci. This allows us to aggregate pairwise connectivity changes into a region-based representation of the disease. We employ the variational expectation-maximization algorithm to fit the model and subsequently identify both the afflicted regions and the differences in connectivity induced by the disorder. We demonstrate our method on a population study of schizophrenia.
Open-source software provides an economic benefit by reducing duplicated development effort, and advances scientific knowledge by fostering a culture of reproducible experimentation. This paper describes recent advances in the Plastimatch open-source software suite, which implements a broad set of useful tools for research and practice in radiotherapy and medical imaging. We highlight recent additions, including 2D-3D registration, GPU-accelerated mutual information, analytic regularization of B-spline registration, automatic 3D feature detection and feature matching, and radiotherapy plan evaluation tools.
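Mutual information, the similarity metric mentioned above, measures how predictable one image's intensities are from the other's via their joint histogram, which is what makes it robust for multimodal registration. A minimal CPU sketch (not Plastimatch's GPU implementation) of the standard plug-in estimator:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint intensity histogram -> MI = sum p(x,y) log(p(x,y) / (p(x) p(y)))
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()
    px, py = pxy.sum(1), pxy.sum(0)       # marginal intensity distributions
    nz = pxy > 0                          # restrict to occupied histogram bins
    return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

rng = np.random.default_rng(3)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # aligned: high MI
mi_rand = mutual_information(img, rng.random((64, 64)))   # unrelated: near 0
```

A registration optimizer moves one image until this quantity is maximized; the GPU version accelerates the joint-histogram accumulation, which dominates the cost.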
Standard image-based segmentation approaches perform poorly when there is little or no contrast along the boundaries of different regions. In such cases, segmentation is largely performed manually, using prior knowledge of the shape and relative location of the underlying structures combined with partially discernible boundaries. We present an automated approach guided by covariant shape deformations of neighboring structures, which serve as an additional source of prior information. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. Structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori estimation problem. We demonstrate the approach on 20 brain magnetic resonance images, showing superior performance, particularly in cases where purely image-based methods fail.
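At the core of such EM formulations is the alternation between soft label assignments (E-step) and maximum-likelihood parameter updates (M-step). A minimal sketch of that core for a 1-D Gaussian intensity mixture, assuming synthetic bimodal "intensities" and omitting the shape prior and inhomogeneity field of the full method:

```python
import numpy as np

def em_gmm(x, K=2, iters=50):
    # EM for a 1-D Gaussian mixture: the intensity-model core of
    # EM-based tissue labeling (no spatial/shape prior here).
    mu = np.quantile(x, np.linspace(0.25, 0.75, K))   # spread initial means
    var = np.full(K, x.var())
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each sample
        ll = (-0.5 * (x[:, None] - mu) ** 2 / var
              - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        r = np.exp(ll - ll.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        n = r.sum(0)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n + 1e-6
        w = n / len(x)
    return mu, var, w, r

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(4.0, 0.3, 500)])
mu, var, w, r = em_gmm(x)   # mu should recover the two intensity modes
```

The full method augments the E-step with the atlas-derived shape term and adds the inhomogeneity field to the M-step, but the alternating structure is the same.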