This paper reports on a new computational methodology, inter-slice correspondence (ISC), for robustly aligning sets of 2D ultrasound (US) slices during image-guided medical procedures. Correspondences are derived from distinctive, local scale-invariant features, which are used in one-to-many matching of US slices in near real time despite out-of-plane rotation, global in-plane similarity transforms, occlusion, missing tissue, US plane mirroring, and changes in US probe depth settings. Experiments demonstrate that ISC can align manually acquired US slices without probe tracking information in the context of image-guided neurosurgery, with an accuracy of 1.3 mm. A novel reconstruction-without-calibration application based on ISC is proposed, in which 3D US reconstruction results are very similar to those obtained via traditional phantom-based calibration.
The average diffusion propagator (ADP) obtained from diffusion MRI (dMRI) data encapsulates important structural properties of the underlying tissue. Measures derived from the ADP can potentially be used as markers of tissue integrity in characterizing several mental disorders. Thus, accurate estimation of the ADP is imperative for its use in neuroimaging studies. In this work, we propose a simple method for estimating the ADP by representing the acquired diffusion signal in the entire q-space using radial basis functions (RBFs). We demonstrate our technique using two different RBFs (generalized inverse multiquadric and Gaussian) and derive analytical expressions for the corresponding ADPs. We also derive expressions for computing the solid-angle orientation distribution function (ODF) for each of the RBFs. The weights of the RBFs are estimated by enforcing a positivity constraint on the estimated ADP or ODF. Finally, we validate our method on data obtained from a physical phantom with a known fiber crossing of 45 degrees and also show a comparison with the solid spherical harmonics method of . We also demonstrate our method on in-vivo human brain data.
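As a hedged illustration of the general idea (not the paper's full q-space formulation with positivity constraints and analytical ADP expressions), the sketch below fits Gaussian RBF weights to a toy 1D signal by regularized least squares; the signal model, sample locations, and kernel width are all assumptions:

```python
import numpy as np

def gaussian_rbf_matrix(q_eval, q_centers, sigma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-|q_i - c_j|^2 / (2 sigma^2))."""
    d2 = (q_eval[:, None] - q_centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf_weights(q, signal, sigma, ridge=1e-8):
    """Least-squares RBF weights, with a small ridge term for numerical stability."""
    Phi = gaussian_rbf_matrix(q, q, sigma)
    A = Phi.T @ Phi + ridge * np.eye(len(q))
    return np.linalg.solve(A, Phi.T @ signal)

# Toy 1D "q-space" signal: a monoexponential decay sampled at 20 q-values.
q = np.linspace(0.0, 1.0, 20)
signal = np.exp(-3.0 * q ** 2)

w = fit_rbf_weights(q, signal, sigma=0.15)
recon = gaussian_rbf_matrix(q, q, sigma=0.15) @ w
print(np.max(np.abs(recon - signal)) < 1e-3)
```

Because the basis is analytic, quantities derived from the fitted representation (such as the ADP in the paper) can then be written in closed form in terms of the weights.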
Diffusion magnetic resonance imaging (dMRI) is an important tool that allows non-invasive investigation of the neural architecture of the brain. Advanced dMRI protocols typically require a large number of measurements to accurately trace fiber bundles and estimate diffusion properties (such as FA). However, the acquisition time of these sequences is prohibitively long for pediatric patients as well as patients with certain types of brain disorders (such as dementia). Thus, fast echo-planar imaging (EPI) acquisition sequences were proposed by the authors in [6, 16], which acquire multiple slices simultaneously to reduce scan time. The scan time in such cases drops in proportion to the number of simultaneous slice acquisitions (which we denote by R). While preliminary results in [6, 16] showed good reproducibility, the effect of simultaneous acquisitions on long-range fiber connectivity and on diffusion measures such as FA is not known. In this work, we use multi-tensor based fiber connectivity to compare data acquired on two subjects with different acceleration factors (R = 1, 2, 3). We investigate and report the reproducibility of fiber bundles and diffusion measures between these scans on two subjects with different spatial resolutions, which is quite useful when designing neuroimaging studies.
OBJECTIVE: EEG source localization is demonstrated in three cases of acute traumatic brain injury (TBI) with progressive lesion loads, using anatomically faithful head models that account for pathology. METHODS: Multimodal magnetic resonance imaging (MRI) volumes were used to generate head models via the finite element method (FEM). A total of 25 tissue types, including 6 types accounting for pathology, were included. To determine the effects of TBI upon source localization accuracy, a minimum-norm operator was used to perform inverse localization and to quantify its accuracy. RESULTS: The importance of using a more comprehensive number of tissue types is confirmed both in health and in TBI. Pathology omission is found to cause substantial inaccuracies in EEG forward matrix calculations, with lead field sensitivity being underestimated by as much as ≈ 200% in (peri-)contusional regions when TBI-related changes are ignored. Failing to account for such conductivity changes is found to cause substantial localization errors of up to 35 mm. CONCLUSIONS: Changes in head conductivity profiles should be accounted for when performing EEG modeling in acute TBI. SIGNIFICANCE: Given the challenges of inverse localization in TBI, this framework can benefit neurotrauma patients by providing useful insights into pathophysiology.
This paper presents a new distance for measuring shape dissimilarity between objects. Recent publications introduced the use of eigenvalues of the Laplace operator as compact shape descriptors. Here, we revisit these eigenvalues to define a proper distance, called Weighted Spectral Distance (WESD), for quantifying shape dissimilarity. The definition of WESD is derived by analyzing the heat trace. This analysis gives the proposed distance an intuitive meaning and mathematically links it to the intrinsic geometry of objects. We analyze the resulting distance definition and present and prove its important theoretical properties, including: 1) WESD is defined over the entire sequence of eigenvalues yet is guaranteed to converge; 2) it is a pseudometric; 3) it is accurately approximated with a finite number of eigenvalues; and 4) it can be mapped to the [0,1) interval. Finally, experiments conducted on synthetic and real objects are presented. These experiments highlight the practical benefits of WESD for applications in vision and medical image analysis.
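The exact WESD weighting is derived from the heat trace in the paper; the toy sketch below only illustrates the underlying idea of a convergent, weighted comparison of Laplace eigenvalue sequences. It uses 1D intervals, whose Dirichlet eigenvalues are known in closed form, and an illustrative weighting that is an assumption, not the paper's definition:

```python
import numpy as np

def interval_eigenvalues(length, n_max):
    """Dirichlet Laplace eigenvalues of a 1D interval: lam_n = (n*pi/L)^2."""
    n = np.arange(1, n_max + 1)
    return (n * np.pi / length) ** 2

def spectral_distance(lam_a, lam_b, p=2.0):
    """Toy weighted spectral distance: eigenvalue differences are down-weighted
    by the product lam_a * lam_b, so the terms decay like 1/n^4 (by Weyl's law)
    and the series converges even over the full eigenvalue sequence."""
    terms = (np.abs(lam_a - lam_b) / (lam_a * lam_b)) ** p
    return np.sum(terms) ** (1.0 / p)

la = interval_eigenvalues(1.0, 200)
lb = interval_eigenvalues(1.2, 200)
lc = interval_eigenvalues(2.0, 200)
# The interval of length 1.2 is "closer" in shape to length 1.0 than length 2.0.
print(spectral_distance(la, lb) < spectral_distance(la, lc))
```

The convergence-through-weighting trick is what allows a distance over the full (infinite) spectrum to be accurately approximated by a finite truncation.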
PURPOSE: The purpose of this work is to investigate the performance of an interactive image segmentation method for radiotherapy contouring on computed tomography (CT) images. Manual segmentation is a time-consuming task that is essential for treatment. Due to the low contrast of target structures, their similarity to surrounding tissue, and the required precision of the final segmentation result, automatic methods do not exhibit robust performance. Furthermore, when an automatic segmentation algorithm produces errors at the structure boundary, they are tedious for a human user to correct. For this experiment, it is hypothesized that an interactive algorithm can attain ground-truth results in a fraction of the time needed for manual segmentation. METHODS: The proposed method is an interactive segmentation that tightly couples a human "expert user" with a computer vision framework called "active contours" to create a closed-loop control system. As a result, the strengths of the automatic method (i.e., quickly delineating complicated target boundaries) can be leveraged by the user, who guides the algorithm based on his or her expert knowledge throughout the process. Experimental segmentations were performed both with and without the control system feedback, and the accuracy of the resulting labels was compared, along with the time required to create them. RESULTS: Four structures were evaluated: left/right eyeball, brain stem, and mandible. Tests show that virtually identical segmentations are produced with and without control system feedback. However, the time required to complete the task is significantly less than what is needed for fully manual contouring. CONCLUSION: Interactive segmentation using control system feedback is shown to reduce the time and effort needed to segment targets in CT volumes of the head and neck region.
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI).
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g., MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency, and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD, and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology.
We propose a unified Bayesian framework for detecting genetic variants associated with a disease while exploiting image-based features as an intermediate phenotype. Traditionally, imaging genetics methods comprise two separate steps. First, image features are selected based on their relevance to the disease phenotype. Second, a set of genetic variants are identified to explain the selected features. In contrast, our method performs these tasks simultaneously to ultimately assign probabilistic measures of relevance to both genetic and imaging markers. We derive an efficient approximate inference algorithm that handles high dimensionality of imaging genetic data. We evaluate the algorithm on synthetic data and show that it outperforms traditional models. We also illustrate the application of the method on ADNI data.
The clustering of fibers into bundles is an important task in studying the structure and function of white matter. Existing technology mostly relies on geometrical features, such as the shape of fibers, and thus provides only very limited information about the neuroanatomical function of the brain. We address this issue by proposing a multinomial representation of fibers that encodes their connectivity to gray matter regions. We then simplify the clustering task by first deriving a compact encoding of our representation via the logit transformation. Furthermore, we define a distance between fibers that is in theory invariant to parcellation biases and is equivalent to a family of Riemannian metrics on the simplex of multinomial probabilities. We apply our method to longitudinal scans of two healthy subjects, showing high reproducibility of the resulting fiber bundles without needing to register the corresponding scans to a common coordinate system. We confirm these qualitative findings via a simple statistical analysis of the fiber bundles.
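A minimal sketch of the logit encoding step, assuming toy connectivity profiles over four gray-matter regions; a plain Euclidean distance on the encoded vectors stands in for the paper's family of Riemannian metrics:

```python
import numpy as np

def logit_encode(p, eps=1e-6):
    """Map a multinomial probability vector (a fiber's connectivity to K
    gray-matter regions) to an unconstrained vector via the logit transform."""
    p = np.clip(p, eps, 1.0)
    p = p / p.sum()
    return np.log(p / (1.0 - p))

def fiber_distance(p, q):
    """Euclidean distance between logit-encoded connectivity profiles
    (a hedged stand-in for the metrics used in the paper)."""
    return np.linalg.norm(logit_encode(p) - logit_encode(q))

# Three toy fibers, each described by its connectivity to 4 regions.
f1 = np.array([0.70, 0.20, 0.05, 0.05])
f2 = np.array([0.65, 0.25, 0.05, 0.05])   # similar profile to f1
f3 = np.array([0.05, 0.05, 0.20, 0.70])   # very different profile
print(fiber_distance(f1, f2) < fiber_distance(f1, f3))
```

Because the representation lives in connectivity space rather than coordinate space, such distances can be compared across scans without spatial registration, which is what enables the longitudinal reproducibility experiment.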
We present an analysis framework for large studies of multimodal clinical-quality brain image collections. Processing and analysis of such datasets is challenging due to low resolution, poor contrast, mis-aligned images, and restricted fields of view. We adapt existing registration and segmentation methods and build a computational pipeline for spatial normalization and feature extraction. The resulting aligned dataset enables clinically meaningful analysis of the spatial distributions of relevant anatomical features and of their evolution with age and disease progression. We demonstrate the approach on a neuroimaging study of stroke with more than 800 patients. We show that by combining data from several modalities, we can automatically segment important biomarkers, such as white matter hyperintensity, and characterize pathology evolution in this heterogeneous cohort. Specifically, we examine two sub-populations with different dynamics of white matter hyperintensity change as a function of patient age. The pipeline and analysis code are available at http://groups.csail.mit.edu/vision/medical-vision/stroke/.
Manifold learning has been successfully applied to a variety of medical imaging problems. Its use in real-time applications requires fast projection onto the low-dimensional space. To this end, out-of-sample extensions are applied by constructing an interpolation function that maps from the input space to the low-dimensional manifold. Commonly used approaches, such as the Nyström extension and kernel ridge regression, require using all training points. We propose an interpolation function that depends on only a small subset of the input training data. Consequently, in the testing phase, each new point needs to be compared against only a small number of training points in order to project it onto the low-dimensional space. We interpret our method as an out-of-sample extension that approximates kernel ridge regression. Our method involves solving a simple convex optimization problem and has the attractive property of guaranteeing an upper bound on the approximation error, which is crucial for medical applications. Tuning this error bound controls the sparsity of the resulting interpolation function. We illustrate our method in two clinical applications that require fast mapping of input images onto a low-dimensional space.
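A hedged sketch of the idea that a subset-based interpolant can approximate full kernel ridge regression. Here the toy data, kernel width, and regularization are assumptions, and the subset of centers is chosen at random, whereas the paper selects it by solving a convex problem with a guaranteed error bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Training data: 2-D inputs mapped to a 1-D "manifold coordinate".
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]          # toy low-dimensional embedding

# Full kernel ridge regression: every test point is compared to all 200
# training points at projection time.
lam = 1e-3
K = gauss_kernel(X, X)
alpha_full = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Sparse approximation: only m centers are kept, so a new point is compared
# against m (not 200) training points.
m = 30
centers = X[:m]
Km = gauss_kernel(X, centers)
alpha_sparse = np.linalg.solve(Km.T @ Km + lam * np.eye(m), Km.T @ y)

X_test = rng.normal(size=(50, 2))
pred_full = gauss_kernel(X_test, X) @ alpha_full
pred_sparse = gauss_kernel(X_test, centers) @ alpha_sparse
print(float(np.mean((pred_full - pred_sparse) ** 2)))
```

The testing-phase cost drops from O(n) to O(m) kernel evaluations per point, which is the property that makes real-time projection feasible.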
This paper details an algorithm that simultaneously performs registration of computed tomography (CT) and cone-beam computed tomography (CBCT) images and image enhancement of the CBCT. The algorithm employs a viscous fluid model that naturally incorporates two components: a similarity measure for registration and an intensity correction term for image enhancement. Incorporating the intensity correction term improves the registration results. Furthermore, applying the image enhancement term to CBCT imagery yields an intensity-corrected CBCT with better image quality. To minimize processing time, the algorithm is implemented on a graphics processing unit (GPU) platform. The advantage of the simultaneous optimization strategy is quantitatively validated and discussed using a synthetic example. The effectiveness of the proposed algorithm is then illustrated using six patient datasets: three head-and-neck and three prostate datasets.
In settings where high-level inferences are made based on registered image data, the registration uncertainty can contain important information. In this article, we propose a Bayesian non-rigid registration framework in which conventional dissimilarity and regularization energies can be included in the likelihood and in the prior distribution on deformations, respectively, through the use of the Boltzmann distribution. The posterior distribution is characterized using Markov chain Monte Carlo (MCMC) methods, with the effect of the Boltzmann temperature hyper-parameters marginalized under broad uninformative hyper-prior distributions. The MCMC chain permits estimation of the most likely deformation as well as the associated uncertainty. On synthetic examples, we demonstrate the ability of the method to identify the maximum a posteriori estimate and the associated posterior uncertainty, and we demonstrate that the posterior distribution can be non-Gaussian. Additionally, results from registering clinical data acquired during neurosurgery for brain tumor resection are provided; we compare the method to single-transformation results from a deterministic optimizer and introduce methods that summarize the high-dimensional uncertainty. At the site of resection, the registration uncertainty increases and the marginal distribution on deformations is shown to be multi-modal.
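A minimal sketch of the sampling idea on a toy 1D "deformation", assuming quadratic dissimilarity and regularization energies and fixed (unmarginalized) temperatures; random-walk Metropolis-Hastings stands in for the paper's MCMC scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Boltzmann-weighted energies for a scalar deformation d:
#   likelihood ~ exp(-E_sim(d)/T1), with E_sim = (d - 2)^2  (dissimilarity)
#   prior      ~ exp(-E_reg(d)/T2), with E_reg = d^2        (regularization)
# With T1 = T2 = 1 the posterior is Gaussian with mean 1 and std 0.5.
def neg_log_post(d, T1=1.0, T2=1.0):
    return (d - 2.0) ** 2 / T1 + d ** 2 / T2

# Random-walk Metropolis-Hastings: accept a proposal with probability
# min(1, posterior(proposal) / posterior(current)).
samples = []
d = 0.0
for _ in range(50000):
    prop = d + 0.5 * rng.normal()
    if np.log(rng.uniform()) < neg_log_post(d) - neg_log_post(prop):
        d = prop
    samples.append(d)
samples = np.array(samples[5000:])  # discard burn-in

# The chain yields both a point estimate and its uncertainty.
print(samples.mean(), samples.std())
```

The posterior mean plays the role of the most likely deformation, and the spread of the samples is exactly the registration uncertainty the paper argues should be propagated to downstream inferences.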
Accurate automated segmentation of brain tumors in MR images is challenging due to overlapping tissue intensity distributions and amorphous tumor shape. However, a clinically viable solution providing precise quantification of tumor and edema volume would enable better pre-operative planning, treatment monitoring, and drug development. Our contributions are threefold. First, we design efficient gradient- and LBP-TOP-based texture features that improve classification accuracy over standard intensity features. Second, we extend our texture and intensity features to symmetric texture and symmetric intensity, which further improve the accuracy for all tissue classes. Third, we demonstrate further accuracy gains by extending our long-range features from 100 mm to a full 200 mm. We assess our brain segmentation technique on 20 patients in the BraTS 2012 dataset. The impact of each contribution is measured, and the combination of all the features is shown to yield state-of-the-art accuracy and speed.
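A hedged sketch of the symmetry idea behind the symmetric intensity feature, on a toy image with an assumed left-right mirror axis (the paper's actual features, including the symmetric texture variants, are more elaborate):

```python
import numpy as np

def symmetric_intensity_feature(img):
    """For each pixel, the difference between its intensity and that of its
    left-right mirrored counterpart. Healthy brains are roughly symmetric, so
    large values flag asymmetric structures such as tumors."""
    return img - img[:, ::-1]

# Toy "axial slice": symmetric background plus a bright blob on one side.
img = np.ones((8, 8))
img[2:4, 1:3] += 5.0                      # simulated lesion in the left half
feat = np.abs(symmetric_intensity_feature(img))
print(feat[2:4, 1:3].min() > 0)           # lesion region lights up
print(feat[6, 4] == 0)                    # symmetric background stays at 0
```

Pairing each intensity or texture feature with its mirrored counterpart in this way is what lets a classifier separate pathological asymmetry from tissue classes whose raw intensity distributions overlap.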
Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. 3D Slicer, a free platform for biomedical research, provides an alternative to this manual slice-by-slice segmentation process that is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once purely by drawing boundaries completely manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians on 12 GBMs. The time required for GrowCut segmentation was on average 61% of the time required for pure manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation yielded a Dice similarity coefficient of 88.43 ± 5.23% and a Hausdorff distance of 2.32 ± 5.23 mm.
Patients with 22q11.2 deletion syndrome (22q11.2DS) represent a population at high risk for developing schizophrenia, as well as learning disabilities. Deficits in visuo-spatial memory are thought to underlie some of the cognitive disabilities. Neuronal substrates of visuo-spatial memory include the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF), two tracts that comprise the ventral visual stream. Diffusion tensor magnetic resonance imaging (DT-MRI) is an established method for evaluating white matter (WM) connections in vivo. DT-MRI scans of nine 22q11.2DS young adults and nine matched healthy subjects were acquired. Tractography of the IFOF and the ILF was performed. DT-MRI indices, including fractional anisotropy (FA, a measure of WM changes), axial diffusivity (AD, a measure of axonal changes), and radial diffusivity (RD, a measure of myelin changes), were measured and compared for each tract in each group. The 22q11.2DS group showed statistically significant reductions of FA in the IFOF in the left hemisphere. Additionally, reductions of AD were found in the IFOF and the ILF in both hemispheres. These findings might be the consequence of axonal changes, possibly due to fewer, thinner, or less organized fibers. No changes in RD were detected in any of the delineated tracts, in contrast to findings in schizophrenia patients, where increases in RD are believed to be indicative of demyelination. We conclude that axonal changes may be key to understanding the underlying WM pathology leading to the visuo-spatial phenotype in 22q11.2DS.
We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model selection based on the deviance information criterion (DIC), we show that our model provides a good fit to the observed data by sharing information across the sites. We also propose a simple approach for evaluating the efficacy of the multi-site experiment by comparing the results to those that would be expected in hypothetical single-site experiments with the same sample size.
We propose two novel distance measures for image matching, normalized between 0 and 1 and based on normalized cross-correlation. These distance measures explicitly exploit the fact that, for natural images, there is a high correlation between spatially close pixels. Image matching is used in various computer vision tasks, and the requirements on the distance measure are application dependent. Image recognition applications require measures that are more robust to shift and rotation. In contrast, registration and tracking applications require better localization and noise tolerance. In this paper, we explore the different advantages of our distance measures and compare them to other popular measures, including normalized cross-correlation (NCC) and image Euclidean distance (IMED). We show which of the proposed measures is more appropriate for tracking and which is more appropriate for image recognition tasks.
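As a hedged illustration, the snippet below implements plain NCC and a generic mapping of NCC to a distance in [0, 1]; this is the standard baseline measure, not the two specific measures proposed in the paper:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def ncc_distance(a, b):
    """Map NCC from [-1, 1] to a distance in [0, 1]: 0 for perfectly
    correlated patches, 1 for perfectly anti-correlated ones."""
    return (1.0 - ncc(a, b)) / 2.0

rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16))
noisy = patch + 0.1 * rng.normal(size=(16, 16))
other = rng.normal(size=(16, 16))

print(ncc_distance(patch, patch) < 1e-9)                       # near-zero self-distance
print(ncc_distance(patch, noisy) < ncc_distance(patch, other)) # noise-tolerant match
```

The invariance of NCC to affine intensity changes (mean subtraction and norm division) is the property that makes correlation-based distances attractive for both registration and recognition.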