Publications

2016
Hengameh Mirzaalian, Lipeng Ning, Peter Savadjiev, Ofer Pasternak, Sylvain Bouix, Oleg Michailovich, G Grant, CE Marx, RA Morey, LA Flashman, MS George, TW McAllister, N Andaluz, L Shutter, R Coimbra, Ross Zafonte, Michael J Coleman, Marek Kubicki, Carl-Fredrik Westin, M.B. Stein, Martha E Shenton, and Yogesh Rathi. 7/2016. “Inter-site and Inter-scanner Diffusion MRI Data Harmonization.” Neuroimage, 135, Pp. 311-23.

We propose a novel method to harmonize diffusion MRI data acquired from multiple sites and scanners, which is imperative for joint analysis of the data to significantly increase sample size and statistical power of neuroimaging studies. Our method incorporates the following main novelties: i) we take into account the scanner-dependent spatial variability of the diffusion signal in different parts of the brain; ii) our method is independent of compartmental modeling of diffusion (e.g., tensor, and intra/extra cellular compartments) and the acquired signal itself is corrected for scanner related differences; and iii) inter-subject variability as measured by the coefficient of variation is maintained at each site. We represent the signal in a basis of spherical harmonics and compute several rotation invariant spherical harmonic features to estimate a region and tissue specific linear mapping between the signal from different sites (and scanners). We validate our method on diffusion data acquired from seven different sites (including two GE, three Philips, and two Siemens scanners) on a group of age-matched healthy subjects. Since the extracted rotation invariant spherical harmonic features depend on the accuracy of the brain parcellation provided by Freesurfer, we propose a feature based refinement of the original parcellation such that it better characterizes the anatomy and provides robust linear mappings to harmonize the dMRI data. We demonstrate the efficacy of our method by statistically comparing diffusion measures such as fractional anisotropy, mean diffusivity and generalized fractional anisotropy across multiple sites before and after data harmonization. We also show results using tract-based spatial statistics before and after harmonization for independent validation of the proposed methodology. Our experimental results demonstrate that, for nearly identical acquisition protocol across sites, scanner-specific differences can be accurately removed using the proposed method.
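For illustration, the rotation-invariant spherical harmonic (RISH) features referred to above can be computed as the per-order energy of the SH coefficients. The sketch below assumes a real, symmetric SH basis up to order 4 (1, 5, and 9 coefficients for l = 0, 2, 4); the function names and the simple square-root scaling used as a stand-in for the learned region-specific linear mapping are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def rish_features(sh_coeffs, order_sizes=(1, 5, 9)):
    """Rotation-invariant SH (RISH) features: for each SH order l, the sum of
    squared coefficients over m (1, 5, 9 coefficients for l = 0, 2, 4 in a
    real, symmetric basis up to order 4)."""
    feats, start = [], 0
    for n in order_sizes:
        feats.append(np.sum(sh_coeffs[..., start:start + n] ** 2, axis=-1))
        start += n
    return np.stack(feats, axis=-1)  # shape (..., n_orders)

def region_scale(sh_target, sh_reference):
    """Hypothetical per-region harmonization factor: one multiplicative scale
    per SH order that matches the mean RISH energy of a target site to that of
    a reference site (a simplified stand-in for the learned linear map)."""
    e_t = rish_features(sh_target).mean(axis=0)
    e_r = rish_features(sh_reference).mean(axis=0)
    return np.sqrt(e_r / np.maximum(e_t, 1e-12))
```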

Nematollah K Batmanghelich, Adrian Dalca, Gerald Quon, Mert Sabuncu, and Polina Golland. 7/2016. “Probabilistic Modeling of Imaging, Genetics and Diagnosis.” IEEE Trans Med Imaging, 35, 7, Pp. 1765-79.

We propose a unified Bayesian framework for detecting genetic variants associated with disease by exploiting image-based features as an intermediate phenotype. The use of imaging data for examining genetic associations promises new directions of analysis, but currently the most widely used methods make sub-optimal use of the richness that these data types can offer. Currently, image features are most commonly selected based on their relevance to the disease phenotype. Then, in a separate step, a set of genetic variants is identified to explain the selected features. In contrast, our method performs these tasks simultaneously in order to jointly exploit information in both data types. The analysis yields probabilistic measures of clinical relevance for both imaging and genetic markers. We derive an efficient approximate inference algorithm that handles the high dimensionality of image and genetic data. We evaluate the algorithm on synthetic data and demonstrate that it outperforms traditional models. We also illustrate our method on Alzheimer's Disease Neuroimaging Initiative data.

Carl-Fredrik Westin, Hans Knutsson, Ofer Pasternak, Filip Szczepankiewicz, Evren Özarslan, Danielle van Westen, Cecilia Mattisson, Mats Bogren, Lauren J O'Donnell, Marek Kubicki, Daniel Topgaard, and Markus Nilsson. 7/2016. “Q-space Trajectory Imaging for Multidimensional Diffusion MRI of the Human Brain.” Neuroimage, 135, Pp. 345-62.

This work describes a new diffusion MR framework for imaging and modeling of microstructure that we call q-space trajectory imaging (QTI). The QTI framework consists of two parts: encoding and modeling. First we propose q-space trajectory encoding, which uses time-varying gradients to probe a trajectory in q-space, in contrast to traditional pulsed field gradient sequences that attempt to probe a point in q-space. Then we propose a microstructure model, the diffusion tensor distribution (DTD) model, which takes advantage of additional information provided by QTI to estimate a distributional model over diffusion tensors. We show that the QTI framework enables microstructure modeling that is not possible with the traditional pulsed gradient encoding as introduced by Stejskal and Tanner. In our analysis of QTI, we find that the well-known scalar b-value naturally extends to a tensor-valued entity, i.e., a diffusion measurement tensor, which we call the b-tensor. We show that b-tensors of rank 2 or 3 enable estimation of the mean and covariance of the DTD model in terms of a second order tensor (the diffusion tensor) and a fourth order tensor. The QTI framework has been designed to improve discrimination of the sizes, shapes, and orientations of diffusion microenvironments within tissue. We derive rotationally invariant scalar quantities describing intuitive microstructural features including size, shape, and orientation coherence measures. To demonstrate the feasibility of QTI on a clinical scanner, we performed a small pilot study comparing a group of five healthy controls with five patients with schizophrenia. The parameter maps derived from QTI were compared between the groups, and 9 out of the 14 parameters investigated showed differences between groups. The ability to measure and model the distribution of diffusion tensors, rather than a quantity that has already been averaged within a voxel, has the potential to provide a powerful paradigm for the study of complex tissue architecture.
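As a worked example of the b-tensor concept, the following sketch integrates a sampled gradient waveform into q-space and accumulates B = ∫ q(t)q(t)ᵀ dt; the array layout and sampling step are assumptions, and the code follows the general definition rather than any scanner-specific implementation from the paper.

```python
import numpy as np

GAMMA = 2.6752219e8  # proton gyromagnetic ratio [rad s^-1 T^-1]

def b_tensor(gradient_waveform, dt):
    """B-tensor of a time-varying gradient waveform.

    gradient_waveform : (N, 3) gradient amplitudes [T/m] sampled every dt seconds
                        (assumed to integrate to zero over the encoding).
    Returns the 3x3 tensor B = sum_t q(t) q(t)^T dt, with q(t) = gamma times the
    cumulative integral of g(t). For a conventional Stejskal-Tanner pulsed-gradient
    pair this reduces to a rank-1 tensor whose trace equals the usual scalar b-value.
    """
    q = GAMMA * np.cumsum(gradient_waveform, axis=0) * dt  # q-space trajectory, (N, 3)
    return np.einsum('ti,tj->ij', q, q) * dt
```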

Sönke Bartling, Marianna Jakab, and Ron Kikinis. 6/2016. CT-based Atlas of the Ear. Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
The Surgical Planning Laboratory at Brigham and Women's Hospital, Harvard Medical School, developed the SPL Ear Atlas. The atlas was derived from a high-resolution flat-panel computed tomography (CT) scan (approx. 140 µm high-contrast resolution), using semi-automated image segmentation and three-dimensional reconstruction techniques [Gupta, Bartling, et al. AJNR Am J Neuroradiol. 2004.]. The current version consists of: 1) the original CT scan; 2) a set of detailed label maps; 3) a set of three-dimensional models of the labeled anatomical structures; 4) an mrb (Medical Reality Bundle) archive containing the mrml scene file and all data for display in 3D Slicer version 4.0 or greater; 5) several pre-defined 3D views (“anatomy teaching files”). The SPL Ear Atlas provides important reference information for surgical planning, anatomy teaching, and template-driven segmentation. Visualization of the data requires 3D Slicer, which is freely available for download. We are pleased to make this atlas available to our colleagues for free download. Please note that the data is distributed under the Slicer license. By downloading these data, you agree to acknowledge our contribution in any of your publications that result from the use of this atlas. This work is funded as part of the Neuroimaging Analysis Center, grant number P41 RR013218 from the NIH's National Center for Research Resources (NCRR) and grant number P41 EB015902 from the NIH's National Institute of Biomedical Imaging and Bioengineering (NIBIB), and by a Google Faculty Research Award.
Andriy Fedorov, David Clunie, Ethan Ulrich, Christian Bauer, Andreas Wahle, Bartley Brown, Michael Onken, Jörg Riesmeier, Steve Pieper, Ron Kikinis, John Buatti, and Reinhard R Beichel. 5/2016. “DICOM for Quantitative Imaging Biomarker Development: A Standards Based Approach to Sharing Clinical Data and Structured PET/CT Analysis Results in Head and Neck Cancer Research.” PeerJ, 4, Pp. e2057.

Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
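As one concrete step from the pipeline above, Standardized Uptake Value normalization is commonly computed from the activity concentration, the decay-corrected injected dose, and the patient weight. The body-weight SUV sketch below is a simplified illustration under the assumptions stated in the code (F-18 half-life default, activity map decay-corrected to scan start); it is not the exact routine used in the published tools.

```python
def suv_bw(activity_bq_ml, injected_dose_bq, patient_weight_kg,
           minutes_post_injection, half_life_min=109.77):
    """Body-weight SUV (simplified sketch): tissue activity concentration [Bq/ml]
    divided by the decay-corrected injected dose [Bq] per gram of body weight.
    Assumes the activity image is decay-corrected to scan start and uses the
    F-18 half-life (109.77 min) by default."""
    decayed_dose = injected_dose_bq * 2.0 ** (-minutes_post_injection / half_life_min)
    return activity_bq_ml / (decayed_dose / (patient_weight_kg * 1000.0))
```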

Padmavathi Sundaram, Aapo Nummenmaa, William M Wells III, Darren Orbach, Daniel Orringer, Robert Mulkern, and Yoshio Okada. 5/2016. “Direct Neural Current Imaging in an Intact Cerebellum with Magnetic Resonance Imaging.” Neuroimage, 132, Pp. 477-90.

The ability to detect neuronal currents with high spatiotemporal resolution using magnetic resonance imaging (MRI) is important for studying human brain function in both health and disease. While significant progress has been made, we still lack evidence showing that it is possible to measure an MR signal time-locked to neuronal currents with a temporal waveform matching concurrently recorded local field potentials (LFPs). Also lacking is evidence that such MR data can be used to image current distribution in active tissue. Since these two results are lacking even in vitro, we obtained these data in an intact isolated whole cerebellum of turtle during slow neuronal activity mediated by metabotropic glutamate receptors using a gradient-echo EPI sequence (TR=100ms) at 4.7T. Our results show that it is possible (1) to reliably detect an MR phase shift time course matching that of the concurrently measured LFP evoked by stimulation of a cerebellar peduncle, (2) to detect the signal in single voxels of 0.1 mm³, (3) to determine the spatial phase map matching the magnetic field distribution predicted by the LFP map, (4) to estimate the distribution of neuronal current in the active tissue from a group-average phase map, and (5) to provide a quantitatively accurate theoretical account of the measured phase shifts. The peak values of the detected MR phase shifts were 0.27-0.37°, corresponding to local magnetic field changes of 0.67-0.93 nT (for TE=26ms). Our work provides an empirical basis for future extensions to in vivo imaging of neuronal currents.
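The quoted field changes follow directly from the gradient-echo relation Δφ = γ·ΔB·TE; the short check below reproduces the reported 0.67-0.93 nT range from the 0.27-0.37° phase shifts at TE = 26 ms (the helper name is illustrative).

```python
import numpy as np

GAMMA = 2.6752219e8  # proton gyromagnetic ratio [rad s^-1 T^-1]

def field_from_phase(phase_deg, te_s=0.026):
    """Local field change implied by a gradient-echo phase shift:
    delta_B = delta_phi / (gamma * TE)."""
    return np.deg2rad(phase_deg) / (GAMMA * te_s)

for phi in (0.27, 0.37):
    print(f"{phi:.2f} deg -> {field_from_phase(phi) * 1e9:.2f} nT")  # ~0.68 and ~0.93 nT
```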

Fan Zhang, Yang Song, Weidong Cai, Sidong Liu, Siqi Liu, Sonia Pujol, Ron Kikinis, Yong Xia, Michael Fulham, and David Feng. 5/2016. “Pairwise Latent Semantic Association for Similarity Computation in Medical Imaging.” IEEE Trans Biomed Eng, 63, 5, Pp. 1058-69.

Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of the low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated, and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.

Romeil S Sandhu, Tryphon T Georgiou, and Allen R Tannenbaum. 5/2016. “Ricci Curvature: An Economic Indicator for Market Fragility and Systemic Risk.” Sci Adv, 2, 5, Pp. e1501495.

Quantifying the systemic risk and fragility of financial systems is of vital importance in analyzing market efficiency, deciding on portfolio allocation, and containing financial contagions. At a high level, financial systems may be represented as weighted graphs that characterize the complex web of interacting agents and information flow (for example, debt, stock returns, and shareholder ownership). Such a representation often turns out to provide keen insights. We show that fragility is a system-level characteristic of "business-as-usual" market behavior and that financial crashes are invariably preceded by system-level changes in robustness. This was done by leveraging previous work, which suggests that Ricci curvature, a key geometric feature of a given network, is negatively correlated to increases in network fragility. To illustrate this insight, we examine daily returns from a set of stocks comprising the Standard and Poor's 500 (S&P 500) over a 15-year span to highlight the fact that corresponding changes in Ricci curvature constitute a financial "crash hallmark." This work lays the foundation of understanding how to design (banking) systems and policy regulations in a manner that can combat financial instabilities exposed during the 2007-2008 crisis.
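For orientation, here is a minimal sketch of one standard discrete notion of Ricci curvature on weighted graphs, Ollivier's coarse curvature, computed from an earth mover's (Wasserstein-1) distance between neighbor distributions; the uniform neighbor measure and the plain linear-programming formulation are illustrative choices, not necessarily the exact construction used in the paper.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.csgraph import shortest_path

def ollivier_ricci(W, i, j):
    """Ollivier-Ricci curvature of edge (i, j) on a weighted graph W
    (symmetric matrix of edge lengths): kappa(i, j) = 1 - W1(mu_i, mu_j) / d(i, j),
    where mu_k is the uniform distribution over the neighbors of node k, d is the
    shortest-path distance, and W1 is the earth mover's distance solved as a small LP."""
    d = shortest_path(W, directed=False)
    ni, nj = np.flatnonzero(W[i]), np.flatnonzero(W[j])
    cost = d[np.ix_(ni, nj)].ravel()
    rows = []
    for k in range(len(ni)):                       # row marginals must equal mu_i
        r = np.zeros((len(ni), len(nj))); r[k, :] = 1; rows.append(r.ravel())
    for l in range(len(nj)):                       # column marginals must equal mu_j
        c = np.zeros((len(ni), len(nj))); c[:, l] = 1; rows.append(c.ravel())
    b_eq = np.concatenate([np.full(len(ni), 1 / len(ni)),
                           np.full(len(nj), 1 / len(nj))])
    res = linprog(cost, A_eq=np.array(rows), b_eq=b_eq, bounds=(0, None))
    return 1.0 - res.fun / d[i, j]
```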

Bjoern H Menze, Koen Van Leemput, Danial Lashkari, Tammy Riklin-Raviv, Ezequiel Geremia, Esther Alberts, Philipp Gruber, Susanne Wegener, Marc-Andre Weber, Gabor Szekely, Nicholas Ayache, and Polina Golland. 4/2016. “A Generative Probabilistic Model and Discriminative Extensions for Brain Lesion Segmentation - with Application to Tumor and Stroke.” IEEE Trans Med Imaging, 35, 4, Pp. 933-46.

We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM), to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the extended discriminative model to be one of the top ranking methods in the BRATS evaluation.
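To make the EM-segmenter baseline concrete, here is a deliberately minimal sketch of atlas-guided EM segmentation with one Gaussian per tissue class; the channel-specific lesion modelling, the latent lesion atlas, and the discriminative extensions from the paper are not reproduced, and all names are illustrative.

```python
import numpy as np

def atlas_em_segment(intensities, atlas_priors, n_iter=20):
    """Minimal atlas-guided EM segmenter sketch.

    intensities : (N,) voxel intensities; atlas_priors : (N, K) spatial tissue
    priors from a probabilistic atlas. Each class is a 1D Gaussian; the E-step
    combines atlas prior with likelihood, the M-step refits the Gaussians.
    """
    N, K = atlas_priors.shape
    mu = np.quantile(intensities, np.linspace(0.2, 0.8, K))   # crude initialization
    var = np.full(K, intensities.var())
    for _ in range(n_iter):
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = atlas_priors * lik                              # E-step: responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post.sum(axis=0)                                   # M-step: weighted refits
        mu = (post * intensities[:, None]).sum(axis=0) / w
        var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w
    return post.argmax(axis=1), post
```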

Matthew Toews and William M Wells III. 4/2016. “Invariant Feature-Based Analysis of Medical Images: An Overview.” In IEEE Int Symp Biomed Imaging.
Lipeng Ning, Carl-Fredrik Westin, and Yogesh Rathi. 3/2016. “Estimation of Bounded and Unbounded Trajectories in Diffusion MRI.” Front Neurosci, 10, Pp. 129.

Disentangling the tissue microstructural information from the diffusion magnetic resonance imaging (dMRI) measurements is quite important for extracting brain tissue specific measures. The autocorrelation function of diffusing spins is key for understanding the relation between dMRI signals and the acquisition gradient sequences. In this paper, we demonstrate that the autocorrelation of diffusion in restricted or bounded spaces can be well approximated by exponential functions. To this end, we propose to use the multivariate Ornstein-Uhlenbeck (OU) process to model the matrix-valued exponential autocorrelation function of three-dimensional diffusion processes with bounded trajectories. We present detailed analysis on the relation between the model parameters and the time-dependent apparent axon radius and provide a general model for dMRI signals from the frequency domain perspective. For our experimental setup, we model the diffusion signal as a mixture of two compartments that correspond to diffusing spins with bounded and unbounded trajectories, and analyze the corpus-callosum in an ex-vivo data set of a monkey brain.
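For readers unfamiliar with the model class, the sketch below simulates a multivariate Ornstein-Uhlenbeck process with a simple Euler-Maruyama scheme; its stationary autocovariance decays as a matrix exponential in the time lag, which is the exponential form exploited above. The drift and noise matrices are arbitrary illustrative inputs, not parameters from the paper.

```python
import numpy as np

def simulate_ou(A, sigma, n_steps=10_000, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of the multivariate OU process
    dX = -A X dt + sigma dW. The stationary autocovariance at lag s equals
    expm(-A s) times the stationary covariance, i.e., it decays exponentially."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    x = np.zeros(d)
    out = np.empty((n_steps, d))
    for t in range(n_steps):
        x = x - (A @ x) * dt + (sigma @ rng.normal(size=d)) * np.sqrt(dt)
        out[t] = x
    return out
```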

Tina Kapur and Clare M. Tempany. 3/2016. “Proceedings of the 8th Image Guided Therapy Workshop.” 8, Pp. 1-68.
Yi Gao, Vadim Ratner, Liangjia Zhu, Tammy Diprima, Tahsin Kurc, Allen Tannenbaum, and Joel Saltz. 2/2016. “Hierarchical Nucleus Segmentation in Digital Pathology Images.” Proc SPIE Int Soc Opt Eng, 9791.

Extracting nuclei is one of the most actively studied topics in digital pathology research. Most studies search for the nuclei (or seeds for the nuclei) directly at the finest resolution available. While such approaches utilize the richest information, it is sometimes difficult for them to address the heterogeneity of nuclei in different tissues. In this work, we propose a hierarchical approach that starts at a lower resolution level and adaptively adjusts the parameters while progressing to finer and finer resolutions. The algorithm is tested on brain and lung cancer images from The Cancer Genome Atlas data set.

Yi Gao, William Liu, Shipra Arjun, Liangjia Zhu, Vadim Ratner, Tahsin Kurc, Joel Saltz, and Allen Tannenbaum. 2/2016. “Multi-scale Learning Based Segmentation of Glands in Digital Colonrectal Pathology Images.” Proc SPIE Int Soc Opt Eng, 9791.

Digital histopathological images provide detailed spatial information about the tissue at micrometer resolution. Among the available content in pathology images, meso-scale information such as gland morphology, texture, and distribution provides useful diagnostic features. In this work, focusing on colon-rectal cancer tissue samples, we propose a multi-scale learning based segmentation scheme for the glands in colon-rectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images at various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform classification and segmentation on new images.
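A rough sketch of dictionary-based texture classification in the spirit described above: learn one sparse dictionary per class from flattened training patches and label a new patch by the dictionary that reconstructs it with the smallest residual. The scikit-learn estimator, the patch layout, and the single-scale setting are assumptions; the paper's multi-scale pipeline is not reproduced.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_texture_dicts(gland_patches, nongland_patches, n_atoms=64, alpha=1.0):
    """Learn one sparse dictionary per texture class from flattened patches
    (rows = patches, columns = pixel values)."""
    dicts = {}
    for name, X in (("gland", gland_patches), ("non_gland", nongland_patches)):
        dicts[name] = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha).fit(X)
    return dicts

def classify_patch(patch, dicts):
    """Assign the class whose dictionary gives the smallest sparse-coding
    reconstruction error for the patch."""
    errs = {}
    for name, model in dicts.items():
        code = model.transform(patch[None, :])
        errs[name] = np.linalg.norm(patch - code @ model.components_)
    return min(errs, key=errs.get)
```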

Jørn Bersvendsen, Matthew Toews, Adriyana Danudibroto, William M Wells III, Stig Urheim, Raúl San José Estépar, and Eigil Samset. 2/2016. “Robust Spatio-Temporal Registration of 4D Cardiac Ultrasound Sequences.” Proc SPIE Int Soc Opt Eng, 9790.

Registration of multiple 3D ultrasound sectors in order to provide an extended field of view is important for the appreciation of larger anatomical structures at high spatial and temporal resolution. In this paper, we present a method for fully automatic spatio-temporal registration between two partially overlapping 3D ultrasound sequences. The temporal alignment is solved by aligning the normalized cross correlation-over-time curves of the sequences. For the spatial alignment, corresponding 3D Scale Invariant Feature Transform (SIFT) features are extracted from all frames of both sequences independently of the temporal alignment. A rigid transform is then calculated by least squares minimization in combination with random sample consensus. The method is applied to 16 echocardiographic sequences of the left and right ventricles and evaluated against manually annotated temporal events and spatial anatomical landmarks. The mean distances between manually identified landmarks in the left and right ventricles after automatic registration were (mean ± SD) 4.3 ± 1.2 mm compared to a reference error of 2.8 ± 0.6 mm with manual registration. For the temporal alignment, the absolute errors in valvular event times were 14.4 ± 11.6 ms for Aortic Valve (AV) opening, 18.6 ± 16.0 ms for AV closing, and 34.6 ± 26.4 ms for mitral valve opening, compared to a mean inter-frame time of 29 ms.
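The spatial step described above ultimately reduces to fitting a rigid transform to putative point correspondences inside an outlier-robust loop; the sketch below shows the standard SVD (Kabsch) least-squares fit wrapped in a basic RANSAC scheme. Thresholds, iteration counts, and names are illustrative rather than the parameters used in the study.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points to dst points (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def ransac_rigid(src, dst, n_iter=500, tol=5.0, seed=0):
    """Random sample consensus around rigid_fit: fit minimal 3-point samples,
    keep the largest inlier set, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_fit(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_fit(src[best], dst[best])
```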

Lipeng Ning, Kawin Setsompop, Oleg Michailovich, Nikos Makris, Martha E Shenton, Carl-Fredrik Westin, and Yogesh Rathi. 1/2016. “A Joint Compressed-sensing and Super-resolution Approach for Very High-resolution Diffusion Imaging.” Neuroimage, 125, Pp. 386-400.

Diffusion MRI (dMRI) can provide invaluable information about the structure of different tissue types in the brain. Standard dMRI acquisitions facilitate a proper analysis (e.g. tracing) of medium-to-large white matter bundles. However, smaller fiber bundles connecting very small cortical or sub-cortical regions cannot be traced accurately in images with large voxel sizes. Yet, the ability to trace such fiber bundles is critical for several applications such as deep brain stimulation and neurosurgery. In this work, we propose a novel acquisition and reconstruction scheme for obtaining high spatial resolution dMRI images using multiple low resolution (LR) images, which is effective in reducing acquisition time while improving the signal-to-noise ratio (SNR). The proposed method, called compressed-sensing super-resolution reconstruction (CS-SRR), uses multiple overlapping thick-slice dMRI volumes that are under-sampled in q-space to reconstruct the diffusion signal with complex orientations. The proposed method combines the twin concepts of compressed sensing and super-resolution to model the diffusion signal (at a given b-value) in a basis of spherical ridgelets with total-variation (TV) regularization to account for signal correlation in neighboring voxels. A computationally efficient algorithm based on the alternating direction method of multipliers (ADMM) is introduced for solving the CS-SRR problem. The performance of the proposed method is quantitatively evaluated on several in-vivo human data sets, including a true SRR scenario. Our experimental results demonstrate that the proposed method can be used to reconstruct sub-millimeter super-resolution dMRI data with very good data fidelity in clinically feasible acquisition time.
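The ADMM solver mentioned above follows the usual alternating pattern of a data-fit update, a proximal (regularizer) update, and a dual-variable step. As a hedged illustration of that pattern only, the sketch below applies scaled-form ADMM to a much simpler l1-regularized least-squares (lasso) problem, not to the actual CS-SRR objective with spherical ridgelets and TV regularization.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Scaled-form ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1, shown only to
    illustrate the x-update / z-update / dual-update structure of ADMM."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse each iteration
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # data-fit step
        z = soft_threshold(x + u, lam / rho)                               # proximal step
        u = u + x - z                                                      # dual update
    return z
```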

Sidong Liu, Weidong Cai, Sonia Pujol, Ron Kikinis, and Dagan D Feng. 2016. “Cross-View Neuroimage Pattern Analysis in Alzheimer's Disease Staging.” Front Aging Neurosci, 8, Pp. 23.
Research on staging the pre-symptomatic and prodromal phases of neurological disorders, e.g., Alzheimer's disease (AD), is essential for the prevention of dementia. New strategies for AD staging, with a focus on early detection, are needed to optimize the potential efficacy of disease-modifying therapies that can halt or slow disease progression. Recently, neuroimaging has been increasingly used to provide additional research-based markers to detect AD onset and to predict conversion from mild cognitive impairment (MCI) and normal control (NC) status to AD. Researchers have proposed a variety of neuroimaging biomarkers to characterize the patterns of AD and MCI pathology, and have suggested that multi-view neuroimaging biomarkers could lead to better performance than single-view biomarkers in AD staging. However, it is still unclear what leads to such synergy and how to preserve or maximize it. In an attempt to answer these questions, we propose a cross-view pattern analysis framework for investigating the synergy between different neuroimaging biomarkers. We quantitatively analyzed nine types of biomarkers derived from FDG-PET and T1-MRI and evaluated their performance in classifying AD, MCI, and NC subjects from the ADNI baseline cohort. The experimental results showed that these biomarkers depict the pathology of AD from different perspectives and output distinct patterns that are significantly associated with disease progression. Most importantly, we found that these features can be separated into clusters, each depicting a particular aspect, and that inter-cluster features consistently achieve better performance than intra-cluster features in AD staging.
Fan Zhang, Yang Song, Weidong Cai, Alexander G Hauptmann, Sidong Liu, Sonia Pujol, Ron Kikinis, Michael J Fulham, David Dagan Feng, and Mei Chen. 2016. “Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.” Neurocomputing, 177, Pp. 75-88.
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
Ivan Kolesov, Jehoon Lee, Gregory Sharp, Patricio Vela, and Allen Tannenbaum. 2016. “A Stochastic Approach to Diffeomorphic Point Set Registration with Landmark Constraints.” IEEE Trans Pattern Anal Mach Intell, 38, 2, Pp. 238-51.
This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm's robustness to noise and missing information.
Katherine A Fu, Romil Nathan, Ivo D Dinov, Junning Li, and Arthur W Toga. 2016. “T2-Imaging Changes in the Nigrosome-1 Relate to Clinical Measures of Parkinson's Disease.” Front Neurol, 7, Pp. 174.
BACKGROUND: The nigrosome-1 region of the substantia nigra (SN) undergoes the greatest and earliest dopaminergic neuron loss in Parkinson's disease (PD). As T2-weighted magnetic resonance imaging (MRI) scans are often collected with routine clinical MRI protocols, this investigation aims to determine whether T2-imaging changes in the nigrosome-1 are related to clinical measures of PD and to assess their potential as a more clinically accessible biomarker for PD. METHODS: Voxel intensity ratios were calculated for T2-weighted MRI scans from 47 subjects from the Parkinson's Progression Markers Initiative database. Three approaches were used to delineate the SN and nigrosome-1: (1) manual segmentation, (2) automated segmentation, and (3) area voxel-based morphometry. Voxel intensity ratios were calculated from voxel intensity values taken from the nigrosome-1 and two areas of the remaining SN. Linear regression analyses were conducted relating voxel intensity ratios with the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) sub-scores for each subject. RESULTS: For manual segmentation, linear regression tests consistently identified the voxel intensity ratio derived from the dorsolateral SN and nigrosome-1 (IR2) as predictive of nBehav (p = 0.0377) and nExp (p = 0.03856). For automated segmentation, linear regression tests identified IR2 as predictive of Subscore IA (nBehav) (p = 0.01134), Subscore IB (nExp) (p = 0.00336), Score II (mExp) (p = 0.02125), and Score III (mSign) (p = 0.008139). For the voxel-based morphometric approach, univariate simple linear regression analysis identified IR2 as yielding significant results for nBehav (p = 0.003102), mExp (p = 0.0172), and mSign (p = 0.00393). CONCLUSION: Neuroimaging biomarkers may be used as a proxy of changes in the nigrosome-1, measured by MDS-UPDRS scores as an indicator of the severity of PD. The voxel intensity ratio derived from the dorsolateral SN and nigrosome-1 was consistently predictive of non-motor complex behaviors in all three analyses and predictive of non-motor experiences of daily living, motor experiences of daily living, and motor signs of PD in two of the three analyses. These results suggest that T2 changes in the nigrosome-1 may relate to certain clinical measures of PD. T2 changes in the nigrosome-1 may be considered when developing a more accessible clinical diagnostic tool for patients with suspected PD.
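The core measurements above are ratios of mean T2 intensities between regions of interest, subsequently related to MDS-UPDRS sub-scores by univariate linear regression; the sketch below shows that computation in generic form (the ROI masks, variable names, and the use of scipy.stats.linregress are illustrative, not the study's exact processing).

```python
import numpy as np
from scipy import stats

def intensity_ratio(t2_volume, roi_a_mask, roi_b_mask):
    """Ratio of mean T2 voxel intensities between two regions of interest,
    e.g., dorsolateral SN over nigrosome-1 (ROI definitions not reproduced here)."""
    return t2_volume[roi_a_mask].mean() / t2_volume[roi_b_mask].mean()

def regress_score_on_ratio(ratios, scores):
    """Univariate linear regression of a clinical sub-score (one value per
    subject) on the corresponding voxel intensity ratios."""
    result = stats.linregress(ratios, scores)
    return result.slope, result.pvalue
```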
