This work describes a new diffusion MR framework for imaging and modeling of microstructure that we call q-space trajectory imaging (QTI). The QTI framework consists of two parts: encoding and modeling. First we propose q-space trajectory encoding, which uses time-varying gradients to probe a trajectory in q-space, in contrast to traditional pulsed field gradient sequences that attempt to probe a point in q-space. Then we propose a microstructure model, the diffusion tensor distribution (DTD) model, which takes advantage of additional information provided by QTI to estimate a distributional model over diffusion tensors. We show that the QTI framework enables microstructure modeling that is not possible with the traditional pulsed gradient encoding as introduced by Stejskal and Tanner. In our analysis of QTI, we find that the well-known scalar b-value naturally extends to a tensor-valued entity, i.e., a diffusion measurement tensor, which we call the b-tensor. We show that b-tensors of rank 2 or 3 enable estimation of the mean and covariance of the DTD model in terms of a second order tensor (the diffusion tensor) and a fourth order tensor. The QTI framework has been designed to improve discrimination of the sizes, shapes, and orientations of diffusion microenvironments within tissue. We derive rotationally invariant scalar quantities describing intuitive microstructural features including size, shape, and orientation coherence measures. To demonstrate the feasibility of QTI on a clinical scanner, we performed a small pilot study comparing a group of five healthy controls with five patients with schizophrenia. The parameter maps derived from QTI were compared between the groups, and 9 out of the 14 parameters investigated showed differences between groups. 
The ability to measure and model the distribution of diffusion tensors, rather than a quantity that has already been averaged within a voxel, has the potential to provide a powerful paradigm for the study of complex tissue architecture.
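The cumulant-level relationship between the b-tensor and the moments of the diffusion tensor distribution can be sketched numerically. The following is an illustrative sketch, not the authors' implementation: tensors are written in Voigt notation, so the DTD mean is a 6-vector (a second-order tensor) and the DTD covariance a 6x6 matrix (a fourth-order tensor); the toy tensor distribution and the b = 1000 s/mm^2 spherical b-tensor are arbitrary choices for demonstration.

```python
import numpy as np

def voigt(D):
    """3x3 symmetric tensor -> 6-vector with sqrt(2) off-diagonal weighting,
    so that the Euclidean dot product of two Voigt vectors equals the full
    tensor inner product."""
    s = np.sqrt(2.0)
    return np.array([D[0, 0], D[1, 1], D[2, 2],
                     s * D[1, 2], s * D[0, 2], s * D[0, 1]])

# Toy DTD: axially symmetric tensors with random orientations (mm^2/s).
rng = np.random.default_rng(0)
d = np.array([voigt(0.3e-3 * np.eye(3) + 1.4e-3 * np.outer(v, v))
              for v in rng.normal(size=(500, 3))
              / np.linalg.norm(rng.normal(size=(500, 3)), axis=1, keepdims=True)[:0]]
             ) if False else None  # (kept simple below instead)

tensors = []
for _ in range(500):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    tensors.append(voigt(0.3e-3 * np.eye(3) + 1.4e-3 * np.outer(v, v)))
d = np.array(tensors)

d_mean = d.mean(axis=0)           # second-order mean tensor (6-vector)
d_cov = np.cov(d, rowvar=False)   # fourth-order covariance (6x6 matrix)

def signal(b_voigt, s0=1.0):
    """Second-order cumulant expansion of the dMRI signal for a b-tensor."""
    return s0 * np.exp(-b_voigt @ d_mean + 0.5 * b_voigt @ d_cov @ b_voigt)

# Rank-3 (spherical) b-tensor with trace b = 1000 s/mm^2:
B_sph = voigt((1000.0 / 3.0) * np.eye(3))
print(signal(B_sph))
```

For this distribution the tensor trace is constant, so the spherical b-tensor sees no variance contribution, illustrating how different b-tensor shapes probe different aspects of the DTD.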
We propose a unified Bayesian framework for detecting genetic variants associated with disease by exploiting image-based features as an intermediate phenotype. The use of imaging data for examining genetic associations promises new directions of analysis, but currently the most widely used methods make sub-optimal use of the richness that these data types can offer. Currently, image features are most commonly selected based on their relevance to the disease phenotype. Then, in a separate step, a set of genetic variants is identified to explain the selected features. In contrast, our method performs these tasks simultaneously in order to jointly exploit information in both data types. The analysis yields probabilistic measures of clinical relevance for both imaging and genetic markers. We derive an efficient approximate inference algorithm that handles the high dimensionality of image and genetic data. We evaluate the algorithm on synthetic data and demonstrate that it outperforms traditional models. We also illustrate our method on Alzheimer’s Disease Neuroimaging Initiative data.
The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort, which was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with international leadership and active developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and internationally funded efforts that expand the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision.
Diffusion MRI (dMRI) can provide invaluable information about the structure of different tissue types in the brain. Standard dMRI acquisitions facilitate a proper analysis (e.g. tracing) of medium-to-large white matter bundles. However, smaller fiber bundles connecting very small cortical or sub-cortical regions cannot be traced accurately in images with large voxel sizes. Yet, the ability to trace such fiber bundles is critical for several applications such as deep brain stimulation and neurosurgery. In this work, we propose a novel acquisition and reconstruction scheme for obtaining high spatial resolution dMRI images using multiple low resolution (LR) images, which is effective in reducing acquisition time while improving the signal-to-noise ratio (SNR). The proposed method, called compressed-sensing super-resolution reconstruction (CS-SRR), uses multiple overlapping thick-slice dMRI volumes that are under-sampled in q-space to reconstruct the diffusion signal with complex orientations. The method combines the twin concepts of compressed sensing and super-resolution to model the diffusion signal (at a given b-value) in a basis of spherical ridgelets, with total-variation (TV) regularization to account for signal correlation in neighboring voxels. A computationally efficient algorithm based on the alternating direction method of multipliers (ADMM) is introduced for solving the CS-SRR problem. The performance of the proposed method is quantitatively evaluated on several in-vivo human data sets, including a true SRR scenario. Our experimental results demonstrate that the proposed method can be used for reconstructing sub-millimeter super-resolution dMRI data with very good data fidelity in a clinically feasible acquisition time.
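The ADMM splitting used for the TV-regularized reconstruction can be illustrated on a toy problem. This is a minimal sketch under simplifying assumptions: a generic 1-D sensing matrix A stands in for the paper's slice-downsampling operator and spherical-ridgelet basis, and the regularization weights lam and rho are arbitrary. It alternates a least-squares x-update, a soft-thresholding z-update for the TV term, and a dual-variable update.

```python
import numpy as np

# Toy problem:  minimize  0.5*||A x - y||^2 + lam*||D x||_1
# where D is a first-difference (total-variation) operator.
rng = np.random.default_rng(1)
n = 60
x_true = np.zeros(n)
x_true[15:35] = 1.0                           # piecewise-constant signal
A = rng.normal(size=(40, n)) / np.sqrt(40)    # under-sampling operator
y = A @ x_true + 0.01 * rng.normal(size=40)

D = np.diff(np.eye(n), axis=0)                # (n-1) x n difference matrix
lam, rho = 0.02, 1.0

def soft(v, t):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z = np.zeros(n - 1)
u = np.zeros(n - 1)
M = np.linalg.inv(A.T @ A + rho * D.T @ D)    # cached x-update system
for _ in range(200):
    x = M @ (A.T @ y + rho * D.T @ (z - u))   # x-update (least squares)
    z = soft(D @ x + u, lam / rho)            # z-update (prox of TV term)
    u = u + D @ x - z                         # dual ascent

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Caching the inverse of the x-update system matrix is what makes each ADMM iteration cheap once the operators are fixed, which is the practical appeal of this splitting.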
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images that uses Gaussian mixtures and a probabilistic tissue atlas, employing expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach on two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find that the generative model designed for tumor lesions generalizes well to stroke images, and that the extended generative-discriminative model is among the top-ranking methods in the BRATS evaluation.
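The closed-form EM updates underlying this family of segmenters can be illustrated with a toy atlas-weighted Gaussian mixture. This is illustrative only: the latent lesion atlas and multi-channel aspects of the full model are omitted, and the flat per-voxel prior pi stands in for a real probabilistic atlas.

```python
import numpy as np

# Two Gaussian "tissue" classes; pi[v, k] plays the role of the atlas prior.
rng = np.random.default_rng(2)
K, n = 2, 400
labels = rng.random(n) < 0.5
y = np.where(labels, rng.normal(3.0, 0.5, n), rng.normal(0.0, 0.5, n))
pi = np.full((n, K), 0.5)            # flat "atlas" prior in this toy example

mu = np.array([-1.0, 4.0])           # deliberately poor initialization
var = np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of class k at each voxel.
    lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = pi * lik
    r /= r.sum(axis=1, keepdims=True)
    # M-step: atlas stays fixed; Gaussian parameters re-estimated in closed form.
    nk = r.sum(axis=0)
    mu = (r * y[:, None]).sum(axis=0) / nk
    var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / nk

seg = r.argmax(axis=1)               # hard label map from posteriors
print(np.round(np.sort(mu), 2))
```

In the full model the prior itself contains a latent lesion component that is estimated jointly with the Gaussian parameters, but the alternation between posterior computation and closed-form parameter updates has the same shape.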
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with clinical systems. Comparison with established outcomes and evaluation tasks motivates integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions.
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
We propose a method for the automated identification of key white matter fiber tracts for neurosurgical planning, and we apply the method in a retrospective study of 18 consecutive neurosurgical patients with brain tumors. Our method is designed to be relatively robust to challenges in neurosurgical tractography, which include peritumoral edema, displacement, and mass effect caused by mass lesions. The proposed method has two parts. First, we learn a data-driven white matter parcellation or fiber cluster atlas using groupwise registration and spectral clustering of multi-fiber tractography from healthy controls. Key fiber tract clusters are identified in the atlas. Next, patient-specific fiber tracts are automatically identified using tractography-based registration to the atlas and spectral embedding of patient tractography. Results indicate good generalization of the data-driven atlas to patients: 80% of the 800 fiber clusters were identified in all 18 patients, and 94% of the 800 fiber clusters were found in 16 or more of the 18 patients. Automated subject-specific tract identification was evaluated by quantitative comparison to subject-specific motor and language functional MRI, focusing on the arcuate fasciculus (language) and corticospinal tracts (motor), which were identified in all patients. Results indicate good colocalization: 89 of 95, or 94%, of patient-specific language and motor activations were intersected by the corresponding identified tract. All patient-specific activations were within 3 mm of the corresponding language or motor tract. Overall, our results indicate the potential of an automated method for identifying fiber tracts of interest for neurosurgical planning, even in patients with mass lesions.
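The spectral clustering step at the heart of the atlas construction can be sketched as follows. This is a hypothetical toy example: the pairwise fiber-affinity matrix W is synthetic block-structured data, whereas in the paper it is computed from tractography; the embedding uses the normalized graph Laplacian followed by a tiny k-means loop.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic affinity with two obvious "fiber bundles" (block structure).
n = 40
W = 0.05 * rng.random((n, n))
W[:20, :20] += 0.9
W[20:, 20:] += 0.9
W = (W + W.T) / 2                               # symmetrize

d = W.sum(axis=1)
L = np.eye(n) - W / np.sqrt(np.outer(d, d))     # normalized graph Laplacian
vals, vecs = np.linalg.eigh(L)                  # eigenvalues in ascending order
emb = vecs[:, :2]                               # 2-D spectral embedding

# Minimal k-means (k = 2) on the embedded points; init from two data points.
c = emb[[0, n - 1]]
for _ in range(20):
    lab = np.argmin(((emb[:, None] - c) ** 2).sum(-1), axis=1)
    c = np.array([emb[lab == k].mean(axis=0) for k in range(2)])
print(lab)
```

The two blocks of W land in two well-separated groups in the embedding, so the k-means labels recover the bundle structure; the atlas construction applies the same idea to many fibers and many more clusters.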
Disentangling tissue microstructural information from diffusion magnetic resonance imaging (dMRI) measurements is important for extracting brain-tissue-specific measures. The autocorrelation function of diffusing spins is key to understanding the relation between dMRI signals and the acquisition gradient sequences. In this paper, we demonstrate that the autocorrelation of diffusion in restricted or bounded spaces can be well approximated by exponential functions. To this end, we propose to use the multivariate Ornstein-Uhlenbeck (OU) process to model the matrix-valued exponential autocorrelation function of three-dimensional diffusion processes with bounded trajectories. We present a detailed analysis of the relation between the model parameters and the time-dependent apparent axon radius, and provide a general model for dMRI signals from the frequency-domain perspective. For our experimental setup, we model the diffusion signal as a mixture of two compartments corresponding to diffusing spins with bounded and unbounded trajectories, and analyze the corpus callosum in an ex-vivo data set of a monkey brain.
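The exponential-autocorrelation property that motivates the OU model can be illustrated with a scalar process. This is a 1-D sketch (the paper's model is the multivariate, matrix-valued version): we simulate the exact AR(1) discretization of an OU process and compare the empirical autocovariance at one lag with the closed form C(t) = (sigma^2 / (2*theta)) * exp(-theta*|t|).

```python
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, dt, n = 2.0, 1.0, 0.01, 200_000

a = np.exp(-theta * dt)                          # exact AR(1) coefficient
s = sigma * np.sqrt((1 - a * a) / (2 * theta))   # matching innovation scale
x = np.empty(n)
x[0] = rng.normal(0, sigma / np.sqrt(2 * theta)) # start in stationarity
for i in range(1, n):
    x[i] = a * x[i - 1] + s * rng.normal()

lag = 50                                         # lag of 0.5 time units
emp = np.mean(x[:-lag] * x[lag:])                # empirical autocovariance
theory = sigma**2 / (2 * theta) * np.exp(-theta * lag * dt)
print(emp, theory)
```

The empirical and theoretical values agree closely, which is the scalar analogue of the exponential, matrix-valued autocorrelation that the multivariate OU process contributes to the dMRI signal model.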