Diffusion MRI (dMRI) can provide invaluable information about the structure of different tissue types in the brain. Standard dMRI acquisitions facilitate a proper analysis (e.g. tracing) of medium-to-large white matter bundles. However, smaller fiber bundles connecting very small cortical or sub-cortical regions cannot be traced accurately in images with large voxel sizes. Yet, the ability to trace such fiber bundles is critical for several applications, such as deep brain stimulation and neurosurgery. In this work, we propose a novel acquisition and reconstruction scheme for obtaining high-spatial-resolution dMRI images from multiple low-resolution (LR) images, which is effective in reducing acquisition time while improving the signal-to-noise ratio (SNR). The proposed method, called compressed-sensing super-resolution reconstruction (CS-SRR), uses multiple overlapping thick-slice dMRI volumes that are under-sampled in q-space to reconstruct the diffusion signal with complex orientations. It combines the twin concepts of compressed sensing and super resolution to model the diffusion signal (at a given b-value) in a basis of spherical ridgelets, with total-variation (TV) regularization to account for signal correlation in neighboring voxels. A computationally efficient algorithm based on the alternating direction method of multipliers (ADMM) is introduced for solving the CS-SRR problem. The performance of the proposed method is quantitatively evaluated on several in-vivo human data sets, including a true SRR scenario. Our experimental results demonstrate that the proposed method can be used to reconstruct sub-millimeter super-resolution dMRI data with very good data fidelity in clinically feasible acquisition time.
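The ADMM splitting used for the CS-SRR objective can be illustrated on a toy problem. The sketch below is an assumption for illustration, not the paper's implementation: it solves a 1-D TV-regularized least-squares problem, min_x 0.5‖Ax − b‖² + λ‖Dx‖₁, by alternating a linear solve, a soft-thresholding step, and a dual update — the same pattern ADMM applies to the full objective, minus the spherical-ridgelet basis and the 3-D thick-slice acquisition model.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_tv(b, A, lam=1.0, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||Dx||_1,
    where D is the first-difference (1-D total variation) operator."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) difference operator
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    lhs = A.T @ A + rho * D.T @ D           # fixed system matrix for the x-update
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, A.T @ b + rho * D.T @ (z - u))  # x-update
        z = soft_threshold(D @ x + u, lam / rho)                 # z-update
        u = u + D @ x - z                                        # dual update
    return x
```

On a noisy piecewise-constant signal with A set to the identity, this reduces to TV denoising and recovers the flat segments while keeping the jump.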
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with established outcomes and evaluation tasks motivates integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions.
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
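One processing step mentioned above, SUV normalization, reduces to simple arithmetic once the pixel activity concentration, injected dose, and patient weight have been read from the DICOM headers. The sketch below shows the standard body-weight SUV formula with radioactive-decay correction of the dose; the function name and interface are illustrative, not part of the toolkit described here.

```python
F18_HALF_LIFE_S = 6586.2  # F-18 half-life in seconds (109.77 min)

def suv_bw(pixel_bq_ml, injected_dose_bq, patient_weight_kg,
           delay_s=0.0, half_life_s=F18_HALF_LIFE_S):
    """Body-weight SUV: tissue activity concentration (Bq/mL) divided by
    the decay-corrected injected dose per gram of body weight,
    assuming a tissue density of 1 g/mL."""
    decayed_dose = injected_dose_bq * 0.5 ** (delay_s / half_life_s)
    return pixel_bq_ml / (decayed_dose / (patient_weight_kg * 1000.0))
```

For example, a voxel at 5000 Bq/mL in a 74 kg patient injected with 370 MBq yields an SUV of 1.0 with no decay delay; after one half-life the corrected dose halves, so the same voxel reads SUV 2.0.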
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images. The model generalizes the EM segmenter, a common approach for modelling brain images that combines Gaussian mixtures with a probabilistic tissue atlas and employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts the latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach on two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find that the generative model designed for tumor lesions generalizes well to stroke images, and that the extended generative-discriminative model is among the top-ranking methods in the BRATS evaluation.
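To make the atlas-based EM idea concrete, here is a minimal 1-D sketch (a hypothetical two-class illustration, not the paper's multi-channel lesion model): each voxel carries its own class prior from an "atlas", the E-step computes class posteriors under that prior, and the M-step applies the closed-form Gaussian parameter updates.

```python
import numpy as np

def em_gmm_atlas(x, prior, n_iter=50):
    """EM for a 2-class Gaussian mixture in which each voxel has its own
    class prior (an 'atlas'), as in atlas-based EM segmentation.
    x: (n,) voxel intensities; prior: (n, 2) per-voxel class probabilities."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    var = np.full(2, x.var())
    for _ in range(n_iter):
        # E-step: class responsibilities = atlas prior * Gaussian likelihood
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = prior * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: closed-form Gaussian updates weighted by responsibilities
        nk = post.sum(axis=0)
        mu = (post * x[:, None]).sum(axis=0) / nk
        var = (post * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, post
```

With a flat prior this reduces to ordinary GMM fitting; a spatially varying prior biases ambiguous voxels toward the atlas-preferred class, which is the mechanism the latent lesion atlas exploits.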
We propose a method for the automated identification of key white matter fiber tracts for neurosurgical planning, and we apply the method in a retrospective study of 18 consecutive neurosurgical patients with brain tumors. Our method is designed to be relatively robust to challenges in neurosurgical tractography, which include peritumoral edema, displacement, and mass effect caused by mass lesions. The proposed method has two parts. First, we learn a data-driven white matter parcellation or fiber cluster atlas using groupwise registration and spectral clustering of multi-fiber tractography from healthy controls. Key fiber tract clusters are identified in the atlas. Next, patient-specific fiber tracts are automatically identified using tractography-based registration to the atlas and spectral embedding of patient tractography. Results indicate good generalization of the data-driven atlas to patients: 80% of the 800 fiber clusters were identified in all 18 patients, and 94% of the 800 fiber clusters were found in 16 or more of the 18 patients. Automated subject-specific tract identification was evaluated by quantitative comparison to subject-specific motor and language functional MRI, focusing on the arcuate fasciculus (language) and corticospinal tracts (motor), which were identified in all patients. Results indicate good colocalization: 89 of 95, or 94%, of patient-specific language and motor activations were intersected by the corresponding identified tract. All patient-specific activations were within 3 mm of the corresponding language or motor tract. Overall, our results indicate the potential of an automated method for identifying fiber tracts of interest for neurosurgical planning, even in patients with mass lesions.
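The spectral machinery in the pipeline above can be sketched in a few lines. The toy below is an illustration under simplified assumptions — a mean closest-point distance affinity and a two-way split by eigenvector sign — not the paper's full groupwise atlas construction: fibers are embedded via the normalized affinity matrix and bipartitioned.

```python
import numpy as np

def fiber_affinity(fa, fb, sigma=5.0):
    """Gaussian kernel on the mean closest-point distance between two
    fibers (each an (n_points, dim) polyline), a common tractography
    similarity used here as a simplified stand-in."""
    d = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=2)
    mcp = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    return np.exp(-mcp ** 2 / sigma ** 2)

def spectral_cluster_fibers(fibers, sigma=5.0):
    """Two-way spectral clustering of fibers from the normalized affinity."""
    n = len(fibers)
    A = np.array([[fiber_affinity(fibers[i], fibers[j], sigma)
                   for j in range(n)] for i in range(n)])
    d = A.sum(axis=1)
    L = A / np.sqrt(np.outer(d, d))      # normalized affinity D^-1/2 A D^-1/2
    w, v = np.linalg.eigh(L)
    # the sign of the second-largest eigenvector bipartitions the fibers
    return (v[:, -2] > 0).astype(int)
```

Two well-separated synthetic bundles produce a nearly block-diagonal affinity matrix, and the second eigenvector cleanly separates them; the paper's method extends this embedding across subjects via the groupwise atlas.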
Disentangling tissue microstructural information from diffusion magnetic resonance imaging (dMRI) measurements is important for extracting brain-tissue-specific measures. The autocorrelation function of diffusing spins is key to understanding the relation between dMRI signals and the acquisition gradient sequences. In this paper, we demonstrate that the autocorrelation of diffusion in restricted or bounded spaces can be well approximated by exponential functions. To this end, we propose to use the multivariate Ornstein-Uhlenbeck (OU) process to model the matrix-valued exponential autocorrelation function of three-dimensional diffusion processes with bounded trajectories. We present a detailed analysis of the relation between the model parameters and the time-dependent apparent axon radius, and provide a general model for dMRI signals from the frequency-domain perspective. For our experimental setup, we model the diffusion signal as a mixture of two compartments corresponding to diffusing spins with bounded and unbounded trajectories, and analyze the corpus callosum in an ex-vivo data set of a monkey brain.
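The exponential form of the OU autocorrelation is easy to verify numerically. The sketch below is a 1-D illustration, not the paper's matrix-valued three-dimensional model: it samples an OU process exactly from its stationary distribution, for which the stationary autocorrelation is Corr(X_s, X_{s+τ}) = e^(−θτ), and that prediction can be checked against the empirical estimate.

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n_steps, n_paths, rng):
    """Exact sampling of a 1-D Ornstein-Uhlenbeck process
    dX = -theta X dt + sigma dW, started from its stationary
    distribution N(0, sigma^2 / (2 theta))."""
    stat_var = sigma ** 2 / (2 * theta)
    x = np.empty((n_paths, n_steps + 1))
    x[:, 0] = rng.normal(0.0, np.sqrt(stat_var), n_paths)
    a = np.exp(-theta * dt)                       # exact one-step decay factor
    noise_sd = np.sqrt(stat_var * (1 - a ** 2))   # exact transition noise
    for t in range(n_steps):
        x[:, t + 1] = a * x[:, t] + rng.normal(0.0, noise_sd, n_paths)
    return x

# Stationary autocovariance of the OU process is exponential:
# Cov(X_s, X_{s+tau}) = sigma^2 / (2 theta) * exp(-theta * tau)
```

With many independent paths, the empirical correlation at lag τ matches e^(−θτ) to within Monte Carlo error, which is the exponential behavior the paper exploits for diffusion in bounded spaces.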
We present a robust method to correct for motion and deformations in in-utero volumetric MRI time series. Spatio-temporal analysis of dynamic MRI requires robust alignment across time in the presence of substantial and unpredictable motion. We make a Markov assumption on the nature of deformations to take advantage of the temporal structure in the image data. Forward message passing in the corresponding hidden Markov model (HMM) yields an estimation algorithm that only has to account for the relatively small motion between consecutive frames. We demonstrate the utility of the temporal model by showing that its use improves the accuracy of segmentation propagation through temporal registration. Our results suggest that the proposed model accurately captures the temporal dynamics of deformations in in-utero MRI time series.
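The benefit of the Markov assumption — estimating only frame-to-frame motion and composing it, rather than registering each frame directly to a distant reference — can be shown with a 1-D toy. The sketch below is hypothetical, using integer circular translations in place of the paper's deformations:

```python
import numpy as np

def estimate_shift(a, b):
    """Signed circular shift s such that np.roll(b, s) best matches a,
    found by FFT-based cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    k, n = int(np.argmax(corr)), len(a)
    return k if k <= n // 2 else k - n

def sequential_align(frames):
    """Markov-style alignment: estimate only consecutive-frame motion and
    compose it, so each estimation step involves a small displacement."""
    total, out = 0, [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        total += estimate_shift(prev, cur)   # message passed forward in time
        out.append(np.roll(cur, total))      # composed transform to frame 0
    return out
```

Each pairwise estimate handles only a small displacement, yet the composition aligns every frame to the first one — the same structure the HMM forward pass gives the deformation estimates.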
Registration of multiple 3D ultrasound sectors in order to provide an extended field of view is important for the appreciation of larger anatomical structures at high spatial and temporal resolution. In this paper, we present a method for fully automatic spatio-temporal registration between two partially overlapping 3D ultrasound sequences. The temporal alignment is solved by aligning the normalized cross correlation-over-time curves of the sequences. For the spatial alignment, corresponding 3D Scale Invariant Feature Transform (SIFT) features are extracted from all frames of both sequences independently of the temporal alignment. A rigid transform is then calculated by least squares minimization in combination with random sample consensus. The method is applied to 16 echocardiographic sequences of the left and right ventricles and evaluated against manually annotated temporal events and spatial anatomical landmarks. The distances between manually identified landmarks in the left and right ventricles after automatic registration were (mean ± SD) 4.3 ± 1.2 mm, compared to a reference error of 2.8 ± 0.6 mm with manual registration. For the temporal alignment, the absolute errors in valvular event times were 14.4 ± 11.6 ms for Aortic Valve (AV) opening, 18.6 ± 16.0 ms for AV closing, and 34.6 ± 26.4 ms for mitral valve opening, compared to a mean inter-frame time of 29 ms.
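The least-squares rigid estimation step has a well-known closed form (the Kabsch/Procrustes solution). The sketch below shows that inner solve for 3-D point correspondences; in the method above it would sit inside a RANSAC loop over matched SIFT features, which is omitted here:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    3-D src points onto dst: the Kabsch/Procrustes closed-form solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Given noiseless correspondences related by a known rotation and translation, the solver recovers both exactly; with RANSAC around it, outlier feature matches are rejected before this solve is applied to the inlier set.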