Hengameh Mirzaalian, Lipeng Ning, Peter Savadjiev, Ofer Pasternak, Sylvain Bouix, Oleg Michailovich, G Grant, CE Marx, RA Morey, LA Flashman, MS George, TW McAllister, N Andaluz, L Shutter, R Coimbra, Ross Zafonte, Michael J Coleman, Marek Kubicki, Carl-Fredrik Westin, M.B. Stein, Martha E Shenton, and Yogesh Rathi. 7/2016. “Inter-site and Inter-scanner Diffusion MRI Data Harmonization.” Neuroimage, 135, pp. 311-23.
We propose a novel method to harmonize diffusion MRI data acquired from multiple sites and scanners, which is imperative for joint analysis of the data to significantly increase the sample size and statistical power of neuroimaging studies. Our method incorporates the following main novelties: i) we take into account the scanner-dependent spatial variability of the diffusion signal in different parts of the brain; ii) our method is independent of compartmental modeling of diffusion (e.g., tensor or intra-/extracellular compartments), and the acquired signal itself is corrected for scanner-related differences; and iii) inter-subject variability, as measured by the coefficient of variation, is maintained at each site. We represent the signal in a basis of spherical harmonics and compute several rotation-invariant spherical harmonic features to estimate a region- and tissue-specific linear mapping between the signal from different sites (and scanners). We validate our method on diffusion data acquired from seven different sites (including two GE, three Philips, and two Siemens scanners) on a group of age-matched healthy subjects. Since the extracted rotation-invariant spherical harmonic features depend on the accuracy of the brain parcellation provided by FreeSurfer, we propose a feature-based refinement of the original parcellation such that it better characterizes the anatomy and provides robust linear mappings to harmonize the dMRI data. We demonstrate the efficacy of our method by statistically comparing diffusion measures such as fractional anisotropy, mean diffusivity, and generalized fractional anisotropy across multiple sites before and after data harmonization. We also show results using tract-based spatial statistics before and after harmonization for independent validation of the proposed methodology.
Our experimental results demonstrate that, for nearly identical acquisition protocol across sites, scanner-specific differences can be accurately removed using the proposed method.
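In the common formulation, rotation-invariant spherical harmonic (RISH) features are the per-order energies of the SH coefficients, and harmonization applies a per-order multiplicative map between site and reference energies. A minimal numpy sketch under that assumption (function and variable names are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def rish_features(sh_coeffs, orders):
    """Rotation-invariant energy of SH coefficients, one value per order l.
    sh_coeffs: (n_coeff,) coefficient vector; orders: (n_coeff,) l of each."""
    return np.array([np.sum(sh_coeffs[orders == l] ** 2)
                     for l in np.unique(orders)])

def harmonization_scale(e_ref, e_site, eps=1e-12):
    """Per-order multiplicative map taking site energies to reference energies."""
    return np.sqrt(e_ref / (e_site + eps))

# toy example: one order-0 coefficient and five order-2 coefficients
orders = np.array([0, 2, 2, 2, 2, 2])
c_site = np.array([2.0, 0.5, -0.5, 0.5, -0.5, 0.5])
e_site = rish_features(c_site, orders)   # energies [4.0, 1.25]
```

The per-order scale factors are then applied to the site's SH coefficients region by region, which is what keeps the correction independent of any compartment model.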
We propose a unified Bayesian framework for detecting genetic variants associated with disease by exploiting image-based features as an intermediate phenotype. The use of imaging data for examining genetic associations promises new directions of analysis, but currently the most widely used methods make sub-optimal use of the richness that these data types can offer. Currently, image features are most commonly selected based on their relevance to the disease phenotype. Then, in a separate step, a set of genetic variants is identified to explain the selected features. In contrast, our method performs these tasks simultaneously in order to jointly exploit information in both data types. The analysis yields probabilistic measures of clinical relevance for both imaging and genetic markers. We derive an efficient approximate inference algorithm that handles the high dimensionality of image and genetic data. We evaluate the algorithm on synthetic data and demonstrate that it outperforms traditional models. We also illustrate our method on Alzheimer's Disease Neuroimaging Initiative data.
This work describes a new diffusion MR framework for imaging and modeling of microstructure that we call q-space trajectory imaging (QTI). The QTI framework consists of two parts: encoding and modeling. First we propose q-space trajectory encoding, which uses time-varying gradients to probe a trajectory in q-space, in contrast to traditional pulsed field gradient sequences that attempt to probe a point in q-space. Then we propose a microstructure model, the diffusion tensor distribution (DTD) model, which takes advantage of additional information provided by QTI to estimate a distributional model over diffusion tensors. We show that the QTI framework enables microstructure modeling that is not possible with the traditional pulsed gradient encoding as introduced by Stejskal and Tanner. In our analysis of QTI, we find that the well-known scalar b-value naturally extends to a tensor-valued entity, i.e., a diffusion measurement tensor, which we call the b-tensor. We show that b-tensors of rank 2 or 3 enable estimation of the mean and covariance of the DTD model in terms of a second order tensor (the diffusion tensor) and a fourth order tensor. The QTI framework has been designed to improve discrimination of the sizes, shapes, and orientations of diffusion microenvironments within tissue. We derive rotationally invariant scalar quantities describing intuitive microstructural features including size, shape, and orientation coherence measures. To demonstrate the feasibility of QTI on a clinical scanner, we performed a small pilot study comparing a group of five healthy controls with five patients with schizophrenia. The parameter maps derived from QTI were compared between the groups, and 9 out of the 14 parameters investigated showed differences between groups. 
The ability to measure and model the distribution of diffusion tensors, rather than a quantity that has already been averaged within a voxel, has the potential to provide a powerful paradigm for the study of complex tissue architecture.
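The b-tensor described above follows directly from the gradient waveform: with q(t) = γ∫₀ᵗ g(t′)dt′, the measurement tensor is B = ∫ q(t)q(t)ᵀ dt, and a conventional pulsed (Stejskal-Tanner-like) pair probing a single q-space direction yields a rank-1 b-tensor whose trace is the scalar b-value. A discretized numpy sketch (the waveform and names are our illustration):

```python
import numpy as np

GAMMA = 2.6752218744e8  # 1H gyromagnetic ratio, rad s^-1 T^-1

def b_tensor(gradient, dt):
    """B = integral of q(t) q(t)^T dt, with q(t) = gamma * integral of g.
    gradient: (n, 3) waveform in T/m; dt: time step in s."""
    q = GAMMA * np.cumsum(gradient, axis=0) * dt   # (n, 3), rad/m
    return np.einsum('ti,tj->ij', q, q) * dt       # (3, 3), s/m^2

# a Stejskal-Tanner-like bipolar pulse along x gives a rank-1 b-tensor
g = np.zeros((1000, 3))
g[:400, 0] = 0.08
g[600:, 0] = -0.08
B = b_tensor(g, 1e-5)
b_value = np.trace(B)  # conventional scalar b-value
```

Time-varying (q-space trajectory) waveforms with components along several axes instead produce rank-2 or rank-3 b-tensors, which is what enables estimation of the DTD covariance.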
The Surgical Planning Laboratory at Brigham and Women's Hospital, Harvard Medical School, developed the SPL Ear Atlas. The atlas was derived from a high-resolution flat-panel computed tomography (CT) scan (approx. 140 µm high-contrast resolution), using semi-automated image segmentation and three-dimensional reconstruction techniques [Gupta, Bartling, et al. AJNR Am J Neuroradiol. 2004.]. The current version consists of: 1. the original CT scan; 2. a set of detailed label maps; 3. a set of three-dimensional models of the labeled anatomical structures; 4. an MRB (Medical Reality Bundle) archive containing the MRML scene file and all data, for display in 3D Slicer version 4.0 or greater; 5. several pre-defined 3D views (“anatomy teaching files”). The SPL Ear Atlas provides important reference information for surgical planning, anatomy teaching, and template-driven segmentation. Visualization of the data requires 3D Slicer, which is freely available for download. We are pleased to make this atlas available to our colleagues for free download. Please note that the data is distributed under the Slicer license. By downloading these data, you agree to acknowledge our contribution in any publications that result from the use of this atlas. This work is funded as part of the Neuroimaging Analysis Center, grant number P41 RR013218 from the NIH's National Center for Research Resources (NCRR) and grant number P41 EB015902 from the NIH's National Institute of Biomedical Imaging and Bioengineering (NIBIB), and by a Google Faculty Research Award.
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
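The SUV normalization step mentioned in the methods can be illustrated with the standard body-weight formulation, in which the injected dose is decay-corrected to the scan start (the function, the ~6588 s 18F half-life, and the delay handling are a generic sketch, not the toolkit's API):

```python
import math

def suv_bw(pixel_bq_per_ml, weight_kg, injected_dose_bq,
           half_life_s, delay_s):
    """Body-weight SUV: decay-correct the injected dose to scan start,
    then normalize the activity concentration by dose per gram."""
    dose_at_scan = injected_dose_bq * math.exp(
        -math.log(2) * delay_s / half_life_s)
    return pixel_bq_per_ml * (weight_kg * 1000.0) / dose_at_scan

# 18F (half-life ~6588 s), 70 kg patient, 370 MBq injected 3600 s before scan
suv = suv_bw(5000.0, 70.0, 370e6, 6588.0, 3600.0)
```

In the DICOM representation, the coefficients of exactly such a mapping are what a Real World Value Mapping object carries alongside the image data.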
The ability to detect neuronal currents with high spatiotemporal resolution using magnetic resonance imaging (MRI) is important for studying human brain function in both health and disease. While significant progress has been made, we still lack evidence showing that it is possible to measure an MR signal time-locked to neuronal currents with a temporal waveform matching concurrently recorded local field potentials (LFPs). Also lacking is evidence that such MR data can be used to image current distribution in active tissue. Since these two results are lacking even in vitro, we obtained these data in an intact isolated whole cerebellum of turtle during slow neuronal activity mediated by metabotropic glutamate receptors using a gradient-echo EPI sequence (TR=100ms) at 4.7T. Our results show that it is possible (1) to reliably detect an MR phase shift time course matching that of the concurrently measured LFP evoked by stimulation of a cerebellar peduncle, (2) to detect the signal in single voxels of 0.1mm3, (3) to determine the spatial phase map matching the magnetic field distribution predicted by the LFP map, (4) to estimate the distribution of neuronal current in the active tissue from a group-average phase map, and (5) to provide a quantitatively accurate theoretical account of the measured phase shifts. The peak values of the detected MR phase shifts were 0.27-0.37°, corresponding to local magnetic field changes of 0.67-0.93nT (for TE=26ms). Our work provides an empirical basis for future extensions to in vivo imaging of neuronal currents.
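The reported numbers are mutually consistent under the basic relation Δφ = γ·ΔB·TE: a 0.67-0.93 nT field change over TE = 26 ms gives a phase shift of roughly 0.27-0.37°. A quick check:

```python
import math

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def phase_shift_deg(delta_b_tesla, te_s):
    """Phase accrued by a static local field offset delta-B over the echo time."""
    return math.degrees(GAMMA * delta_b_tesla * te_s)

low = phase_shift_deg(0.67e-9, 26e-3)   # lower end of the reported range
high = phase_shift_deg(0.93e-9, 26e-3)  # upper end of the reported range
```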
Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of the low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated, and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.
Quantifying the systemic risk and fragility of financial systems is of vital importance in analyzing market efficiency, deciding on portfolio allocation, and containing financial contagions. At a high level, financial systems may be represented as weighted graphs that characterize the complex web of interacting agents and information flow (for example, debt, stock returns, and shareholder ownership). Such a representation often turns out to provide keen insights. We show that fragility is a system-level characteristic of "business-as-usual" market behavior and that financial crashes are invariably preceded by system-level changes in robustness. This was done by leveraging previous work, which suggests that Ricci curvature, a key geometric feature of a given network, is negatively correlated with network fragility. To illustrate this insight, we examine daily returns from a set of stocks comprising the Standard and Poor's 500 (S&P 500) over a 15-year span to highlight the fact that corresponding changes in Ricci curvature constitute a financial "crash hallmark." This work lays the foundation of understanding how to design (banking) systems and policy regulations in a manner that can combat financial instabilities exposed during the 2007-2008 crisis.
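The first step, representing returns as a weighted graph, can be sketched with a thresholded correlation network (the threshold and construction are a generic illustration, not the paper's exact pipeline; curvature is then computed on such a graph):

```python
import numpy as np

def correlation_graph(returns, threshold=0.5):
    """Weighted adjacency from daily returns: keep |correlation| edges
    above a threshold, zero the diagonal. returns: (n_days, n_stocks)."""
    corr = np.corrcoef(returns.T)               # stock-by-stock correlations
    adj = np.abs(corr) * (np.abs(corr) >= threshold)
    np.fill_diagonal(adj, 0.0)
    return adj

# toy data: stock b moves with a, stock c is anti-correlated but weakly
a = np.array([1.0, 2.0, 3.0, 4.0])
b = 2.0 * a                                     # perfectly correlated with a
c = np.array([1.0, -1.0, 1.0, -1.0])            # |corr with a| ~ 0.45
adj = correlation_graph(np.column_stack([a, b, c]))
```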
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach on two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find that the generative model designed for tumor lesions generalizes well to stroke images, and that the extended generative-discriminative model is one of the top-ranking methods in the BRATS evaluation.
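The EM segmenter at the core of the model alternates soft assignment (E-step) and parameter re-estimation (M-step); a minimal 1D two-component sketch without the spatial atlas prior, purely illustrative of the mechanism:

```python
import numpy as np

def em_gmm_1d(x, mu, var, pi, n_iter=25):
    """EM for a two-component 1D Gaussian mixture (the core of an EM
    segmenter, without the probabilistic atlas prior over labels)."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
              / np.sqrt(2.0 * np.pi * var)
        r = lik * pi
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and mixing weights
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        pi = n_k / len(x)
    return mu, var, pi

# two well-separated intensity clusters around 0 and 5
mu, var, pi = em_gmm_1d(np.array([0.0, 0.1, -0.1, 5.0, 5.1, 4.9]),
                        mu=np.array([0.5, 4.0]),
                        var=np.array([1.0, 1.0]),
                        pi=np.array([0.5, 0.5]))
```

In the full model, the atlas (and the latent lesion atlas) enters as a spatially varying prior replacing the scalar mixing weights.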
Disentangling the tissue microstructural information from the diffusion magnetic resonance imaging (dMRI) measurements is quite important for extracting brain tissue specific measures. The autocorrelation function of diffusing spins is key for understanding the relation between dMRI signals and the acquisition gradient sequences. In this paper, we demonstrate that the autocorrelation of diffusion in restricted or bounded spaces can be well approximated by exponential functions. To this end, we propose to use the multivariate Ornstein-Uhlenbeck (OU) process to model the matrix-valued exponential autocorrelation function of three-dimensional diffusion processes with bounded trajectories. We present detailed analysis on the relation between the model parameters and the time-dependent apparent axon radius and provide a general model for dMRI signals from the frequency domain perspective. For our experimental setup, we model the diffusion signal as a mixture of two compartments that correspond to diffusing spins with bounded and unbounded trajectories, and analyze the corpus-callosum in an ex-vivo data set of a monkey brain.
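For reference, the stationary OU process has an exponential autocorrelation, exp(-θ|τ|), which is the scalar analogue of the matrix-valued form used above; a small sketch with an Euler-Maruyama simulator (names and discretization are ours):

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, rng):
    """Euler-Maruyama simulation of dX = -theta * X dt + sigma dW."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] - theta * x[i - 1] * dt \
               + sigma * np.sqrt(dt) * rng.normal()
    return x

def ou_autocorr(theta, tau):
    """Stationary autocorrelation of the OU process: exp(-theta * |tau|)."""
    return np.exp(-theta * np.abs(tau))

traj = simulate_ou(1.0, 1.0, 0.01, 50, np.random.default_rng(0))
```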
Extracting nuclei is one of the most actively studied topics in digital pathology research. Most studies search for the nuclei (or seeds for the nuclei) directly at the finest resolution available. While such approaches utilize the richest information available, they sometimes struggle to address the heterogeneity of nuclei in different tissues. In this work, we propose a hierarchical approach that starts at a lower resolution level and adaptively adjusts its parameters while progressing to finer and finer resolutions. The algorithm is tested on brain and lung cancer images from The Cancer Genome Atlas data set.
Digital histopathological images provide detailed spatial information about the tissue at micrometer resolution. Among the content available in pathology images, meso-scale information, such as gland morphology, texture, and distribution, provides useful diagnostic features. In this work, focusing on colorectal cancer tissue samples, we propose a multi-scale learning based segmentation scheme for the glands in colorectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images at various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform classification and segmentation on new images.
Registration of multiple 3D ultrasound sectors in order to provide an extended field of view is important for the appreciation of larger anatomical structures at high spatial and temporal resolution. In this paper, we present a method for fully automatic spatio-temporal registration between two partially overlapping 3D ultrasound sequences. The temporal alignment is solved by aligning the normalized cross correlation-over-time curves of the sequences. For the spatial alignment, corresponding 3D Scale Invariant Feature Transform (SIFT) features are extracted from all frames of both sequences independently of the temporal alignment. A rigid transform is then calculated by least squares minimization in combination with random sample consensus. The method is applied to 16 echocardiographic sequences of the left and right ventricles and evaluated against manually annotated temporal events and spatial anatomical landmarks. The mean distances between manually identified landmarks in the left and right ventricles after automatic registration were (mean ± SD) 4.3 ± 1.2 mm compared to a reference error of 2.8 ± 0.6 mm with manual registration. For the temporal alignment, the absolute errors in valvular event times were 14.4 ± 11.6 ms for Aortic Valve (AV) opening, 18.6 ± 16.0 ms for AV closing, and 34.6 ± 26.4 ms for mitral valve opening, compared to a mean inter-frame time of 29 ms.
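The least-squares rigid fit on matched SIFT correspondences has a closed-form solution (the Kabsch/orthogonal-Procrustes construction; this sketch omits the RANSAC outer loop that rejects outlier matches):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.
    src, dst: (n, 3) matched 3D point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # sign correction so R is a proper rotation (det = +1), not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In a RANSAC setting, this fit is repeated on random minimal subsets of correspondences and the transform with the largest inlier consensus is kept.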
Diffusion MRI (dMRI) can provide invaluable information about the structure of different tissue types in the brain. Standard dMRI acquisitions facilitate a proper analysis (e.g. tracing) of medium-to-large white matter bundles. However, smaller fiber bundles connecting very small cortical or sub-cortical regions cannot be traced accurately in images with large voxel sizes. Yet, the ability to trace such fiber bundles is critical for several applications such as deep brain stimulation and neurosurgery. In this work, we propose a novel acquisition and reconstruction scheme for obtaining high spatial resolution dMRI images using multiple low resolution (LR) images, which is effective in reducing acquisition time while improving the signal-to-noise ratio (SNR). The proposed method, called compressed-sensing super-resolution reconstruction (CS-SRR), uses multiple overlapping thick-slice dMRI volumes that are under-sampled in q-space to reconstruct diffusion signal with complex orientations. The proposed method combines the twin concepts of compressed sensing and super-resolution to model the diffusion signal (at a given b-value) in a basis of spherical ridgelets with total-variation (TV) regularization to account for signal correlation in neighboring voxels. A computationally efficient algorithm based on the alternating direction method of multipliers (ADMM) is introduced for solving the CS-SRR problem. The performance of the proposed method is quantitatively evaluated on several in-vivo human data sets including a true SRR scenario. Our experimental results demonstrate that the proposed method can be used for reconstructing sub-millimeter super-resolution dMRI data with very good data fidelity in clinically feasible acquisition time.
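The ADMM splitting used for CS-SRR can be illustrated in the much simpler setting of 1D total-variation denoising, min ½‖x−y‖² + λ‖Dx‖₁: a quadratic x-update, a soft-thresholding z-update, and a dual update. This is a generic sketch of the splitting, not the paper's spherical-ridgelet solver:

```python
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=300):
    """ADMM for min 0.5*||x - y||^2 + lam*||D x||_1, D = first differences."""
    n = len(y)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # (n-1, n) difference operator
    A = np.eye(n) + rho * D.T @ D                # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                          # scaled dual variable
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))   # quadratic solve
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # shrinkage
        u += D @ x - z                           # dual ascent
    return x

# a step signal: TV regularization shrinks the jump but keeps it piecewise flat
x = tv_denoise_admm(np.array([0.0, 0.0, 0.0, 4.0, 4.0, 4.0]), lam=1.0)
```

In the full CS-SRR problem the quadratic step instead involves the slice-acquisition operator and the spherical-ridgelet dictionary, but the alternation has the same shape.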
Research on staging of the pre-symptomatic and prodromal phases of neurological disorders, e.g., Alzheimer's disease (AD), is essential for the prevention of dementia. New strategies for AD staging with a focus on early detection are needed to optimize the potential efficacy of disease-modifying therapies that can halt or slow disease progression. Recently, neuroimaging markers are increasingly used as additional research-based markers to detect AD onset and predict conversion of mild cognitive impairment (MCI) and normal control (NC) subjects to AD. Researchers have proposed a variety of neuroimaging biomarkers to characterize the patterns of AD and MCI pathology, and have suggested that multi-view neuroimaging biomarkers could lead to better performance than single-view biomarkers in AD staging. However, it is still unclear what leads to such synergy and how to preserve or maximize it. In an attempt to answer these questions, we propose a cross-view pattern analysis framework for investigating the synergy between different neuroimaging biomarkers. We quantitatively analyzed nine types of biomarkers derived from FDG-PET and T1-MRI, and evaluated their performance in a task of classifying AD, MCI, and NC subjects from the ADNI baseline cohort. The experimental results showed that these biomarkers depict the pathology of AD from different perspectives, and output distinct patterns that are significantly associated with disease progression. Most importantly, we found that these features can be separated into clusters, each depicting a particular aspect, and that inter-cluster features consistently achieve better performance than intra-cluster features in AD staging.
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
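The iterative overall-word ranking can be illustrated with a power iteration on the topic-word significance matrix, in which words reinforce the topics they dominate and vice versa (this is a generic sketch of the idea, not the paper's exact update rules):

```python
import numpy as np

def overall_word_significance(topic_word, n_iter=100):
    """Illustrative iterative ranking on the bipartite topic-word graph.
    topic_word: (n_topics, n_words) nonnegative significance matrix."""
    w = np.ones(topic_word.shape[1])
    for _ in range(n_iter):
        t = topic_word @ w        # topic scores aggregated from word scores
        w = topic_word.T @ t      # word scores aggregated back from topics
        w /= np.linalg.norm(w)    # keep the iterate bounded
    return w

# word 0 appears in both topics, word 1 in one topic, word 2 in none
tw = np.array([[1.0, 0.0, 0.0],
               [1.0, 1.0, 0.0]])
w = overall_word_significance(tw)
```

The fixed point is the dominant eigenvector of the word-word co-significance matrix, so words tied to many strong topics rank highest and can be kept in the pruned dictionary.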
This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm's robustness to noise and missing information.
BACKGROUND: The nigrosome-1 region of the substantia nigra (SN) undergoes the greatest and earliest dopaminergic neuron loss in Parkinson's disease (PD). As T2-weighted magnetic resonance imaging (MRI) scans are often collected with routine clinical MRI protocols, this investigation aims to determine whether T2-imaging changes in the nigrosome-1 are related to clinical measures of PD and to assess their potential as a more clinically accessible biomarker for PD.
METHODS: Voxel intensity ratios were calculated for T2-weighted MRI scans from 47 subjects from the Parkinson's Progression Markers Initiative database. Three approaches were used to delineate the SN and nigrosome-1: (1) manual segmentation, (2) automated segmentation, and (3) area voxel-based morphometry. Voxel intensity ratios were calculated from voxel intensity values taken from the nigrosome-1 and two areas of the remaining SN. Linear regression analyses were conducted relating voxel intensity ratios with the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) sub-scores for each subject.
RESULTS: For manual segmentation, linear regression tests consistently identified the voxel intensity ratio derived from the dorsolateral SN and nigrosome-1 (IR2) as predictive of nBehav (p = 0.0377) and nExp (p = 0.03856). For automated segmentation, linear regression tests identified IR2 as predictive of Subscore IA (nBehav) (p = 0.01134), Subscore IB (nExp) (p = 0.00336), Score II (mExp) (p = 0.02125), and Score III (mSign) (p = 0.008139). For the voxel-based morphometric approach, univariate simple linear regression analysis identified IR2 as yielding significant results for nBehav (p = 0.003102), mExp (p = 0.0172), and mSign (p = 0.00393).
CONCLUSION: Neuroimaging biomarkers may be used as a proxy of changes in the nigrosome-1, measured by MDS-UPDRS scores as an indicator of the severity of PD. The voxel intensity ratio derived from the dorsolateral SN and nigrosome-1 was consistently predictive of non-motor complex behaviors in all three analyses and predictive of non-motor experiences of daily living, motor experiences of daily living, and motor signs of PD in two of the three analyses. These results suggest that T2 changes in the nigrosome-1 may relate to certain clinical measures of PD. T2 changes in the nigrosome-1 may be considered when developing a more accessible clinical diagnostic tool for patients with suspected PD.
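The voxel intensity ratios used throughout are ratios of mean T2 intensity between two regions of interest; a minimal sketch under the assumption that each ROI is given as a boolean mask over the image (names are illustrative):

```python
import numpy as np

def intensity_ratio(image, roi_a_mask, roi_b_mask):
    """Ratio of mean voxel intensity between two ROIs (e.g. dorsolateral
    SN vs. nigrosome-1); masks are boolean arrays of the image's shape."""
    return image[roi_a_mask].mean() / image[roi_b_mask].mean()

# toy 2x2 image: top row is ROI A (mean 10), bottom row is ROI B (mean 20)
image = np.array([[10.0, 10.0],
                  [20.0, 20.0]])
roi_a = np.array([[True, True], [False, False]])
ratio = intensity_ratio(image, roi_a, ~roi_a)
```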
PURPOSE: Diffusion encoding with asymmetric gradient waveforms is appealing because the asymmetry provides superior efficiency. However, concomitant gradients may cause a residual gradient moment at the end of the waveform, which can cause significant signal error and image artifacts. The purpose of this study was to develop asymmetric waveform designs for tensor-valued diffusion encoding that are not sensitive to concomitant gradients. METHODS: The "Maxwell index" was proposed as a scalar invariant to capture the effect of concomitant gradients. Optimization of "Maxwell-compensated" waveforms was performed in which this index was constrained. The resulting waveforms were compared to waveforms from the literature in terms of the measured and predicted impact of concomitant gradients, by numerical analysis as well as by experiments in a phantom and in a healthy human brain. RESULTS: Maxwell-compensated waveforms with Maxwell indices below 100 (mT/m)² ms showed negligible signal bias in both numerical analysis and experiments. By contrast, several waveforms from the literature showed gross signal bias under the same conditions, large enough to markedly affect parameter maps. Experimental results were accurately predicted by theory. CONCLUSION: Constraining the Maxwell index in the optimization of asymmetric gradient waveforms yields efficient diffusion encoding that negates the effects of concomitant fields while enabling arbitrary shapes of the b-tensor. This waveform design is especially useful in combination with strong gradients, long encoding times, thick slices, simultaneous multi-slice acquisition, and large FOVs.
Computational biomechanics of the brain for neurosurgery is an emerging area of research recently gaining in importance and practical applications. This review paper presents the contributions of the Intelligent Systems for Medicine Laboratory and its collaborators to this field, discussing the modeling approaches adopted and the methods developed for obtaining the numerical solutions. We adopt a physics-based modeling approach and describe the brain deformation in mechanical terms (such as displacements, strains, and stresses), which can be computed using a biomechanical model, by solving a continuum mechanics problem. We present our modeling approaches related to geometry creation, boundary conditions, loading, and material properties. From the point of view of solution methods, we advocate the use of fully nonlinear modeling approaches, capable of capturing very large deformations and nonlinear material behavior. We discuss finite element and meshless domain discretization, the use of the total Lagrangian formulation of continuum mechanics, and explicit time integration for solving both time-accurate and steady-state problems. We present the methods developed for handling contacts and for warping 3D medical images using the results of our simulations. We present two examples to showcase these methods: brain shift estimation for image registration and brain deformation computation for neuronavigation in epilepsy treatment.
Jean-Jacques Lemaire, Antonio De Salles, Guillaume Coll, Youssef El Ouadih, Rémi Chaix, Jérôme Coste, Franck Durif, Nikos Makris, and Ron Kikinis. 8/2019. “MRI Atlas of the Human Deep Brain.” Front Neurol, 10, pp. 851.
Mastering the detailed anatomy of the human deep brain in the clinical neurosciences is challenging. Although numerous pioneering works have gathered a large dataset of structural and topographic information, it is still difficult to transfer this knowledge into practice, even with advanced magnetic resonance imaging techniques. Thus, classical histological atlases continue to be used to identify structures for stereotactic targeting in functional neurosurgery. Physicians mainly use these atlases as a template co-registered with the patient's brain. However, it is possible to directly identify stereotactic targets on MRI scans, enabling personalized targeting. In order to help clinicians directly identify deep brain structures relevant to present and future medical applications, we built a volumetric MRI atlas of the deep brain (MDBA) at inframillimetric scale. Twelve hypothalamic, 39 subthalamic, 36 telencephalic, and 32 thalamic structures were identified, contoured, and labeled. Nineteen coronal, 18 axial, and 15 sagittal MRI plates were created. Although primarily designed for direct labeling, the anatomic space was also subdivided into twelfths of the AC-PC distance, leading to proportional scaling in the coronal, axial, and sagittal planes. This extensive work is now available to clinicians and neuroscientists, offering another representation of the human deep brain ([https://hal.archives-ouvertes.fr/] [hal-02116633]). The atlas may also be used by computer scientists interested in deciphering the topography of this complex region.
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (US) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy US. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. To improve the accuracy of registration, we use high-dimensional texture attributes instead of image intensities and propose to replace the standard difference-based attribute matching with correlation-based attribute matching. We also present a strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images. We optimize key parameters across independent MR-iUS brain tumor datasets acquired at three different institutions, with a total of 43 tumor patients and 758 corresponding landmarks to validate the registration algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, our algorithm was able to reduce landmark errors prior to registration in three datasets (5.37 ± 4.27, 4.18 ± 1.97 and 6.18 ± 3.38 mm, respectively) to a consistently low level (2.28 ± 0.71, 2.08 ± 0.37 and 2.24 ± 0.78 mm, respectively). Our algorithm is compared to 15 other algorithms that have been previously tested on MR-iUS registration and it is competitive with the state-of-the-art on multiple datasets. We show that our algorithm has one of the lowest errors in all datasets (accuracy), and this is achieved with a fixed set of parameters for multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that keep fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). We further characterized landmark errors according to brain regions and tumor types, a topic so far missing in the literature. We found that landmark errors were higher in high-grade than low-grade glioma patients, and higher in tumor regions than in other brain regions.
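The move from difference-based to correlation-based attribute matching can be sketched as follows. These are illustrative functions comparing two voxelwise attribute vectors, not the paper's implementation; the key property shown is that correlation, unlike the sum of squared differences, is invariant to a linear rescaling of one modality's attributes:

```python
import numpy as np

def attribute_similarity_ssd(a, b):
    """Standard difference-based matching: sum of squared differences
    between attribute vectors (lower = better match)."""
    return float(np.sum((a - b) ** 2))

def attribute_similarity_corr(a, b):
    """Correlation-based matching: Pearson correlation of the attribute
    vectors (higher = better match)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# A linear intensity rescaling between modalities leaves the correlation
# at 1.0, while the SSD becomes large and misleading.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = 2.5 * a + 7.0  # the same pattern under a different intensity mapping
```

This invariance is one reason a correlation-style criterion is attractive when matching attributes across modalities as different as MR and intraoperative ultrasound.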
PURPOSE: In image-guided surgery for glioma removal, neurosurgeons usually plan the resection on images acquired before surgery and use them for guidance during the subsequent intervention. However, after the surgical procedure has begun, the preplanning images become unreliable due to the brain shift phenomenon, caused by modifications of anatomical structures and imprecisions in the neuronavigation system. To obtain an updated view of the resection cavity, a solution is to collect intraoperative data, which can be acquired at different stages of the procedure to provide a better understanding of the resection. A spatial mapping between structures identified in subsequent acquisitions would be beneficial. We propose here a fully automated segmentation-based registration method to register ultrasound (US) volumes acquired at multiple stages of neurosurgery. METHODS: We chose to segment sulci and the falx cerebri in US volumes, which remain visible during resection. To automatically segment these elements, we first trained a convolutional neural network on manually annotated structures in volumes acquired before the opening of the dura mater and then applied it to segment corresponding structures in different surgical phases. Finally, the obtained masks are used to register US volumes acquired at multiple resection stages. RESULTS: Our method reduces the mean target registration error (mTRE) between volumes acquired before the opening of the dura mater and during resection from 3.49 mm (± 1.55 mm) to 1.36 mm (± 0.61 mm). Moreover, the mTRE between volumes acquired before opening the dura mater and at the end of the resection is reduced from 3.54 mm (± 1.75 mm) to 2.05 mm (± 1.12 mm). CONCLUSION: The segmented structures proved to be good candidates for registering US volumes acquired at different neurosurgical phases. Therefore, our solution can compensate for brain shift in neurosurgical procedures involving intraoperative US data.
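The mTRE figures quoted in these abstracts are straightforward to compute once corresponding landmarks are available. A minimal sketch (hypothetical function name, toy landmark coordinates) might look like:

```python
import numpy as np

def mean_tre(fixed, moving, transform):
    """Mean target registration error: the mean Euclidean distance between
    fixed landmarks and their moving counterparts mapped through `transform`."""
    moved = np.array([transform(p) for p in np.asarray(moving, float)])
    return float(np.mean(np.linalg.norm(np.asarray(fixed, float) - moved, axis=1)))

# Toy example: moving landmarks offset by 2 mm along x.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
moving = fixed + np.array([2.0, 0.0, 0.0])

before = mean_tre(fixed, moving, lambda p: p)                          # identity: error remains
after = mean_tre(fixed, moving, lambda p: p - np.array([2.0, 0.0, 0.0]))  # corrective shift
```

Reporting mTRE before and after registration, as done above, separates the initial misalignment from the residual error attributable to the algorithm.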