We have investigated the adsorption of asymmetric poly(styrene-b-methyl methacrylate) block copolymers (PS-PMMA) from a selective solvent onto alumina (Al2O3) particles of variable and controllable radii. The solvent was a poor solvent for the PS block (block A) and a good solvent for the PMMA block (block B), which has the higher affinity for the surface. This case represents a new class of adsorption in which both blocks compete for adsorption sites on the metallic surface. Two theoretical models, the modified drops model and the perforated film model, were evaluated as representations of this adsorption scenario. The experimental results indicated that adsorption of the PS-PMMA block copolymer generated a patterned surface composed of a homogeneous melt layer of the PS block perforated with holes whose PMMA structure varies with the distance from the bottom of the hole (the alumina surface) and the distance from the hole walls. The density gradient of the PMMA moiety in the hole reverted to the classical brush morphology at a critical distance from the surface of the hole.
OBJECTIVE: Impairment of white matter connecting frontal and temporal cortices has been reported in schizophrenia. Yet, not much is known about the effects of age on fibers connecting these brain regions. Using diffusion tensor imaging tractography, we investigated the relationship between age and fiber integrity in patients with schizophrenia vs. healthy adults.
METHODS: DTI tractography was used to create 3D reconstructions of the cingulum, uncinate and inferior occipito-frontal fasciculi in 27 patients with schizophrenia and 34 healthy volunteers (23-56 years of age, group-matched on age). Fractional anisotropy (FA), describing fiber integrity, was then calculated along the entire length of these tracts, and correlated with subjects' age.
RESULTS: Patients revealed a significant decline in FA with age in both the cingulum and uncinate, but not in the inferior occipito-frontal fasciculi. No statistically significant correlations were found in these fiber bundles in controls.
CONCLUSIONS: These results suggest an age-associated reduction of frontal-temporal connectivity in schizophrenia, but not in healthy controls.
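The fractional anisotropy used in the study above is a closed-form function of the three eigenvalues of the diffusion tensor. A minimal sketch in Python, assuming the eigenvalues have already been extracted at each point along the tract (this is standard DTI arithmetic, not the authors' code):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy (FA) from the three diffusion-tensor eigenvalues.

    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||; FA is 0 for isotropic
    diffusion and approaches 1 for diffusion along a single direction.
    """
    m = (l1 + l2 + l3) / 3.0
    num = (l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2
    den = l1 * l1 + l2 * l2 + l3 * l3
    if den == 0.0:
        return 0.0  # degenerate tensor: no diffusion signal
    return math.sqrt(1.5 * num / den)
```

Evaluating this at each sample point along a reconstructed tract yields the FA profile that is then correlated with age.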
In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest that they depict. The enhancement part of the algorithm is based on Bayesian wavelet denoising, chosen for its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from reasonable assumptions on the properties of the image to be enhanced and from the images observed before the current scene. Using such priors underlies the main contribution of the present paper: the proposal of dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of information within successive image frames is Bayesian estimation, while the transfer of useful information between images is governed by a Kalman filter used for both prediction and estimation of the dynamics of the tracked objects. In this methodology, therefore, the processes of target tracking and image enhancement "collaborate" in an interlacing manner, rather than being applied separately. Dynamic denoising is demonstrated on several examples of SAR imagery. The results indicate a number of advantages of the proposed dynamic denoising over "static" approaches, in which the tracked images are enhanced independently of each other.
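The Kalman predict/update cycle that transfers information between frames can be illustrated with a deliberately minimal example. This is a generic 1D constant-velocity filter with illustrative noise parameters, a sketch of the mechanism rather than the paper's implementation (which operates on curve/object dynamics):

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Filter noisy per-frame position measurements.

    State: [position, velocity]; constant-velocity motion model.
    q, r are illustrative process/measurement noise variances.
    Returns the filtered position estimate for each frame.
    """
    x = [measurements[0], 0.0]           # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # initial state covariance
    out = []
    for z in measurements:
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = P[0][0] + r                   # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

In the paper's setting, the same predict/update structure carries information from previously enhanced frames into the denoising of the current one.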
Segmentation involves separating an object from the background in a given image. The use of image information alone often leads to poor segmentation results due to the presence of noise, clutter or occlusion. The introduction of shape priors in the geometric active contour (GAC) framework has proved to be an effective way to ameliorate some of these problems. In this work, we propose a novel segmentation method combining image information with prior shape knowledge, using level-sets. Following the work of Leventon et al., we revisit the use of PCA to introduce prior knowledge about shapes in a more robust manner. We utilize kernel PCA (KPCA) and show that this method outperforms linear PCA by admitting only those shapes that are close enough to the training data. In our segmentation framework, shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description allows us to take full advantage of the kernel PCA methodology and leads to promising segmentation results. In particular, our shape-driven segmentation technique allows for the simultaneous encoding of multiple types of shapes, and offers a convincing level of robustness with respect to noise, occlusion, or smearing.
This paper proposes a deterministic observer framework for visual tracking based on non-parametric implicit (level-set) curve descriptions. The observer is continuous-discrete, with continuous-time system dynamics and discrete-time measurements. Its state-space consists of an estimated curve position augmented by additional states (e.g., velocities) associated with every point on the estimated curve. Multiple simulation models are proposed for state prediction. Measurements are performed through standard static segmentation algorithms and optical-flow computations. Special emphasis is given to the geometric formulation of the overall dynamical system. The discrete-time measurements lead to the problem of geometric curve interpolation and the discrete-time filtering of quantities propagated along with the estimated curve. Interpolation and filtering are intimately linked to the correspondence problem between curves. Correspondences are established by a Laplace-equation approach. The proposed scheme is implemented completely implicitly (by Eulerian numerical solutions of transport equations) and thus naturally allows for topological changes and subpixel accuracy on the computational grid.
We have previously developed a fast Monte Carlo (MC)-based joint ordered-subset expectation maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. A phantom study was performed to compare the quantitative imaging performance of MC-JOSEM with that of a triple-energy-window (TEW) approach in which the estimated scatter is likewise included additively within JOSEM (TEW-JOSEM). We acquired high-count projections of a 5.5 cm3 sphere of 111In at different locations in the water-filled torso phantom; high-count projections were then obtained with 111In only in the liver or only in the soft-tissue background compartment, so that we could generate synthetic projections for spheres surrounded by various activity distributions. MC scatter estimates used by MC-JOSEM were computed once, after five iterations of TEW-JOSEM. Images of different combinations of liver/background and sphere/background activity concentration ratios were reconstructed by both TEW-JOSEM and MC-JOSEM for 40 iterations. For activity estimation in the sphere, MC-JOSEM always produced better relative bias and relative standard deviation than TEW-JOSEM for each sphere location, iteration number, and activity combination. The average relative bias of activity estimates in the sphere for MC-JOSEM after 40 iterations was -6.9%, versus -15.8% for TEW-JOSEM, while the average relative standard deviation of the sphere activity estimates was 16.1% for MC-JOSEM, versus 27.4% for TEW-JOSEM. Additionally, the average relative bias of activity concentration estimates in the liver and the background for MC-JOSEM after 40 iterations was -3.9%, versus -12.2% for TEW-JOSEM, while the average relative standard deviation of these estimates was 2.5% for MC-JOSEM, versus 3.4% for TEW-JOSEM. MC-JOSEM is a promising approach for quantitative activity estimation in 111In SPECT.
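The triple-energy-window estimate referenced above approximates the scatter under the photopeak by trapezoidal interpolation between two narrow flanking energy windows. A sketch of the standard TEW arithmetic (window widths and counts are hypothetical inputs; the function names are ours):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) scatter estimate for a photopeak window.

    Scatter counts are approximated as the trapezoid spanned by the
    count densities of the two flanking windows (counts c_lower, c_upper,
    widths w_lower, w_upper in keV) across the photopeak width w_peak.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def primary_counts(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    """Scatter-corrected primary counts, clamped at zero."""
    scatter = tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak)
    return max(0.0, c_peak - scatter)
```

In TEW-JOSEM, an estimate of this kind is added to the forward projection inside the iterative reconstruction rather than subtracted from the data.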
The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project, both in terms of its source code development and its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management.
We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.
Many promising MRI approaches for research or clinical management of multiple sclerosis (MS) have recently emerged, or are under development or refinement. Advanced MRI methods need to be assessed to determine whether they allow earlier diagnosis or better identification of phenotypes. Improved post-processing should allow more efficient and complete extraction of information from images. Magnetic resonance spectroscopy should improve in sensitivity and specificity with higher field strengths and should enable the detection of a wider array of metabolites. Diffusion imaging is moving closer to the goal of defining structural connectivity and, thereby, determining the functional significance of lesions at specific locations. Cell-specific imaging now seems feasible with new magnetic resonance contrast agents. The imaging of myelin water fraction brings the hope of providing a specific measure of myelin content. Ultra-high-field MRI increases sensitivity, but also presents new technical challenges. Here, we review these recent developments in MRI for MS, and also look forward to refinements in spinal-cord imaging, optic-nerve imaging, perfusion MRI, and functional MRI. Advances in MRI should improve our ability to diagnose, monitor, and understand the pathophysiology of MS.
BACKGROUND: We have acquired dual-echo spin-echo (DE SE) MRI data of the rhesus monkey brain since 1994 as part of an ongoing study of normal aging. To analyze these legacy data for regional volume changes, we have created a reference label atlas for the Template Driven Segmentation (TDS) algorithm.
METHODS: The atlas was manually created from DE SE legacy MRI data of one behaviorally normal, young, male rhesus monkey and consisted of 14 regions of interest (ROIs). We analyzed the reproducibility and validity of the TDS algorithm using the atlas relative to manual segmentation.
RESULTS: ROI volumes were comparable between the two segmentation methodologies, except TDS overestimated the volume of basal ganglia regions. Both methodologies were highly reproducible, but TDS had lower sensitivity and comparable specificity.
CONCLUSIONS: TDS segmentation calculates accurate volumes for most ROIs. Sensitivity will be improved in future studies through the acquisition of higher quality data.
In non-rigid registration, the tradeoff between warp regularization and image fidelity is typically determined empirically. In atlas-based segmentation, this leads to a probabilistic atlas of arbitrary sharpness: weak regularization results in well-aligned training images and a sharp atlas; strong regularization yields a "blurry" atlas. In this paper, we employ a generative model for the joint registration and segmentation of images. The atlas construction process arises naturally as estimation of the model parameters. This framework allows the computation of unbiased atlases from manually labeled data at various degrees of "sharpness", as well as the joint registration and segmentation of a novel brain in a consistent manner. We study the effects of the tradeoff between atlas sharpness and warp smoothness in the context of cortical surface parcellation. This is an important question because of the increasing availability of atlases in public databases and the development of registration algorithms separate from the atlas construction process. We find that the optimal segmentation (parcellation) corresponds to a unique balance of atlas sharpness and warp regularization, yielding statistically significant improvements over the FreeSurfer parcellation algorithm. Furthermore, we conclude that one can simply use a single atlas computed at an optimal sharpness for the registration-segmentation of a new subject with a pre-determined, fixed, optimal warp constraint. The optimal atlas sharpness and warp smoothness can be determined by probing the segmentation performance on available training data. Our experiments also suggest that segmentation accuracy is tolerant of a small mismatch between atlas sharpness and warp smoothness.
BACKGROUND: We developed an image-guided robot system to provide mechanical assistance for skull base drilling, which is performed to gain access for some neurosurgical interventions, such as tumour resection. The motivation for introducing this robot was to improve safety by preventing the surgeon from accidentally damaging critical neurovascular structures during the drilling procedure.
METHODS: We integrated a StealthStation navigation system, a NeuroMate robotic arm with a six-degree-of-freedom force sensor, and the 3D Slicer visualization software to allow the robotic arm to be used in a navigated, cooperatively-controlled fashion by the surgeon. We employed virtual fixtures to constrain the motion of the robot-held cutting tool, so that it remained in a safe zone defined on a preoperative CT scan.
RESULTS: We performed experiments on both foam skull and cadaver heads. The results for foam blocks cut using different registrations yielded an average placement error of 0.6 mm and an average dimensional error of 0.6 mm. We drilled the posterior porus acusticus in three cadaver heads and concluded that the robot-assisted procedure is clinically feasible and provides some ergonomic benefits, such as stabilizing the drill. We obtained postoperative CT scans of the cadaver heads to assess the accuracy and found that some bone outside the virtual fixture boundary was cut. The typical overcut was 1-2 mm, with a maximum overcut of about 3 mm.
CONCLUSIONS: The image-guided cooperatively-controlled robot system can improve the safety and ergonomics of skull base drilling by stabilizing the drill and enforcing virtual fixtures to protect critical neurovascular structures. The next step is to improve the accuracy so that the overcut can be reduced to a more clinically acceptable value of about 1 mm.
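In spirit, a virtual fixture clamps the commanded tool motion so that the tip cannot leave the preoperatively defined safe zone. A deliberately simplified sketch using an axis-aligned box as the safe zone (the actual system uses anatomy-derived boundaries segmented from CT, and the function and parameter names here are ours):

```python
def clamp_to_safe_zone(tip, delta, lo, hi):
    """Apply a commanded displacement `delta` to tool tip `tip`,
    clamping the result per-axis to the safe box [lo, hi].

    tip, delta, lo, hi: 3-element sequences (e.g., millimetres).
    Returns the constrained new tip position.
    """
    new_tip = []
    for p, d, l, h in zip(tip, delta, lo, hi):
        new_tip.append(min(max(p + d, l), h))
    return new_tip
```

A real forbidden-region fixture would additionally project the surgeon's applied force onto the boundary's tangent plane, but the clamping above captures the core safety idea: motion components that would exit the safe zone are removed.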
In this work, we describe a white matter trajectory clustering algorithm that allows for incorporating and appropriately weighting anatomical information. The influence of the anatomical prior reflects confidence in its accuracy and relevance. It can either be defined by the user or it can be inferred automatically. After a detailed description of our novel clustering framework, we demonstrate its properties through a set of preliminary experiments.
In this work, we explore the use of classification algorithms in predicting mental states from functional neuroimaging data. We train a linear support vector machine classifier to characterize spatial fMRI activation patterns. We employ a general linear model based feature extraction method and use the t-test for feature selection. We evaluate our method on a memory encoding task, using participants' subjective predictions about learning as a benchmark for our classifier. We show that the classifier achieves better than random predictions, with average accuracy close to the subjects' own prediction performance. In addition, we validate our tool on a simple motor task, where we demonstrate an average prediction accuracy of over 90%. Our experiments demonstrate that classifier performance depends significantly on the complexity of the experimental design and the mental process of interest.
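The t-test feature selection step can be sketched as ranking voxel features by the two-sample t statistic between the two conditions and keeping the top k. A pure-Python illustration (the pooled-variance form and the function names are our assumptions, not the authors' code):

```python
import math

def tstat(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1.0 / na + 1.0 / nb))

def select_features(X_a, X_b, k):
    """Rank features by |t| between condition samples X_a and X_b
    (lists of feature vectors) and return the indices of the top k."""
    n_feat = len(X_a[0])
    scores = []
    for j in range(n_feat):
        a = [row[j] for row in X_a]
        b = [row[j] for row in X_b]
        scores.append((abs(tstat(a, b)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

The retained features (in the paper, GLM-derived voxel statistics) are then passed to the linear SVM for training.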
We explore unsupervised, hypothesis-free methods for fMRI analysis in two different types of experiments. First, we employ clustering to identify large-scale functionally homogeneous systems. We formulate a generative mixture model, derive the EM algorithm and apply it to delineate functional systems. We also investigate spectral clustering in application to this problem and demonstrate that both methods give rise to similar partitions of the brain based on resting state fMRI data. Second, we demonstrate how to extend this approach to include information about the experimental protocol. Specifically, we formulate a mixture model in the space of possible profiles of brain response to stimuli. In both applications, our methods confirm previously known results in brain mapping and point to new research directions for exploratory analysis of fMRI data.
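The EM algorithm for a generative mixture model, as used above, alternates between computing responsibilities (E-step) and updating the component parameters (M-step). A minimal 1D two-component Gaussian-mixture sketch (the initialization, component count, and scalar data are illustrative; the paper's model operates on fMRI signals, not scalars):

```python
import math

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1D Gaussian mixture.

    Returns (means, variances, mixing weights). Initialization from the
    data range is a crude but serviceable choice for this sketch.
    """
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: weighted updates of mixing weights, means, variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6,
                         sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk)
    return mu, var, pi
```

In the paper, the same E/M alternation is derived for the specific generative model of voxel time courses, with components corresponding to functional systems.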
We present iCluster, a fast and efficient algorithm that clusters a set of images while co-registering them using a parameterized, nonlinear transformation model. The output is a small number of template images that represent different modes in a population. This is in contrast with traditional approaches that assume a single template to construct atlases. We validate and explore the algorithm in two experiments. First, we employ iCluster to partition a data set of 416 whole brain MR volumes of subjects aged 18-96 years into three sub-groups, which mainly correspond to age groups. The templates reveal significant structural differences across these age groups that confirm previous findings in aging research. In the second experiment, we run iCluster on a group of 30 patients with dementia and 30 age-matched healthy controls. The algorithm produced three modes that mainly corresponded to a sub-population of healthy controls, a sub-population of patients with dementia and a mixture group that contained both types. These results suggest that the algorithm can be used to discover sub-populations that correspond to interesting structural or functional "modes".
We present a method for discovering patterns of activation observed through fMRI in experiments with multiple stimuli/tasks. We introduce an explicit parameterization for the profiles of activation and represent fMRI time courses as such profiles using linear regression estimates. Working in the space of activation profiles, we design a mixture model that finds the major activation patterns along with their localization maps and derive an algorithm for fitting the model to the fMRI data. The method enables functional group analysis independent of spatial correspondence among subjects. We validate this model in the context of category selectivity in the visual cortex, demonstrating good agreement with prior findings based on hypothesis-driven methods.
We propose a novel l1l2-norm inverse solver for estimating the sources of EEG/MEG signals. Based on the standard l1-norm inverse solver, the proposed sparse distributed inverse solver integrates the l1-norm spatial model with a temporal model of the source signals in order to avoid unstable activation patterns and "spiky" reconstructed signals often produced by the original solvers. The joint spatio-temporal model leads to a cost function with an l1l2-norm regularizer whose minimization can be reduced to a convex second-order cone programming problem and efficiently solved using the interior-point method. Validation with simulated and real MEG data shows that the proposed solver yields source time course estimates qualitatively similar to those obtained through dipole fitting, but without the need to specify the number of dipole sources in advance. Furthermore, the l1l2-norm solver achieves fewer false positives and a better representation of the source locations than the conventional l2 minimum-norm estimates.
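The l1l2-norm regularizer described above is commonly written as an l1 norm, taken over sources, of the l2 norms taken over time: a group-sparsity penalty. In our notation (not necessarily the paper's), with measurements $Y$, lead-field matrix $G$, and source time-course matrix $S$:

```latex
\hat{S} \;=\; \arg\min_{S} \;\; \|Y - G S\|_F^2 \;+\; \lambda \sum_{i} \|S_{i\cdot}\|_2
```

where $S_{i\cdot}$ is the time course of source $i$ and $\lambda > 0$ controls the spatial sparsity. Because each source contributes through the l2 norm of its whole time course, the penalty switches entire sources on or off (l1 behavior across space) without forcing individual time samples to zero, which is what suppresses the "spiky" reconstructions produced by a plain l1 penalty. This composite objective is a second-order cone program, as the abstract notes.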
We describe a method for correcting the distortions present in echo planar images (EPI) and registering the EPI to structural MRI. A fieldmap is predicted from an air/tissue segmentation of the MRI using a perturbation method and subsequently used to unwarp the EPI data. Shim and other missing parameters are estimated by registration. We obtain results similar to those obtained using measured fieldmaps; however, neither fieldmaps nor knowledge of the shim coefficients is required.
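The distortion being corrected is a per-voxel displacement along the phase-encode direction proportional to the local off-resonance field. A sketch of the standard relationship (variable names are ours; the paper additionally estimates shim and other missing parameters by registration):

```python
def epi_voxel_shift(fieldmap_hz, echo_spacing_s, n_phase_encode):
    """Displacement (in voxels) along the EPI phase-encode axis caused by
    an off-resonance field of `fieldmap_hz` Hz, for an acquisition with
    the given echo spacing (s) and phase-encode matrix size.

    shift = delta_f * effective readout time = delta_f * esp * n_pe.
    """
    return fieldmap_hz * echo_spacing_s * n_phase_encode
```

Unwarping then amounts to resampling each EPI column by the negated shift map; predicting the fieldmap from the air/tissue segmentation supplies `fieldmap_hz` without a separate fieldmap acquisition.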
Several recent studies have explored the use of unsupervised segmentation methods for segmenting thalamic nuclei from diffusion tensor images. These methods provide a plausible segmentation on individual subjects; however, they do not address the problem of consistently identifying the same functional areas in a population. The lack of correspondence between the segmented nuclei makes it more difficult to use the results from unsupervised segmentation tools for morphometry. In this paper, we present a novel segmentation algorithm that automatically segments the gray matter nuclei while ensuring consistency between subjects in a population. This new algorithm, referred to as Consistency Clustering, finds correspondence between the nuclei because the segmentation is achieved through a single model for the whole population, similar to the brain atlases experts use to identify thalamic nuclei.