Publications

2006

Wu Y, Warfield SK, Tan L, Wells WM, Meier DS, van Schijndel RA, Barkhof F, Guttmann CRG. Automated segmentation of multiple sclerosis lesion subtypes with multichannel MRI. Neuroimage. 2006;32(3):1205–15.
PURPOSE: To automatically segment multiple sclerosis (MS) lesions into three subtypes (i.e., enhancing lesions, T1 "black holes", T2 hyperintense lesions). MATERIALS AND METHODS: Proton density-, T2- and contrast-enhanced T1-weighted brain images of 12 MR scans were pre-processed through intracranial cavity (IC) extraction, inhomogeneity correction and intensity normalization. Intensity-based statistical k-nearest neighbor (k-NN) classification was combined with template-driven segmentation and partial volume artifact correction (TDS+) for segmentation of MS lesion subtypes and brain tissue compartments. Operator-supervised tissue sampling and parameter calibration were performed on 2 randomly selected scans and were applied automatically to the remaining 10 scans. Results from this three-channel TDS+ (3ch-TDS+) were compared to those from a previously validated two-channel TDS+ (2ch-TDS+) method. The results of both the 3ch-TDS+ and 2ch-TDS+ were also compared to manual segmentation performed by experts.
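As an illustration of the intensity-based k-NN step described above, here is a minimal NumPy sketch (Euclidean distance, majority vote). The function name `knn_classify` and its parameters are hypothetical, and this omits the template-driven segmentation and partial-volume correction that distinguish the actual TDS+ pipeline:

```python
import numpy as np

def knn_classify(train_feats, train_labels, voxels, k=3):
    """Classify each voxel's multichannel intensity vector by majority
    vote among its k nearest training samples (Euclidean distance)."""
    voxels = np.atleast_2d(voxels)
    labels = np.empty(len(voxels), dtype=train_labels.dtype)
    for i, v in enumerate(voxels):
        d = np.linalg.norm(train_feats - v, axis=1)     # distance to each sample
        nearest = train_labels[np.argsort(d)[:k]]       # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        labels[i] = vals[np.argmax(counts)]             # majority vote
    return labels
```

Here `train_feats` would hold operator-sampled (PD, T2, T1) intensity vectors per tissue class, and `voxels` the intensity vectors to be labeled.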
Dimaio SP, Kacher DF, Ellis RE, Fichtinger G, Hata N, Zientara GP, Panych LP, Kikinis R, Jolesz FA. Needle Artifact Localization in 3T MR Images. Stud Health Technol Inform. 2006;119:120–5.
This work explores an image-based approach for localizing needles during MRI-guided interventions, for the purpose of tracking and navigation. Susceptibility artifacts for several needles of varying thickness were imaged, in phantoms, using a 3 tesla MRI system, under a variety of conditions. The relationship between the true needle positions and the locations of artifacts within the images, determined by both manual and automatic segmentation methods, has been quantified and is presented here.
Dambreville S, Rathi Y, Tannenbaum A. Shape-Based Approach to Robust Image Segmentation using Kernel PCA. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2006:977–84.
Segmentation involves separating an object from the background. In this work, we propose a novel segmentation method combining image information with prior shape knowledge, within the level-set framework. Following the work of Leventon et al., we revisit the use of principal component analysis (PCA) to introduce prior knowledge about shapes in a more robust manner. To this end, we utilize Kernel PCA and show that this method of learning shapes outperforms linear PCA, by allowing only shapes that are close enough to the training data. In the proposed segmentation algorithm, shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description allows us to take full advantage of the Kernel PCA methodology and leads to promising segmentation results. In particular, our shape-driven segmentation technique allows for the simultaneous encoding of multiple types of shapes, and offers a convincing level of robustness with respect to noise, clutter, partial occlusions, or smearing.
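A shape-prior energy of this kind can be sketched as the feature-space distance between a new shape and its projection onto the subspace learned by Kernel PCA. The following is a generic KPCA reconstruction-error computation with a Gaussian kernel, not the authors' exact formulation; all names are hypothetical:

```python
import numpy as np

def kpca_fit(X, gamma, n_comp):
    """Fit Gaussian-kernel PCA on training shape vectors X (rows)."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = H @ K @ H                           # kernel matrix centered in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_comp]       # keep top components
    A = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return X, K, A, gamma

def kpca_energy(model, z):
    """Squared feature-space distance of shape z to its KPCA projection;
    small values mean z is 'close to' the training shapes."""
    X, K, A, gamma = model
    kz = np.exp(-gamma * np.sum((X - z)**2, axis=1))
    kzc = kz - K.mean(axis=1) - kz.mean() + K.mean()   # centered test kernel
    beta = A.T @ kzc                                   # projection coefficients
    kzz = 1.0 - 2 * kz.mean() + K.mean()               # centered ||phi(z)||^2
    return kzz - beta @ beta
```

Used as a segmentation prior, this energy penalizes evolving contours whose shape drifts away from the training set.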
Angenent S, Pichon E, Tannenbaum A. Mathematical Methods in Medical Image Processing. Bull New Ser Am Math Soc. 2006;43:365–96.
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation.
Magnotta VA, Friedman L. Measurement of Signal-to-Noise and Contrast-to-Noise in the fBIRN Multicenter Imaging Study. J Digit Imaging. 2006;19(2):140–7.
The ability to analyze and merge data across sites, vendors, and field strengths depends on one’s ability to acquire images with the same image quality including image smoothness, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). SNR can be used to compare different magnetic resonance scanners as a measure of comparability between the systems. This study looks at the SNR and CNR in structural fast spin-echo T2-weighted scans acquired in five individuals across ten sites that are part of the Functional Imaging Research of Schizophrenia Testbed Biomedical Informatics Research Network (fBIRN). Different manufacturers, field strengths, gradient coils, and RF coils were used at these sites. The SNR of gray matter was fairly uniform (41.3-43.3) across scanners at 1.5 T. The higher field scanners produced images with significantly higher SNR values (44.5-108.7 at 3 T and 50.8 at 4 T). Similar results were obtained for CNR measurements between gray/white matter at 1.5 T (9.5-10.2), again increasing at higher fields (10.1-28.9 at 3 T and 10.9 at 4 T).
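One common single-image way to compute these quantities is from a tissue region of interest and a background (noise) region of interest; the exact ROI conventions and any Rician-noise correction used in the fBIRN protocol may differ from this minimal sketch:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR: mean tissue-ROI intensity over the standard deviation of a
    background/noise ROI (one common single-image definition)."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    """CNR between two tissue ROIs (e.g., gray and white matter)
    relative to the same background-noise estimate."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))
```

For gray/white-matter CNR, `roi_a` and `roi_b` would be voxel samples from the two tissue classes of the same scan.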
Michailovich OV, Tannenbaum A. Despeckling of medical ultrasound images. IEEE Trans Ultrason Ferroelectr Freq Control. 2006;53(1):64–78.
Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce the image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmic transformation in order to convert multiplicative speckle noise into additive noise. The common assumption made in most such studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural. Moreover, it may lead to inadequate performance of the speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain), so that the noise in the log-transformation domain becomes very close in its behavior to a white Gaussian noise. As a result, the preprocessing allows filtering methods that assume white Gaussian noise to perform in nearly optimal conditions. The study evaluates the performance of three different nonlinear filters (wavelet denoising, total variation filtering, and anisotropic diffusion) and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.
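The multiplicative-to-additive conversion mentioned above is simply a log transform. A small NumPy demonstration (the gamma-distributed speckle and constant reflectivity are illustrative assumptions, not the paper's model; the paper's point is that the resulting additive noise is not automatically white or Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)
reflectivity = np.full(10_000, 2.0)                       # idealized tissue signal
speckle = rng.gamma(shape=4.0, scale=0.25, size=10_000)   # multiplicative noise, mean ~1
image = reflectivity * speckle                            # multiplicative speckle model

# After the log transform the noise enters additively:
#   log(image) = log(reflectivity) + log(speckle)
log_image = np.log(image)
additive_noise = log_image - np.log(reflectivity)
```

Additive-noise filters (wavelet shrinkage, total variation, anisotropic diffusion) are then applied to `log_image`, and the result exponentiated back.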
Talos IF, Mian AZ, Zou KH, Hsu L, Goldberg-Zimring D, Haker S, Bhagwat JG, Mulkern RV. Magnetic resonance and the human brain: anatomy, function and metabolism. Cell Mol Life Sci. 2006;63(10):1106–24.
The introduction and development, over the last three decades, of magnetic resonance (MR) imaging and MR spectroscopy technology for in vivo studies of the human brain represents a truly remarkable achievement, with enormous scientific and clinical ramifications. These effectively non-invasive techniques allow for studies of the anatomy, the function and the metabolism of the living human brain. They have allowed for new understandings of how the healthy brain works and have provided insights into the mechanisms underlying multiple disease processes which affect the brain. Different MR techniques have been developed for studying anatomy, function and metabolism. The primary focus of this review is to describe these different methodologies and to briefly review how they are being employed to more fully appreciate the intricacies associated with the organ that most distinctly differentiates the human species from other animal forms on Earth.
Guttmann CRG, Meier DS, Holland CM. Can MRI reveal phenotypes of multiple sclerosis? Magn Reson Imaging. 2006;24(4):475–81.
The multicontrast capability of magnetic resonance imaging (MRI) is discussed in its role in the search for phenotypes of multiple sclerosis (MS). Aspects of MRI specificity, putative markers for pathogenetic components of disease and issues of spatial and temporal distribution are discussed. While particular reference is made to MS, the concepts apply to common pathological features of many neurologic diseases and to neurodegenerative disease in general. The assessment and dissociation of disease activity and disease severity, as well as the combination of varied metrics for the purposes of inferential and predictive disease modeling, are explored with respect to biomarkers and clinical outcomes. By virtue of its noninvasive nature and multicontrast capabilities depicting multiple facets of MS pathology, MRI lends itself to the systematic search of pathogenetically distinct subtypes of MS in large populations of patients. In conjunction with clinical, immunological, serological and genetic information, clusters of MS patients with distinct clinical prognosis and diverse response profiles to available and future treatments may be identified.
Learned-Miller EG. Data driven image models through continuous joint alignment. IEEE Trans Pattern Anal Mach Intell. 2006;28(2):236–50.
This paper presents a family of techniques that we call congealing for modeling image classes from data. The idea is to start with a set of images and make them appear as similar as possible by removing variability along the known axes of variation. This technique can be used to eliminate "nuisance" variables such as affine deformations from handwritten digits or unwanted bias fields from magnetic resonance images. In addition to separating and modeling the latent images (i.e., the images without the nuisance variables), we can model the nuisance variables themselves, leading to factorized generative image models. When nuisance variable distributions are shared between classes, one can share the knowledge learned in one task with another task, leading to efficient learning. We demonstrate this process by building a handwritten digit classifier from just a single example of each class. In addition to applications in handwritten character recognition, we describe in detail the application of bias removal from magnetic resonance images. Unlike previous methods, we use a separate, nonparametric model for the intensity values at each pixel. This allows us to leverage the data from the MR images of different patients to remove bias from each other. Only very weak assumptions are made about the distributions of intensity values in the images. In addition to the digit and MR applications, we discuss a number of other uses of congealing and describe experiments about the robustness and consistency of the method.
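The core congealing idea, iteratively re-transforming each image to reduce a joint pixel-stack entropy, can be sketched on 1-D signals with integer shifts. This toy uses per-pixel variance as a Gaussian-entropy surrogate rather than the paper's nonparametric per-pixel model, and all names are hypothetical:

```python
import numpy as np

def pixel_stack_entropy(stack):
    """Sum over pixel locations of the empirical variance of values
    across the stack (a simple Gaussian-entropy surrogate)."""
    return float(np.sum(np.var(stack, axis=0)))

def congeal_shifts(signals, max_shift=3, n_iter=5):
    """Toy 1-D congealing: greedily pick an integer circular shift per
    signal that minimizes the joint pixel-stack entropy surrogate."""
    shifts = np.zeros(len(signals), dtype=int)
    cur = [s.copy() for s in signals]
    for _ in range(n_iter):                      # coordinate descent
        for i, s in enumerate(signals):
            best, best_e = shifts[i], None
            for d in range(-max_shift, max_shift + 1):
                cur[i] = np.roll(s, d)
                e = pixel_stack_entropy(np.stack(cur))
                if best_e is None or e < best_e:
                    best, best_e = d, e
            shifts[i] = best
            cur[i] = np.roll(s, best)            # keep the best shift
    return shifts
```

Real congealing optimizes continuous affine parameters per image; the greedy entropy-descent structure is the same.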
Pohl KM, Fisher J, Grimson EL, Kikinis R, Wells WM III. A Bayesian Model for Joint Segmentation and Registration. Neuroimage. 2006;31(1):228–39.
A statistical model is presented that combines the registration of an atlas with the segmentation of magnetic resonance images. We use an Expectation Maximization-based algorithm to find a solution within the model, which simultaneously estimates image artifacts, anatomical labelmaps, and a structure-dependent hierarchical mapping from the atlas to the image space. The algorithm produces segmentations for brain tissues as well as their substructures. We demonstrate the approach on a set of 22 magnetic resonance images. On this set of images, the new approach performs significantly better than similar methods which sequentially apply registration and segmentation.
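The E/M alternation at the heart of such algorithms can be illustrated with a plain 1-D Gaussian-mixture intensity segmentation. This toy omits the paper's atlas registration, hierarchical mapping, and image-artifact (bias-field) estimation, and all names are hypothetical:

```python
import numpy as np

def em_gmm_segment(intensities, n_classes=2, n_iter=50):
    """Minimal EM for a 1-D Gaussian mixture: the E-step computes soft
    class memberships per voxel, the M-step re-estimates class means,
    variances, and weights from those memberships."""
    x = np.asarray(intensities, dtype=float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))  # spread initial means
    var = np.full(n_classes, x.var() / n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each voxel
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk / len(x)
    return r.argmax(axis=1), mu
```

In the joint model of the paper, the M-step additionally updates the atlas-to-image mapping and artifact estimates, so registration and segmentation inform each other at every iteration.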