Publications by Year: 2013


We propose two novel distance measures for image matching, normalized between 0 and 1 and based on normalized cross-correlation. These distance measures explicitly utilize the fact that for natural images there is a high correlation between spatially close pixels. Image matching is used in various computer vision tasks, and the requirements for the distance measure are application dependent. Image recognition applications require measures that are more robust to shift and rotation. In contrast, registration and tracking applications require better localization and noise tolerance. In this paper, we explore the different advantages of our distance measures and compare them to other popular measures, including Normalized Cross-Correlation (NCC) and Image Euclidean Distance (IMED). We show which of the proposed measures is more appropriate for tracking and which is more appropriate for image recognition tasks.
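The abstract does not give the exact form of the proposed measures, but as a rough illustration of how a distance in [0, 1] can be built from normalized cross-correlation, here is a minimal Python sketch; the mapping d = (1 - NCC)/2 and the function names are assumptions for illustration only, not the paper's definitions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized image patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def ncc_distance(a, b):
    """Map NCC in [-1, 1] to a distance in [0, 1]; identical patches give 0."""
    return (1.0 - ncc(a, b)) / 2.0

# Illustrative usage: a random patch, a noisy copy, and an unrelated patch
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
noisy = patch + 0.05 * rng.standard_normal((32, 32))
print(ncc_distance(patch, noisy))                 # close to 0
print(ncc_distance(patch, rng.random((32, 32))))  # near 0.5
```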
Mueller M, Karasev P, Kolesov I, Tannenbaum A. Optical flow estimation for flame detection in videos. IEEE Trans Image Process. 2013;22(7):2786–97.
Computational vision-based flame detection has drawn significant attention in the past decade as camera surveillance systems have become ubiquitous. Whereas many discriminating features, such as color, shape, and texture, have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea consists of exploiting the difference between the turbulent, fast motion of fire and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Characteristic features related to the flow magnitudes and directions are then computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method based on fire simulations is proposed, providing a controlled environment in which to analyze the influence of parameters such as flame saturation, spatial resolution, frame rate, and random noise.
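The paper's two specialized flow estimators are not reproduced here; as a hedged sketch of the general idea of turning flow fields into motion features, the snippet below uses OpenCV's classical Farneback flow as a stand-in and computes simple magnitude and direction statistics. The specific features (magnitude variance, direction entropy) are illustrative assumptions, not the paper's feature set.

```python
import cv2
import numpy as np

def flow_features(prev_gray, next_gray):
    """Magnitude/direction statistics of dense optical flow between two frames."""
    # Classical Farneback flow as a stand-in for the paper's estimators
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Turbulent fire motion tends to show large magnitude variance and a
    # broad (high-entropy) distribution of flow directions.
    counts, _ = np.histogram(ang, bins=16, range=(0, 2 * np.pi))
    p = counts / counts.sum()
    p = p[p > 0]
    return {"mean_magnitude": float(mag.mean()),
            "magnitude_variance": float(mag.var()),
            "direction_entropy": float(-(p * np.log(p)).sum())}
```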
Montillo A, Song Q, Liu X, Miller JV. Parsing radiographs by integrating landmark set detection and multi-object active appearance models. Proc SPIE Int Soc Opt Eng. 2013;8669:86690H.
This work addresses the challenging problem of parsing 2D radiographs into salient anatomical regions such as the left and right lungs and the heart. We propose the integration of automatic detection of a constellation of landmarks via rejection cascade classifiers and a learned geometric constellation subset detector model with a multi-object active appearance model (MO-AAM) initialized by the detected landmark constellation subset. Our main contribution is twofold. First, we propose a recovery method for false positive and false negative landmarks which makes it possible to handle extreme ranges of anatomical and pathological variability. Specifically, we (1) recover false negative (missing) landmarks through the consensus of inferences from subsets of the detected landmarks, and (2) choose one from multiple false positives for the same landmark by learning Gaussian distributions for the relative location of each landmark. Second, we train an MO-AAM using the true landmarks and, during testing, initialize the model using the detected landmarks. Our model fitting allows simultaneous localization of multiple regions by encoding the shape and appearance information of multiple objects in a single model. The integration of the landmark detection method and the MO-AAM reduces the mean distance error of the detected landmarks from 20.0 mm to 12.6 mm. We assess our method using a database of scout CT scans from 80 subjects with widely varying pathology.
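As a simplified sketch of the second recovery step (picking one of several false-positive candidates with a learned Gaussian model of relative landmark location), consider the following; the reference landmark, mean offset, and covariance values are hypothetical, and the selection rule is only an assumed illustration.

```python
import numpy as np

def pick_candidate(candidates, reference, mean_offset, cov):
    """Choose the candidate whose offset from a reliably detected reference
    landmark best matches a learned Gaussian (lowest Mahalanobis distance)."""
    cov_inv = np.linalg.inv(cov)
    def mahalanobis_sq(c):
        d = np.asarray(c) - np.asarray(reference) - np.asarray(mean_offset)
        return float(d @ cov_inv @ d)
    return min(candidates, key=mahalanobis_sq)

# Hypothetical numbers: two competing detections for the same landmark
candidates = [(120.0, 85.0), (60.0, 200.0)]
reference = (100.0, 80.0)                 # a confidently detected landmark
mean_offset = np.array([18.0, 6.0])       # learned mean relative location
cov = np.diag([25.0, 25.0])               # learned covariance
print(pick_candidate(candidates, reference, mean_offset, cov))  # (120.0, 85.0)
```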
Kikinis Z, Makris N, Finn CT, Bouix S, Lucia D, Coleman MJ, Tworog-Dube E, Kikinis R, Kucherlapati R, Shenton ME, Kubicki M. Genetic contributions to changes of fiber tracts of ventral visual stream in 22q11.2 deletion syndrome. Brain Imaging Behav. 2013;7(3):316–25.
Patients with 22q11.2 deletion syndrome (22q11.2DS) represent a population at high risk for developing schizophrenia, as well as learning disabilities. Deficits in visuo-spatial memory are thought to underlie some of the cognitive disabilities. Neuronal substrates of visuo-spatial memory include the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF), two tracts that comprise the ventral visual stream. Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is an established method to evaluate white matter (WM) connections in vivo. DT-MRI scans of nine 22q11.2DS young adults and nine matched healthy subjects were acquired. Tractography of the IFOF and the ILF was performed. DT-MRI indices, including fractional anisotropy (FA, a measure of WM changes), axial diffusivity (AD, a measure of axonal changes), and radial diffusivity (RD, a measure of myelin changes), were measured and compared for each tract in each group. The 22q11.2DS group showed statistically significant reductions of FA in the IFOF in the left hemisphere. Additionally, reductions of AD were found in the IFOF and the ILF in both hemispheres. These findings might be the consequence of axonal changes, possibly due to fewer, thinner, or less organized fibers. No changes in RD were detected in any of the tracts delineated, in contrast to findings in schizophrenia patients, where increases in RD are believed to be indicative of demyelination. We conclude that axonal changes, reflected in the reduced AD, may be key to understanding the underlying WM pathology leading to the visuo-spatial phenotype in 22q11.2DS.
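For readers unfamiliar with the DT-MRI indices mentioned above, the standard definitions of FA, AD, and RD in terms of the diffusion tensor eigenvalues are sketched below; the formulas are standard, and the example eigenvalues are illustrative only.

```python
import numpy as np

def dti_scalars(eigvals):
    """Fractional anisotropy (FA), axial diffusivity (AD), and radial
    diffusivity (RD) from the three eigenvalues of a diffusion tensor."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    lam = np.array([l1, l2, l3])
    md = lam.mean()                                   # mean diffusivity
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return {"FA": float(fa), "AD": float(l1), "RD": float((l2 + l3) / 2.0)}

# Illustrative white-matter-like eigenvalues (units of 10^-3 mm^2/s)
print(dti_scalars([1.7, 0.4, 0.3]))
```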
Bianchi A, Miller JV, Tan ET, Montillo A. Brain Tumor Segmentation with Symmetric Texture and Symmetric Intensity-based Decision Forests. Proc IEEE Int Symp Biomed Imaging. 2013;2013:748–51.
Accurate automated segmentation of brain tumors in MR images is challenging due to overlapping tissue intensity distributions and amorphous tumor shape. However, a clinically viable solution providing precise quantification of tumor and edema volume would enable better pre-operative planning, treatment monitoring, and drug development. Our contributions are threefold. First, we design efficient gradient- and LBP-TOP-based texture features which improve classification accuracy over standard intensity features. Second, we extend our texture and intensity features to symmetric texture and symmetric intensity, which further improve the accuracy for all tissue classes. Third, we demonstrate further accuracy enhancement by extending our long-range features from 100 mm to a full 200 mm. We assess our brain segmentation technique on 20 patients in the BraTS 2012 dataset. The impact of each contribution is measured, and the combination of all the features is shown to yield state-of-the-art accuracy and speed.
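As a rough sketch of the idea behind a symmetric intensity feature (comparing a voxel with its mirror counterpart across the brain midline), here is a simplified 2D version; assuming the symmetry axis is the vertical center line of the slice is an illustration, not the paper's actual alignment procedure.

```python
import numpy as np

def symmetric_intensity_feature(image, y, x):
    """Difference between a pixel and its left-right mirror counterpart,
    assuming the symmetry axis is the vertical center line of the slice.
    Tumors typically break left-right symmetry, so large values are informative."""
    x_mirror = image.shape[1] - 1 - x
    return float(image[y, x]) - float(image[y, x_mirror])

# Synthetic slice with an asymmetric bright "lesion" on one side
slice_ = np.zeros((128, 128))
slice_[40:60, 90:110] = 1.0
print(symmetric_intensity_feature(slice_, 50, 100))  # large asymmetry
print(symmetric_intensity_feature(slice_, 100, 64))  # near zero (symmetric background)
```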
Lou Y, Niu T, Jia X, Vela PA, Zhu L, Tannenbaum AR. Joint CT/CBCT deformable registration and CBCT enhancement for cancer radiotherapy. Med Image Anal. 2013;17(3):387–400.
This paper details an algorithm to simultaneously perform registration of computed tomography (CT) and cone-beam computed tomography (CBCT) images, and image enhancement of CBCT. The algorithm employs a viscous fluid model which naturally incorporates two components: a similarity measure for registration and an intensity correction term for image enhancement. Incorporating an intensity correction term improves the registration results. Furthermore, applying the image enhancement term to CBCT imagery leads to an intensity-corrected CBCT with better image quality. To achieve minimal processing time, the algorithm is implemented on a graphics processing unit (GPU) platform. The advantage of the simultaneous optimization strategy is quantitatively validated and discussed using a synthetic example. The effectiveness of the proposed algorithm is then illustrated using six patient datasets: three head-and-neck datasets and three prostate datasets.
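A schematic form of such a joint objective is sketched below; the exact functional and its regularizers are not given in the abstract, so this is only an assumed illustration of how a similarity term and an intensity correction term can be combined under a fluid regularizer.

```latex
% u: deformation field, c: CBCT intensity-correction field,
% R_fluid: viscous-fluid regularizer, R_c: regularizer on the correction
% (schematic only; not the paper's exact formulation)
E(u, c) = \int_{\Omega} \Big( I_{\mathrm{CT}}(x) - \left[ I_{\mathrm{CBCT}}(x - u(x)) + c(x) \right] \Big)^{2} \, dx
          + \alpha \, \mathcal{R}_{\mathrm{fluid}}(u) + \beta \, \mathcal{R}_{c}(c)
```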
Kolesov I, Karasev P, Shusharina N, Vela P, Tannenbaum A, Sharp G. Interactive Segmentation of Structures in the Head and Neck Using Steerable Active Contours. Med Phys. 2013;40(6Part32):536.
PURPOSE: The purpose of this work is to investigate the performance of an interactive image segmentation method for radiotherapy contouring on computed tomography (CT) images. Manual segmentation is a time-consuming task that is essential for treatment. Due to the low contrast of target structures, their similarity to surrounding tissue, and the precision required for the final segmentation result, automatic methods do not exhibit robust performance. Furthermore, when an automatic segmentation algorithm produces errors at the structure boundary, they are tedious for a human user to correct. For this experiment, it is hypothesized that an interactive algorithm can attain ground truth results in a fraction of the time needed for manual segmentation. METHODS: The proposed method is interactive segmentation that tightly couples a human "expert user" with a framework from computer vision called "active contours" to create a closed-loop control system. As a result, the strengths of the automatic method (i.e., quickly delineating complicated target boundaries) can be leveraged by the user, who guides the algorithm based on his expert knowledge throughout the process. Experimental segmentations were performed both with and without the control-system feedback, and the accuracy of the resulting labels was compared along with the time required to create them. RESULTS: Four structures were evaluated: the left and right eyeballs, the brain stem, and the mandible. Tests show that virtually identical segmentations are produced with and without control-system feedback. However, the time required to complete the task is significantly less than what is needed for fully manual contouring. CONCLUSION: Interactive segmentation using control-system feedback is shown to reduce the time and effort needed to segment targets in CT volumes of the head and neck region.
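As a loose open-source stand-in for the steerable active contour step (not the authors' implementation), the sketch below grows a contour from a user-placed seed with scikit-image's morphological geodesic active contour; re-running with edited seeds or parameters would play the role of the closed-loop user feedback described above.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_from_seed(ct_slice, seed_yx, radius=10, n_iter=150):
    """Grow a contour from a user seed with a geodesic active contour."""
    # Edge-stopping map: small values near strong CT boundaries
    gimage = inverse_gaussian_gradient(ct_slice.astype(float))
    # Initial level set: a small disk around the user's click
    yy, xx = np.mgrid[:ct_slice.shape[0], :ct_slice.shape[1]]
    init = ((yy - seed_yx[0]) ** 2 + (xx - seed_yx[1]) ** 2) < radius ** 2
    # balloon > 0 pushes the contour outward until it reaches edges
    return morphological_geodesic_active_contour(
        gimage, n_iter, init_level_set=init.astype(np.int8),
        smoothing=2, balloon=1)
```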
Pani JR, Chariker JH, Naaz F. Computer-based learning: interleaving whole and sectional representation of neuroanatomy. Anat Sci Educ. 2013;6(1):11–8.
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI).
Irimia A, Goh SYM, Torgerson CM, Chambers MC, Kikinis R, Van Horn JD. Forward and Inverse Electroencephalographic Modeling in Health and in Acute Traumatic Brain Injury. Clin Neurophysiol. 2013;124(11):2129–45.
OBJECTIVE: EEG source localization is demonstrated in three cases of acute traumatic brain injury (TBI) with progressive lesion loads using anatomically faithful models of the head which account for pathology. METHODS: Multimodal magnetic resonance imaging (MRI) volumes were used to generate head models via the finite element method (FEM). A total of 25 tissue types, including 6 types accounting for pathology, were included. To determine the effects of TBI upon source localization accuracy, a minimum-norm operator was used to perform inverse localization and to quantify its accuracy. RESULTS: The importance of using a comprehensive set of tissue types is confirmed both in health and in TBI. Pathology omission is found to cause substantial inaccuracies in EEG forward matrix calculations, with lead field sensitivity being underestimated by as much as ≈200% in (peri-)contusional regions when TBI-related changes are ignored. Failing to account for such conductivity changes is found to introduce localization errors of up to 35 mm. CONCLUSIONS: Changes in head conductivity profiles should be accounted for when performing EEG modeling in acute TBI. SIGNIFICANCE: Given the challenges of inverse localization in TBI, this framework can benefit neurotrauma patients by providing useful insights into pathophysiology.
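The minimum-norm operator mentioned in the METHODS is a standard regularized pseudoinverse of the lead field matrix; a minimal sketch is given below, with a toy random lead field standing in for the FEM-derived one and the regularization value chosen arbitrarily.

```python
import numpy as np

def minimum_norm_inverse(leadfield, reg=0.1):
    """Minimum-norm inverse operator W = L^T (L L^T + reg^2 I)^-1.
    leadfield: (n_sensors, n_sources) forward matrix, e.g. from a FEM head model."""
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + (reg ** 2) * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, np.eye(n_sensors))

# Toy example: 64 sensors, 5000 candidate sources (random numbers, shapes only)
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 5000))
W = minimum_norm_inverse(L)
eeg = rng.standard_normal(64)      # one time sample of sensor data
sources = W @ eeg                  # estimated amplitudes on the source grid
```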