Publications

2013

Raul San Jose Estepar, Gregory L Kinney, Jennifer L Black-Shinn, Russell P Bowler, Gordon L Kindlmann, James C Ross, Ron Kikinis, MeiLan K Han, Carolyn E Come, Alejandro A Diaz, Michael H Cho, Craig P Hersh, Joyce D Schroeder, John J Reilly, David A Lynch, James D Crapo, Mark T Dransfield, John E Hokanson, George R Washko, and COPDGene Study. 2013. Computed Tomographic Measures of Pulmonary Vascular Morphology in Smokers and Their Clinical Implications. Am J Respir Crit Care Med, 188, 2, Pp. 231-9.
RATIONALE: Angiographic investigation suggests that pulmonary vascular remodeling in smokers is characterized by distal pruning of the blood vessels. OBJECTIVES: Using volumetric computed tomography scans of the chest, we sought to quantitatively evaluate this process and assess its clinical associations. METHODS: Pulmonary vessels were automatically identified, segmented, and measured. Total blood vessel volume (TBV) and the aggregate vessel volume for vessels less than 5 mm² in cross-sectional area (BV5) were calculated for all lobes. The lobe-specific BV5 measures were normalized to the TBV of that lobe and to the nonvascular tissue volume to calculate lobe-specific BV5/TBV and BV5/TissueV ratios. Densitometric measures of emphysema were obtained using a Hounsfield unit threshold of -950 (%LAA-950). Measures of chronic obstructive pulmonary disease severity included single-breath measures of diffusing capacity of carbon monoxide, oxygen saturation, the 6-minute-walk distance, St George’s Respiratory Questionnaire total score (SGRQ), and the body mass index, airflow obstruction, dyspnea, and exercise capacity (BODE) index. MEASUREMENTS AND MAIN RESULTS: The %LAA-950 was inversely related to all calculated vascular ratios. In multivariate models including age, sex, and %LAA-950, lobe-specific measurements of BV5/TBV were directly related to resting oxygen saturation and inversely associated with both the SGRQ and BODE scores. In similar multivariate adjustment, lobe-specific BV5/TissueV ratios were inversely related to resting oxygen saturation, diffusing capacity of carbon monoxide, and 6-minute-walk distance, and directly related to the SGRQ and BODE. CONCLUSIONS: Smoking-related chronic obstructive pulmonary disease is characterized by distal pruning of the small blood vessels (<5 mm²) and loss of tissue in excess of the vasculature. The magnitude of these changes predicts the clinical severity of disease.
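The lobe-level ratios described above reduce to simple aggregations over the segmented vessel volumes. The following is a minimal, hypothetical sketch of that bookkeeping, not the authors' pipeline; the function name, the inputs, and the assumption that per-vessel cross-sectional areas and volumes are already available from the segmentation are all illustrative.

```python
import numpy as np

def lobe_vascular_ratios(vessel_csa_mm2, vessel_vol_mm3, lobe_tissue_vol_mm3):
    """Aggregate per-vessel measurements for a single lobe (illustrative).

    vessel_csa_mm2      : cross-sectional area of each vessel segment (mm^2)
    vessel_vol_mm3      : volume of each vessel segment (mm^3)
    lobe_tissue_vol_mm3 : nonvascular tissue volume of the lobe (mm^3)
    """
    csa = np.asarray(vessel_csa_mm2, dtype=float)
    vol = np.asarray(vessel_vol_mm3, dtype=float)

    tbv = vol.sum()                 # total blood vessel volume (TBV)
    bv5 = vol[csa < 5.0].sum()      # aggregate volume of vessels < 5 mm^2 (BV5)

    return {
        "BV5/TBV": bv5 / tbv,                      # distal pruning measure
        "BV5/TissueV": bv5 / lobe_tissue_vol_mm3,  # small-vessel volume per unit tissue
    }
```

A lobe whose vascular volume is concentrated in vessels below 5 mm² in cross-sectional area yields a BV5/TBV ratio close to 1; distal pruning of the small vessels drives the ratio down.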
Arie Nakhmani and Allen Tannenbaum. 2013. A New Distance Measure Based on Generalized Image Normalized Cross-Correlation for Robust Video Tracking and Image Recognition. Pattern Recognit Lett, 34, 3, Pp. 315-21.
We propose two novel distance measures, normalized between 0 and 1 and based on normalized cross-correlation, for image matching. These distance measures explicitly exploit the fact that in natural images spatially close pixels are highly correlated. Image matching is used in various computer vision tasks, and the requirements on the distance measure are application dependent. Image recognition applications require measures that are more robust to shift and rotation. In contrast, registration and tracking applications require better localization and noise tolerance. In this paper, we explore the different advantages of our distance measures and compare them to other popular measures, including Normalized Cross-Correlation (NCC) and Image Euclidean Distance (IMED). We show which of the proposed measures is more appropriate for tracking, and which is appropriate for image recognition tasks.
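For orientation, the classical normalized cross-correlation on which the proposed measures build can itself be turned into a distance normalized to [0, 1]. The sketch below shows only that baseline; the paper's generalized measures additionally exploit the correlation between spatially close pixels, which is not reproduced here.

```python
import numpy as np

def ncc(a, b):
    """Classical normalized cross-correlation between two equally sized patches."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0
    return float(np.dot(a, b) / denom)      # value in [-1, 1]

def ncc_distance(a, b):
    """Map NCC into a distance in [0, 1]; 0 means identical up to gain and offset."""
    return (1.0 - ncc(a, b)) / 2.0
```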
Martin Mueller, Peter Karasev, Ivan Kolesov, and Allen Tannenbaum. 2013. Optical flow estimation for flame detection in videos. IEEE Trans Image Process, 22, 7, Pp. 2786-97.
Computational vision-based flame detection has drawn significant attention in the past decade as camera surveillance systems have become ubiquitous. Whereas many discriminating features, such as color, shape, and texture, have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea is to exploit the difference between the turbulent, fast motion of fire and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Characteristic features related to the flow magnitudes and directions are then computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method based on fire simulations is proposed, providing a controlled environment in which to analyze the influence of parameters such as flame saturation, spatial resolution, frame rate, and random noise.
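As an illustration of the kind of flow-based features described, the sketch below computes magnitude and direction statistics from a dense flow field. It uses OpenCV's Farneback estimator as a stand-in for the paper's optimal-mass-transport and data-driven schemes, and the specific statistics (magnitude variance, direction entropy) are illustrative rather than the authors' exact feature set.

```python
import cv2
import numpy as np

def flow_features(prev_gray, next_gray):
    """Summary statistics of a dense flow field for a candidate fire region.

    prev_gray, next_gray: consecutive 8-bit grayscale frames.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in radians
    return {
        "mean_magnitude": float(mag.mean()),
        "magnitude_var": float(mag.var()),              # turbulent fire -> high variance
        "direction_entropy": _direction_entropy(ang),   # fire motion is directionally incoherent
    }

def _direction_entropy(angles, bins=16):
    """Normalized entropy of the flow-direction histogram, in [0, 1]."""
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 2.0 * np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(bins))
```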
Albert Montillo, Qi Song, Xiaoming Liu, and James Miller. 2013. Parsing radiographs by integrating landmark set detection and multi-object active appearance models. Proc SPIE Int Soc Opt Eng, 8669, Pp. 86690H.
This work addresses the challenging problem of parsing 2D radiographs into salient anatomical regions such as the left and right lungs and the heart. We propose the integration of automatic detection of a constellation of landmarks, via rejection cascade classifiers and a learned geometric constellation subset detector model, with a multi-object active appearance model (MO-AAM) initialized by the detected landmark constellation subset. Our main contribution is twofold. First, we propose a recovery method for false positive and false negative landmarks which makes it possible to handle extreme ranges of anatomical and pathological variability. Specifically, we (1) recover false negative (missing) landmarks through the consensus of inferences from subsets of the detected landmarks, and (2) choose one from multiple false positives for the same landmark by learning Gaussian distributions for the relative location of each landmark. Second, we train an MO-AAM using the true landmarks for the detectors and, at test time, initialize the model using the detected landmarks. Our model fitting allows simultaneous localization of multiple regions by encoding the shape and appearance information of multiple objects in a single model. The integration of the landmark detection method and the MO-AAM reduces the mean distance error of the detected landmarks from 20.0 mm to 12.6 mm. We assess our method using a database of scout CT scans from 80 subjects with widely varying pathology.
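The false-positive resolution step can be pictured as a maximum-likelihood choice among candidate detections under a Gaussian learned over relative landmark positions. The sketch below is a simplified, hypothetical rendering of that idea (a single reference landmark and 2D locations), not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def pick_candidate(candidates, reference, rel_mean, rel_cov):
    """Choose the detection whose position relative to a trusted reference
    landmark is most likely under a Gaussian learned from training images.

    candidates        : list of (x, y) detections for one landmark
    reference         : (x, y) of an already-trusted landmark
    rel_mean, rel_cov : learned mean and covariance of (candidate - reference)
    """
    gauss = multivariate_normal(mean=rel_mean, cov=rel_cov)
    offsets = np.asarray(candidates, dtype=float) - np.asarray(reference, dtype=float)
    return candidates[int(np.argmax(gauss.pdf(offsets)))]
```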
Zora Kikinis, Nikos Makris, Christine T Finn, Sylvain Bouix, Diandra Lucia, Michael J Coleman, Erica Tworog-Dube, Ron Kikinis, Raju Kucherlapati, Martha E Shenton, and Marek Kubicki. 2013. Genetic contributions to changes of fiber tracts of ventral visual stream in 22q11.2 deletion syndrome. Brain Imaging Behav, 7, 3, Pp. 316-25.
Patients with 22q11.2 deletion syndrome (22q11.2DS) represent a population at high risk for developing schizophrenia, as well as learning disabilities. Deficits in visuo-spatial memory are thought to underlie some of the cognitive disabilities. Neuronal substrates of visuo-spatial memory include the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF), two tracts that comprise the ventral visual stream. Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is an established method to evaluate white matter (WM) connections in vivo. DT-MRI scans of nine 22q11.2DS young adults and nine matched healthy subjects were acquired. Tractography of the IFOF and the ILF was performed. DT-MRI indices, including fractional anisotropy (FA, a measure of WM changes), axial diffusivity (AD, a measure of axonal changes), and radial diffusivity (RD, a measure of myelin changes), were measured and compared for each tract and each group. The 22q11.2DS group showed statistically significant reductions of FA in the IFOF in the left hemisphere. Additionally, reductions of AD were found in the IFOF and the ILF in both hemispheres. These findings might be the consequence of axonal changes, possibly due to fewer, thinner, or less organized fibers. No changes in RD were detected in any of the tracts delineated, in contrast to findings in schizophrenia patients, where increases in RD are believed to be indicative of demyelination. We conclude that axonal changes may be key to understanding the underlying pathology of WM leading to the visuo-spatial phenotype in 22q11.2DS.
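The DT-MRI indices named above are standard functions of the diffusion tensor's eigenvalues; the sketch below shows the textbook definitions of FA, AD, and RD. It is generic and not tied to the authors' processing pipeline.

```python
import numpy as np

def dti_scalars(eigvals):
    """FA, AD, and RD from the three eigenvalues of a diffusion tensor.

    eigvals : array-like of (lambda1, lambda2, lambda3), in any order.
    """
    l1, l2, l3 = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending
    md = (l1 + l2 + l3) / 3.0                                     # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))                 # fractional anisotropy
    ad = l1                                                       # axial diffusivity
    rd = (l2 + l3) / 2.0                                          # radial diffusivity
    return fa, ad, rd
```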
Anthony Bianchi, James Miller, Ek Tsoon Tan, and Albert Montillo. 2013. Brain Tumor Segmentation with Symmetric Texture and Symmetric Intensity-based Decision Forests. Proc IEEE Int Symp Biomed Imaging, 2013, Pp. 748-51.
Accurate automated segmentation of brain tumors in MR images is challenging due to overlapping tissue intensity distributions and amorphous tumor shape. However, a clinically viable solution providing precise quantification of tumor and edema volume would enable better pre-operative planning, treatment monitoring, and drug development. Our contributions are threefold. First, we design efficient gradient- and LBP-TOP-based texture features which improve classification accuracy over standard intensity features. Second, we extend our texture and intensity features to symmetric texture and symmetric intensity features, which further improve the accuracy for all tissue classes. Third, we demonstrate further accuracy enhancement by extending our long-range features from 100 mm to a full 200 mm. We assess our brain segmentation technique on 20 patients in the BraTS 2012 dataset. The impact of each contribution is measured, and the combination of all the features is shown to yield state-of-the-art accuracy and speed.
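A symmetric intensity feature of the kind described can be pictured as the difference between a voxel and its mirror across the brain's mid-sagittal plane. The sketch below is a deliberately simplified illustration, assuming the volume has already been aligned so that the left-right axis is the first array axis; the paper's actual features, including the LBP-TOP texture variants and long-range offsets, are not reproduced.

```python
import numpy as np

def symmetric_intensity_feature(volume, x, y, z):
    """Difference between a voxel's intensity and its mirror across the
    mid-sagittal plane; large asymmetries are suggestive of pathology.

    Assumes `volume` is aligned so that the mid-sagittal plane is the
    central slice along axis 0 (left-right).
    """
    x_mirror = volume.shape[0] - 1 - x
    return float(volume[x, y, z]) - float(volume[x_mirror, y, z])
```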
Yifei Lou, Tianye Niu, Xun Jia, Patricio A Vela, Lei Zhu, and Allen R Tannenbaum. 2013. Joint CT/CBCT deformable registration and CBCT enhancement for cancer radiotherapy. Med Image Anal, 17, 3, Pp. 387-400.
This paper details an algorithm that simultaneously performs registration of computed tomography (CT) and cone-beam computed tomography (CBCT) images and image enhancement of the CBCT. The algorithm employs a viscous fluid model which naturally incorporates two components: a similarity measure for registration and an intensity correction term for image enhancement. Incorporating the intensity correction term improves the registration results, and applying it to the CBCT imagery yields an intensity-corrected CBCT with better image quality. To minimize processing time, the algorithm is implemented on a graphics processing unit (GPU) platform. The advantage of the simultaneous optimization strategy is quantitatively validated and discussed using a synthetic example. The effectiveness of the proposed algorithm is then illustrated using six patient datasets: three head-and-neck and three prostate datasets.
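To make the joint formulation concrete, the sketch below evaluates an illustrative energy combining a sum-of-squared-differences registration term on the intensity-corrected, warped CBCT with a smoothness penalty on the correction field. The additive form of the correction, the SSD similarity, and the weight are assumptions, and the viscous-fluid update of the deformation (and its GPU implementation) is omitted entirely.

```python
import numpy as np

def joint_energy(ct, cbct_warped, correction, beta=0.1):
    """Illustrative joint objective for simultaneous registration and CBCT
    intensity correction (not the paper's exact formulation).

    ct          : reference CT volume
    cbct_warped : CBCT resampled through the current deformation
    correction  : current intensity-correction field (same shape as ct)
    """
    corrected = cbct_warped + correction            # assumed additive correction
    similarity = np.sum((ct - corrected) ** 2)      # registration / fidelity term
    gx, gy, gz = np.gradient(correction)            # penalize rough correction fields
    smoothness = np.sum(gx ** 2 + gy ** 2 + gz ** 2)
    return similarity + beta * smoothness
```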
Ivan Kolesov, Peter Karasev, N Shusharina, Patricio Vela, Allen Tannenbaum, and Gregory Sharp. 2013. Interactive Segmentation of Structures in the Head and Neck Using Steerable Active Contours. Med Phys, 40, 6Part32, Pp. 536.
PURPOSE: The purpose of this work is to investigate the performance of an interactive image segmentation method for radiotherapy contouring on computed tomography (CT) images. Manual segmentation is a time-consuming task that is essential for treatment. Due to the low contrast of target structures, their similarity to surrounding tissue, and the precision required of the final segmentation result, automatic methods do not exhibit robust performance. Furthermore, when an automatic segmentation algorithm produces errors at the structure boundary, they are tedious for a human user to correct. For this experiment, it is hypothesized that an interactive algorithm can attain ground truth results in a fraction of the time needed for manual segmentation. METHODS: The proposed method is an interactive segmentation that tightly couples a human "expert user" with a framework from computer vision called "active contours" to create a closed-loop control system. As a result, the strengths of the automatic method (i.e., quickly delineating complicated target boundaries) can be leveraged by the user, who guides the algorithm based on his expert knowledge throughout the process. Experimental segmentations were performed both with and without the control system feedback; the accuracy of the resulting labels is compared, along with the time required to create them. RESULTS: Four structures were evaluated: left/right eyeball, brain stem, and mandible. Tests show that virtually identical segmentations are produced with and without control system feedback. However, the time required to complete the task is significantly less than what is needed for fully manual contouring. CONCLUSION: Interactive segmentation using control system feedback is shown to reduce the time and effort needed to segment targets in CT volumes of the head and neck region.
John R Pani, Julia H Chariker, and Farah Naaz. 2013. Computer-based learning: interleaving whole and sectional representation of neuroanatomy. Anat Sci Educ, 6, 1, Pp. 11-8.
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI).