Publications by Year: 2015

Jens Sjölund, Filip Szczepankiewicz, Markus Nilsson, Daniel Topgaard, Carl-Fredrik Westin, and Hans Knutsson. 12/2015. “Constrained Optimization of Gradient Waveforms for Generalized Diffusion Encoding.” J Magn Reson, 261, Pp. 157-68.

Diffusion MRI is a useful probe of tissue microstructure. The conventional diffusion encoding sequence, the single pulsed field gradient, has recently been challenged as more general gradient waveforms have been introduced. Out of these, we focus on q-space trajectory imaging, which generalizes the scalar b-value to a tensor-valued entity. To take full advantage of its capabilities, it is imperative to respect the constraints imposed by the hardware, while at the same time maximizing the diffusion encoding strength. We provide a tool that achieves this by solving a constrained optimization problem that accommodates constraints on maximum gradient amplitude, slew rate, coil heating and positioning of radio frequency pulses. The method's efficacy and flexibility are demonstrated both experimentally and by comparison with previous work on optimization of isotropic diffusion sequences.
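For readers who want to experiment with this kind of problem, the sketch below sets up a heavily simplified single-axis analogue in Python (not the authors' solver): it maximizes a surrogate b-value under amplitude, slew-rate, and refocusing constraints using SciPy's SLSQP. The waveform length, time step, and hardware limits are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal single-axis sketch of the kind of constrained problem described above
# (not the authors' solver): maximize the diffusion weighting (b-value, up to
# constant factors) subject to limits on gradient amplitude and slew rate and a
# zero net gradient moment (refocusing) condition. All values are illustrative.
N, dt = 48, 1.0            # 48 samples, 1 ms each
gmax, smax = 80.0, 100.0   # amplitude limit [mT/m], slew limit [mT/m/ms]

def neg_b(g):
    q = np.cumsum(g) * dt          # q(t) ~ running integral of g(t)
    return -np.sum(q ** 2) * dt    # b ~ integral of q(t)^2, negated for minimize

constraints = [
    {"type": "ineq", "fun": lambda g: smax - np.diff(g) / dt},  # slew rate, upper
    {"type": "ineq", "fun": lambda g: smax + np.diff(g) / dt},  # slew rate, lower
    {"type": "eq",   "fun": lambda g: np.sum(g) * dt},          # zero net moment
]
g0 = np.sin(np.linspace(0.0, 2.0 * np.pi, N))                   # feasible start
res = minimize(neg_b, g0, method="SLSQP", bounds=[(-gmax, gmax)] * N,
               constraints=constraints, options={"maxiter": 300})
print("peak |g| of optimized waveform [mT/m]:", np.abs(res.x).max())
```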

Yifei Lou and Allen Tannenbaum. 10/2015. “Inter-modality Deformable Registration.” In Jia, X. and Jiang, S.B. (Eds.), Graphics Processing Unit-Based High Performance Computing in Radiation Therapy, Ch. 10. CRC Press.
Deformable image registration (DIR) is one of the major problems in medical image processing, with applications such as dose calculation [18], treatment planning [33], and scatter removal in cone-beam CT (CBCT) [22]. It is of prime importance to establish a pixel-to-pixel correspondence between two images in many clinical scenarios. For instance, registration of a CT image to an MRI of a patient taken at different times can provide complementary diagnostic information. For such applications, since the deformation of the patient anatomy cannot be represented by a rigid transform, DIR is almost the sole means of establishing this mapping. DIR can be generally categorized into intra-modality and inter-modality (or multi-modality) registration. While intra-modality DIR can be easily handled by conventional intensity-based methods [11, 30], solutions to inter-modality DIR problems are still far from satisfactory. Yet, since different imaging modalities usually provide unique angles from which to reveal patient anatomy and delineate microscopic disease, inter-modality registration plays a key role in combining information from multiple modalities to facilitate the diagnosis and treatment of disease.
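Because the chapter centers on the difficulty of inter-modality similarity, a minimal sketch of histogram-based mutual information, the classical inter-modality similarity measure, may be useful context; it is not necessarily the metric used in the chapter, and the bin count and toy images are assumptions.

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Histogram-based mutual information between two equally sized images.

    MI is the classical similarity measure for inter-modality registration:
    it rewards a consistent statistical relationship between intensities
    rather than identical intensity values.
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Toy usage: an image and a nonlinear remapping of it still have high MI.
rng = np.random.default_rng(0)
ct_like = rng.normal(size=(64, 64))
mr_like = np.exp(-ct_like)                # monotone but nonlinear intensity change
print(mutual_information(ct_like, mr_like))
```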
Frank King, Jagadeesan Jayender, Steve Pieper, Tina Kapur, Andras Lasso, and Gabor Fichtinger. 10/2015. “An Immersive Virtual Reality Environment for Diagnostic Imaging.” Int Conf Med Image Comput Comput Assist Interv, 18 (WS).
Mukund Balasubramanian, Robert V. Mulkern, William M Wells III, Padmavathi Sundaram, and Darren B Orbach. 10/2015. “Magnetic Resonance Imaging of Ionic Currents in Solution: The Effect of Magnetohydrodynamic Flow.” Magn Reson Med, 74, 4, Pp. 1145-55.

PURPOSE: Reliably detecting MRI signals in the brain that are more tightly coupled to neural activity than blood-oxygen-level-dependent fMRI signals could not only prove valuable for basic scientific research but could also enhance clinical applications such as epilepsy presurgical mapping. This endeavor will likely benefit from an improved understanding of the behavior of ionic currents, the mediators of neural activity, in the presence of the strong magnetic fields that are typical of modern-day MRI scanners. THEORY: Of the various mechanisms that have been proposed to explain the behavior of ionic volume currents in a magnetic field, only one, magnetohydrodynamic (MHD) flow, predicts a slow evolution of signals, on the order of a minute for normal saline in a typical MRI scanner. METHODS: This prediction was tested by scanning a volume-current phantom containing normal saline with gradient-echo-planar imaging at 3 T. RESULTS: Greater signal changes were observed in the phase of the images than in the magnitude, with the changes evolving on the order of a minute. CONCLUSION: These results provide experimental support for the MHD flow hypothesis. Furthermore, MHD-driven cerebrospinal fluid flow could provide a novel fMRI contrast mechanism.

Bjoern H Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, et al. 10/2015. “The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).” IEEE Trans Med Imaging, 34, 10, Pp. 1993-2024.

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked at the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
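As a concrete illustration of the evaluation and fusion ideas mentioned above, the sketch below computes a Dice overlap and a flat per-voxel majority vote; the benchmark itself uses a hierarchical vote over nested tumor sub-regions, and the toy data here are simulated.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(label_maps):
    """Per-voxel majority vote over a list of integer label maps.

    This is a flat vote; the benchmark paper uses a hierarchical variant
    over nested tumor sub-regions.
    """
    stack = np.stack(label_maps)                       # (raters, ...) array
    labels = range(stack.max() + 1)
    counts = np.stack([(stack == l).sum(axis=0) for l in labels])
    return counts.argmax(axis=0)

# Toy usage with three simulated raters on a binary-label problem.
rng = np.random.default_rng(1)
truth = (rng.random((32, 32)) > 0.7).astype(int)
raters = [np.where(rng.random(truth.shape) < 0.9, truth, 1 - truth) for _ in range(3)]
fused = majority_vote(raters)
print("Dice of fused vs. truth:", round(dice(fused, truth), 3))
```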

Adrian V Dalca, Ramesh Sridharan, Mert R Sabuncu, and Polina Golland. 10/2015. “Predictive Modeling of Anatomy with Genetic and Clinical Data.” Med Image Comput Comput Assist Interv, 9351, Pp. 519-26.

We present a semi-parametric generative model for predicting anatomy of a patient in subsequent scans following a single baseline image. Such predictive modeling promises to facilitate novel analyses in both voxel-level studies and longitudinal biomarker evaluation. We capture anatomical change through a combination of population-wide regression and a non-parametric model of the subject's health based on individual genetic and clinical indicators. In contrast to classical correlation and longitudinal analysis, we focus on predicting new observations from a single subject observation. We demonstrate prediction of follow-up anatomical scans in the ADNI cohort, and illustrate a novel analysis approach that compares a patient's scans to the predicted subject-specific healthy anatomical trajectory.

Tassilo Klein and William M Wells III. 10/2015. “RF Ultrasound Distribution-Based Confidence Maps.” Int Conf Med Image Comput Comput Assist Interv, 18 (Pt 2), Pp. 595-602.
Ultrasound is becoming an increasingly important modality in medical care. However, the underlying physical acquisition principles are prone to image artifacts and result in overall quality variation, so processing medical ultrasound data remains a challenging task. We propose a novel distribution-based measure for assessing confidence in the signal, which emphasizes uncertainty in attenuated as well as shadow regions. In contrast to a similar, recently proposed method that relies on image intensities, the new approach makes use of the envelope-detected radio-frequency data, facilitating the use of Nakagami speckle statistics. Employing the J-divergence as the distance measure for the random-walk-based algorithm provides a natural measure of similarity, yielding a more reliable estimate of confidence. The model's performance is evaluated on the task of shadow detection. Additionally, computed confidence maps are presented for different organs such as the neck, liver and prostate, showcasing the properties of the model. The probabilistic approach is shown to have beneficial features for image processing tasks.
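The sketch below illustrates only the distance underlying the approach: it fits Nakagami distributions to two patches of envelope-detected data and evaluates their J-divergence numerically. It is not the random-walk confidence algorithm itself, and the patch statistics are simulated assumptions.

```python
import numpy as np
from scipy.stats import nakagami

def j_divergence(nu1, scale1, nu2, scale2):
    """Symmetrized KL (J-) divergence between two Nakagami densities,
    evaluated by numerical integration on a grid."""
    x = np.linspace(1e-3, 10.0, 4000)
    p = nakagami(nu1, scale=scale1).pdf(x)
    q = nakagami(nu2, scale=scale2).pdf(x)
    eps = 1e-300                               # guards log(0) in the tails
    return float(np.trapz((p - q) * np.log((p + eps) / (q + eps)), x))

# Fit Nakagami parameters to two patches of envelope-detected RF data
# (simulated here; in practice they would come from the scanner).
rng = np.random.default_rng(2)
patch_a = nakagami(1.0, scale=1.0).rvs(5000, random_state=rng)   # fully developed speckle
patch_b = nakagami(0.6, scale=0.5).rvs(5000, random_state=rng)   # attenuated / pre-Rayleigh
nu_a, _, sc_a = nakagami.fit(patch_a, floc=0)
nu_b, _, sc_b = nakagami.fit(patch_b, floc=0)
print("J-divergence between patches:", j_divergence(nu_a, sc_a, nu_b, sc_b))
```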
 
Ion-Florin Talos, Marianna Jakab, and Ron Kikinis. 9/2015. CT-based Atlas of the Abdomen. Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
The Surgical Planning Laboratory at Brigham and Women's Hospital, Harvard Medical School, developed the SPL Abdominal Atlas. The atlas was derived from a computed tomography (CT) scan, using semi-automated image segmentation and three-dimensional reconstruction techniques. The current version consists of: 1. the original CT scan; 2. a set of detailed label maps; 3. a set of three-dimensional models of the labeled anatomical structures; 4. a mrml-file that allows loading all of the data into 3D Slicer for visualization (see the tutorial associated with the atlas); 5. several pre-defined 3D views (“anatomy teaching files”). The SPL Abdominal Atlas provides important reference information for surgical planning, anatomy teaching, and template-driven segmentation. Visualization of the data requires Slicer 3; this software package can be downloaded from the 3D Slicer website. We are pleased to make this atlas available to our colleagues for free download. Please note that the data is being distributed under the Slicer license. By downloading these data, you agree to acknowledge our contribution in any of your publications that result from the use of this atlas.
A Slicer4 version is also available for download, archived as an mrb (Medical Reality Bundle) file that contains the mrml scene file and all data, ready for loading and display in 3D Slicer version 4.0 or greater.
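Loading the bundle could look like the snippet below, run from 3D Slicer's Python console; the file name is a placeholder for wherever the downloaded atlas was saved, and `slicer.util.loadScene` is assumed to be available in the Slicer 4.x scripted interface.

```python
# Run inside the Python console of 3D Slicer 4.x; the file name is a
# placeholder for the downloaded atlas bundle.
import slicer

# loadScene reads the mrb bundle (mrml scene plus all referenced data) in one call.
slicer.util.loadScene("SPL-Abdominal-Atlas.mrb")
```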
This work is funded as part of the Neuroimaging Analysis Center, grant number P41 RR013218 from the NIH's National Center for Research Resources (NCRR) and grant number P41 EB015902 from the NIH's National Institute of Biomedical Imaging and Bioengineering (NIBIB), and by a Google Faculty Research Award.
Contributors: Matthew D'Artista, Alex Kikinis, Tobias Schmidt, Svenja van der Gaag.
This atlas may be viewed with our Open Anatomy Browser.
Marianna Jakab and Ron Kikinis. 9/2015. CT-based Atlas of the Head and Neck. Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
This Head and Neck Atlas has been made available by the Surgical Planning Laboratory at Brigham and Women's Hospital. The data set consists of: 1. A reduced-resolution (256x256) version of the MANIX data set from the OSIRIX data sets. 2. A set of detailed label maps. 3. A set of three-dimensional models of the labeled anatomical structures. 4. Several pre-defined Scene Views (“anatomy teaching files”). 5. Annotations provided as supplementary information associated with the scene. 6. An anatomical model hierarchy. All of these are packaged in an mrb (Medical Reality Bundle) archive file that contains the mrml scene file and all data, ready for loading and display in 3D Slicer version 4.0 or greater, and available for download. The atlas data is made available under the terms of the 3D Slicer License, section B.
This work is funded as part of the Neuroimaging Analysis Center, grant number P41 EB015902 from the NIH's National Institute of Biomedical Imaging and Bioengineering (NIBIB), and by a Google Faculty Research Award.
Contributors: Neha Agrawal, Matthew D'Artista, Susan Kikinis, Dashawn Richardson, Daniel Sachs.
This atlas may be viewed with our Open Anatomy Browser.
Jens Richolt, Marianna Jakab, and Ron Kikinis. 9/2015. MRI-based Atlas of the Knee. Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
The Surgical Planning Laboratory at Brigham and Women's Hospital, Harvard Medical School, developed the SPL Knee Atlas. The atlas was derived from an MRI scan, using semi-automated image segmentation and three-dimensional reconstruction techniques. The current version consists of: 1. the original MRI scan; 2. a set of detailed label maps; 3. a set of three-dimensional models of the labeled anatomical structures; 4. a mrml-file that allows loading all of the data into 3D Slicer for visualization; 5. several pre-defined 3D views (“anatomy teaching files”). The SPL Knee Atlas provides important reference information for anatomy teaching and template-driven segmentation. Visualization of the data requires Slicer 3; this software package can be downloaded from the 3D Slicer website. We are pleased to make this atlas available to our colleagues for free download. Please note that the data is being distributed under the Slicer license. By downloading these data, you agree to acknowledge our contribution in any of your publications that result from the use of this atlas.
A Slicer4 version is also available for download, archived as an mrb (Medical Reality Bundle) file that contains the mrml scene file and all data, ready for loading and display in 3D Slicer version 4.0 or greater.
This work is funded as part of the Neuroimaging Analysis Center, grant number P41 RR013218 from the NIH's National Center for Research Resources (NCRR) and grant number P41 EB015902 from the NIH's National Institute of Biomedical Imaging and Bioengineering (NIBIB), and by a Google Faculty Research Award.
Contributors: Matthew D'Artista, Alex Kikinis.
This atlas may be viewed with our Open Anatomy Browser.
Lipeng Ning, Kawin Setsompop, Oleg Michailovich, Nikos Makris, Carl-Fredrik Westin, and Yogesh Rathi. 6/2015. “A Compressed-Sensing Approach for Super-Resolution Reconstruction of Diffusion MRI.” Inf Process Med Imaging, 24, Pp. 57-68.

We present an innovative framework for reconstructing high-spatial-resolution diffusion magnetic resonance imaging (dMRI) from multiple low-resolution (LR) images. Our approach combines the twin concepts of compressed sensing (CS) and classical super-resolution to reduce acquisition time while increasing spatial resolution. We use subpixel-shifted LR images with down-sampled and non-overlapping diffusion directions to reduce acquisition time. The diffusion signal in the high-resolution (HR) image is represented in a sparsifying basis of spherical ridgelets to model complex fiber orientations with a reduced number of measurements. The HR image is obtained as the solution of a convex optimization problem, which is solved using a proposed algorithm based on the alternating direction method of multipliers (ADMM). We qualitatively and quantitatively evaluate the performance of our method on two sets of in-vivo human brain data and show its effectiveness in accurately recovering very high resolution diffusion images.
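As a rough illustration of the optimization machinery, the sketch below implements generic ADMM for the l1-regularized least-squares (LASSO) prototype of such sparse convex problems; it is not the paper's spherical-ridgelet formulation, and the matrix sizes and penalty parameters are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Generic ADMM for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    This is only the prototype of the sparse convex problem described above;
    the paper's formulation uses a spherical-ridgelet basis and additional
    acquisition operators.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic subproblem
        z = soft(x + u, lam / rho)                         # sparsity proximal step
        u += x - z                                         # scaled dual update
    return z

# Toy usage: recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(3)
A = rng.normal(size=(60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
x_hat = admm_lasso(A, A @ x_true, lam=0.05)
print("recovered nonzeros:", int(np.sum(np.abs(x_hat) > 1e-2)))
```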

Matthew Toews, Christian Wachinger, Raul San Jose Estepar, and William M Wells III. 6/2015. “A Feature-Based Approach to Big Data Analysis of Medical Images.” Inf Process Med Imaging, 24, Pp. 339-50.

This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches, with O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching, which require O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic obstructive pulmonary disease (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
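The following sketch shows the plain k-nearest-neighbor density estimate that the paper's generative estimator generalizes (it is not the paper's model); the feature dimension, k, and data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(query, data, k=10):
    """Classical k-NN density estimate: p(x) ~ k / (N * volume of the ball
    reaching the k-th neighbor). The paper's estimator generalizes this and
    kernel density estimation within a generative model."""
    n, d = data.shape
    tree = cKDTree(data)
    dist, _ = tree.query(query, k=k)           # distances to the k nearest neighbors
    r = dist[:, -1]                            # radius to the k-th neighbor
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * unit_ball * r ** d)

# Toy usage on 3-D "feature descriptors".
rng = np.random.default_rng(4)
feats = rng.normal(size=(5000, 3))
queries = np.array([[0.0, 0.0, 0.0], [3.0, 3.0, 3.0]])
print(knn_density(queries, feats))             # high density near the mode, low in the tail
```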

Christian Wachinger, Matthew Toews, Georg Langs, William M Wells III, and Polina Golland. 6/2015. “Keypoint Transfer Segmentation.” Inf Process Med Imaging, 24, Pp. 233-45.

We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for the inference of keypoint labels and for image segmentation, where keypoint matches are treated as a latent random variable and are marginalized out as part of the algorithm. We report segmentation results for abdominal organs in whole-body CT and in contrast-enhanced CT images. The accuracy of our method compares favorably to common multi-atlas segmentation while offering a speed-up of about three orders of magnitude. Furthermore, keypoint transfer requires no training phase or registration to an atlas. The algorithm's robustness enables the segmentation of scans with highly variable field-of-view.
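A toy version of the voting step (ii) is sketched below: keypoints are matched by descriptor distance and each match votes for an organ label. The full method instead marginalizes over matches probabilistically and transfers entire label maps, and the descriptors here are simulated.

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import Counter

def label_keypoints_by_voting(test_desc, train_desc, train_labels, k=5):
    """Assign each test keypoint the organ label receiving the most votes among
    its k nearest training descriptors (a simplified stand-in for step (ii);
    the paper treats matches as a latent variable and marginalizes them out)."""
    tree = cKDTree(train_desc)
    _, idx = tree.query(test_desc, k=k)
    return np.array([Counter(train_labels[row]).most_common(1)[0][0] for row in idx])

# Toy usage: two "organs" with distinguishable descriptor clusters.
rng = np.random.default_rng(5)
train_desc = np.vstack([rng.normal(0, 1, (200, 16)), rng.normal(4, 1, (200, 16))])
train_labels = np.array([0] * 200 + [1] * 200)
test_desc = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(4, 1, (20, 16))])
print(label_keypoints_by_voting(test_desc, train_desc, train_labels))
```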

Siqi Liu, Sidong Liu, Weidong Cai, Sonia Pujol, Ron Kikinis, and David Dagan Feng. 2/2015. “Multi-Phase Feature Representation Learning for Neurodegenerative Disease Diagnosis.” Artificial Life and Computational Intelligence, LNAI 8955, Pp. 350-9.
Feature learning with high-dimensional neuroimaging features has been explored for applications to neurodegenerative diseases. Low-dimensional biomarkers, such as mental status test scores and cerebrospinal fluid levels, are essential in the clinical diagnosis of neurological disorders, because they are simple and effective tools for clinicians to assess a disorder's progression and severity. Rather than only using the low-dimensional biomarkers as inputs for decision-making systems, we believe that such low-dimensional biomarkers can be used to enhance the feature learning pipeline. In this study, we propose a novel feature representation learning framework, Multi-Phase Feature Representation (MPFR), with low-dimensional biomarkers embedded. MPFR learns high-level neuroimaging features by extracting the associations between the low-dimensional biomarkers and the high-dimensional neuroimaging features with a deep neural network. We validated the proposed framework using the Mini-Mental State Examination (MMSE) scores as a low-dimensional biomarker and multi-modal neuroimaging data as the high-dimensional neuroimaging features from the ADNI baseline cohort. The proposed approach outperformed the original neural network in both binary and ternary Alzheimer's disease classification tasks.
Clare B Poynton, Mark Jenkinson, Elfar Adalsteinsson, Edith V Sullivan, Adolf Pfefferbaum, and William M Wells III. 1/2015. “Quantitative Susceptibility Mapping by Inversion of a Perturbation Field Model: Correlation with Brain Iron in Normal Aging.” IEEE Trans Med Imaging, 34, 1, Pp. 339-53.

There is increasing evidence that iron deposition occurs in specific regions of the brain in normal aging and neurodegenerative disorders such as Parkinson's, Huntington's, and Alzheimer's disease. Iron deposition changes the magnetic susceptibility of tissue, which alters the MR signal phase, and allows estimation of susceptibility differences using quantitative susceptibility mapping (QSM). We present a method for quantifying susceptibility by inversion of a perturbation model, or "QSIP." The perturbation model relates phase to susceptibility using a kernel calculated in the spatial domain, in contrast to previous Fourier-based techniques. A tissue/air susceptibility atlas is used to estimate B0 inhomogeneity. QSIP estimates in young and elderly subjects are compared to postmortem iron estimates, maps of the Field-Dependent Relaxation Rate Increase (FDRI), and the L1-QSM method. Results for both groups showed excellent agreement with published postmortem data and in vivo FDRI: statistically significant Spearman correlations ranging from Rho=0.905 to Rho=1.00 were obtained. QSIP also showed improvement over FDRI and L1-QSM: reduced variance in susceptibility estimates and statistically significant group differences were detected in striatal and brainstem nuclei, consistent with age-dependent iron accumulation in these regions.
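For context, the sketch below implements the classic Fourier-domain dipole model that the abstract contrasts with QSIP: a forward field simulation followed by thresholded k-space division (TKD). It is a reference point rather than the QSIP method, and the grid size and threshold are illustrative.

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 (B0 along z) on an FFT grid."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0
    return D

def forward_field(chi):
    """Field perturbation (in ppm of B0) produced by a susceptibility map chi."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

def tkd_inversion(field, threshold=0.1):
    """Thresholded k-space division: the classic Fourier-based inversion that
    the QSIP paper contrasts with its spatial-domain kernel approach."""
    D = dipole_kernel(field.shape)
    safe_D = np.where(D == 0, 1.0, D)                       # avoid exact zero division
    D_inv = np.where(np.abs(D) > threshold, 1.0 / safe_D, 0.0)
    return np.real(np.fft.ifftn(D_inv * np.fft.fftn(field)))

# Toy usage: a spherical susceptibility inclusion, forward-simulated then inverted.
g = np.indices((64, 64, 64)) - 32
chi = (np.sum(g**2, axis=0) < 8**2).astype(float) * 0.1     # 0.1 ppm sphere
chi_hat = tkd_inversion(forward_field(chi))
print("true vs. recovered value at center:", chi[32, 32, 32], round(chi_hat[32, 32, 32], 3))
```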

Julie M Stamm, Inga K Koerte, Marc Muehlmann, Ofer Pasternak, Alexandra P Bourlas, Christine M Baugh, Michelle Y Giwerc, Anni Zhu, Michael J Coleman, Sylvain Bouix, Nathan G Fritts, Brett M Martin, Christine Chaisson, Michael D McClean, Alexander P Lin, Robert C Cantu, Yorghos Tripodis, Robert A Stern, and Martha E Shenton. 2015. “Age at First Exposure to Football Is Associated with Altered Corpus Callosum White Matter Microstructure in Former Professional Football Players.” J Neurotrauma, 32, 22, Pp. 1768-76.
Youth football players may incur hundreds of repetitive head impacts (RHI) in one season. Our recent research suggests that exposure to RHI during a critical neurodevelopmental period prior to age 12 may lead to greater later-life mood, behavioral, and cognitive impairments. Here, we examine the relationship between age of first exposure (AFE) to RHI through tackle football and later-life corpus callosum (CC) microstructure using magnetic resonance diffusion tensor imaging (DTI). Forty retired National Football League (NFL) players, ages 40-65, were matched by age and divided into two groups based on their AFE to tackle football: before age 12 or at age 12 or older. Participants underwent DTI on a 3 Tesla Siemens (TIM-Verio) magnet. The whole CC and five subregions were defined and seeded using deterministic tractography. Dependent measures were fractional anisotropy (FA), trace, axial diffusivity, and radial diffusivity. Results showed that former NFL players in the AFE <12 group had significantly lower FA in anterior three CC regions and higher radial diffusivity in the most anterior CC region than those in the AFE ≥12 group. This is the first study to find a relationship between AFE to RHI and later-life CC microstructure. These results suggest that incurring RHI during critical periods of CC development may disrupt neurodevelopmental processes, including myelination, resulting in altered CC microstructure.
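The dependent measures are standard scalars derived from the eigenvalues of the fitted diffusion tensor; a minimal sketch of their definitions is given below, with illustrative eigenvalues.

```python
import numpy as np

def dti_scalars(eigvals):
    """FA, trace, axial and radial diffusivity from the three eigenvalues of a
    fitted diffusion tensor (units mm^2/s)."""
    l1, l2, l3 = np.sort(eigvals)[::-1]        # sort descending
    md = (l1 + l2 + l3) / 3.0                  # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2) /
                 (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return {"FA": fa, "trace": l1 + l2 + l3, "AD": l1, "RD": (l2 + l3) / 2.0}

# Illustrative white-matter-like eigenvalues, in units of 1e-3 mm^2/s.
print(dti_scalars(np.array([1.6, 0.35, 0.30]) * 1e-3))
```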
Alireza Radmanesh, Amir A Zamani, Stephen Whalen, Yanmei Tie, Ralph O Suarez, and Alexandra J Golby. 2015. “Comparison of seeding methods for visualization of the corticospinal tracts using single tensor tractography.” Clin Neurol Neurosurg, 129, Pp. 44-9.
OBJECTIVES: To compare five different seeding methods to delineate hand, foot, and lip components of the corticospinal tract (CST) using single tensor tractography. METHODS: We studied five healthy subjects and 10 brain tumor patients. For each subject, we used five different seeding methods, from (1) cerebral peduncle (CP), (2) posterior limb of the internal capsule (PLIC), (3) white matter subjacent to functional MRI activations (fMRI), (4) whole brain and then selecting the fibers that pass through both fMRI and CP (WBF-CP), and (5) whole brain and then selecting the fibers that pass through both fMRI and PLIC (WBF-PLIC). Two blinded neuroradiologists rated delineations as anatomically successful or unsuccessful tractography. The proportions of successful trials from different methods were compared by Fisher's exact test. RESULTS: To delineate hand motor tract, seeding through fMRI activation areas was more effective than through CP (p<0.01), but not significantly different from PLIC (p>0.1). WBF-CP delineated hand motor tracts in a larger proportion of trials than CP alone (p<0.05). Similarly, WBF-PLIC depicted hand motor tracts in a larger proportion of trials than PLIC alone (p<0.01). Foot motor tracts were delineated in all trials by either PLIC or whole brain seeding (WBF-CP and WBF-PLIC). Seeding from CP or fMRI activation resulted in foot motor tract visualization in 87% of the trials (95% confidence interval: 60-98%). The lip motor tracts were delineated only by WBF-PLIC and in 36% of trials (95% confidence interval: 11-69%). CONCLUSIONS: Whole brain seeding and then selecting the tracts that pass through two anatomically relevant ROIs can delineate more plausible hand and lip motor tracts than seeding from a single ROI. Foot motor tracts can be successfully delineated regardless of the seeding method used.
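A minimal example of the statistical comparison used here (Fisher's exact test on success/failure counts) is sketched below; the counts are hypothetical and not taken from the paper.

```python
from scipy.stats import fisher_exact

# Hypothetical counts only (not the paper's data): successful vs. unsuccessful
# hand-motor-tract delineations for two seeding methods across 15 hemispheres.
table = [[14, 1],   # fMRI-based seeding: 14 successes, 1 failure
         [7, 8]]    # cerebral-peduncle seeding: 7 successes, 8 failures
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```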
Lauren J O'Donnell and Ofer Pasternak. 2015. “Does diffusion MRI tell us anything about the white matter? An overview of methods and pitfalls.” Schizophr Res, 161, 1, Pp. 133-41.
One key pitfall in diffusion magnetic resonance imaging (dMRI) clinical neuroimaging research is the challenge of understanding and interpreting the results of a complex analysis pipeline. The sophisticated algorithms employed by the analysis software, combined with the relatively non-specific nature of many diffusion measurements, lead to challenges in interpretation of the results. This paper is aimed at an intended audience of clinical researchers who are learning about dMRI or trying to interpret dMRI results, and who may be wondering "Does dMRI tell us anything about the white matter?" We present a critical review of dMRI methods and measures used in clinical neuroimaging research, focusing on the most commonly used analysis methods and the most commonly reported measures. We describe important pitfalls in every section, and provide extensive references for the reader interested in more detail.
Romeil Sandhu, Tryphon Georgiou, Ed Reznik, Liangjia Zhu, Ivan Kolesov, Yasin Senbabaoglu, and Allen Tannenbaum. 2015. “Graph Curvature for Differentiating Cancer Networks.” Sci Rep, 5, Pp. 12323.
Cellular interactions can be modeled as complex dynamical systems represented by weighted graphs. The functionality of such networks, including measures of robustness, reliability, performance, and efficiency, are intrinsically tied to the topology and geometry of the underlying graph. Utilizing recently proposed geometric notions of curvature on weighted graphs, we investigate the features of gene co-expression networks derived from large-scale genomic studies of cancer. We find that the curvature of these networks reliably distinguishes between cancer and normal samples, with cancer networks exhibiting higher curvature than their normal counterparts. We establish a quantitative relationship between our findings and prior investigations of network entropy. Furthermore, we demonstrate how our approach yields additional, non-trivial pair-wise (i.e. gene-gene) interactions which may be disrupted in cancer samples. The mathematical formulation of our approach yields an exact solution to calculating pair-wise changes in curvature which was computationally infeasible using prior methods. As such, our findings lay the foundation for an analytical approach to studying complex biological networks.
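A sketch of one commonly used notion of graph curvature in this spirit, Ollivier-Ricci curvature computed from a Wasserstein-1 transport problem, is given below for toy unweighted graphs; the lazy-walk parameter and example graphs are illustrative, and the paper's exact weighted construction may differ.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G, x, y, alpha=0.5):
    """Ollivier-Ricci curvature kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y) for a
    lazy random-walk measure m_x (mass alpha at x, rest spread over neighbors)."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    def measure(v):
        nbrs = list(G.neighbors(v))
        return [v] + nbrs, np.array([alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs))
    sx, mx = measure(x)
    sy, my = measure(y)
    cost = np.array([[d[a][b] for b in sy] for a in sx], dtype=float)
    # W1 via the transport LP: minimize <cost, P> s.t. row sums = mx, col sums = my.
    n, m = cost.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0       # mass leaving source i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                # mass arriving at target j
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([mx, my]),
                  bounds=(0, None), method="highs")
    return 1.0 - res.fun / d[x][y]

# Toy usage: edges inside a clique are positively curved, path edges are not.
print(ollivier_ricci(nx.complete_graph(5), 0, 1))   # > 0
print(ollivier_ricci(nx.path_graph(6), 2, 3))       # <= 0
```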
