During neurosurgical procedures the neurosurgeon's objective is to resect as much diseased tissue as possible while preserving healthy brain tissue. The limited capacity of the conventional operating room to enable the surgeon to visualize critical healthy brain structures and tumor margins has led, over the past decade, to the development of sophisticated intraoperative imaging techniques to enhance visualization. However, both rigid motion due to patient positioning and nonrigid deformations occurring as a consequence of the surgical intervention disrupt the correspondence between the preoperative data used to plan surgery and the intraoperative configuration of the patient's brain. Similar challenges arise in other interventional therapies, such as cryoablation of the liver or biopsy of the prostate. We have developed algorithms to model the motion of key anatomical structures, together with system implementations that estimate the deformation of the critical anatomy from sequences of volumetric images and prepare updated fused visualizations of preoperative and intraoperative images at a rate compatible with surgical decision making. This paper reviews the experience at Brigham and Women's Hospital in developing and applying novel algorithms for capturing intraoperative deformations in support of image-guided therapy.
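The final step described above, producing a fused view of preoperative and intraoperative images once a deformation has been estimated, can be sketched in a few lines. This is a minimal illustration, not the pipeline reviewed in the paper: it assumes the deformation has already been recovered as a dense per-voxel displacement field, and the function names, trilinear resampling, and linear blending are illustrative choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(preop, displacement):
    """Warp a preoperative volume with a dense displacement field.

    preop:        (Z, Y, X) intensity volume.
    displacement: (3, Z, Y, X) per-voxel displacement, in voxel units,
                  e.g. as estimated by a nonrigid registration step.
    """
    identity = np.indices(preop.shape).astype(float)   # identity sampling grid
    coords = identity + displacement                   # deformed sampling grid
    # Trilinear interpolation of the preoperative volume at the deformed grid.
    return map_coordinates(preop, coords, order=1, mode="nearest")

def fuse(intraop, warped_preop, alpha=0.5):
    """Illustrative linear blend of intraoperative and warped preoperative data."""
    return alpha * intraop + (1.0 - alpha) * warped_preop
```

With a zero displacement field the warp is the identity, so the fused result reduces to a plain blend of the two volumes; in practice the displacement field would come from the registration algorithms discussed in the paper.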