ISMAR 2014 - Sep 10-12 - Munich, Germany

ISMAR Papers for Session "Medical AR"

Session: Medical AR
Date & Time: September 12, 04:00 pm - 05:30 pm
Location: HS1
Chair: Nassir Navab, TU Munich
Papers:
Single View Augmentation of 3D Elastic Objects
Authors: Nazim Haouchine, Jeremie Dequidt, Marie-Odile Berger, Stephane Cotin
Abstract:
This paper proposes an efficient method to capture and augment highly elastic objects from a single view. 3D shape recovery from a monocular video sequence is an underconstrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose a real-time method that makes use of a mechanical model and is able to handle highly elastic objects. Our method is formulated as an energy minimization problem accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This formulation avoids restrictive assumptions and specific constraint terms in the minimization. The only parameter involved in the method is the Young's modulus, but we show in experiments that a rough estimate of it is sufficient to obtain a good reconstruction. Our method is compared to existing techniques in experiments conducted on computer-generated and real data that show the effectiveness of our approach. Experiments in the context of minimally invasive liver surgery are also provided.
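The core idea of the abstract — minimizing an elastic energy constrained by image points — can be illustrated with a toy sketch. Everything below is an assumption for illustration only: a linear mass-spring model stands in for the paper's non-linear elastic model, the camera is orthographic (the projection simply drops z), and plain gradient descent replaces the paper's real-time solver.

```python
import numpy as np

def recover_shape(rest_pts, edges, obs_idx, obs_2d, young=100.0, w_data=10.0,
                  steps=4000, lr=2e-3):
    """Toy single-view shape recovery: minimize
        E(x) = 0.5 * young * sum_ij (|p_i - p_j| - L_ij)^2        (elastic term)
             + w_data * sum_k |proj(p_k) - q_k|^2                 (image term)
    by gradient descent. `young` plays the role of the Young's modulus;
    proj() is an assumed orthographic camera that keeps only (x, y)."""
    rest_len = np.linalg.norm(rest_pts[edges[:, 0]] - rest_pts[edges[:, 1]], axis=1)
    pts = rest_pts.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(pts)
        # elastic gradient: spring force along each edge
        diff = pts[edges[:, 0]] - pts[edges[:, 1]]
        length = np.linalg.norm(diff, axis=1)
        coef = young * (length - rest_len) / np.maximum(length, 1e-9)
        f = coef[:, None] * diff
        np.add.at(grad, edges[:, 0], f)
        np.add.at(grad, edges[:, 1], -f)
        # data gradient: pull observed vertices toward their 2D image points
        grad[obs_idx, :2] += 2.0 * w_data * (pts[obs_idx, :2] - obs_2d)
        pts -= lr * grad
    return pts
```

Used on a three-vertex chain with one observed vertex displaced in the image, the chain bends to satisfy the observation while the springs keep their rest lengths, mirroring how the elastic model regularizes the underconstrained problem.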
Improved Interventional X-ray Appearance
Authors: Xiang Wang, Christian Schulte zu Berge, Stefanie Demirci, Pascal Fallavollita, Nassir Navab
Abstract:
Depth cues are an essential part of navigation and device-positioning tasks during clinical interventions. Yet many minimally invasive procedures, such as catheterizations, are performed under X-ray guidance alone, which depicts only a 2D projection of the anatomy and lacks depth information. Previous attempts to integrate pre-operative 3D data of the patient by registering it to intra-operative data have led to virtual 3D renderings independent of the original X-ray appearance, or to planar 2D color overlays (e.g. roadmaps). A major drawback of these solutions is the trade-off involved: X-ray attenuation values are completely neglected in 3D renderings, while depth perception is not incorporated into the 2D roadmaps. This paper presents a novel technique for enhancing depth perception of interventional X-ray images while preserving the original attenuation appearance. Starting from patient-specific pre-operative 3D data, our method relies on GPU ray casting to compute a colored depth map, which assigns a predefined color to the first incidence of gradient magnitude above a predefined threshold along each ray. The colored depth map values are carefully integrated into the X-ray image while maintaining its original grayscale intensities. The presented method was tested and analysed for three relevant clinical scenarios covering different anatomical aspects and targeting different levels of interventional expertise. Results demonstrate that improving depth perception of X-ray images has the potential to lead to safer and more efficient clinical interventions.
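The colored depth map step described above can be sketched for a toy case: parallel rays cast along the z axis of a NumPy volume, with the depth of the first above-threshold gradient sample mapped to a color and blended into the X-ray. This is an illustrative simplification, not the paper's GPU implementation; `cmap`, the threshold, and the blending weight are all placeholder assumptions.

```python
import numpy as np

def colored_depth_map(volume, threshold, cmap):
    """For each (y, x) ray cast along axis 0 of `volume`, find the first sample
    whose gradient magnitude exceeds `threshold` and color it by its depth.
    `cmap` maps a depth image in [0, 1] to an (H, W, 3) RGB image."""
    gz, gy, gx = np.gradient(volume.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)   # per-voxel gradient magnitude
    hit = gmag > threshold                  # candidate boundary voxels
    first = np.argmax(hit, axis=0)          # index of first hit along each ray
    any_hit = hit.any(axis=0)
    depth = first / max(volume.shape[0] - 1, 1)
    rgb = cmap(depth)
    rgb[~any_hit] = 0.0                     # rays with no boundary stay black
    return rgb, depth, any_hit

def fuse_with_xray(xray, rgb, any_hit, alpha=0.4):
    """Blend the colored depth map into the grayscale X-ray. Simple alpha
    blending stands in for the paper's careful intensity-preserving fusion."""
    out = np.repeat(xray[..., None], 3, axis=2).astype(float)
    out[any_hit] = (1 - alpha) * out[any_hit] + alpha * rgb[any_hit]
    return out
```

On a synthetic volume containing a single slab, every ray hits the slab's front face, so the depth map is constant and the fused image takes the corresponding color tint only where a boundary was found.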
Computer-Assisted Laparoscopic Myomectomy by Augmenting the Uterus with Pre-operative MRI Data
Authors: Toby Collins, Daniel Pizarro, Adrien Bartoli, Nicolas Bourdel
Abstract:
An active research objective in Computer Assisted Intervention (CAI) is to develop guidance systems that aid surgical teams in laparoscopic Minimally Invasive Surgery (MIS) using Augmented Reality (AR). This involves registering and fusing additional data from other modalities and overlaying it onto the laparoscopic video in real time. We present the first AR-based image guidance system for assisted myoma localisation in uterine laparosurgery. It comprises a framework for semi-automatically registering a pre-operative Magnetic Resonance Image (MRI) to the laparoscopic video with a deformable model. Although there have been several previous works involving other organs, this is the first to tackle the uterus. Furthermore, whereas previous works perform registration between one or two laparoscopic images (which come from a stereo laparoscope), we show how to solve the problem using many images (e.g. 20 or more), and show that this can dramatically improve registration. Also unlike previous works, we show how to integrate occluding contours as registration cues. These cues provide powerful registration constraints and should be used wherever possible. We present retrospective qualitative results on a patient with two myomas and quantitative semi-synthetic results. Our multi-image framework is quite general and could be adapted to improve registration of other organs with other modalities such as CT.
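The claim that many images improve registration can be illustrated with a much simpler multi-view problem: least-squares intersection of viewing rays from several cameras. This is not the paper's deformable MRI-to-laparoscopy method — just a sketch of the underlying principle that each additional view adds constraints on the same 3D unknowns, over-constraining and stabilizing the solution.

```python
import numpy as np

def triangulate(centers, dirs):
    """Least-squares intersection of viewing rays: each ray has camera center
    c_i and unit direction d_i. The point p minimizing the summed squared
    distance to all rays solves  sum_i (I - d_i d_i^T) (p - c_i) = 0."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, dirs):
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```

With two views the system is exactly constrained; every extra view adds redundant equations, so noise in any single view is averaged out — the same intuition behind the abstract's move from one or two laparoscopic images to twenty or more.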
