ISMAR 2014 - Sep 10-12 - Munich, Germany

ISMAR Papers for Session "Reconstruction and Fusion"

Session: Reconstruction and Fusion
Date & Time: September 11, 10:00 am - 12:45 pm
Location: HS1
Chair: Walterio Mayol-Cuevas, Bristol University
Papers:
Improved Registration for Vehicular AR using Auto-Harmonization
Authors: Eric Foxlin, Thomas Calloway, Hongsheng Zhang
Abstract :
This paper describes the design, development and testing of an AR system that was developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy and precision alignment and calibration of all subsystems in order to avoid mis-registration and “swim”. The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. A system accuracy analysis is presented with simulation results to predict the registration accuracy. Finally, a car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving.
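The abstract does not detail the fusion algorithm, so the following is only a minimal single-axis sketch of the general idea behind optical/inertial hybrid tracking: integrate the gyro for a low-latency prediction, then blend in the slower but drift-free optical measurement. All names and the blend factor alpha are hypothetical, not the authors' method.

```python
def fuse_yaw(est_yaw, gyro_rate, dt, optical_yaw, alpha=0.98):
    """One complementary-filter step for a single rotation axis (illustrative only).

    est_yaw     -- previous fused estimate (rad)
    gyro_rate   -- angular rate from the IMU (rad/s), low latency but drifting
    dt          -- time step (s)
    optical_yaw -- absolute yaw from the optical tracker (rad), slow but drift-free
    """
    predicted = est_yaw + gyro_rate * dt                      # dead-reckon with the gyro
    return alpha * predicted + (1.0 - alpha) * optical_yaw    # pull toward the optical fix

# Hypothetical 200 Hz loop over (gyro_rate, optical_yaw) samples:
yaw = 0.0
for gyro, opt in [(0.010, 0.000), (0.012, 0.002), (0.011, 0.004)]:
    yaw = fuse_yaw(yaw, gyro, 1.0 / 200.0, opt)
```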
Real-Time Illumination Estimation from Faces for Coherent Rendering
Authors: Sebastian B. Knorr, Daniel Kurz
Abstract :
We present a method for estimating the real-world lighting conditions within a scene in real time. The estimation is based on the visual appearance of a human face in the real scene, captured in a single image of a monocular camera. In hardware setups featuring a user-facing camera, an image of the user's face can be acquired at any time. The limited range of variation between different human faces makes it possible to analyze their appearance offline and to apply the results to new faces. Our approach uses radiance transfer functions, learned offline from a dataset of images of faces under different known illuminations, for particular points on the human face. Based on these functions, we recover the most plausible real-world lighting conditions for measured reflections in a face, represented as a function of incident light angle using Spherical Harmonics. The pose of the camera relative to the face is determined by means of optical tracking, and virtual 3D content is rendered and overlaid onto the real scene with a fixed spatial relationship to the face. By applying the estimated lighting conditions to the rendering of the virtual content, the augmented scene is shaded coherently with regard to its real and virtual parts. We show, with different examples under a variety of lighting conditions, that our approach provides plausible results that considerably enhance the visual realism of real-time Augmented Reality applications.
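With second-order Spherical Harmonics, the recovery step described above reduces to a linear inverse problem: the radiance measured at each face point is approximately the dot product of that point's learned transfer vector with the nine unknown lighting coefficients. A minimal sketch of that least-squares solve, with hypothetical shapes and random stand-in data:

```python
import numpy as np

def estimate_sh_lighting(T, b):
    """Solve b ~= T @ L for the SH lighting coefficients L in a least-squares sense."""
    L, *_ = np.linalg.lstsq(T, b, rcond=None)
    return L

# 50 sampled face points, 9 second-order SH basis functions (both hypothetical).
T = np.random.rand(50, 9)       # learned radiance transfer per point and basis function
b = np.random.rand(50)          # intensities measured at those points in the camera image
L = estimate_sh_lighting(T, b)  # most plausible lighting, as 9 SH coefficients
```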
Comprehensive Workspace Calibration for Visuo-Haptic Augmented Reality
Authors: Ulrich Eck, Frieder Pankratz, Christian Sandor, Gudrun Klinker, Hamid Laga
Abstract :
Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. Precise co-location of computer graphics and the haptic stylus is necessary to provide a realistic user experience. PHANToM haptic devices are often used in such systems to provide haptic feedback. They consist of two interlinked joints, whose angles define the position of the haptic stylus, and three sensors at the gimbal that sense its orientation. Previous work has focused on calibration procedures that align the haptic workspace within a global reference coordinate system and on algorithms that compensate for the non-linear position error caused by inaccuracies in the joint angle sensors. In this paper, we present an improved workspace calibration that additionally compensates for errors in the gimbal sensors, which enables us to align the orientation of the haptic stylus with high precision as well. To reduce the time required for calibration and to increase the sampling coverage, we utilize time-delay estimation to temporally align external sensor readings. This enables users to move the haptic stylus continuously during the calibration process, as opposed to the commonly used point-and-hold procedure. We conducted an evaluation of the calibration procedure for visuo-haptic augmented reality setups with two different PHANToMs and two different optical trackers. Our results show a significant improvement in orientation alignment for both setups over the previous state-of-the-art calibration procedure. Improved position and orientation accuracy results in higher-fidelity visual and haptic augmentations, which is crucial for fine-motor tasks in areas including medical training simulators, assembly planning tools, and rapid prototyping applications. A user-friendly calibration procedure is essential for real-world applications of VHAR.
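The temporal alignment of external sensor readings mentioned above is a time-delay estimation problem. A common way to solve it (not necessarily the paper's exact method) is to locate the peak of the cross-correlation between two streams resampled to a common rate; a sketch:

```python
import numpy as np

def estimate_delay(a, b, rate_hz):
    """Estimate the time offset (s) of stream a relative to stream b.

    Both streams are normalized, then the lag maximizing their cross-correlation
    is taken as the offset; the result is negative when a leads b.
    """
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")   # all lags from -(len(b)-1) to len(a)-1
    lag = int(corr.argmax()) - (len(b) - 1)  # lag in samples
    return lag / rate_hz

# Hypothetical check: b is a copy of a delayed by 5 samples at 100 Hz.
a = np.sin(np.linspace(0, 20, 400))
b = np.roll(a, 5)
print(estimate_delay(a, b, 100.0))  # ~ -0.05 s: a leads its delayed copy by 5 samples
```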
Recognition and reconstruction of transparent objects for Augmented Reality
Authors: Alan Francisco Torres-Gomez, Walterio Mayol-Cuevas
Abstract :
Dealing with real transparent objects for AR is challenging due to their lack of texture and visual features, as well as the drastic changes in appearance as the background, illumination, and camera pose change. The few existing methods for glass object detection usually require a carefully controlled environment or specialized illumination hardware, or they ignore information from different viewpoints. In this work, we explore the use of a learning approach for classifying transparent objects from multiple images, with the aim of both discovering such objects and building a 3D reconstruction to support convincing augmentations. We extract, classify, and group small image patches using a fast graph-based segmentation and employ a probabilistic formulation for aggregating spatially consistent glass regions. We demonstrate our approach via an analysis of the performance of glass region detection and example 3D reconstructions that allow virtual objects to interact with them.
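The abstract does not spell out the probabilistic aggregation; as a purely hypothetical illustration, per-view classifier outputs for a candidate glass region could be fused with a naive-Bayes log-odds update, assuming the views are independent:

```python
import numpy as np

def fuse_glass_probability(view_probs):
    """Fuse per-view probabilities that a region is glass into one posterior.

    Naive-Bayes assumption: each view's classifier output is an independent
    observation. Probabilities are clipped away from 0 and 1 so the
    log-odds stay finite.
    """
    p = np.clip(np.asarray(view_probs, dtype=float), 1e-6, 1 - 1e-6)
    log_odds = np.log(p / (1 - p)).sum()
    return 1.0 / (1.0 + np.exp(-log_odds))

print(fuse_glass_probability([0.6, 0.7, 0.55]))  # three weakly confident views -> ~0.81
```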
