Summary of full paper, presented at IWAR'99, San Francisco, October 20-21, 1999
The full article is published in the printed "Proceedings of the 2nd IWAR'99" and copyright protected by IEEE Computer Society.

Vision-based Pose Computation: Robust and Accurate Augmented Reality Tracking

Jun Park, Bolan Jiang, and Ulrich Neumann

Computer Graphics and Immersive Technology Laboratory
Computer Science Department
University of Southern California

Abstract

Vision-based tracking systems have advantages for augmented reality (AR) applications. Their registration can be very accurate, and there is no delay between the motions of real and virtual scene elements. However, vision-based tracking often suffers from limited range, intermittent errors, and dropouts. These shortcomings arise from the need to see multiple calibrated features or fiducials in each frame. To address them, features in the scene can be dynamically calibrated, and pose calculations can be made robust to noise and numerical instability. In this paper, we survey classic vision-based pose computation methods and present two methods that offer increased robustness and accuracy in the context of real-time AR tracking.
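Pose computation here means estimating the camera's position and orientation from fiducials observed in the image. One common way to detect the intermittent errors mentioned above is to reproject the known 3D fiducial positions through the estimated pose and compare against the observed image points. The sketch below (a minimal illustration under a pinhole camera model, not the paper's method; all function names are hypothetical) computes that reprojection error:

```python
import math

def project(point3d, rotation, translation, focal):
    """Project a 3D point through extrinsics (R | t) and a pinhole camera
    with focal length `focal`. `rotation` is a 3x3 row-major matrix."""
    x = sum(rotation[0][i] * point3d[i] for i in range(3)) + translation[0]
    y = sum(rotation[1][i] * point3d[i] for i in range(3)) + translation[1]
    z = sum(rotation[2][i] * point3d[i] for i in range(3)) + translation[2]
    return (focal * x / z, focal * y / z)

def mean_reprojection_error(points3d, points2d, rotation, translation, focal):
    """Average pixel distance between observed fiducials and their
    reprojections; a large value flags a bad pose estimate."""
    total = 0.0
    for p3, p2 in zip(points3d, points2d):
        u, v = project(p3, rotation, translation, focal)
        total += math.hypot(u - p2[0], v - p2[1])
    return total / len(points3d)

# Identity orientation, camera 5 units in front of a planar fiducial array.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 5.0]
pts3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
obs = [project(p, R, t, 500.0) for p in pts3d]  # noise-free observations
print(mean_reprojection_error(pts3d, obs, R, t, 500.0))  # → 0.0
```

A tracker can threshold this error per frame: when noisy detections push it past a tolerance, the pose for that frame is rejected or re-estimated rather than displayed, which suppresses the intermittent registration errors described in the abstract.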

Copyright (c) 1999 IEEE. Reprinted, with permission, from IWAR'99 proceedings.

  1. Problems in Current Pose Computation Methods
  2. Summary of Pose Computation Methods
  3. Results

This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by sending a blank email message to info.pub.permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.