Long-Term Tracking for Augmented Reality
Augmented Reality (AR) requires visual detection and tracking of real objects or environments in order to display related digital information spatially registered with them. In many cases, a model describing the visual appearance of an object is created once from images and then used for detection and tracking in the long term. One of the major challenges in this context is that the visual appearance of the world around us changes, mainly due to changes in illumination and viewpoint. However, most objects and environments serving as references in AR applications do not change their spatial properties over time. This applies not only to shape: for most objects, e.g. cars or furniture, the absolute orientation with respect to gravity is also constant and known. For objects such as landmarks and buildings, the absolute location additionally remains the same in the long term. This presentation shows different means of exploiting knowledge of the absolute orientation (and location) of real objects, together with sensor measurements of the absolute orientation (and location) of the camera, to aid low-level computer vision algorithms. In particular, both the invariance and the distinctiveness of visual feature descriptors can be increased by considering not only the visual appearance – which changes over time – but also the absolute spatial context of features – which remains constant.
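The idea of anchoring a descriptor to the world rather than to image intensities can be sketched as follows: instead of deriving a keypoint's reference orientation from local image gradients, the gravity direction measured by the device's inertial sensors is projected into the image, and its 2D angle at the keypoint serves as the reference orientation. This is a minimal sketch, not the talk's actual implementation; the function name and the parameters `K`, `R_wc`, and `g_world` are illustrative assumptions.

```python
import numpy as np

def gravity_aligned_orientation(keypoint_xy, K, R_wc,
                                g_world=np.array([0.0, 0.0, -1.0])):
    """Return the 2D angle of the world gravity direction projected
    into the image at a keypoint. Using this angle as the descriptor's
    reference orientation makes the descriptor invariant to camera
    roll without relying on the (changing) visual appearance.

    keypoint_xy: pixel coordinates (u, v) of the keypoint
    K:           3x3 camera intrinsic matrix
    R_wc:        3x3 rotation from world to camera coordinates
                 (e.g. from an inertial measurement unit)
    g_world:     gravity direction in world coordinates
    """
    # Gravity expressed in camera coordinates
    g_cam = R_wc @ g_world
    # Back-project the keypoint to a viewing ray, step slightly along
    # gravity, and re-project; the image-space difference gives the
    # projected gravity direction at this pixel.
    x = np.linalg.inv(K) @ np.array([keypoint_xy[0], keypoint_xy[1], 1.0])
    p0 = K @ x
    p1 = K @ (x + 1e-3 * g_cam)
    d = (p1 / p1[2])[:2] - (p0 / p0[2])[:2]
    return np.arctan2(d[1], d[0])
```

Because the projected gravity direction depends only on camera pose and keypoint position, two views of the same upright object yield consistent reference orientations even when the local appearance has changed.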
Furthermore, this talk will discuss how changes in the visual appearance of real-world objects are not arbitrary but mostly follow repeatable rules. This makes it possible to store reference models of real objects under different appearances; if some of the factors that influence the appearance can be measured, long-term applications can always choose the reference model that is expected to be most consistent with the current situation. Specifically, when tracking outdoor environments, the current time and date, as well as up-to-date weather information, can serve as the basis for selecting the best-suited reference model for camera localization. Another approach discussed in this presentation determines a representative set of feature descriptors for objects from different viewpoints. At runtime, this approach then chooses as reference model the set that is most consistent with the current camera orientation measured with inertial sensors.
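The orientation-based selection described above can be illustrated with a short sketch: each stored reference set carries a representative viewing direction, and at runtime the set with the smallest angular distance to the inertially measured camera direction is chosen. The data layout and names below are assumptions made for this example, not details from the talk.

```python
import numpy as np

def select_reference_set(sets, current_view_dir):
    """Pick the reference descriptor set whose representative viewing
    direction subtends the smallest angle with the current camera
    viewing direction (e.g. measured with inertial sensors).

    sets: list of dicts, each with a unit-length-ish "view_dir" vector
    current_view_dir: current camera viewing direction (3-vector)
    """
    def angle(a, b):
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        # Clip to guard against rounding slightly outside [-1, 1]
        return np.arccos(np.clip(a @ b, -1.0, 1.0))

    return min(sets, key=lambda s: angle(s["view_dir"], current_view_dir))
```

A usage example: given sets captured from the front and from above, a camera looking almost straight down would select the top-view set, since its descriptors are expected to be most consistent with what the camera currently sees.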
Besides dealing with technical details of visual detection and tracking technologies in the context of mobile Augmented Reality, the talk will give an overview of the current state and challenges in AR.
Peter Meier founded metaio in 2002 together with Thomas Alt. Since then, he has served as CTO and helped make metaio one of the leading companies for the development and licensing of Augmented Reality and mobile vision technologies. Metaio has over 100 employees at its locations in Munich, Dallas, New York and San Francisco. Peter Meier takes an active part in expanding metaio’s technology and IP portfolio. He is known as one of the top technical experts in AR, involved in over 60 AR patents and many game-changing applications of computer vision for companies such as Lego, Volkswagen and IKEA. He shaped junaio, metaio’s open AR platform, and works hard to make AR and CV an established user interface technology, used daily.