Visual-Inertial Odometry-Based Global High-Precision Indoor Human Navigation in a University Library
Augmented reality (AR) is now available as a standard, high-level function on recent, reasonably priced smartphones. AR lets users experience consistent three-dimensional (3D) spaces in which real and virtual 3D objects co-exist, by sensing the real 3D environment through a camera and reconstructing it in the virtual world. The accuracy with which a smartphone's AR function, that is, its visual-inertial odometry, senses the real 3D environment is far higher than that of the phone's GPS receiver, with errors that can be below one centimeter. However, current common AR applications generally target "small" real 3D spaces rather than large ones; in other words, most are not designed to operate in a geographic coordinate system.

We propose a global extension of visual-inertial odometry that adds image recognition of geo-referenced image markers installed in real 3D spaces. Such markers can be generated from analog guide boards that already exist in the real world. We tested this framework, with the visual-inertial odometry embedded in a smartphone, on the first floor of the central library of Akita University. Geo-referenced image markers such as floor-map boards and book-category signboards were registered in a database of 3D geo-referenced real-world scene images. Our prototype system, developed on a smartphone (iPhone XS, Apple Inc.), first recognizes a floor-map board (Fig. 1) and determines the precise 3D distance and direction of the smartphone from the center of the board, in a local 3D coordinate space whose origin is the center of the board. The system then converts this precise relative position and direction of the smartphone's camera in the local coordinate space into a precise global location and orientation.
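The local-to-global conversion described above can be sketched as a rigid transform: rotate the camera's offset from the marker center by the marker's known compass heading, then translate by the marker's known global position. The following is a minimal 2D (east-north) sketch; the function name, the local-frame convention (x to the marker's right, y along its facing direction), and the use of headings measured clockwise from north are all illustrative assumptions, not the authors' actual implementation.

```python
import math

def local_to_global(marker_en, marker_heading_deg, rel_xy, rel_yaw_deg):
    """Convert a camera pose in a marker's local frame to a global pose.

    marker_en:          (east, north) of the marker center, in meters
    marker_heading_deg: compass heading the marker faces (clockwise from north)
    rel_xy:             camera offset in the marker frame (x right, y forward)
    rel_yaw_deg:        camera yaw relative to the marker's facing direction
    Returns ((east, north), global_heading_deg).
    """
    theta = math.radians(marker_heading_deg)
    x, y = rel_xy
    # The marker's "forward" axis points (sin θ, cos θ) in east-north
    # coordinates, and its "right" axis points (cos θ, -sin θ).
    east = marker_en[0] + x * math.cos(theta) + y * math.sin(theta)
    north = marker_en[1] - x * math.sin(theta) + y * math.cos(theta)
    heading = (marker_heading_deg + rel_yaw_deg) % 360.0
    return (east, north), heading
```

For example, a camera standing 2 m in front of an east-facing marker at the local origin would be placed 2 m east of the marker, heading east.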
A subject walked around the first floor of the library building while the smartphone's world-tracking function was running. The experimental result shows that the tracking error in the global coordinate system accumulated, but remained small: only about 30 centimeters after the subject had walked about 30 meters (Fig. 2). We are now planning to improve the indoor-navigation accuracy of our prototype by recalibrating the smartphone's location and orientation through sequential recognition of multiple geo-referenced scene image markers, which already existed for general library user services before this proposed new service was developed. In conclusion, the results of testing our prototype system were encouraging, and we are now preparing a more practical high-precision location-based service (LBS) that navigates a user, via AR and floor-map interfaces, to the exact location on a bookshelf of a book of interest.
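The planned recalibration can be viewed as dead reckoning with periodic corrections: between markers the global pose is updated from visual-inertial odometry deltas, and each newly recognized marker supplies a fresh absolute fix that replaces the drifted estimate. The sketch below illustrates only this reset-on-marker idea under assumed names; the paper does not specify the actual fusion scheme, and a real system might blend the two estimates instead of replacing one outright.

```python
class GlobalTracker:
    """Minimal sketch: dead-reckon with VIO deltas in an east-north frame
    and snap to a marker-derived global fix whenever one is recognized."""

    def __init__(self, east, north):
        self.east, self.north = east, north

    def apply_vio_delta(self, d_east, d_north):
        # Integrate a visual-inertial odometry step; drift accumulates here.
        self.east += d_east
        self.north += d_north

    def correct_with_marker(self, east, north):
        # A recognized geo-referenced marker gives an absolute position,
        # so the drifted dead-reckoned estimate is simply replaced.
        self.east, self.north = east, north
```

With roughly 1% drift (30 cm over 30 m in the experiment), even sparse markers along a corridor would keep the accumulated error bounded between fixes.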