DEVELOPING A DATA FUSION STRATEGY BETWEEN OMNIDIRECTIONAL IMAGE AND INDOORGML DATA
As interest in indoor spaces increases, there is a growing need for indoor spatial applications. As these spaces grow in complexity and size, research is being carried out on their effective and efficient representation. Omnidirectional images capture a snapshot of an interior and offer visually rich content, but they contain only pixel data. Before they can be used to provide indoor services, two limitations must be overcome. First, the images must be connected to one another so that indoor space is represented continuously, based on spatial relationships that topological data can provide. Second, the objects and spaces visible in these images must be recognized. This paper presents a study on how to link omnidirectional images and IndoorGML data, without data conversion, the provision of reference data, or the use of different data models, in order to provide Indoor Location-Based Services (LBS). We introduce the Spatial Extended Point (SEP) to characterize the relationship between an omnidirectional image and the topological data: position information is used to define a region of 3D space, and the inclusion of an IndoorGML node in that region determines the relationship. We conduct an experimental implementation of the integrated data in the form of a 3D virtual tour. The connection of the omnidirectional images is demonstrated by a visualization of navigation from a hallway into a room's interior, delivered to the user through a clicking action on the image.
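The SEP-based linkage described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: it assumes the SEP is modeled as a sphere around the image's capture position and that an IndoorGML node reduces to a labeled 3D point; the class and function names are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class IndoorGMLNode:
    """Simplified IndoorGML node: an identifier with a 3D position (assumption)."""
    node_id: str
    x: float
    y: float
    z: float

@dataclass
class SpatialExtendedPoint:
    """Capture position of an omnidirectional image, extended to a
    spherical region of 3D space (one possible SEP interpretation)."""
    x: float
    y: float
    z: float
    radius: float

    def contains(self, node: IndoorGMLNode) -> bool:
        # Inclusion test: is the node inside the SEP's 3D region?
        d = math.dist((self.x, self.y, self.z), (node.x, node.y, node.z))
        return d <= self.radius

def link_image_to_nodes(sep: SpatialExtendedPoint,
                        nodes: list[IndoorGMLNode]) -> list[str]:
    """Return the IDs of all IndoorGML nodes whose position falls
    inside the image's extended region, establishing the link."""
    return [n.node_id for n in nodes if sep.contains(n)]

# Example: an image captured at eye height in room R101
sep = SpatialExtendedPoint(x=0.0, y=0.0, z=1.5, radius=2.0)
nodes = [
    IndoorGMLNode("R101", 0.5, 0.5, 1.5),   # nearby node -> linked
    IndoorGMLNode("H201", 10.0, 0.0, 1.5),  # distant node -> not linked
]
print(link_image_to_nodes(sep, nodes))  # → ['R101']
```

A sphere is only one choice of region; a box or a room-shaped polyhedron would follow the same inclusion-test pattern.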