Indoor Localization with an Autoencoder based Convolutional Neural Network


Bibliographic Details
Published in: IEEE Access, Vol. 12, p. 1
Main Authors: Arslantas, Hatice, Okdem, Selcuk
Format: Journal Article
Language:English
Published: Piscataway: IEEE, 01.01.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 2169-3536
Description
Summary: Studies of indoor localization systems based on wireless technologies are increasing rapidly. Indoor localization is the process of determining the location of objects or people inside a building. Global Navigation Satellite System (GNSS) signals, such as GPS, do not provide sufficient location data indoors because they are attenuated or lost entirely in enclosed areas. For this reason, indoor localization systems built on Wi-Fi technology with machine learning and deep learning techniques are an active research area. In this study, we propose a model and training strategy based entirely on a Convolutional Neural Network (CNN) combined with an autoencoder that automatically extracts features from Wi-Fi fingerprint samples. We couple the autoencoder and the CNN and train them simultaneously, which guarantees that the encoder learns features jointly with the CNN rather than in a separate pretraining stage. The proposed system was evaluated on the UJIIndoorLoc and Tampere datasets. The experimental results show that the proposed model significantly outperforms current state-of-the-art methods in predicting location coordinates (x, y). We also present a runtime analysis to demonstrate the real-time performance of the proposed network.
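The joint training described in the summary can be illustrated with a minimal NumPy sketch: an encoder compresses the Wi-Fi RSS fingerprint, a decoder reconstructs it, and a localization head predicts (x, y), with one combined loss driving both paths so the encoder and predictor are optimized simultaneously. The layer sizes, the single dense layers standing in for the CNN, and the loss weight `alpha` are illustrative assumptions, not the authors' architecture; only the fingerprint width of 520 access points matches the UJIIndoorLoc dataset.

```python
import numpy as np

# Assumed dimensions: UJIIndoorLoc fingerprints cover 520 access points;
# the bottleneck size is an arbitrary choice for this sketch.
N_APS = 520
LATENT = 64

rng = np.random.default_rng(0)

# Encoder / decoder / localization weights (single linear layers for brevity;
# the paper's model uses a CNN in place of the localization layer).
W_enc = rng.normal(0, 0.01, (N_APS, LATENT))
W_dec = rng.normal(0, 0.01, (LATENT, N_APS))
W_loc = rng.normal(0, 0.01, (LATENT, 2))  # predicts (x, y) coordinates

def forward(rss):
    """One pass through the coupled model.

    rss: (batch, N_APS) matrix of RSS fingerprints.
    Returns the reconstruction, the (x, y) prediction, and the latent code.
    """
    z = np.maximum(rss @ W_enc, 0.0)  # encoder with ReLU activation
    recon = z @ W_dec                 # decoder reconstructs the fingerprint
    xy = z @ W_loc                    # localization head on the same code
    return recon, xy, z

def joint_loss(rss, xy_true, alpha=0.5):
    """Combined objective: reconstruction MSE + localization MSE.

    Minimizing both terms at once is what trains the encoder and the
    predictor simultaneously, as the abstract describes.
    """
    recon, xy_pred, _ = forward(rss)
    l_rec = np.mean((recon - rss) ** 2)
    l_loc = np.mean((xy_pred - xy_true) ** 2)
    return alpha * l_rec + (1.0 - alpha) * l_loc

batch = rng.normal(-70, 10, (8, N_APS))   # synthetic RSS values in dBm
targets = rng.uniform(0, 100, (8, 2))     # synthetic (x, y) positions
loss = joint_loss(batch, targets)
```

Because both loss terms share the encoder weights, any gradient-based optimizer applied to this objective updates the feature extractor and the coordinate predictor in the same step, which is the coupling the abstract contrasts with pretraining the autoencoder separately.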
DOI:10.1109/ACCESS.2024.3382135