Please use this identifier to cite or link to this item:
https://hdl.handle.net/11000/31598
Title: Environment modeling and localization from datasets of omnidirectional scenes using machine learning techniques
Authors: Cebollada, Sergio; Payá, Luis; Peidro, Adrián; Mayol-Cuevas, Walterio; Reinoso, Óscar
Publisher: Springer Link
Department: Departamentos de la UMH::Ingeniería de Sistemas y Automática
Issue Date: 2023-03
URI: https://hdl.handle.net/11000/31598
Abstract:
This work presents a framework to create a visual model of the environment which can be used to estimate the position of a
mobile robot by means of artificial intelligence techniques. The proposed framework retrieves the structure of the environment
from a dataset composed of omnidirectional images captured along it. These images are described by means of
global-appearance approaches. The information is arranged in two layers, with different levels of granularity. The first
layer is obtained by means of classifiers and the second layer is composed of a set of data-fitting neural networks.
Subsequently, the model is used to estimate the position of the robot, in a hierarchical fashion, by comparing the image
captured from the unknown position with the information in the model. Throughout this work, five classifiers are evaluated
(Naïve Bayes, SVM, random forest, linear discriminant classifier and a classifier based on a shallow neural network) along
with three different global-appearance descriptors (HOG, gist, and a descriptor calculated from an intermediate layer of a
pre-trained CNN). The experiments were conducted on publicly available datasets of omnidirectional images
captured indoors in the presence of dynamic changes. Three metrics are used to assess the performance of the
proposal: the ability of the algorithm to coarsely estimate the position (hit ratio), the average localization error (cm)
and the required computing time. The results prove the efficiency of the framework to model the environment and
localize the robot from the knowledge extracted from a set of omnidirectional images with the proposed artificial
intelligence techniques.
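The two-layer idea described in the abstract (a coarse classifier selects a region, then a finer regressor estimates coordinates inside it) can be sketched as follows. This is a minimal illustration on synthetic descriptors, not the paper's implementation: a nearest-centroid classifier stands in for the evaluated classifiers, and distance-weighted k-NN regression stands in for the data-fitting neural networks; all names, dimensions, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "global-appearance descriptors": 3 rooms, 20 training
# images per room, 16-dimensional descriptors clustered by room.
n_rooms, n_per_room, dim = 3, 20, 16
room_centers = rng.normal(size=(n_rooms, dim)) * 5
descriptors = np.vstack([room_centers[r] + rng.normal(size=(n_per_room, dim))
                         for r in range(n_rooms)])
room_labels = np.repeat(np.arange(n_rooms), n_per_room)
# Ground-truth 2-D capture positions (cm), one region per room.
positions = np.vstack([rng.uniform(r * 500, r * 500 + 400, size=(n_per_room, 2))
                       for r in range(n_rooms)])

# Layer 1 (coarse): nearest-centroid classifier over room descriptors.
centroids = np.array([descriptors[room_labels == r].mean(axis=0)
                      for r in range(n_rooms)])

def coarse_room(desc):
    """Return the index of the room whose centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - desc, axis=1)))

# Layer 2 (fine): distance-weighted k-NN regression inside the room,
# standing in for the per-room data-fitting networks.
def fine_position(desc, room, k=3):
    idx = np.where(room_labels == room)[0]
    d = np.linalg.norm(descriptors[idx] - desc, axis=1)
    order = np.argsort(d)[:k]
    w = 1.0 / (d[order] + 1e-9)
    return (positions[idx[order]] * w[:, None]).sum(axis=0) / w.sum()

def localize(desc):
    """Hierarchical estimate: coarse room, then fine (x, y) in cm."""
    room = coarse_room(desc)
    return room, fine_position(desc, room)

# Query: a descriptor near a training image of room 1.
query = descriptors[25] + rng.normal(scale=0.1, size=dim)
room, pos = localize(query)
```

The fine estimate is a convex combination of training positions from the selected room, so it always falls inside that room's region, mirroring how the hierarchical scheme restricts the fine search to the coarse result.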
Keywords/Subjects: Machine learning; Hierarchical localization; Omnidirectional vision; Global-appearance description
Knowledge area: UDC: Applied sciences: Engineering. Technology
Type of document: info:eu-repo/semantics/article
Access rights: info:eu-repo/semantics/openAccess; Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: https://doi.org/10.1007/s00521-023-08515-y
Appears in Collections: Artículos Ingeniería de Sistemas y Automática