Please use this identifier to cite or link to this item: https://hdl.handle.net/11000/31598
Full metadata record
DC field  Value  Language
dc.contributor.author  Cebollada, Sergio  -
dc.contributor.author  Paya, Luis  -
dc.contributor.author  Peidro, Adrian  -
dc.contributor.author  Mayol-Cuevas, Walterio  -
dc.contributor.author  Reinoso, Oscar  -
dc.contributor.other  Departamentos de la UMH::Ingeniería de Sistemas y Automática  es_ES
dc.date.accessioned  2024-02-28T12:24:31Z  -
dc.date.available  2024-02-28T12:24:31Z  -
dc.date.created  2023-03  -
dc.identifier.citation  Neural Computing and Applications, Volume 35, pages 16487–16508 (2023)  es_ES
dc.identifier.isbn  1433-3058  -
dc.identifier.issn  0941-0643  -
dc.identifier.uri  https://hdl.handle.net/11000/31598  -
dc.description.abstract  This work presents a framework to create a visual model of the environment which can be used to estimate the position of a mobile robot by means of artificial intelligence techniques. The proposed framework retrieves the structure of the environment from a dataset composed of omnidirectional images captured along it. These images are described by means of global-appearance approaches. The information is arranged in two layers, with different levels of granularity. The first layer is obtained by means of classifiers and the second layer is composed of a set of data-fitting neural networks. Subsequently, the model is used to estimate the position of the robot, in a hierarchical fashion, by comparing the image captured from the unknown position with the information in the model. Throughout this work, five classifiers are evaluated (Naïve Bayes, SVM, random forest, linear discriminant classifier and a classifier based on a shallow neural network) along with three different global-appearance descriptors (HOG, gist, and a descriptor calculated from an intermediate layer of a pre-trained CNN). The experiments have been carried out on some publicly available datasets of omnidirectional images captured indoors in the presence of dynamic changes. Several metrics are used to assess the efficiency of the proposal: the ability of the algorithm to estimate the position coarsely (hit ratio), the average error (cm) and the necessary computing time. The results prove the efficiency of the framework to model the environment and localize the robot from the knowledge extracted from a set of omnidirectional images with the proposed artificial intelligence techniques.  es_ES
dc.format  application/pdf  es_ES
dc.format.extent  22  es_ES
dc.language.iso  eng  es_ES
dc.publisher  Springer Link  es_ES
dc.rights  info:eu-repo/semantics/openAccess  es_ES
dc.rights  Attribution-NonCommercial-NoDerivatives 4.0 International  *
dc.rights.uri  http://creativecommons.org/licenses/by-nc-nd/4.0/  *
dc.subject  Machine learning  es_ES
dc.subject  Hierarchical localization  es_ES
dc.subject  Omnidirectional vision  es_ES
dc.subject  Global-appearance description  es_ES
dc.subject.other  CDU::6 - Ciencias aplicadas::62 - Ingeniería. Tecnología  es_ES
dc.title  Environment modeling and localization from datasets of omnidirectional scenes using machine learning techniques  es_ES
dc.type  info:eu-repo/semantics/article  es_ES
dc.relation.publisherversion  https://doi.org/10.1007/s00521-023-08515-y  es_ES
Appears in collections:
Artículos Ingeniería de Sistemas y Automática
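The two-layer hierarchical localization described in the abstract can be sketched as follows. This is a minimal illustration under assumed data, not the authors' implementation: the toy descriptor vectors, the nearest-centroid "classifier" for the coarse layer and the nearest-neighbour lookup for the fine layer are stand-ins for the paper's global-appearance descriptors (HOG, gist, CNN features), trained classifiers and data-fitting neural networks.

```python
# Toy sketch of hierarchical localization: a coarse layer picks the room,
# a fine layer estimates the metric position within that room.

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Visual model: per-room lists of (descriptor, (x_cm, y_cm)) samples.
# Values are invented for illustration only.
model = {
    "room_A": [([0.1, 0.2], (10.0, 20.0)), ([0.2, 0.1], (30.0, 25.0))],
    "room_B": [([0.9, 0.8], (210.0, 40.0)), ([0.8, 0.9], (230.0, 55.0))],
}

def coarse_room(query):
    """First layer: stand-in classifier that returns the room whose mean
    training descriptor (centroid) is closest to the query descriptor."""
    def centroid(samples):
        dims = len(samples[0][0])
        return [sum(s[0][i] for s in samples) / len(samples)
                for i in range(dims)]
    return min(model, key=lambda room: dist(query, centroid(model[room])))

def fine_position(room, query):
    """Second layer: estimate the position from the nearest stored
    descriptor in the selected room (the paper fits a neural network)."""
    _, pos = min(model[room], key=lambda s: dist(query, s[0]))
    return pos

query = [0.15, 0.18]               # descriptor of the image to localize
room = coarse_room(query)          # coarse step: which room?
x_cm, y_cm = fine_position(room, query)  # fine step: position in cm
```

Evaluating such a pipeline with the metrics the abstract names would mean checking whether `coarse_room` returns the true room (hit ratio) and measuring the Euclidean error of the fine estimate in centimetres.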

