Please use this identifier to cite or link to this item: https://hdl.handle.net/11000/34274
Full metadata record
DC Field | Value | Language
dc.contributor.author | Román, Vicente | -
dc.contributor.author | Payá, Luis | -
dc.contributor.author | Peidró, Adrián | -
dc.contributor.author | Ballesta, Mónica | -
dc.contributor.author | Reinoso, Oscar | -
dc.contributor.other | Departamentos de la UMH::Ingeniería de Sistemas y Automática | es_ES
dc.date.accessioned | 2025-01-10T16:37:17Z | -
dc.date.available | 2025-01-10T16:37:17Z | -
dc.date.created | 2021-05 | -
dc.identifier.citation | Sensors 2021, 21 | es_ES
dc.identifier.issn | 1424-8220 | -
dc.identifier.uri | https://hdl.handle.net/11000/34274 | -
dc.description.abstract | Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems with accuracy and acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large amount of information they can extract from the environment. The images must be processed to obtain relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address this problem are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics: describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deep study of their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Several sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise. | es_ES
dc.format | application/pdf | es_ES
dc.format.extent | 37 | es_ES
dc.language.iso | eng | es_ES
dc.publisher | MDPI | es_ES
dc.rights | info:eu-repo/semantics/openAccess | es_ES
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | omnidirectional imaging | es_ES
dc.subject | global appearance description | es_ES
dc.subject | localization | es_ES
dc.subject | image retrieval | es_ES
dc.subject | relative orientation | es_ES
dc.subject | Fourier signature | es_ES
dc.subject | histogram of oriented gradients | es_ES
dc.subject | gist | es_ES
dc.subject.other | CDU::6 - Ciencias aplicadas::62 - Ingeniería. Tecnología | es_ES
dc.title | The Role of Global Appearance of Omnidirectional Images in Relative Distance and Orientation Retrieval | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.relation.publisherversion | https://doi.org/10.3390/s21103327 | es_ES
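The Fourier signature listed among the subject keywords is one of the global-appearance descriptors the article evaluates: each panoramic image is described as a whole, and localization becomes an image-retrieval problem. A minimal NumPy sketch of the idea (an illustration of the general technique, not the paper's implementation; all function names here are ours):

```python
import numpy as np

def fourier_signature(panorama, k=8):
    """Global-appearance descriptor: per-row 1-D DFT magnitudes.

    In a panoramic image, a change of the robot's orientation is a
    circular shift of the columns, which alters only the phase of each
    row's DFT; the magnitude signature is therefore rotation-invariant
    (the phase could be used to recover the relative orientation).
    """
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :k])  # keep k low-frequency components per row

def localize(query_descriptor, map_descriptors):
    """Image retrieval: index of the map image with the closest descriptor."""
    dists = [np.linalg.norm(query_descriptor - d) for d in map_descriptors]
    return int(np.argmin(dists))

# Toy check: a rotated copy of a panorama keeps the same signature.
rng = np.random.default_rng(0)
pano = rng.random((32, 128))
rotated = np.roll(pano, 40, axis=1)  # simulated change of orientation
sig_a = fourier_signature(pano)
sig_b = fourier_signature(rotated)
print(np.allclose(sig_a, sig_b))  # magnitudes match despite the rotation
```

The rotation invariance is what makes this descriptor attractive for the relative-orientation retrieval discussed in the article: position can be estimated from the magnitudes alone, independently of the robot's heading.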
Appears in Collections:
Artículos Ingeniería de Sistemas y Automática
View/Open: sensors-21-03327-v2 (1).pdf (6.87 MB, Adobe PDF)