Please use this identifier to cite or link to this item: https://hdl.handle.net/11000/39251
Full metadata record

DC Field | Value | Language
dc.contributor.author | Flores, María | -
dc.contributor.author | Valiente, David | -
dc.contributor.author | Gil, Arturo | -
dc.contributor.author | Reinoso, Oscar | -
dc.contributor.author | Payá, Luis | -
dc.contributor.other | Departamentos de la UMH::Ingeniería de Comunicaciones | es_ES
dc.date.accessioned | 2026-02-12T17:47:54Z | -
dc.date.available | 2026-02-12T17:47:54Z | -
dc.date.created | 2022 | -
dc.identifier.citation | Engineering Applications of Artificial Intelligence, Vol. 107 (2022) | es_ES
dc.identifier.issn | 0952-1976 | -
dc.identifier.uri | https://hdl.handle.net/11000/39251 | -
dc.description.abstract | Feature matching is a key technique for a wide variety of computer vision and image processing applications such as visual localization. It permits finding correspondences between significant points in the environment, which ultimately determine the localization of a mobile agent. In this context, this work evaluates an Adaptive Probability-Oriented Feature Matching (APOFM) method that dynamically models the visual knowledge of the environment in terms of the probability of existence of features. Several improvements are proposed to achieve more robust matching in a visual odometry framework: a study on the classification of the matching candidates, enhanced by a nearest neighbour search policy; a dynamic weighted matching that exploits the probability of feature existence in order to tune the matching thresholds; and an automatic false positive detector. Additionally, a performance comparison is carried out on a publicly available dataset composed of two kinds of wide field-of-view images: catadioptric and fisheye. Overall, the results validate the appropriateness of these contributions, which outperform other well-recognized implementations within this framework, such as standard visual odometry, a RANSAC-based visual odometry method, and the basic APOFM. The analysis shows that fisheye images provide more visual information about the scene, with more feature candidates. In contrast, omnidirectional images produce fewer feature candidates, but with higher ratios of feature acceptance. Finally, it is concluded that improved precision is obtained when the localization problem is solved by this method. | es_ES
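The abstract describes a dynamic weighted matching step in which the probability of feature existence tunes the matching thresholds. The sketch below is a hypothetical illustration of that idea (it is not the authors' implementation, and the function name, the linear threshold weighting, and the baseline ratio value are all assumptions): a nearest-neighbour descriptor search with a Lowe-style ratio test whose acceptance threshold is relaxed for candidates the environment model considers likely to exist and tightened for unlikely ones.

```python
import numpy as np

def adaptive_match(desc_query, desc_cand, p_exist, base_thresh=0.8):
    """Hypothetical probability-weighted nearest-neighbour matching.

    desc_query: (N, D) query descriptors
    desc_cand:  (M, D) candidate descriptors, M >= 2
    p_exist:    (M,) modelled probability that each candidate feature exists
    base_thresh: baseline nearest-neighbour ratio threshold (assumed value)
    Returns a list of (query_index, candidate_index) accepted matches.
    """
    matches = []
    for i, d in enumerate(desc_query):
        # Euclidean distances from this query descriptor to all candidates.
        dists = np.linalg.norm(desc_cand - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Assumed weighting: scale the ratio threshold linearly with the
        # probability that the best candidate exists (relax for likely
        # features, tighten for unlikely ones).
        thresh = base_thresh * (0.5 + 0.5 * p_exist[best])
        if dists[best] < thresh * dists[second]:
            matches.append((i, int(best)))
    return matches
```

For example, a query descriptor lying very close to one candidate and far from the rest passes the weighted ratio test, while a query equidistant from two candidates is rejected as ambiguous regardless of the probability weight.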
dc.format | application/pdf | es_ES
dc.format.extent | 18 | es_ES
dc.language.iso | eng | es_ES
dc.publisher | Elsevier | es_ES
dc.rights | info:eu-repo/semantics/openAccess | es_ES
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | feature matching | es_ES
dc.subject | dynamic visual model | es_ES
dc.subject | adaptive probability-oriented feature matching | es_ES
dc.subject | fisheye lenses | es_ES
dc.subject | omnidirectional images | es_ES
dc.subject | visual localization | es_ES
dc.subject.other | CDU::6 - Ciencias aplicadas::62 - Ingeniería. Tecnología::621 - Ingeniería mecánica en general. Tecnología nuclear. Electrotecnia. Maquinaria::621.3 - Ingeniería eléctrica. Electrotecnia. Telecomunicaciones | es_ES
dc.title | Efficient probability-oriented feature matching using wide field-of-view imaging | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.relation.publisherversion | https://doi.org/10.1016/j.engappai.2021.104539 | es_ES
Appears in collections:
Artículos Ingeniería Comunicaciones
View/Open: Efficient probability-oriented feature matching.pdf (2.88 MB, Adobe PDF)
Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International.