Please use this identifier to cite or link to this item:
https://hdl.handle.net/11000/36841
Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.author | Alfaro, Marcos | - |
dc.contributor.author | Cabrera, Juan José | - |
dc.contributor.author | Jiménez, Luis Miguel | - |
dc.contributor.author | Reinoso, Óscar | - |
dc.contributor.author | Payá, Luis | - |
dc.contributor.other | Departamentos de la UMH::Ingeniería de Sistemas y Automática | es_ES |
dc.date.accessioned | 2025-07-11T11:54:16Z | - |
dc.date.available | 2025-07-11T11:54:16Z | - |
dc.date.created | 2024 | - |
dc.identifier.citation | 21st International Conference on Informatics in Control, Automation and Robotics (Porto, Portugal, 18-20 November, 2024) Volume 2, pp. 166-173 | es_ES |
dc.identifier.isbn | 978-989-758-717-7 | - |
dc.identifier.issn | 2184-2809 | - |
dc.identifier.uri | https://hdl.handle.net/11000/36841 | - |
dc.description.abstract | Triplet networks are composed of three identical convolutional neural networks that run in parallel and share their weights. These architectures receive three inputs simultaneously, produce three different outputs, and have shown great potential for tackling visual localization. Therefore, this paper presents an exhaustive study of the main factors that influence the training of a triplet network: the choice of the triplet loss function, the selection of samples to include in the training triplets, and the batch size. To this end, we have adapted and retrained a network with omnidirectional images, which were captured in an indoor environment with a catadioptric camera and converted into a panoramic format. The experiments conducted demonstrate that triplet networks substantially improve performance in the visual localization task. However, the right choice of the studied factors is of great importance to fully exploit the potential of such architectures. | es_ES |
dc.format | application/pdf | es_ES |
dc.format.extent | 12 | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | SCITEPRESS – Science and Technology Publications, Lda. | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject | Robot Localization | es_ES |
dc.subject | Panoramic Images | es_ES |
dc.subject | Triplet Loss | es_ES |
dc.subject.other | CDU::6 - Ciencias aplicadas::62 - Ingeniería. Tecnología | es_ES |
dc.title | Triplet Neural Networks for the Visual Localization of Mobile Robots | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.relation.publisherversion | 10.5220/0000193700003822 | es_ES |
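
The abstract describes a triplet architecture in which three weight-shared CNN branches embed an anchor, a positive, and a negative panoramic image, and training minimizes a triplet loss. The sketch below illustrates that idea in PyTorch, assuming a minimal setup: the ResNet-18 backbone, embedding size, margin, and panoramic image resolution are hypothetical illustrative choices, not details taken from the paper.

```python
# Minimal sketch of a weight-shared triplet network with a margin-based
# triplet loss (PyTorch assumed; backbone and hyperparameters are hypothetical).
import torch
import torch.nn as nn
import torchvision.models as models

class TripletNetwork(nn.Module):
    """One CNN applied three times, so the three branches share weights."""
    def __init__(self, embedding_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # hypothetical backbone choice
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, anchor, positive, negative):
        # The same network (same weights) embeds all three panoramic images.
        return (self.backbone(anchor),
                self.backbone(positive),
                self.backbone(negative))

# Margin-based triplet loss: pull anchor-positive embeddings together and
# push anchor-negative embeddings apart by at least `margin`.
triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

model = TripletNetwork()
a = torch.randn(8, 3, 128, 512)  # batch of panoramic images (hypothetical size)
p = torch.randn(8, 3, 128, 512)
n = torch.randn(8, 3, 128, 512)
loss = triplet_loss(*model(a, p, n))
loss.backward()
```

In this kind of setup, the factors studied in the paper map onto concrete knobs: the loss function corresponds to the choice of `triplet_loss`, sample selection to how anchor/positive/negative images are mined into batches, and the batch size to the first dimension of the input tensors.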

View/Open:
2024-ICINCO-TripletNeuralNetworks.pdf
1,93 MB
Adobe PDF
The license is described as: Attribution-NonCommercial-NoDerivatives 4.0 International.