Please use this identifier to cite or link to this item: https://hdl.handle.net/11000/38396
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ferrero, Laura | -
dc.contributor.author | Ortiz, Mario | -
dc.contributor.author | Quiles, Vicente | -
dc.contributor.author | Iañez, Eduardo | -
dc.contributor.author | Azorín, José M. | -
dc.contributor.other | Departamentos de la UMH::Ingeniería Mecánica y Energía | es_ES
dc.date.accessioned | 2025-11-24T08:37:46Z | -
dc.date.available | 2025-11-24T08:37:46Z | -
dc.date.created | 2021-03-26 | -
dc.identifier.citation | IEEE Access, 9, 49121-49130. | es_ES
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | https://hdl.handle.net/11000/38396 | -
dc.description.abstract | Motor imagery (MI) is one of the most common paradigms used in brain-computer interfaces (BCIs). This mental process is defined as the imagination of movement without any actual motion. In some lower-limb exoskeletons controlled by BCIs, users have to perform MI continuously in order to move the exoskeleton. This makes it difficult to design a closed-loop control BCI, as it cannot be assured that the analyzed activity is related to imagery rather than to motion. A possible solution is the use of virtual reality (VR). During the VR training phase, subjects can focus on MI while avoiding distractions, which could help them build a robust model of the BCI classifier to be used later to control the exoskeleton. This paper analyzes whether gait MI can be improved when subjects receive visual feedback through VR instead of on a screen. Additionally, both types of visual feedback are analyzed while subjects are either seated or standing. In the analysis, VR feedback was associated with higher performance in the majority of cases, with no relevant differences between standing and being seated. The paper also presents a case study of closed-loop control of the BCI in a virtual reality environment: subjects had to perform gait MI or remain in a relaxed state and, based on the output of the BCI, the immersive first-person view either remained static or started to move. Experiments showed an accuracy of issued commands of 91.0 ± 6.7, which is a very satisfactory result. | es_ES
dc.format | application/pdf | es_ES
dc.format.extent | 10 | es_ES
dc.language.iso | eng | es_ES
dc.publisher | Institute of Electrical and Electronics Engineers | es_ES
dc.rights | info:eu-repo/semantics/openAccess | es_ES
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | Brain–computer interface | es_ES
dc.subject | EEG | es_ES
dc.subject | motor imagery | es_ES
dc.subject | common spatial patterns | es_ES
dc.subject | virtual reality | es_ES
dc.subject.other | CDU::6 - Ciencias aplicadas::62 - Ingeniería. Tecnología | es_ES
dc.title | Improving Motor Imagery of Gait on a Brain–Computer Interface by Means of Virtual Reality: A Case of Study | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.relation.publisherversion | https://doi.org/10.1109/ACCESS.2021.3068929 | es_ES
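
Note: the abstract above describes a closed-loop BCI in which EEG windows are classified as gait motor imagery or relaxation, and the classifier output decides whether the immersive first-person view moves. The sketch below is only an illustration of that kind of pipeline, not the authors' implementation: it assumes common spatial patterns (listed among the keywords) followed by an LDA classifier, and the function names, number of CSP components, and MOVE/STAY command labels are hypothetical.

```python
# Illustrative sketch (not the paper's code): CSP + LDA decoding of EEG
# windows as gait motor imagery vs. relax, plus a simple closed-loop rule
# that moves the virtual first-person view only while MI is predicted.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline


def train_mi_decoder(X: np.ndarray, y: np.ndarray):
    """Fit CSP spatial filters followed by an LDA classifier.

    X: band-pass filtered EEG windows, shape (n_windows, n_channels, n_samples)
    y: labels, 1 = gait motor imagery, 0 = relax
    """
    clf = make_pipeline(CSP(n_components=6, log=True),
                        LinearDiscriminantAnalysis())
    clf.fit(X, y)
    return clf


def closed_loop_step(clf, eeg_window: np.ndarray) -> str:
    """Map one incoming EEG window to a command for the VR view."""
    pred = clf.predict(eeg_window[np.newaxis, ...])[0]
    # MOVE -> first-person view advances; STAY -> view remains static
    return "MOVE" if pred == 1 else "STAY"
```

In such a setup, the decoder would be trained during the VR feedback sessions and then queried on each new EEG window during the closed-loop phase, with the returned command driving the virtual environment.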
Appears in Collections:
Artículos Ingeniería Mecánica y Energía


View/Open: Improving Motor Imagery of Gait on a BrainComputer Interface by Means of Virtual Reality A Case of Study.pdf (1,73 MB, Adobe PDF)