Please use this identifier to cite or link to this item: https://hdl.handle.net/11000/37213
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ros Arlanzón, Pablo | -
dc.contributor.author | Pérez Sempere, Ángel | -
dc.contributor.other | Departamentos de la UMH::Medicina Clínica | es_ES
dc.date.accessioned | 2025-09-05T07:49:34Z | -
dc.date.available | 2025-09-05T07:49:34Z | -
dc.date.created | 2024-11 | -
dc.identifier.citation | JMIR Med Educ. 2024 Nov 14;10:e56762 | es_ES
dc.identifier.issn | 2369-3762 | -
dc.identifier.uri | https://hdl.handle.net/11000/37213 | -
dc.description.abstract | Background: With the rapid advancement of artificial intelligence (AI) across many fields, evaluating its application in specialized medical contexts has become crucial. ChatGPT, a large language model developed by OpenAI, has shown potential in diverse applications, including medicine. Methods: We conducted a comparative analysis using the 2022 neurology specialist examination results from 120 neurologists and the responses generated by ChatGPT versions 3.5 and 4. The examination consisted of 80 multiple-choice questions focused on clinical neurology and health legislation. Questions were classified according to Bloom's Taxonomy. Statistical analysis of performance, including the κ coefficient for response consistency, was performed. Results: Human participants achieved a median score of 5.91 (IQR 4.93-6.76), with 32 neurologists failing to pass. ChatGPT-3.5 ranked 116th out of 122, answering 54.5% of questions correctly (score 3.94). ChatGPT-4 showed marked improvement, ranking 17th with 81.8% of answers correct (score 7.57), surpassing several human specialists. No significant differences were observed in performance on lower-order versus higher-order questions. Additionally, ChatGPT-4 demonstrated greater interrater reliability, with a κ coefficient of 0.73 compared with 0.69 for ChatGPT-3.5. Conclusions: This study underscores the evolving capabilities of AI in medical knowledge assessment, particularly in specialized fields. ChatGPT-4's performance, which outperformed the median score of human participants in a rigorous neurology examination, represents a significant milestone in AI development and suggests its potential as an effective tool in specialized medical education and assessment. | es_ES
dc.format | application/pdf | es_ES
dc.format.extent | 8 | es_ES
dc.language.iso | eng | es_ES
dc.publisher | JMIR Publications | es_ES
dc.rights | info:eu-repo/semantics/openAccess | es_ES
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | artificial intelligence | es_ES
dc.subject | ChatGPT | es_ES
dc.subject | clinical decision-making | es_ES
dc.subject | OpenAI | es_ES
dc.title | Evaluating AI Competence in Specialized Medicine: Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.relation.publisherversion | 10.2196/56762 | es_ES
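
Note on the κ coefficients: the abstract reports κ values of 0.69 (ChatGPT-3.5) and 0.73 (ChatGPT-4) for response consistency. The record does not state which κ statistic was used; assuming the standard Cohen's kappa (the usual choice for agreement between two sets of categorical answers), it is defined as

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed proportion of agreement between the two response sets and \(p_e\) is the proportion of agreement expected by chance. On the commonly cited Landis and Koch scale, values between 0.61 and 0.80 indicate substantial agreement, so both reported coefficients fall in that band.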
Appears in Collections:
Artículos Medicina Clínica
View/Open: Evaluating AI Competence in Specialized Medicine Comparative Analysis of ChatGPT and Neurologists in a Neurology Specialist Examination in Spain.pdf (312.87 kB, Adobe PDF)