Full text
PDF (Portable Document Format) - Download (697 kB)
Conde Díaz, Javier (ORCID: https://orcid.org/0000-0002-5304-0626), Reviriego Vasallo, Pedro (ORCID: https://orcid.org/0000-0003-2540-5234), Mayor Rocher, Marina (ORCID: https://orcid.org/0000-0002-4177-7559) and Grandury González, María (2025).
Spanish and LLM Benchmarks: Is MMLU Lost in Translation?
In: "2025 2nd International Generative AI and Computational Language Modelling Conference (GACLM)", 18-21 Aug 2025, Valencia, Spain. ISBN 979-8-3315-9406-0. pp. 104-108.
https://doi.org/10.1109/GACLM67198.2025.11232191.
| Title: | Spanish and LLM Benchmarks: Is MMLU Lost in Translation? |
|---|---|
| Author(s): | Conde Díaz, Javier; Reviriego Vasallo, Pedro; Mayor Rocher, Marina; Grandury González, María |
| Document Type: | Conference or Workshop Presentation (Article) |
| Event Title: | 2025 2nd International Generative AI and Computational Language Modelling Conference (GACLM) |
| Event Dates: | 18-21 Aug 2025 |
| Event Location: | Valencia, Spain |
| Book Title: | 2nd International Generative AI and Computational Language Modelling Conference |
| Date: | August 2025 |
| ISBN: | 979-8-3315-9406-0 |
| Subjects: | |
| SDGs: | |
| Informal Keywords: | Adaptation models; Translation; Generative AI; Large language models; Computational modeling; Benchmark testing; LLM; Evaluation; Benchmarks; Spanish |
| School: | E.T.S.I. Telecomunicación (UPM) |
| Department: | Ingeniería de Sistemas Telemáticos |
| UPM Research Group: | Internet de Nueva Generación |
| Creative Commons Licenses: | None |
The evaluation of Large Language Models (LLMs) is a key element in their continuous improvement process, and many benchmarks have been developed to assess their performance on different tasks and topics. As LLMs are adopted worldwide, evaluating them in languages other than English becomes increasingly important. However, most LLM benchmarks are simply translated with an automated tool and then run in the target language, so the results depend not only on the LLM's performance in that language but also on the quality of the translation. In this paper, we consider the case of the well-known Massive Multitask Language Understanding (MMLU) benchmark. Selected categories of the benchmark are translated into Spanish using Azure Translator and ChatGPT4 and run on ChatGPT4. The results are then processed to identify the test items that produce different answers in Spanish and English, and those items are analyzed manually to determine whether the automatic translation caused the change. The results show that a significant fraction of the failing items can be attributed to mistakes in the translation of the benchmark. These results make a strong case for improving benchmarks in languages other than English, at a minimum by revising the translations of the items and preferably by having experts adapt the tests to the target language.
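The abstract outlines a simple comparison pipeline: translate selected MMLU categories, run the same model in both languages, and isolate the items whose answers diverge for manual review. Below is a minimal sketch of that comparison step, not the paper's implementation: the parallel English/Spanish item format and the `ask_model` client are hypothetical placeholders introduced here for illustration.

```python
# Minimal sketch (assumed structure, not the authors' code): run each MMLU
# item through the same LLM in English and in its Spanish machine
# translation, then flag items whose answers diverge so they can be
# reviewed manually for translation errors.

from dataclasses import dataclass


@dataclass
class Item:
    question_en: str
    options_en: list[str]  # four options, keyed "A".."D" as in MMLU
    question_es: str       # machine translation (e.g. Azure Translator)
    options_es: list[str]


def ask_model(question: str, options: list[str]) -> str:
    """Hypothetical stub: send the multiple-choice item to the LLM and
    return the letter it picks. Replace with a real client call."""
    raise NotImplementedError


def divergent_items(items: list[Item]) -> list[tuple[Item, str, str]]:
    """Collect items where the English and Spanish runs disagree;
    these are the candidates for manual analysis."""
    flagged = []
    for item in items:
        ans_en = ask_model(item.question_en, item.options_en)
        ans_es = ask_model(item.question_es, item.options_es)
        if ans_en != ans_es:
            flagged.append((item, ans_en, ans_es))
    return flagged
```

The flagged items would then be inspected by hand, as the paper does, to decide whether each divergence stems from the translation or from the model itself.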
| Record ID: | 92045 |
|---|---|
| DC Identifier: | https://oa.upm.es/92045/ |
| OAI Identifier: | oai:oa.upm.es:92045 |
| DOI: | 10.1109/GACLM67198.2025.11232191 |
| Official URL: | https://ieeexplore.ieee.org/document/11232191 |
| Deposited by: | Javier Conde Díaz |
| Deposited on: | 26 Nov 2025 20:42 |
| Last Modified: | 26 Nov 2025 20:42 |