Spanish and LLM Benchmarks: Is MMLU Lost in Translation?

Plaza Ortiz, Irene, Melero Carrasco, María Inmaculada, Pozo, Cristina del, Conde Díaz, Javier ORCID: https://orcid.org/0000-0002-5304-0626, Reviriego Vasallo, Pedro ORCID: https://orcid.org/0000-0003-2540-5234, Mayor Rocher, Marina ORCID: https://orcid.org/0000-0002-4177-7559 and Grandury González, María (2025). Spanish and LLM Benchmarks: Is MMLU Lost in Translation?. In: "2025 2nd International Generative AI and Computational Language Modelling Conference (GACLM)", 18-21 Aug 2025, Valencia, Spain. ISBN 979-8-3315-9406-0. pp. 104-108. https://doi.org/10.1109/GACLM67198.2025.11232191.

Description

Title: Spanish and LLM Benchmarks: Is MMLU Lost in Translation?
Author(s): Irene Plaza Ortiz, María Inmaculada Melero Carrasco, Cristina del Pozo, Javier Conde Díaz, Pedro Reviriego Vasallo, Marina Mayor Rocher and María Grandury González
Document Type: Conference or Workshop Paper (Article)
Event Title: 2025 2nd International Generative AI and Computational Language Modelling Conference (GACLM)
Event Dates: 18-21 Aug 2025
Event Location: Valencia, Spain
Book Title: 2nd International Generative AI and Computational Language Modelling Conference
Date: August 2025
ISBN: 979-8-3315-9406-0
Subjects:
SDGs:
Uncontrolled Keywords: Adaptation models; Translation; Generative AI; Large language models; Computational modeling; Benchmark testing; LLM; Evaluation; Benchmarks; Spanish
School: E.T.S.I. Telecomunicación (UPM)
Department: Ingeniería de Sistemas Telemáticos
UPM Research Group: Internet de Nueva Generación
Creative Commons Licenses: None

Full Text

GACLM_2025_paper_1963_sent_2.pdf - PDF (Portable Document Format); a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader is required
Download (697kB)

Abstract

The evaluation of Large Language Models (LLMs) is a key element in their continuous improvement process, and many benchmarks have been developed to assess the performance of LLMs on different tasks and topics. As LLMs are adopted worldwide, evaluating them in languages other than English becomes increasingly important. However, most LLM benchmarks are simply translated with an automated tool and then run in the target language. This means that the results depend not only on the LLM's performance in that language but also on the quality of the translation. In this paper, we consider the case of the well-known Massive Multitask Language Understanding (MMLU) benchmark. Selected categories of the benchmark are translated into Spanish using Azure Translator and ChatGPT4 and run on ChatGPT4. Next, the results are processed to identify the test items that produce different answers in Spanish and English. Those items are then analyzed manually to determine whether the automatic translation caused the change. The results show that a significant fraction of the failing items can be attributed to mistakes in the translation of the benchmark. These results make a strong case for improving benchmarks in languages other than English by, at a minimum, revising the translations of the items and, preferably, by having experts adapt the tests to the target language.
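The comparison step described in the abstract (finding the items a model answers differently in English and in the Spanish translation, so they can be reviewed by hand) can be illustrated with a minimal Python sketch. This is not the paper's actual pipeline: the item identifiers, answer letters, and function names below are hypothetical placeholders, and the model answers are assumed to have already been collected for both languages.

from dataclasses import dataclass

@dataclass
class ItemResult:
    item_id: str    # hypothetical MMLU item identifier
    gold: str       # correct option letter, e.g. "B"
    answer_en: str  # model's answer on the original English item
    answer_es: str  # model's answer on the machine-translated Spanish item

def flag_divergent_items(results):
    """Return the items answered differently in English and Spanish,
    split by which run (if any) matched the gold answer."""
    en_right_es_wrong, es_right_en_wrong, both_wrong = [], [], []
    for r in results:
        if r.answer_en == r.answer_es:
            continue  # same answer in both languages: not a candidate for translation review
        if r.answer_en == r.gold:
            en_right_es_wrong.append(r.item_id)
        elif r.answer_es == r.gold:
            es_right_en_wrong.append(r.item_id)
        else:
            both_wrong.append(r.item_id)
    return en_right_es_wrong, es_right_en_wrong, both_wrong

# Example with made-up items: the first diverges between languages and would be
# flagged for manual review of its translation; the second does not.
results = [
    ItemResult("college_physics_0007", gold="C", answer_en="C", answer_es="A"),
    ItemResult("college_physics_0012", gold="B", answer_en="B", answer_es="B"),
]
print(flag_divergent_items(results))  # (['college_physics_0007'], [], [])

The items flagged this way are exactly those whose English/Spanish disagreement could stem from the automatic translation, which is what the paper's manual analysis then examines.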

Associated Projects

Type                   Code                    Acronym     Principal Investigator    Title
Government of Spain    PID2022-136684OB-C22    FUN4DATE    Not specified             Not specified
Government of Spain    PCI2024-153434          SMARTY      Not specified             Not specified
Horizon Europe         101140087               SMARTY      Not specified             Not specified

More Information

Record ID: 92045
DC Identifier: https://oa.upm.es/92045/
OAI Identifier: oai:oa.upm.es:92045
DOI: 10.1109/GACLM67198.2025.11232191
Official URL: https://ieeexplore.ieee.org/document/11232191
Deposited by: Javier Conde Díaz
Deposited on: 26 Nov 2025 20:42
Last Modified: 26 Nov 2025 20:42