Do You Speak Basquenglish? Assessing Low-Resource Multilingual Proficiency of Pretrained Language Models

Authors

Inigo Parra, University of Alabama, USA

Abstract

Multilingual language models have democratized access to information and artificial intelligence (AI), yet low-resource languages (LRLs) remain underrepresented. This study compares the performance of GPT-4, LLaMA (7B), and PaLM 2 when prompted to produce English-Basque code-switched output. Code-switching serves as a test of each model's multilingual capability and as a basis for comparing their cross-lingual understanding. All models were tested using 84 prompts (N = 252), and their responses were subjected to qualitative and quantitative analysis. The study compares the naturalness of the outputs, code-switching competence (CSness), and the frequency of hallucinations. Pairwise comparisons show statistically significant differences across models in naturalness and in the ability to produce grammatical code-switched output. These findings underscore the critical role of linguistic representation in large language models (LLMs) and the need for improved handling of LRLs.

Keywords

Basque, code-switching, low-resource languages, multilingual models

Volume 13, Number 22