EVALUATING LARGE LANGUAGE MODELS FOR MACHINE TRANSLATION ON INDIAN LANGUAGES.

dc.contributor.advisor: Kate, Rohit J
dc.creator: Akula, Geetha Syam Sai
dc.date.accessioned: 2025-02-19T23:26:38Z
dc.date.available: 2025-02-19T23:26:38Z
dc.date.issued: 2024-12
dc.description.abstract: This study assesses how well Large Language Models (LLMs), such as LLaMA-v3 and GPT-3.5, perform when translating from English into Indian languages. For three Indian languages, translations from English were evaluated by human judges and found to be fairly good. Automated metrics, namely BLEU, METEOR, and BERTScore, were then assessed by comparing them against the human evaluation scores. LLM translations from English were then automatically evaluated on eleven Indian languages using the Samanantar dataset. The results show that while LLaMA has significant advantages in fluency and semantic accuracy, LLMs are prone to errors involving language-specific conventions. The study also examined the impact of prompt engineering on improving translation quality.
dc.identifier.uri: http://digital.library.wisc.edu/1793/89248
dc.subject: Computer science
dc.subject: Large language models
dc.subject: Machine translation
dc.title: EVALUATING LARGE LANGUAGE MODELS FOR MACHINE TRANSLATION ON INDIAN LANGUAGES.
dc.type: thesis
thesis.degree.discipline: Computer Science
thesis.degree.grantor: University of Wisconsin-Milwaukee
thesis.degree.name: Master of Science

Files

Original bundle

Name: Akula_uwm_0263M_13960.pdf
Size: 725.53 KB
Format: Adobe Portable Document Format
Description: Main File