
ALOPE

[COLM2025] Codebase for the ALOPE framework, which trains efficient LoRA adapters for quality estimation of machine-translated text.

ALOPE is an adaptive layer-optimization framework that enhances quality estimation (QE) for machine translation with large language models (LLMs). It restructures Transformer representations through layer-wise adaptation and integrates low-rank adapters (LoRA) with regression heads, improving regression-based prediction, especially for low-resource languages. ALOPE also introduces dynamic weighting and multi-head regression strategies that adaptively combine information from multiple Transformer layers. The framework is designed to integrate easily into existing LLMs, enabling robust reference-less quality estimation.
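The repository defines the exact architecture, but the idea above can be pictured with a minimal sketch: LoRA adapters are attached to a base LLM, and a small regression head mixes hidden states from several Transformer layers with learned softmax weights (the "dynamic weighting") before predicting a scalar quality score. The base model ID, target modules, and LoRA hyperparameters below are illustrative assumptions, not the repository's settings.

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

class LayerWeightedRegressionHead(nn.Module):
    # Mixes hidden states from several Transformer layers with learned
    # softmax weights, mean-pools over tokens, and predicts a scalar score.
    def __init__(self, hidden_size, num_layers):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.regressor = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states):
        stacked = torch.stack(hidden_states, dim=0)          # (layers, batch, seq, hidden)
        weights = torch.softmax(self.layer_weights, dim=0)   # dynamic layer weighting
        mixed = (weights[:, None, None, None] * stacked).sum(dim=0)
        pooled = mixed.mean(dim=1)                           # mean-pool over the sequence
        return self.regressor(pooled).squeeze(-1)            # one QE score per example

base_id = "meta-llama/Llama-2-7b-hf"                         # assumed base model
base = AutoModelForCausalLM.from_pretrained(base_id)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])       # assumed LoRA hyperparameters
model = get_peft_model(base, lora)

head = LayerWeightedRegressionHead(base.config.hidden_size, num_layers=4)
tok = AutoTokenizer.from_pretrained(base_id)

batch = tok(["Source: Das ist ein Test. Translation: This is a test."],
            return_tensors="pt")
out = model(**batch, output_hidden_states=True)
score = head(out.hidden_states[-4:])                         # combine the last four layers

In training, the LoRA parameters and the head would be optimized jointly against gold quality scores with a regression loss (e.g. MSE); the multi-head variant described above would attach several such heads and aggregate their predictions.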

Installation

1. Clone the repository

git clone https://github.com/ArchSid/ALOPE.git
cd ALOPE

2. Create a Conda environment and install dependencies

conda create -n myenv python=3.10
conda activate myenv
pip install -r requirements.txt

Our models fine-tuned with the ALOPE framework are available in the following Hugging Face collection:

https://huggingface.co/collections/ArchSid
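Assuming the published checkpoints are PEFT LoRA adapters, they can likely be loaded with the peft library as sketched below; the repository ID is a placeholder to be replaced with an actual model from the collection above.

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "ArchSid/<adapter-name>"   # placeholder: pick a model from the collection
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)   # loads base model + LoRA weights
tokenizer = AutoTokenizer.from_pretrained(repo_id)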
