Workshop · BLP 2025

LP-FT-LORA: A Three-Stage PEFT Framework for Efficient Domain Adaptation in Bangla NLP Tasks

Tasnimul Hossain Tomal, Anam Borhan Uddin, Intesar Tahmid, Mir Sazzat Hossain, Md Fahim, Md Farhad Alam Bhuiyan

In Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025), pages 212–222, Mumbai, India. Association for Computational Linguistics.

Abstract

Adapting large pre-trained language models (LLMs) to downstream tasks typically requires fine-tuning, but fully updating all parameters is computationally prohibitive. Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) reduce this cost by updating a small subset of parameters. However, the standard approach of jointly training LoRA adapters and a new classifier head from a cold start can lead to training instability, as the classifier chases shifting feature representations. To address this, we propose LP-FT-LoRA, a novel three-stage training framework that decouples head alignment from representation learning to enhance stability and performance. Our framework first aligns the classifier head with the frozen backbone via linear probing, then trains only the LoRA adapters to learn task-specific features, and finally performs a brief joint refinement of the head and adapters. We conduct extensive experiments on five Bangla NLP benchmarks across four open-weight compact transformer models. The results demonstrate that LP-FT-LoRA consistently outperforms standard LoRA fine-tuning and other baselines, achieving state-of-the-art average performance and showing improved generalization on out-of-distribution datasets.
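
The following is a minimal sketch of the three-stage schedule the abstract describes, assuming a Hugging Face encoder classifier and the peft library; the backbone name, LoRA hyperparameters, target modules, and stage lengths are illustrative assumptions, not the paper's settings.

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Hypothetical backbone; the paper evaluates four compact open-weight models.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

# Wrap the model with LoRA adapters; modules_to_save keeps a trainable
# copy of the new classifier head alongside the frozen backbone.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, target_modules=["query", "value"],
    task_type="SEQ_CLS", modules_to_save=["classifier"])
model = get_peft_model(model, lora_cfg)

def set_trainable(model, train_lora, train_head):
    """Toggle gradients for the LoRA adapters and the classifier head."""
    for name, param in model.named_parameters():
        if "lora_" in name:                # LoRA A/B matrices
            param.requires_grad = train_lora
        elif "modules_to_save" in name:    # trainable copy of the head
            param.requires_grad = train_head
        else:                              # backbone stays frozen throughout
            param.requires_grad = False

# Stage 1 (linear probing): align the new head with the frozen backbone.
# Since LoRA's B matrices are zero-initialized, the adapters contribute
# nothing here, so this is exact linear probing.
set_trainable(model, train_lora=False, train_head=True)
# ... train for a few epochs ...

# Stage 2 (representation learning): update only the LoRA adapters.
set_trainable(model, train_lora=True, train_head=False)
# ... train ...

# Stage 3 (joint refinement): briefly tune head and adapters together.
set_trainable(model, train_lora=True, train_head=True)
# ... short final run ...

Each call to set_trainable switches the parameter groups receiving gradients between stages, which is what decouples head alignment from representation learning and avoids the head chasing shifting features from a cold start.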

Cite

@inproceedings{tomal-etal-2025-lp,
    title = "{LP}-{FT}-{L}o{RA}: A Three-Stage {PEFT} Framework for Efficient Domain Adaptation in {B}angla {NLP} Tasks",
    author = "Tomal, Tasnimul Hossain  and
      Uddin, Anam Borhan  and
      Tahmid, Intesar  and
      Hossain, Mir Sazzat  and
      Fahim, Md  and
      Bhuiyan, Md Farhad Alam",
    editor = "Alam, Firoj  and
      Kar, Sudipta  and
      Chowdhury, Shammur Absar  and
      Hassan, Naeemul  and
      Prince, Enamul Hoque  and
      Tasnim, Mohiuddin  and
      Rony, Md Rashad Al Hasan  and
      Rahman, Md Tahmid Rahman",
    booktitle = "Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)",
    month = dec,
    year = "2025",
    address = "Mumbai, India",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.banglalp-1.17/",
    doi = "10.18653/v1/2025.banglalp-1.17",
    pages = "212--222",
    ISBN = "979-8-89176-314-2",
    abstract = "Adapting large pre-trained language models (LLMs) to downstream tasks typically requires fine-tuning, but fully updating all parameters is computationally prohibitive. Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) reduce this cost by updating a small subset of parameters. However, the standard approach of jointly training LoRA adapters and a new classifier head from a cold start can lead to training instability, as the classifier chases shifting feature representations. To address this, we propose LP-FT-LoRA, a novel three-stage training framework that decouples head alignment from representation learning to enhance stability and performance. Our framework first aligns the classifier head with the frozen backbone via linear probing, then trains only the LoRA adapters to learn task-specific features, and finally performs a brief joint refinement of the head and adapters. We conduct extensive experiments on five Bangla NLP benchmarks across four open-weight compact transformer models. The results demonstrate that LP-FT-LoRA consistently outperforms standard LoRA fine-tuning and other baselines, achieving state-of-the-art average performance and showing improved generalization on out-of-distribution datasets."
}