A Study to Evaluate the Impact of LoRA Fine-Tuning on the Performance of Non-Functional Requirements Classification

Authors

Xia Li, Allen Kim, Kennesaw State University, USA

Abstract

Classifying Non-Functional Requirements (NFRs) in the software development life cycle is critical. Inspired by the theory of transfer learning, researchers apply powerful pre-trained models to NFR classification. However, full fine-tuning, which updates all parameters of a pre-trained model, is often impractical due to the huge number of parameters involved (e.g., 175 billion trainable parameters in GPT-3). In this paper, we apply the Low-Rank Adaptation (LoRA) fine-tuning approach to NFR classification based on prompt-based learning and investigate its impact. The experiments show that LoRA can significantly reduce the execution cost (up to a 68% reduction) with only a small loss of classification effectiveness (a 2%-3% decrease). The results suggest that LoRA can be practical in more complicated classification settings with larger datasets and pre-trained models.
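To make the idea concrete, the following is a minimal sketch (not the authors' code) of applying LoRA to a pre-trained encoder for NFR classification, using the Hugging Face transformers and peft libraries. The base model name, number of NFR labels, and LoRA hyperparameters are illustrative assumptions, and for simplicity the sketch uses a standard classification head rather than the paper's prompt-based formulation.

    # Minimal LoRA fine-tuning setup; all hyperparameters are illustrative.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from peft import LoraConfig, get_peft_model, TaskType

    model_name = "bert-base-uncased"  # assumed base model for illustration
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=11  # e.g., 11 NFR categories; an assumption
    )

    # LoRA freezes the pre-trained weights and injects small trainable
    # low-rank matrices into the attention projections, so only a tiny
    # fraction of the parameters is updated during fine-tuning.
    lora_config = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,                                # rank of the low-rank updates
        lora_alpha=16,                      # scaling factor for the updates
        lora_dropout=0.1,
        target_modules=["query", "value"],  # BERT attention projections
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # reports the reduced trainable count

Because only the low-rank adapter matrices are trained, the trainable parameter count drops by orders of magnitude relative to full fine-tuning, which is the source of the execution-cost savings the abstract reports.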

Keywords

Non-functional requirements classification, low-rank adaptation (LoRA), pre-trained models, fine-tuning

Volume 15, Number 4