Acta Scientiarum Naturalium Universitatis Pekinensis, 2024, Vol. 60, Issue (1): 1-12. DOI: 10.13209/j.0479-8023.2023.071


An Enhanced Prompt Learning Method for Few-shot Text Classification

LI Ruifan1,2,3,†, WEI Zhiyu1, FAN Yuantao1, YE Shuqin1, ZHANG Guangwei2,4   

  1. School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876
  2. Engineering Research Center of Information Networks, Ministry of Education, Beijing 100876
  3. Key Laboratory of Interactive Technology and Experience System, Ministry of Culture and Tourism, Beijing 100876
  4. School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876
  • Received: 2023-05-18; Revised: 2023-08-30; Online: 2024-01-20; Published: 2024-01-20
  • Contact: LI Ruifan, E-mail: rfli(at)




An enhanced prompt learning method (EPL4FTC) is proposed for the few-shot text classification task. The method first converts the text classification task into a prompt learning form based on natural language inference, thereby achieving implicit data augmentation from the prior knowledge of pre-trained language models, and it is optimized with two losses of different granularities. Moreover, to capture the category information of specific downstream tasks, a triplet loss is used for joint optimization, and the masked-language-model task is incorporated as a regularizer to improve generalization. Evaluations on four Chinese and three English text classification datasets show that the classification accuracy of the proposed EPL4FTC is significantly higher than that of the compared baselines.
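To illustrate two of the ideas summarized above, the sketch below shows (1) how a classification example can be recast as an NLI-style (premise, hypothesis) prompt pair, and (2) a standard margin-based triplet loss over sentence embeddings. This is a minimal, hypothetical illustration: the template wording, function names, and margin value are assumptions for exposition, not details taken from the paper.

```python
import math

def to_nli_prompt(text, label_name):
    """Recast a classification example as an NLI-style (premise, hypothesis)
    pair, so a pre-trained language model can score entailment.
    The hypothesis template below is an assumed example, not the paper's."""
    premise = text
    hypothesis = f"This text is about {label_name}."  # assumed template
    return premise, hypothesis

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull same-class embeddings together and push
    different-class embeddings apart by at least `margin` (value assumed)."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Example: a well-separated triplet incurs zero loss,
# a poorly separated one incurs a positive loss.
print(to_nli_prompt("The match ended 2-1.", "sports"))
print(triplet_loss([0.0, 0.0], [0.0, 1.0], [3.0, 0.0]))  # 0.0
print(triplet_loss([0.0, 0.0], [0.0, 2.0], [1.0, 0.0]))  # 2.0
```

In the full method the triplet loss would be one term of a joint objective alongside the prompt-learning losses and the masked-language-model regularizer; the sketch only shows the triplet term in isolation.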

Key words: pre-trained language model, few-shot learning, text classification, prompt learning, triplet loss

