
NLP Models Without Large-Scale Pretraining

Author: Valar_Morghulis | Published 2022-05-20 10:34

    NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

    Authors: Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang

    Paper: https://arxiv.org/abs/2111.04130

    Published: 2021-11-07

    https://github.com/yaoxingcheng/TLM

    Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train. We propose a simple and efficient learning framework, TLM, that does not rely on large-scale pretraining. Given some labeled task data and a large general corpus, TLM uses task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language modeling objective from scratch. On eight classification datasets in four domains, TLM achieves results better than or similar to pretrained language models (e.g., RoBERTa-Large) while reducing the training FLOPs by two orders of magnitude. With high accuracy and efficiency, we hope TLM will contribute to democratizing NLP and expediting its development.

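    The abstract also states that the task objective and the language modeling objective are optimized jointly from scratch. The sketch below shows one way such a joint loss could be wired up in PyTorch; the tiny encoder, the BERT-style vocabulary and mask id, the 15% random masking, and the single weight rho are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Sketch of the joint objective from the abstract: a model trained from
# scratch on the supervised task loss plus a masked-language-modeling loss
# over the retrieved general-corpus subset. All sizes and ids below are
# illustrative assumptions (BERT-like vocab/mask id), not the TLM repo's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, PAD_ID, DIM, N_CLASSES = 30522, 103, 0, 256, 2

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM, padding_idx=PAD_ID)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.mlm_head = nn.Linear(DIM, VOCAB)      # predicts masked tokens
        self.cls_head = nn.Linear(DIM, N_CLASSES)  # predicts task labels

    def forward(self, ids):
        h = self.encoder(self.embed(ids))
        # Token-level logits for MLM, first-token (pooled) logits for the task.
        return self.mlm_head(h), self.cls_head(h[:, 0])

def mask_tokens(ids, prob=0.15):
    """Randomly replace a fraction of tokens with [MASK]; other labels are -100."""
    labels = ids.clone()
    mask = (torch.rand_like(ids, dtype=torch.float) < prob) & (ids != PAD_ID)
    labels[~mask] = -100
    return ids.masked_fill(mask, MASK_ID), labels

def joint_loss(model, task_ids, task_labels, retrieved_ids, rho=1.0):
    # Supervised task loss on the labeled task data.
    _, cls_logits = model(task_ids)
    task_loss = F.cross_entropy(cls_logits, task_labels)
    # Language-modeling loss on the retrieved general-corpus subset.
    masked, mlm_labels = mask_tokens(retrieved_ids)
    mlm_logits, _ = model(masked)
    mlm_loss = F.cross_entropy(mlm_logits.view(-1, VOCAB), mlm_labels.view(-1),
                               ignore_index=-100)
    return task_loss + rho * mlm_loss
```

    In the abstract's framing, training this joint objective only on the task data plus the small retrieved subset, rather than pretraining on the full general corpus, is what cuts the training FLOPs by two orders of magnitude.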
