NLP Data Augmentation

Author: Quincy_baf0 | Published 2018-09-11 10:39

    In machine learning, I believe there is one overarching premise: there is never enough data. For all the hype around big data, labeled data is especially scarce in natural language processing, and the quality of annotation is very hard to control. Under these circumstances, data augmentation is essential, and it matters a great deal for a model's robustness and generalization.

    Each area of NLP also has its own task-specific data augmentation methods; the excerpt below surveys task-independent ones.

    Task-independent data augmentation for NLP

    Data augmentation aims to create additional training data by producing variations of existing training examples through transformations, which can mirror those encountered in the real world. In Computer Vision (CV), common augmentation techniques are mirroring, random cropping, shearing, etc. Data augmentation is super useful in CV. For instance, it has been used to great effect in AlexNet (Krizhevsky et al., 2012) [1] to combat overfitting and in most state-of-the-art models since. In addition, data augmentation makes intuitive sense as it makes the training data more diverse and should thus increase a model’s generalization ability.
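    As a concrete illustration, here is a minimal sketch of such a CV pipeline using torchvision (the specific transform parameters are illustrative, not taken from any of the cited papers):

```python
import torchvision.transforms as T

# A typical training-time image augmentation pipeline: each transform is
# applied randomly, so the model sees a slightly different variant per epoch.
train_transforms = T.Compose([
    T.RandomHorizontalFlip(p=0.5),        # mirroring
    T.RandomCrop(224, padding=8),         # random cropping
    T.RandomAffine(degrees=0, shear=10),  # shearing
    T.ToTensor(),
])
```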

    However, in NLP, data augmentation is not widely used. In my mind, this is for two reasons:

    Data in NLP is discrete. This prevents us from applying simple transformations directly to the input data. Most recently proposed augmentation methods in CV focus on such transformations, e.g. domain randomization (Tobin et al., 2017) [2].

    Small perturbations may change the meaning. Deleting a negation may change a sentence’s sentiment, while modifying a word in a paragraph might inadvertently change the answer to a question about that paragraph. This is not the case in CV where perturbing individual pixels does not change whether an image is a cat or dog and even stark changes such as interpolation of different images can be useful (Zhang et al., 2017) [3].
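    The image-interpolation technique referenced above is mixup (Zhang et al., 2017); a minimal sketch of its core step, with illustrative names and an assumed alpha value:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two training examples and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2     # interpolated input (e.g. images)
    y = lam * y1 + (1.0 - lam) * y2     # correspondingly interpolated label
    return x, y
```

    Exactly this kind of continuous blending has no obvious discrete analogue for text, which is the point of the paragraph above.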

    Existing approaches that I am aware of are either rule-based (Li et al., 2017) [5] or task-specific, e.g. for parsing (Wang and Eisner, 2016) [6] or zero-pronoun resolution (Liu et al., 2017) [7]. Xie et al. (2017) [39] replace words with samples from different distributions for language modelling and Machine Translation. Recent work focuses on creating adversarial examples either by replacing words or characters (Samanta and Mehta, 2017; Ebrahimi et al., 2017) [8,9], concatenation (Jia and Liang, 2017) [11], or adding adversarial perturbations (Yasunaga et al., 2017) [10]. An adversarial setup is also used by Li et al. (2017) [16] who train a system to produce sequences that are indistinguishable from human-generated dialogue utterances.
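    The word-replacement idea of Xie et al. (2017) can be sketched in a few lines; the function below replaces tokens with samples from the corpus unigram distribution (the function name and the gamma rate are illustrative):

```python
import random

def unigram_noise(tokens, vocab, probs, gamma=0.1):
    """With probability gamma, replace each token by a word sampled
    from the unigram distribution (in the spirit of Xie et al., 2017)."""
    return [random.choices(vocab, weights=probs)[0]
            if random.random() < gamma else tok
            for tok in tokens]
```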

    Back-translation (Sennrich et al., 2015; Sennrich et al., 2016) [12,13] is a common data augmentation method in Machine Translation (MT) that allows us to incorporate monolingual training data. For instance, when training an EN→FR system, monolingual French text is translated to English using an FR→EN system; the synthetic parallel data can then be used for training. Back-translation can also be used for paraphrasing (Mallinson et al., 2017) [14]. Paraphrasing has been used for data augmentation for QA (Dong et al., 2017) [15], but I am not aware of its use for other tasks.
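    A minimal back-translation sketch using MarianMT models from the Hugging Face transformers library (the Helsinki-NLP checkpoint name is an assumption; any FR→EN system would do):

```python
from transformers import MarianMTModel, MarianTokenizer

# FR->EN model used to back-translate monolingual French into synthetic
# English. The checkpoint name is an assumption; substitute any FR->EN model.
model_name = "Helsinki-NLP/opus-mt-fr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def back_translate(french_sentences):
    """Return synthetic (EN, FR) pairs for training an EN->FR system."""
    batch = tokenizer(french_sentences, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    english = [tokenizer.decode(g, skip_special_tokens=True) for g in generated]
    return list(zip(english, french_sentences))

pairs = back_translate(["Le chat est assis sur le tapis."])
```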

    Back-translation: translate target-side sentences back into the source language and use the synthetic sentence pairs as additional training data. See "Improving Neural Machine Translation Models with Monolingual Data".

    Joint learning: see "Joint Training for Neural Machine Translation Models with Monolingual Data".

    Dual learning: see "Dual Learning for Machine Translation".

    Another method that is close to paraphrasing is generating sentences from a continuous space using a variational autoencoder (Bowman et al., 2016; Guu et al., 2017) [17,19]. If the representations are disentangled as in (Hu et al., 2017) [18], then we are also not too far from style transfer (Shen et al., 2017) [20].
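    To make this concrete, here is a sketch of latent-space interpolation between two sentences. The encode and decode functions are hypothetical stand-ins for a trained sentence VAE, not a real library API:

```python
import numpy as np

def interpolate_sentences(encode, decode, s1, s2, steps=5):
    """Decode points along the line between two sentences' latent codes.

    encode: hypothetical fn mapping a sentence to a latent vector (ndarray).
    decode: hypothetical fn mapping a latent vector back to a sentence.
    """
    z1, z2 = encode(s1), encode(s2)
    outputs = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z1 + t * z2   # move along the latent line
        outputs.append(decode(z))     # nearby codes should decode to paraphrases
    return outputs
```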

    There are a few research directions that would be interesting to pursue:

    Evaluation study: Evaluate a range of existing data augmentation methods as well as techniques that have not been widely used for augmentation, such as paraphrasing and style transfer, on a diverse range of tasks including text classification and sequence labelling. Identify which types of data augmentation are robust across tasks and which are task-specific. This could be packaged as a software library to make future benchmarking easier (think CleverHans for NLP).

    Data augmentation with style transfer: Investigate if style transfer can be used to modify various attributes of training examples for more robust learning.

    Learn the augmentation: Similar to Dong et al. (2017), we could learn either to paraphrase or to generate transformations for a particular task.

    Learn a word embedding space for data augmentation: A typical word embedding space clusters synonyms and antonyms together; using nearest neighbours in this space for replacement is thus infeasible. Inspired by recent work (Mrkšić et al., 2017) [21], we could specialize the word embedding space to make it more suitable for data augmentation (the sketch after this list illustrates the problem).

    Adversarial data augmentation: Related to recent work in interpretability (Ribeiro et al., 2016) [22], we could change the most salient words in an example, i.e. those that a model depends on for a prediction. This still requires a semantics-preserving replacement method, however.
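    The synonym/antonym problem mentioned above is easy to observe; here is a short sketch using gensim with pretrained word2vec-format vectors (the vector file path is a placeholder):

```python
from gensim.models import KeyedVectors

# The path to pretrained word2vec-format vectors is a placeholder.
vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# Nearest neighbours of a sentiment word often include its antonyms,
# so naive neighbour replacement can silently flip an example's label.
for word, score in vectors.most_similar("good", topn=5):
    print(f"{word}\t{score:.3f}")  # "bad" frequently ranks among the neighbours
```

    Counter-fitting approaches such as Mrkšić et al. (2017) push antonyms apart in exactly this kind of space, which is why a specialized embedding space looks promising for augmentation.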

    Tutorial

    Robust, Unbiased Natural Language Processing

    (To be continued...)
