Introduction
2018 was a breakout year for transfer learning models in NLP. With models such as AllenAI's ELMo, OpenAI's GPT, and Google's BERT, researchers repeatedly set new records on a range of NLP benchmarks by fine-tuning them.
Why use BERT to get word embeddings?
In this tutorial we will use BERT to extract features from text data, namely word and sentence embedding vectors. What can these word and sentence embeddings be used for? First, they can power keyword/search expansion, semantic search, and information retrieval. For example, if you want to match customer questions or search queries against already-answered questions or well-documented results, these representations help you retrieve results that accurately reflect the customer's intent and contextual meaning, even when there is no keyword or phrase overlap.
Second, and perhaps more importantly, these vectors are used as high-quality feature inputs to downstream models. NLP models such as LSTMs or CNNs require input in vector form, which usually means converting features such as words and part-of-speech tags into numerical representations. In the past, words were represented either as unique index values (one-hot encoding) or, more usefully, as neural word embeddings, where each vocabulary word is mapped to a fixed-length feature vector produced by a model such as Word2Vec or fastText. BERT offers an advantage over models like Word2Vec: whereas Word2Vec gives every word a single fixed representation regardless of the context it appears in, BERT produces word representations dynamically, informed by the surrounding words. For example, given the two sentences:
“The man was accused of robbing a bank.”
“The man went fishing by the bank of the river.”
Word2Vec would produce the same embedding for the word "bank" in both sentences, whereas under BERT the embedding for "bank" would be different in each sentence. Beyond capturing obvious distinctions such as polysemy, context-informed word embeddings capture other kinds of information as well, which yields more accurate feature representations and, in turn, better model performance. The complete code below demonstrates this by comparing the embeddings of "bank" across contexts with cosine similarity.
From a learning perspective, a close look at BERT word embeddings is a good way to get started with BERT and its family of transfer learning models; it provides some hands-on experience and background that make it easier to understand the model's inner workings.
Environment setup
pip install pytorch-pretrained-bert
conda install pytorch-cpu torchvision-cpu -c pytorch
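The complete code below loads the model weights and vocabulary from a local directory (./bert-base-uncased and bert-base-uncased-vocab.txt). If you do not have these files on disk, the library can also fetch them by name, as in the commented-out lines further down. A minimal sketch (requires network access on the first run):
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel
# Download (or load from the local cache) the pretrained tokenizer and model by name.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')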
Input formatting
BERT is a pretrained model and expects its input in a specific format. The tokenizer interface handles part of this input specification for us.
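Concretely, each input needs three pieces: the special tokens [CLS] and [SEP] marking the start and end of the sentence, token IDs from BERT's WordPiece vocabulary, and segment IDs identifying which sentence each token belongs to. A minimal sketch of these steps, assuming the tokenizer from the setup above and an illustrative sentence:
text = "Here is the sentence I want embeddings for."
marked_text = "[CLS] " + text + " [SEP]"              # add BERT's special tokens
tokens = tokenizer.tokenize(marked_text)              # WordPiece tokenization
token_ids = tokenizer.convert_tokens_to_ids(tokens)   # map tokens to vocabulary IDs
segment_ids = [1] * len(tokens)                       # single sentence: one segment ID per token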
Complete code
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
import logging
import matplotlib.pyplot as plt
import os
# Paths to a local copy of the bert-base-uncased model and its vocabulary file
UNCASED = './bert-base-uncased'
VOCAB = 'bert-base-uncased-vocab.txt'
# tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained(os.path.join(UNCASED, VOCAB))
text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
marked_text = "[CLS] " + text + " [SEP]"
tokenized_text = tokenizer.tokenize(marked_text)
print (tokenized_text)
# Inspect a slice of BERT's vocabulary (most useful in an interactive session)
list(tokenizer.vocab.keys())[5000:5020]
# Map each token to its index in the BERT vocabulary
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
for tup in zip(tokenized_text, indexed_tokens):
    print (tup)
# Mark every token as belonging to sentence "1" (we only have a single sentence)
segments_ids = [1] * len(tokenized_text)
print (segments_ids)
# Convert the inputs to PyTorch tensors (a batch of size 1)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# model = BertModel.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained(UNCASED)
# Put the model in evaluation mode (deactivates dropout)
model.eval()
# Run the sentence through BERT; gradients are not needed for feature extraction
with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)
print ("Number of layers:", len(encoded_layers))
layer_i = 0
print ("Number of batches:", len(encoded_layers[layer_i]))
batch_i = 0
print ("Number of tokens:", len(encoded_layers[layer_i][batch_i]))
token_i = 0
print ("Number of hidden units:", len(encoded_layers[layer_i][batch_i][token_i]))
# Pick an arbitrary token and layer and look at its 768-dimensional hidden state
token_i = 5
layer_i = 5
vec = encoded_layers[layer_i][batch_i][token_i]
# Plot the values as a histogram to show their distribution.
plt.figure(figsize=(10,10))
plt.hist(vec, bins=200)
plt.show()
# Convert the hidden state embeddings into single token vectors
# Holds the list of 12 layer embeddings for each token
# Will have the shape: [# tokens, # layers, # features]
token_embeddings = []
# For each token in the sentence...
for token_i in range(len(tokenized_text)):
    # Holds 12 layers of hidden states for each token
    hidden_layers = []
    # For each of the 12 layers...
    for layer_i in range(len(encoded_layers)):
        # Lookup the vector for `token_i` in `layer_i`
        vec = encoded_layers[layer_i][batch_i][token_i]
        hidden_layers.append(vec)
    token_embeddings.append(hidden_layers)
# Sanity check the dimensions:
print ("Number of tokens in sequence:", len(token_embeddings))
print ("Number of layers per token:", len(token_embeddings[0]))
concatenated_last_4_layers = [torch.cat((layer[-1], layer[-2], layer[-3], layer[-4]), 0) for layer in token_embeddings] # [number_of_tokens, 3072]
summed_last_4_layers = [torch.sum(torch.stack(layer)[-4:], 0) for layer in token_embeddings] # [number_of_tokens, 768]
# A simple sentence embedding: average the token vectors of the last hidden layer -> [1, 768]
sentence_embedding = torch.mean(encoded_layers[11], 1)
print ("First fifteen values of 'bank' as in 'bank robber':")
summed_last_4_layers[10][:15]
print ("First fifteen values of 'bank' as in 'bank vault':")
summed_last_4_layers[6][:15]
from sklearn.metrics.pairwise import cosine_similarity
# Compare "bank" as in "bank robber" to "bank" as in "river bank"
different_bank = cosine_similarity(summed_last_4_layers[10].reshape(1,-1), summed_last_4_layers[19].reshape(1,-1))[0][0]
# Compare "bank" as in "bank robber" to "bank" as in "bank vault"
same_bank = cosine_similarity(summed_last_4_layers[10].reshape(1,-1), summed_last_4_layers[6].reshape(1,-1))[0][0]
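To see the effect of context, print the two similarity scores computed above. If the contextual embeddings capture word sense as intended, the two financial-sense occurrences of "bank" should be closer to each other than the financial-sense and river-sense pair (exact values will vary):
print ("Similarity of 'bank robber' vs 'bank vault' (same sense):", same_bank)
print ("Similarity of 'bank robber' vs 'river bank' (different sense):", different_bank)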
(To be continued...)