Introduction:
Notes from testing a sequence-labeling task on a Chinese corpus, built mainly on the tooling Hugging Face provides: the datasets library for data loading, pretrained models, and so on.
1. Data preparation:
This task is very similar to named entity recognition: the goal is to identify the annotated subject, predicate and object in each sentence, as marked up in text1 below, and to convert the data into a CSV file:
text1: 中国#1实施 从明年起 将 实施|v 第九个 五年计划#2实施
text2: 中国从明年起将实施第九个五年计划
label: B-S,I-S,O,O,O,O,O,B-V,I-V,O,O,O,B-O,I-O,I-O,I-O
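For reference, a minimal sketch (not from the original notes; the row shown is illustrative) of writing such rows into train.csv with the same id / text / ner_tags columns that appear when the file is loaded below:
import csv

# illustrative row only; real rows come from the annotated corpus
rows = [
    (1, "中国从明年起将实施第九个五年计划",
     "B-S,I-S,O,O,O,O,O,B-V,I-V,O,O,O,B-O,I-O,I-O,I-O"),
]
with open("train.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "text", "ner_tags"])  # one comma-separated tag per character
    writer.writerows(rows)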
2. The huggingface datasets library:
Use datasets to read the data. Hugging Face has open-sourced hundreds of public datasets, and the datasets library makes it very easy to download and use them. The notes below follow the official tutorial, using the conll2003 dataset as the example:
- First import the package, then download the dataset:
from datasets import load_dataset, load_metric
datasets = load_dataset("conll2003")
print(datasets)
>>
DatasetDict({
    train: Dataset({
        features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
        num_rows: 14041
    })
    validation: Dataset({
        features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
        num_rows: 3250
    })
    test: Dataset({
        features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
        num_rows: 3453
    })
})
You can also specify how much of a split to load:
datasets = load_dataset("conll2003", split="train[:100]")
>>
Dataset({
    features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
    num_rows: 100
})
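As an aside (a small sketch, not in the original notes; the variable name conll is just illustrative): individual examples can be inspected directly, and for conll2003 the ner_tags column is a Sequence of ClassLabel, so the integer ids can be mapped back to tag names:
conll = load_dataset("conll2003")
print(conll["train"][0])                                  # tokens plus integer pos/chunk/ner tag ids
print(conll["train"].features["ner_tags"].feature.names)  # the NER tag names, e.g. 'O', 'B-PER', 'I-PER', ...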
- Loading a local dataset:
datasets can load local data in CSV, txt, JSON and other formats; for details see: Loading a Dataset — datasets 1.5.0 documentation (huggingface.co)
train_dataset = load_dataset('csv', encoding='utf-8', data_files=r'train.csv')
valid_dataset = load_dataset('csv', encoding='utf-8', data_files=r'valid.csv')
# Access the data as follows:
train_dataset
train_dataset['train']
train_dataset['train'][0]
train_dataset['train']['text']
>>
DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'ner_tags'],
        num_rows: 10000
    })
})
>>
Dataset({
    features: ['id', 'text', 'ner_tags'],
    num_rows: 10000
})
>>
{'id': 1, 'text': '本报讯记者周晓燕赴京参加中共十四届五中全会刚刚回到厦门的中共中央候补委员中共福建省委常委厦门市委书记石兆彬昨天在厦门市委召开的全市领导干部大会上传达了这次会议的主要精神', 'ner_tags': 'O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,O,B-V,I-V,O,O,O,O,O,O,O,O,B-O,I-O'}
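One thing worth noting: when load_dataset is given a single file, everything is placed in a split called 'train', which is why valid_dataset['train'] / valid_tokenized_datasets['train'] appear later. Alternatively (a small sketch), both files can be loaded into one DatasetDict:
raw_datasets = load_dataset('csv', encoding='utf-8',
                            data_files={'train': 'train.csv', 'validation': 'valid.csv'})
# raw_datasets['train'] and raw_datasets['validation'] are then separate splits of one DatasetDict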
3. Preparing the model
- Import the tokenizer, model, Trainer, etc.:
from transformers import AutoTokenizer
import transformers
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

model_checkpoint = "bert-base-chinese"   # the official tutorial uses "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)  # ensure we have a fast tokenizer, needed for word_ids() below
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_list))  # label_list is defined below
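Optionally (a sketch, not in the original notes), the label names can also be stored in the model config via id2label / label2id, so that saved checkpoints and pipelines report readable tags instead of LABEL_0, LABEL_1, ...:
id2label = {i: l for i, l in enumerate(label_list)}
label2id = {l: i for i, l in enumerate(label_list)}
model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=len(label_list),
    id2label=id2label, label2id=label2id,
)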
- Next, the labels need to be converted to index ids and aligned with the tokens produced by the BERT tokenizer. Since some words get split into sub-words, the original labels also have to be mapped onto each sub-word:
A few things to note:
- word_ids():
After tokenizer(), some words are split into sub-words, so the positions no longer line up with the original labels. word_ids() tells you which original word each token (including each sub-word) belongs to, and that index is then used to look up the corresponding label.
- label_all_tokens=True/False:
This is a switch between two strategies for assigning labels to sub-words:
True: special tokens are marked -100, and every other token, sub-word or not, gets the label of the word it came from.
False: only the first token of a word gets the word's label, and the remaining sub-words are also set to -100. "Special tokens" here means the '[CLS]', '[SEP]', etc. markers that BERT adds.
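A minimal check of what word_ids() returns (the sentence is the example from section 1; for bert-base-chinese each Chinese character is normally treated as its own word, so the word ids line up with the character-level labels):
enc = tokenizer("中国从明年起将实施第九个五年计划")
print(enc.tokens())    # ['[CLS]', '中', '国', ..., '[SEP]']
print(enc.word_ids())  # [None, 0, 1, ..., None] -- None marks the special tokens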
label_list = ['O', 'B-V', 'I-V', 'B-O', 'I-O', 'B-S', 'I-S']
label_all_tokens = True

def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["text"], truncation=True)
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        # the CSV stores tags as a comma-separated string; convert them to label indices
        label = [label_list.index(item) for item in label.strip().split(',')]
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
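A quick sanity check before mapping over the whole dataset (just a sketch; it runs the function on the first two rows):
sample = tokenize_and_align_labels(train_dataset['train'][:2])
print(sample["input_ids"][0])
print(sample["labels"][0])   # -100 at [CLS]/[SEP], label ids elsewhere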
- Use the map method of datasets, which applies the function to everything that needs processing; the train, validation and test splits are all handled by this one call:
- In the processed datasets you can see that the features now include new columns such as "input_ids", "attention_mask" and "labels":
train_tokenized_datasets = train_dataset.map(tokenize_and_align_labels, batched=True)
valid_tokenized_datasets = valid_dataset.map(tokenize_and_align_labels, batched=True)
print(train_tokenized_datasets)
print(valid_tokenized_datasets)
>>
DatasetDict({
    train: Dataset({
        features: ['attention_mask', 'id', 'input_ids', 'labels', 'ner_tags', 'text', 'token_type_ids'],
        num_rows: 10000
    })
})
DatasetDict({
    train: Dataset({
        features: ['attention_mask', 'id', 'input_ids', 'labels', 'ner_tags', 'text', 'token_type_ids'],
        num_rows: 3000
    })
})
4. Training: define TrainingArguments, the Trainer, etc.:
# `task` and `batch_size` are assumed to be defined earlier, e.g. task = "ner"; batch_size = 16
args = TrainingArguments(
    f"test-{task}",                           # output directory
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,   # batch size per device during training
    per_device_eval_batch_size=batch_size,    # batch size for evaluation
    num_train_epochs=3,
    weight_decay=0.01,
)
from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(tokenizer)
- Notes on using the data_collator (from the official tutorial):
Then we will need a data collator that will batch our processed examples together while applying padding to make them all the same size (each batch will be padded to the length of its longest example). There is a data collator for this task in the Transformers library, which not only pads the inputs, but also the labels:
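A small sketch of what the collator does (only the tokenized columns are passed in, because the collator cannot tensorize the raw text / ner_tags string columns; the Trainer drops those unused columns automatically):
features = [{k: train_tokenized_datasets["train"][i][k]
             for k in ("input_ids", "attention_mask", "labels")}
            for i in range(2)]
batch = data_collator(features)
print(batch["input_ids"].shape)   # both examples padded to the longer of the two
print(batch["labels"])            # labels are padded with -100, which the loss ignores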
- Define the metric function:
A quick test of the metric:
metric = load_metric("seqeval")
labels = [ 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'O', 'O', 'O', 'B-PER', 'I-PER']
metric.compute(predictions=[labels], references=[labels])
>>
{'ORG': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
 'overall_precision': 1.0,
 'overall_recall': 1.0,
 'overall_f1': 1.0,
 'overall_accuracy': 1.0}
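Note that seqeval scores whole entities rather than individual tokens, so one missed tag can cost an entire entity. A small sketch (the altered positions are arbitrary):
bad_predictions = labels.copy()
bad_predictions[4] = bad_predictions[5] = 'O'   # the ORG entity is now missed entirely
metric.compute(predictions=[bad_predictions], references=[labels])
# ORG recall drops to 0 while the PER entity is still matched exactly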
import numpy as np
def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)
    # Remove ignored index (special tokens)
    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
- Define the Trainer:
print(train_tokenized_datasets)
print(valid_tokenized_datasets['train'])
trainer = Trainer(
    model,                                            # the instantiated 🤗 Transformers model to be trained
    args,                                             # training arguments, defined above
    train_dataset=train_tokenized_datasets['train'],
    eval_dataset=valid_tokenized_datasets['train'],   # a single-file CSV puts everything under the 'train' key
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
- Start training:
trainer.train()
- Test on the validation set:
trainer.evaluate()
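Finally, a hedged sketch (not part of the original notes) of tagging a new sentence with the fine-tuned model; it relies on the character-per-word alignment discussed above:
import torch

text = "中国从明年起将实施第九个五年计划"
enc = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**enc).logits
pred_ids = logits.argmax(-1)[0].tolist()
word_ids = enc.word_ids(0)
tags = [label_list[p] for p, w in zip(pred_ids, word_ids) if w is not None]
print(list(zip(text, tags)))   # one (character, predicted tag) pair per character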
Below are some experiment notes from changing a few parameters:
Pad every input to a length of 512 (my longest example is around 503): slightly …
tokenized_inputs = tokenizer(examples["text"], padding='max_length')  # replaces the tokenizer call in tokenize_and_align_labels; for bert-base-chinese, 'max_length' pads to the model maximum of 512