[Open Source Internship] ALBERT model fine-tuning #1981

Open · wants to merge 5 commits into base: master
112 changes: 112 additions & 0 deletions llm/finetune/albert/Albert的20newspaper微调.md
@@ -0,0 +1,112 @@
# Albert Fine-Tuning on 20 Newsgroups

## Hardware

Resource specification: NPU: 1 × Ascend 910B (64 GB device memory), CPU: 24 cores, RAM: 192 GB

AI computing center: Wuhan AI Computing Center

Image: mindspore_2_5_py311_cann8

PyTorch training hardware: NVIDIA RTX 3090

## Model and Dataset

Model: "albert/albert-base-v1"

Dataset: "SetFit/20_newsgroups"
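
The checkpoint and dataset are loaded as in the accompanying mindNLPAlbert.py script included in this PR; a condensed sketch:

```python
# Condensed from mindNLPAlbert.py in this PR
from mindnlp.transformers import AlbertTokenizer, AlbertForSequenceClassification
from datasets import load_dataset

model_name = "albert/albert-base-v1"
tokenizer = AlbertTokenizer.from_pretrained(model_name)
# 20 Newsgroups has 20 topic classes
model = AlbertForSequenceClassification.from_pretrained(model_name, num_labels=20)
dataset = load_dataset("SetFit/20_newsgroups", trust_remote_code=True)
```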

## Training and Evaluation Loss

The full training loss log is long, so only the last fifteen logged loss values are shown.

### MindSpore + MindNLP

| Epoch | Loss | Eval Loss |
| ----- | ------ | --------- |
| 2.9 | 1.5166 | |
| 2.91 | 1.3991 | |
| 2.92 | 1.4307 | |
| 2.93 | 1.3694 | |
| 2.93 | 1.3242 | |
| 2.94 | 1.4505 | |
| 2.95 | 1.4278 | |
| 2.95 | 1.3563 | |
| 2.96 | 1.4091 | |
| 2.97 | 1.5412 | |
| 2.98 | 1.2831 | |
| 2.98 | 1.4771 | |
| 2.99 | 1.3773 | |
| 3.0 | 1.2446 | |
| 3.0 | | 1.5597 |

### PyTorch + Transformers

| Epoch | Loss | Eval Loss |
| ----- | ------ | --------- |
| 2.26 | 1.1111 | |
| 2.32 | 1.1717 | |
| 2.37 | 1.1374 | |
| 2.43 | 1.1496 | |
| 2.49 | 1.1221 | |
| 2.54 | 1.0484 | |
| 2.6 | 1.1230 | |
| 2.66 | 1.0793 | |
| 2.71 | 1.1685 | |
| 2.77 | 1.0825 | |
| 2.82 | 1.1835 | |
| 2.88 | 1.0519 | |
| 2.94 | 1.0824 | |
| 2.99 | 1.1310 | |
| 3.0 | | 1.2418 |
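
The PyTorch baseline script is not included in this diff; a minimal sketch of a comparable Hugging Face `Trainer` setup (hypothetical, mirroring the hyperparameters used in mindNLPAlbert.py) would look roughly like this:

```python
# Hypothetical sketch of the PyTorch/transformers baseline, not the exact script behind the table above
from transformers import AlbertTokenizer, AlbertForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

model_name = "albert/albert-base-v1"
tokenizer = AlbertTokenizer.from_pretrained(model_name)
model = AlbertForSequenceClassification.from_pretrained(model_name, num_labels=20)

dataset = load_dataset("SetFit/20_newsgroups")
encoded = dataset.map(
    lambda ex: tokenizer(ex["text"], padding="max_length", truncation=True, max_length=512),
    batched=True,
)

args = TrainingArguments(
    output_dir="./results", evaluation_strategy="epoch", learning_rate=2e-5,
    per_device_train_batch_size=8, per_device_eval_batch_size=8,
    num_train_epochs=3, weight_decay=0.01, logging_steps=10,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["test"])
trainer.train()
print(trainer.evaluate())
```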

## Classification Test

The test inputs are taken from the evaluation dataset; their correct labels are listed in the table below, and the prediction procedure is sketched at the end of this report.

* Input texts:

| No. | text | Correct label |
| ---- | ------------------------------------------------------------ | --------------------- |
| 1 | I am a little confused on all of the models of the 88-89 bonnevilles.I have heard of the LE SE LSE SSE SSEI. Could someone tell me thedifferences are far as features or performance. I am also curious toknow what the book value is for prefereably the 89 model. And how muchless than book value can you usually get them for. In other words howmuch are they in demand this time of year. I have heard that the mid-springearly summer is the best time to buy. | rec.autos |
| 2 | I\'m not familiar at all with the format of these X-Face:thingies, butafter seeing them in some folks\' headers, I\'ve *got* to *see* them (andmaybe make one of my own)!I\'ve got dpg-viewon my Linux box (which displays uncompressed X-Faces)and I\'ve managed to compile [un]compface too... but now that I\'m *looking*for them, I can\'t seem to find any X-Face:\'s in anyones news headers! :-(Could you, would you, please send me your X-Face:headerI know* I\'ll probably get a little swamped, but I can handle it.\t...I hope. | comp.windows.x |
| 3 | In a word, yes. | alt.atheism |
| 4 | They were attacking the Iraqis to drive them out of Kuwait,a country whose citizens have close blood and business tiesto Saudi citizens. And me thinks if the US had not helped outthe Iraqis would have swallowed Saudi Arabia, too (or at least the eastern oilfields). And no Muslim country was doingmuch of anything to help liberate Kuwait and protect SaudiArabia; indeed, in some masses of citizens were demonstratingin favor of that butcher Saddam (who killed lotsa Muslims),just because he was killing, raping, and looting relativelyrich Muslims and also thumbing his nose at the West.So how would have *you* defended Saudi Arabia and rolledback the Iraqi invasion, were you in charge of Saudi Arabia???I think that it is a very good idea to not have governments have anofficial religion (de facto or de jure), because with human naturelike it is, the ambitious and not the pious will always be theones who rise to power. There are just too many people in thisworld (or any country) for the citizens to really know if a leader is really devout or if he is just a slick operator.You make it sound like these guys are angels, Ilyess. (In yourclarinet posting you edited out some stuff; was it the following???)Friday's New York Times reported that this group definitely ismore conservative than even Sheikh Baz and his followers (whothink that the House of Saud does not rule the country conservativelyenough). The NYT reported that, besides complaining that thegovernment was not conservative enough, they have:\t- asserted that the (approx. 500,000) Shiites in the Kingdom\t are apostates, a charge that under Saudi (and Islamic) law\t brings the death penalty. \t Diplomatic guy (Sheikh bin Jibrin), isn't he Ilyess?\t- called for severe punishment of the 40 or so women who\t drove in public a while back to protest the ban on\t women driving. The guy from the group who said this,\t Abdelhamoud al-Toweijri, said that these women should\t be fired from their jobs, jailed, and branded as\t prostitutes.\t Is this what you want to see happen, Ilyess? I've\t heard many Muslims say that the ban on women driving\t has no basis in the Qur'an, the ahadith, etc.\t Yet these folks not only like the ban, they want\t these women falsely called prostitutes? \t If I were you, I'd choose my heroes wisely,\t Ilyess, not just reflexively rally behind\t anyone who hates anyone you hate.\t- say that women should not be allowed to work.\t- say that TV and radio are too immoral in the Kingdom.Now, the House of Saud is neither my least nor my most favorite governmenton earth; I think they restrict religious and political reedom a lot, amongother things. I just think that the most likely replacementsfor them are going to be a lot worse for the citizens of the country.But I think the House of Saud is feeling the heat lately. In thelast six months or so I've read there have been stepped up harassingby the muttawain (religious police---*not* government) of Western womennot fully veiled (something stupid for women to do, IMO, because itsends the wrong signals about your morality). And I've read thatthey've cracked down on the few, home-based expartiate religiousgatherings, and even posted rewards in (government-owned) newspapersoffering money for anyone who turns in a group of expartiates whodare worship in their homes or any other secret place. 
So thegovernment has grown even more intolerant to try to take some ofthe wind out of the sails of the more-conservative opposition.As unislamic as some of these things are, they're just a smalltaste of what would happen if these guys overthrow the House ofSaud, like they're trying to in the long run.Is this really what you (and Rached and others in the generalwest-is-evil-zionists-rule-hate-west-or-you-are-a-puppet crowd)want, Ilyess? | talk.politics.mideast |

* MindNLP predictions before fine-tuning:

| No. | Prediction | Correct? |
| ---- | ----------- | --------- |
| 1 | alt.atheism | Incorrect |
| 2 | alt.atheism | Incorrect |
| 3 | alt.atheism | Correct |
| 4 | alt.atheism | Incorrect |



* MindNLP predictions after fine-tuning:

| No. | Prediction | Correct? |
| ---- | --------------------- | --------- |
| 1 | misc.forsale | Incorrect |
| 2 | comp.windows.x | Correct |
| 3 | talk.politics.misc | Incorrect |
| 4 | talk.politics.mideast | Correct |

* PyTorch predictions before fine-tuning:

| No. | Prediction | Correct? |
| ---- | --------- | --------- |
| 1 | sci.space | Incorrect |
| 2 | sci.space | Incorrect |
| 3 | sci.space | Incorrect |
| 4 | sci.space | Incorrect |

* PyTorch predictions after fine-tuning:

| No. | Prediction | Correct? |
| ---- | --------------------- | --------- |
| 1 | rec.autos | Correct |
| 2 | comp.windows.x | Correct |
| 3 | talk.religion.misc | Incorrect |
| 4 | talk.politics.mideast | Correct |
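
For reference, the per-example prediction that produced the tables above follows the `predict()` helper in mindNLPAlbert.py below; condensed:

```python
import mindspore

def classify(text, tokenizer, model, label_names):
    # Tokenize, run the classifier, and map the arg-max logit back to a newsgroup name
    inputs = tokenizer(text, return_tensors="ms", padding=True, truncation=True, max_length=512)
    logits = model(**inputs).logits
    return label_names[mindspore.mint.argmax(logits, dim=-1).item()]
```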
160 changes: 160 additions & 0 deletions llm/finetune/albert/mindNLPAlbert.py
@@ -0,0 +1,160 @@
import os
import mindspore
from mindnlp.transformers import AlbertTokenizer, AlbertForSequenceClassification
from mindnlp.engine import Trainer, TrainingArguments
from datasets import load_dataset

mindspore.set_context(device_target='Ascend', device_id=0, pynative_synchronize=True)
# Load the pretrained model and tokenizer
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
model_name = "albert/albert-base-v1"
tokenizer = AlbertTokenizer.from_pretrained(model_name)
model = AlbertForSequenceClassification.from_pretrained(model_name, num_labels=20)
labels = [
"alt.atheism",
"comp.graphics",
"comp.os.ms-windows.misc",
"comp.sys.ibm.pc.hardware",
"comp.sys.mac.hardware",
"comp.windows.x",
"misc.forsale",
"rec.autos",
"rec.motorcycles",
"rec.sport.baseball",
"rec.sport.hockey",
"sci.crypt",
"sci.electronics",
"sci.med",
"sci.space",
"soc.religion.christian",
"talk.politics.guns",
"talk.politics.mideast",
"talk.politics.misc",
"talk.religion.misc"
]
# Inference helper
def predict(text, tokenizer, model, true_label=None):
    # Encode the input text
    inputs = tokenizer(text, return_tensors="ms", padding=True, truncation=True, max_length=512)
    # Run the model
    outputs = model(**inputs)
    logits = outputs.logits

    # Map the arg-max logit to a label name
    predicted_class_id = mindspore.mint.argmax(logits, dim=-1).item()
    predicted_label = labels[predicted_class_id]

    # Check whether the prediction matches the true label
    is_correct = "Correct" if true_label is not None and predicted_label == true_label else "Incorrect"
    return predicted_label, is_correct
# Test samples (with ground-truth labels)
test_data = [
{"text": "I am a little confused on all of the models of the 88-89 bonnevilles.I have heard of the LE SE LSE SSE SSEI. Could someone tell me thedifferences are far as features or performance. I am also curious toknow what the book value is for prefereably the 89 model. And how muchless than book value can you usually get them for. In other words howmuch are they in demand this time of year. I have heard that the mid-springearly summer is the best time to buy."
, "true_label": "rec.autos"},
{"text": "I\'m not familiar at all with the format of these X-Face:thingies, butafter seeing them in some folks\' headers, I\'ve *got* to *see* them (andmaybe make one of my own)!I\'ve got dpg-viewon my Linux box (which displays uncompressed X-Faces)and I\'ve managed to compile [un]compface too... but now that I\'m *looking*for them, I can\'t seem to find any X-Face:\'s in anyones news headers! :-(Could you, would you, please send me your X-Face:headerI know* I\'ll probably get a little swamped, but I can handle it.\t...I hope."
, "true_label": "comp.windows.x"},
{"text": "In a word, yes."
, "true_label": "alt.atheism"},
{"text": "They were attacking the Iraqis to drive them out of Kuwait,a country whose citizens have close blood and business tiesto Saudi citizens. And me thinks if the US had not helped outthe Iraqis would have swallowed Saudi Arabia, too (or at least the eastern oilfields). And no Muslim country was doingmuch of anything to help liberate Kuwait and protect SaudiArabia; indeed, in some masses of citizens were demonstratingin favor of that butcher Saddam (who killed lotsa Muslims),just because he was killing, raping, and looting relativelyrich Muslims and also thumbing his nose at the West.So how would have *you* defended Saudi Arabia and rolledback the Iraqi invasion, were you in charge of Saudi Arabia???I think that it is a very good idea to not have governments have anofficial religion (de facto or de jure), because with human naturelike it is, the ambitious and not the pious will always be theones who rise to power. There are just too many people in thisworld (or any country) for the citizens to really know if a leader is really devout or if he is just a slick operator.You make it sound like these guys are angels, Ilyess. (In yourclarinet posting you edited out some stuff; was it the following???)Friday's New York Times reported that this group definitely ismore conservative than even Sheikh Baz and his followers (whothink that the House of Saud does not rule the country conservativelyenough). The NYT reported that, besides complaining that thegovernment was not conservative enough, they have:\t- asserted that the (approx. 500,000) Shiites in the Kingdom\t are apostates, a charge that under Saudi (and Islamic) law\t brings the death penalty. \t Diplomatic guy (Sheikh bin Jibrin), isn't he Ilyess?\t- called for severe punishment of the 40 or so women who\t drove in public a while back to protest the ban on\t women driving. The guy from the group who said this,\t Abdelhamoud al-Toweijri, said that these women should\t be fired from their jobs, jailed, and branded as\t prostitutes.\t Is this what you want to see happen, Ilyess? I've\t heard many Muslims say that the ban on women driving\t has no basis in the Qur'an, the ahadith, etc.\t Yet these folks not only like the ban, they want\t these women falsely called prostitutes? \t If I were you, I'd choose my heroes wisely,\t Ilyess, not just reflexively rally behind\t anyone who hates anyone you hate.\t- say that women should not be allowed to work.\t- say that TV and radio are too immoral in the Kingdom.Now, the House of Saud is neither my least nor my most favorite governmenton earth; I think they restrict religious and political reedom a lot, amongother things. I just think that the most likely replacementsfor them are going to be a lot worse for the citizens of the country.But I think the House of Saud is feeling the heat lately. In thelast six months or so I've read there have been stepped up harassingby the muttawain (religious police---*not* government) of Western womennot fully veiled (something stupid for women to do, IMO, because itsends the wrong signals about your morality). And I've read thatthey've cracked down on the few, home-based expartiate religiousgatherings, and even posted rewards in (government-owned) newspapersoffering money for anyone who turns in a group of expartiates whodare worship in their homes or any other secret place. 
So thegovernment has grown even more intolerant to try to take some ofthe wind out of the sails of the more-conservative opposition.As unislamic as some of these things are, they're just a smalltaste of what would happen if these guys overthrow the House ofSaud, like they're trying to in the long run.Is this really what you (and Rached and others in the generalwest-is-evil-zionists-rule-hate-west-or-you-are-a-puppet crowd)want, Ilyess?"
, "true_label": "talk.politics.mideast"}
]
# Run predictions on the test samples before fine-tuning
for data in test_data:
    text = data["text"]
    true_label = data["true_label"]
    predicted_label, is_correct = predict(text, tokenizer, model, true_label)
    # print(f"Text: {text}")
    print(f"True Label: {true_label}")
    print(f"Predicted Label: {predicted_label}")
    print(f"Prediction: {is_correct}\n")
# Load the dataset
dataset = load_dataset("SetFit/20_newsgroups", trust_remote_code=True)
print("dataset:", dataset)
# Preprocessing: tokenize the raw text
def preprocess_function(examples):
    return tokenizer(examples['text'], padding="max_length", truncation=True, max_length=512)

# Tokenize the whole dataset
encoded_dataset = dataset.map(preprocess_function, batched=True)
# Split into training and evaluation sets
train_dataset = encoded_dataset['train']
eval_dataset = encoded_dataset['test']
print("encoded_dataset:", encoded_dataset)
# print("train_dataset:", train_dataset)
# print("eval_dataset:", eval_dataset)
# print("eval_dataset[0]:", eval_dataset[0])
import numpy as np
def data_generator(dataset):
    for item in dataset:
        yield (
            np.array(item["input_ids"], dtype=np.int32),       # input_ids
            np.array(item["attention_mask"], dtype=np.int32),  # attention_mask
            np.array(item["label"], dtype=np.int32)            # label
        )
import mindspore.dataset as ds
# Convert the train/eval splits to MindSpore datasets; the column is named "labels"
# because the model's forward() expects "labels" rather than "label"
def create_mindspore_dataset(dataset, shuffle=True):
    return ds.GeneratorDataset(
        source=lambda: data_generator(dataset),  # wrap the generator in a lambda
        column_names=["input_ids", "attention_mask", "labels"],
        shuffle=shuffle
    )
train_dataset = create_mindspore_dataset(train_dataset, shuffle=True)
eval_dataset = create_mindspore_dataset(eval_dataset, shuffle=False)
print(train_dataset.create_dict_iterator())

# Training arguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    evaluation_strategy="epoch",     # evaluate at the end of every epoch
    learning_rate=2e-5,              # learning rate
    per_device_train_batch_size=8,   # per-device training batch size
    per_device_eval_batch_size=8,    # per-device evaluation batch size
    num_train_epochs=3,              # number of training epochs
    weight_decay=0.01,               # weight decay
    logging_dir='./logs',            # log directory
    logging_steps=10,                # log every 10 steps
    save_strategy="epoch",           # save a checkpoint at the end of every epoch
    save_total_limit=2,              # keep at most 2 checkpoints
    load_best_model_at_end=True,     # reload the best checkpoint when training ends
)
# Initialize the Trainer
trainer = Trainer(
    model=model,                  # model
    args=training_args,           # training arguments
    train_dataset=train_dataset,  # training set
    eval_dataset=eval_dataset,    # evaluation set
    tokenizer=tokenizer
)
# Train
trainer.train()
eval_results = trainer.evaluate()
print(f"Evaluation results: {eval_results}")
# Save the fine-tuned model
model.save_pretrained("./fine-tuned-albert-20newsgroups")
tokenizer.save_pretrained("./fine-tuned-albert-20newsgroups")
fine_tuned_model = AlbertForSequenceClassification.from_pretrained("./fine-tuned-albert-20newsgroups")
fine_tuned_tokenizer = AlbertTokenizer.from_pretrained("./fine-tuned-albert-20newsgroups")
# Test samples
test_texts = [
{"text": "I am a little confused on all of the models of the 88-89 bonnevilles.I have heard of the LE SE LSE SSE SSEI. Could someone tell me thedifferences are far as features or performance. I am also curious toknow what the book value is for prefereably the 89 model. And how muchless than book value can you usually get them for. In other words howmuch are they in demand this time of year. I have heard that the mid-springearly summer is the best time to buy."
, "true_label": "rec.autos"},
{"text": "I\'m not familiar at all with the format of these X-Face:thingies, butafter seeing them in some folks\' headers, I\'ve *got* to *see* them (andmaybe make one of my own)!I\'ve got dpg-viewon my Linux box (which displays uncompressed X-Faces)and I\'ve managed to compile [un]compface too... but now that I\'m *looking*for them, I can\'t seem to find any X-Face:\'s in anyones news headers! :-(Could you, would you, please send me your X-Face:headerI know* I\'ll probably get a little swamped, but I can handle it.\t...I hope."
, "true_label": "comp.windows.x"},
{"text": "In a word, yes."
, "true_label": "alt.atheism"},
{"text": "They were attacking the Iraqis to drive them out of Kuwait,a country whose citizens have close blood and business tiesto Saudi citizens. And me thinks if the US had not helped outthe Iraqis would have swallowed Saudi Arabia, too (or at least the eastern oilfields). And no Muslim country was doingmuch of anything to help liberate Kuwait and protect SaudiArabia; indeed, in some masses of citizens were demonstratingin favor of that butcher Saddam (who killed lotsa Muslims),just because he was killing, raping, and looting relativelyrich Muslims and also thumbing his nose at the West.So how would have *you* defended Saudi Arabia and rolledback the Iraqi invasion, were you in charge of Saudi Arabia???I think that it is a very good idea to not have governments have anofficial religion (de facto or de jure), because with human naturelike it is, the ambitious and not the pious will always be theones who rise to power. There are just too many people in thisworld (or any country) for the citizens to really know if a leader is really devout or if he is just a slick operator.You make it sound like these guys are angels, Ilyess. (In yourclarinet posting you edited out some stuff; was it the following???)Friday's New York Times reported that this group definitely ismore conservative than even Sheikh Baz and his followers (whothink that the House of Saud does not rule the country conservativelyenough). The NYT reported that, besides complaining that thegovernment was not conservative enough, they have:\t- asserted that the (approx. 500,000) Shiites in the Kingdom\t are apostates, a charge that under Saudi (and Islamic) law\t brings the death penalty. \t Diplomatic guy (Sheikh bin Jibrin), isn't he Ilyess?\t- called for severe punishment of the 40 or so women who\t drove in public a while back to protest the ban on\t women driving. The guy from the group who said this,\t Abdelhamoud al-Toweijri, said that these women should\t be fired from their jobs, jailed, and branded as\t prostitutes.\t Is this what you want to see happen, Ilyess? I've\t heard many Muslims say that the ban on women driving\t has no basis in the Qur'an, the ahadith, etc.\t Yet these folks not only like the ban, they want\t these women falsely called prostitutes? \t If I were you, I'd choose my heroes wisely,\t Ilyess, not just reflexively rally behind\t anyone who hates anyone you hate.\t- say that women should not be allowed to work.\t- say that TV and radio are too immoral in the Kingdom.Now, the House of Saud is neither my least nor my most favorite governmenton earth; I think they restrict religious and political reedom a lot, amongother things. I just think that the most likely replacementsfor them are going to be a lot worse for the citizens of the country.But I think the House of Saud is feeling the heat lately. In thelast six months or so I've read there have been stepped up harassingby the muttawain (religious police---*not* government) of Western womennot fully veiled (something stupid for women to do, IMO, because itsends the wrong signals about your morality). And I've read thatthey've cracked down on the few, home-based expartiate religiousgatherings, and even posted rewards in (government-owned) newspapersoffering money for anyone who turns in a group of expartiates whodare worship in their homes or any other secret place. 
So thegovernment has grown even more intolerant to try to take some ofthe wind out of the sails of the more-conservative opposition.As unislamic as some of these things are, they're just a smalltaste of what would happen if these guys overthrow the House ofSaud, like they're trying to in the long run.Is this really what you (and Rached and others in the generalwest-is-evil-zionists-rule-hate-west-or-you-are-a-puppet crowd)want, Ilyess?"
, "true_label": "talk.politics.mideast"}
]

# Run predictions with the fine-tuned model
for data in test_texts:
    text = data["text"]
    true_label = data["true_label"]
    predicted_label, is_correct = predict(text, fine_tuned_tokenizer, fine_tuned_model, true_label)
print(f"Text: {text}")
print(f"True Label: {true_label}")
print(f"Predicted Label: {predicted_label}")
print(f"Prediction: {is_correct}")
994 changes: 994 additions & 0 deletions llm/finetune/albert/mindnlplog.txt

Large diffs are not rendered by default.

25 changes: 25 additions & 0 deletions llm/finetune/bigbird_pagesus/README.md
@@ -0,0 +1,25 @@
# bigbird_pegasus Fine-Tuning Comparison
## train loss

Comparison of training loss during fine-tuning

| epoch | mindnlp+mindspore | transformers+torch (4060) | transformers+torch (4060, another run) |
| ----- | ----------------- | ------------------------- | -------------------------------------- |
| 1 | 2.0958 | 8.7301 |5.4650 |
| 2 | 1.969 | 8.1557 |4.6890 |
| 3 | 1.8755 | 7.7516 |4.2572 |
| 4 | 1.8264 | 7.5017 |4.0263 |
| 5 | 1.7349 | 7.2614 |3.9444 |
| 6 | 1.678 | 7.0559 |3.8428 |
| 7 | 1.6937 | 6.8405 |3.7187 |
| 8 | 1.654 | 6.7297 |3.7192 |
| 9 | 1.6365 | 6.7136 |3.5434 |
| 10 | 1.7003 | 6.6279 |3.5881 |

## eval loss

Comparison of evaluation loss after the first epoch

| epoch | mindnlp+mindspore | transformers+torch (4060) | transformers+torch (4060, another run) |
| ----- | ----------------- | ------------------------- | -------------------------------------- |
| 1 | 2.1257965564727783 | 6.3235931396484375 |4.264792442321777 |
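
The fine-tuning notebook itself is not rendered in this diff. For orientation only, a BigBird-Pegasus fine-tune with MindNLP follows the same Trainer pattern as the other scripts in this PR; the checkpoint below is a placeholder assumption, not necessarily the one used for the tables above:

```python
# Hypothetical skeleton; checkpoint and datasets are placeholders, not the notebook's actual choices
from mindnlp.transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration
from mindnlp.engine import Trainer, TrainingArguments

model_name = "google/bigbird-pegasus-large-arxiv"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BigBirdPegasusForConditionalGeneration.from_pretrained(model_name)

args = TrainingArguments(
    output_dir="./bigbird_pegasus_finetuned",
    num_train_epochs=10,
    per_device_train_batch_size=1,
    evaluation_strategy="epoch",
    learning_rate=5e-5,
)
# train/eval datasets would be tokenized summarization pairs, converted with
# mindspore.dataset.GeneratorDataset as in the other scripts in this PR:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```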
1,095 changes: 1,095 additions & 0 deletions llm/finetune/bigbird_pagesus/mindNLPDatatricksAuto.ipynb

Large diffs are not rendered by default.

@@ -0,0 +1,55 @@
# Blenderbot_Small Fine-Tuning on Synthetic-Persona-Chat

## Hardware

Resource specification: NPU: 1 × Ascend 910B (64 GB device memory), CPU: 24 cores, RAM: 192 GB

AI computing center: Wuhan AI Computing Center

Image: mindspore_2_5_py311_cann8

PyTorch training hardware: NVIDIA RTX 3090

## Model and Dataset

Model: "facebook/blenderbot_small-90M"

Dataset: "google/Synthetic-Persona-Chat"

## Training Loss

| epoch | mindspore+mindnlp | PyTorch+transformers |
| ----- | ----------------- | -------------------- |
| 1 | 0.1737 | 0.2615 |
| 2 | 0.1336 | 0.1269 |
| 3 | 0.1099 | 0.0987 |

## Evaluation Loss

| epoch | mindspore+mindnlp | PyTorch+transformers |
| ----- | ------------------- | -------------------- |
| 1 | 0.16312436759471893 | 0.160710409283638 |
| 2 | 0.15773458778858185 | 0.15692724287509918 |
| 3 | 0.15398454666137695 | 0.1593361645936966 |
| 4 | 0.15398454666137695 | 0.1593361645936966 |

## Dialogue Test

* Input:

Nice to meet you too. What are you interested in?

* MindNLP response before fine-tuning:

i ' m not really sure . i ' ve always wanted to go back to school , but i don ' t know what i want to do yet .

* MindNLP response after fine-tuning:

user 2: i'm interested in a lot of things, but my main interests are music, art, and music. i also like to play video games, go to the movies, and spend time with my friends and family. my favorite video games are the legend of zelda series, and my favorite game is the witcher 3. name) what breath my his their i they ] include yes when philip boarity

* PyTorch response before fine-tuning:

i ' m not really sure . i ' ve always wanted to go back to school , but i don ' t know what i want to do yet .

* PyTorch response after fine-tuning:

user 2: i ' m interested in a lot of things , but my favorite ones are probably history and language . what do you like to do for fun ? hades is one of my favorite characters . hades is also my favorite character . hades namegardenblem pola litz strönape ception ddie ppon plata yder foundry patel fton darted sler bbins vili atsu ović endra scoe barons
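
The dialogue test above is produced by tokenizing the prompt, calling `generate()`, and decoding, exactly as in the mindNLPBlenderbotsmallPersona.py script in this PR; condensed:

```python
# Condensed from mindNLPBlenderbotsmallPersona.py in this PR
from mindnlp.transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

model_name = "facebook/blenderbot_small-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_name)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_name)

prompt = "Nice to meet you too. What are you interested in?"
tokens = tokenizer([prompt], return_tensors="ms")
reply_ids = model.generate(**tokens)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```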
80 changes: 80 additions & 0 deletions llm/finetune/blenderbot_small/Blenderbot_Small的coqa微调.md
@@ -0,0 +1,80 @@
# Blenderbot_Small Fine-Tuning on CoQA

## Hardware

Resource specification: NPU: 1 × Ascend 910B (64 GB device memory), CPU: 24 cores, RAM: 192 GB

AI computing center: Wuhan AI Computing Center

Image: mindspore_2_5_py311_cann8

PyTorch training hardware: NVIDIA RTX 3090

## Model and Dataset

Model: "facebook/blenderbot_small-90M"

Dataset: "stanfordnlp/coqa" (loaded by the accompanying mindNLPBlenderbotsmallCopa.py script)
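
Each CoQA record contains one story with several questions and answers. Only the first question-answer pair is used here: the story and the first question are concatenated as the model input, and the first answer is the target (see `preprocess_function` in mindNLPBlenderbotsmallCopa.py below). A condensed sketch:

```python
# Condensed from preprocess_function in mindNLPBlenderbotsmallCopa.py
def preprocess_function(example):
    question = example["questions"][0]            # first question
    answer = example["answers"]["input_text"][0]  # first answer
    inputs = (example["story"] + " " + question).replace('"', '')
    return {"input_ids": inputs, "labels": answer.replace('"', '')}
```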

## Training Loss

| epoch | mindspore+mindnlp | PyTorch+transformers |
| ----- | ----------------- | -------------------- |
| 1 | 0.0117 | 0.3391 |
| 2 | 0.0065 | 0.0069 |
| 3 | 0.0041 | 0.0035 |
| 4 | 0.0027 | |
| 5 | 0.0017 | |
| 6 | 0.0012 | |
| 7 | 0.0007 | |
| 8 | 0.0005 | |
| 9 | 0.0003 | |
| 10 | 0.0002 | |

## Evaluation Loss

| epoch | mindspore+mindnlp | PyTorch+transformers |
| ----- | -------------------- | -------------------- |
| 1 | 0.010459424927830696 | 0.010080045089125633 |
| 2 | 0.010958473198115826 | 0.008667134679853916 |
| 3 | 0.011061458848416805 | 0.00842051301151514 |
| 4 | 0.011254088021814823 | 0.00842051301151514 |
| 5 | 0.011891312897205353 | |
| 6 | 0.012321822345256805 | |
| 7 | 0.012598296627402306 | |
| 8 | 0.01246054656803608 | |
| 9 | 0.0124361552298069 | |
| 10 | 0.01264810748398304 | |

## Dialogue Test

The prompt is the first question from the evaluation set; the fine-tuned output does not look very good.

* Input:

The Vatican Apostolic Library, more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, located in Vatican City. Formally established in 1475, although it is much older, it is one of the oldest libraries in the world and contains one of the most significant collections of historical texts. It has 75,000 codices from throughout history, as well as 1.1 million printed books, which include some 8,500 incunabula.

The Vatican Library is a research library for history, law, philosophy, science and theology. The Vatican Library is open to anyone who can document their qualifications and research needs. Photocopies for private study of pages from books published between 1801 and 1990 can be requested in person or by mail.

In March 2014, the Vatican Library began an initial four-year project of digitising its collection of manuscripts, to be made available online.

The Vatican Secret Archives were separated from the library at the beginning of the 17th century; they contain another 150,000 items.

Scholars have traditionally divided the history of the library into five periods, Pre-Lateran, Lateran, Avignon, Pre-Vatican and Vatican.

The Pre-Lateran period, comprising the initial days of the library, dated from the earliest days of the Church. Only a handful of volumes survive from this period, though some are very significant.When was the Vat formally opened?

* MindNLP response before fine-tuning:

wow , that ' s a lot of information ! i ' ll have to check it out !

* MindNLP response after fine-tuning:

it was formally established in 1475 remarked wang commenced baxter vii affiliate xii ) detained amid xvi scarcely spokesman murmured pradesh condemned himweekriedly upheld kilometers ywood longitude reportedly unarmed sworth congressional quarreandrea according monsieur constituent zhang smiled ɪfellows combe mitt

* PyTorch response before fine-tuning:

wow , that ' s a lot of information ! i ' ll have to check it out !

* PyTorch response after fine-tuning:

1475 monsieur palermo pradesh ˈprincipality pali turbines constituent gallagher xii ɪxv odi pauline ɒgregory coefficient julien deutsche sbury roberto henrietta əenko militants gmina podium hya taliban hague ːkensington poole inmate livery habsburg longitude reid lieu@@
145 changes: 145 additions & 0 deletions llm/finetune/blenderbot_small/mindNLPBlenderbotsmallCopa.py
@@ -0,0 +1,145 @@
from mindnlp.transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer
from mindnlp.engine import Trainer, TrainingArguments
from datasets import load_dataset, load_from_disk
import mindspore as ms
import os

# Set execution mode and device
ms.set_context(mode=ms.PYNATIVE_MODE, device_target="Ascend")

# Point HF downloads at the mirror
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
# Load the model and tokenizer
print("Loading model and tokenizer")
model_name = "facebook/blenderbot_small-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_name)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_name)
print("Model and tokenizer loaded")
# Check the original model's output
input = "The Vatican Apostolic Library, more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, located in Vatican City. Formally established in 1475, although it is much older, it is one of the oldest libraries in the world and contains one of the most significant collections of historical texts. It has 75,000 codices from throughout history, as well as 1.1 million printed books, which include some 8,500 incunabula. \n\nThe Vatican Library is a research library for history, law, philosophy, science and theology. The Vatican Library is open to anyone who can document their qualifications and research needs. Photocopies for private study of pages from books published between 1801 and 1990 can be requested in person or by mail. \n\nIn March 2014, the Vatican Library began an initial four-year project of digitising its collection of manuscripts, to be made available online. \n\nThe Vatican Secret Archives were separated from the library at the beginning of the 17th century; they contain another 150,000 items. \n\nScholars have traditionally divided the history of the library into five periods, Pre-Lateran, Lateran, Avignon, Pre-Vatican and Vatican. \n\nThe Pre-Lateran period, comprising the initial days of the library, dated from the earliest days of the Church. Only a handful of volumes survive from this period, though some are very significant.When was the Vat formally opened?"
print("input question:", input)
input_tokens = tokenizer([input], return_tensors="ms")
output_tokens = model.generate(**input_tokens)
print("output answer:", tokenizer.batch_decode(output_tokens, skip_special_tokens=True)[0])

# # Set a padding token (BlenderbotSmall has no pad_token by default)
# # tokenizer.pad_token = tokenizer.eos_token  # use eos_token as the padding token
# # model.config.pad_token_id = tokenizer.eos_token_id

print("Loading dataset")
# Path where the preprocessed dataset is cached
dataset_path = "./dataset_valid_preprocessed"
# Reuse the preprocessed dataset if it already exists
if os.path.exists(dataset_path):
    # Load the preprocessed dataset
    dataset_train = load_from_disk("./dataset_train_preprocessed")
    dataset_valid = load_from_disk("./dataset_valid_preprocessed")
else:
dataset = load_dataset("stanfordnlp/coqa")
print("dataset finished\n")
print("dataset:", dataset)
print("\ndataset[train][0]:", dataset["train"][0])
print("\ndataset[validation][0]:", dataset["validation"][0])
dataset_train = dataset["train"]
dataset_valid = dataset["validation"]
    # Preprocessing: each CoQA example is one story with several questions and answers.
    # Only the first question and first answer are used: story + first question is the
    # model input, and the first answer is the target.
    def preprocess_function(examples):
        # First question text
        first_question = examples['questions'][0]
        # First answer text
        first_answer = examples['answers']['input_text'][0]
        # Concatenate the story and the first question as the model input
        inputs = examples['story'] + " " + first_question
        # Strip stray quotation marks
        inputs = inputs.replace('"', '')
        # Use the first answer as the target
        labels = first_answer
        labels = labels.replace('"', '')
        return {'input_ids': inputs, 'labels': labels}

    def tokenize_function(examples):
        # Tokenize the inputs
        model_inputs = tokenizer(examples['input_ids'], max_length=512, truncation=True, padding="max_length")
        # Tokenize the targets
        with tokenizer.as_target_tokenizer():
            labels = tokenizer(examples['labels'], max_length=512, truncation=True, padding="max_length")
        model_inputs["labels"] = labels["input_ids"]
        return model_inputs
    # Apply the preprocessing functions
dataset_train = dataset_train.map(preprocess_function, batched=False)
dataset_train = dataset_train.map(tokenize_function, batched=True)
dataset_train = dataset_train.remove_columns(["source", "story", "questions", "answers"])

dataset_valid = dataset_valid.map(preprocess_function, batched=False)
dataset_valid = dataset_valid.map(tokenize_function, batched=True)
dataset_valid = dataset_valid.remove_columns(["source", "story", "questions", "answers"])

dataset_train.save_to_disk("./dataset_train_preprocessed")
dataset_valid.save_to_disk("./dataset_valid_preprocessed")
print("dataset_train_tokenizerd:", dataset_train)

print("转化为mindspore格式数据集")
import numpy as np
def data_generator(dataset):
for item in dataset:
yield (
np.array(item["input_ids"], dtype=np.int32),
np.array(item["attention_mask"], dtype=np.int32),
np.array(item["labels"], dtype=np.int32)
)
import mindspore.dataset as ds
def create_mindspore_dataset(dataset, shuffle=True):
return ds.GeneratorDataset(
        source=lambda: data_generator(dataset),  # wrap the generator in a lambda
column_names=["input_ids", "attention_mask", "labels"],
shuffle=shuffle,
num_parallel_workers=1
)
dataset_train_tokenized = create_mindspore_dataset(dataset_train, shuffle=True)
dataset_valid_tokenized = create_mindspore_dataset(dataset_valid, shuffle=False)

TOKENS = 20
EPOCHS = 10
BATCH_SIZE = 4
training_args = TrainingArguments(
output_dir='./MindNLPblenderbot_coqa_finetuned',
overwrite_output_dir=True,
num_train_epochs=EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
save_steps=500, # Save checkpoint every 500 steps
save_total_limit=2, # Keep only the last 2 checkpoints
logging_dir="./mindsporelogs", # Directory for logs
logging_steps=100, # Log every 100 steps
logging_strategy="epoch",
evaluation_strategy="epoch",
eval_steps=500, # Evaluation frequency
warmup_steps=100,
learning_rate=5e-5,
weight_decay=0.01, # Weight decay
)

trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset_train_tokenized,
eval_dataset=dataset_valid_tokenized
)
# Start training
print("Starting training")
trainer.train()
eval_results = trainer.evaluate()
print(f"Evaluation results: {eval_results}")
model.save_pretrained("./blenderbot_coqa_finetuned")
tokenizer.save_pretrained("./blenderbot_coqa_finetuned")
fine_tuned_model = BlenderbotSmallForConditionalGeneration.from_pretrained("./blenderbot_coqa_finetuned")
fine_tuned_tokenizer = BlenderbotSmallTokenizer.from_pretrained("./blenderbot_coqa_finetuned")


print("再次测试对话")
input = "The Vatican Apostolic Library, more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, located in Vatican City. Formally established in 1475, although it is much older, it is one of the oldest libraries in the world and contains one of the most significant collections of historical texts. It has 75,000 codices from throughout history, as well as 1.1 million printed books, which include some 8,500 incunabula. \n\nThe Vatican Library is a research library for history, law, philosophy, science and theology. The Vatican Library is open to anyone who can document their qualifications and research needs. Photocopies for private study of pages from books published between 1801 and 1990 can be requested in person or by mail. \n\nIn March 2014, the Vatican Library began an initial four-year project of digitising its collection of manuscripts, to be made available online. \n\nThe Vatican Secret Archives were separated from the library at the beginning of the 17th century; they contain another 150,000 items. \n\nScholars have traditionally divided the history of the library into five periods, Pre-Lateran, Lateran, Avignon, Pre-Vatican and Vatican. \n\nThe Pre-Lateran period, comprising the initial days of the library, dated from the earliest days of the Church. Only a handful of volumes survive from this period, though some are very significant.When was the Vat formally opened?"
print("input question:", input)
input_tokens = fine_tuned_tokenizer([input], return_tensors="ms")
output_tokens = fine_tuned_model.generate(**input_tokens)
print("output answer:", fine_tuned_tokenizer.batch_decode(output_tokens, skip_special_tokens=True)[0])
165 changes: 165 additions & 0 deletions llm/finetune/blenderbot_small/mindNLPBlenderbotsmallPersona.py
@@ -0,0 +1,165 @@
# !pip install mindnlp
# !pip install mindspore==2.4
# !export LD_PRELOAD=$LD_PRELOAD:/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/torch.libs/libgomp-74ff64e9.so.1.0.0
# !yum install libsndfile
from mindnlp.transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer
from mindnlp.engine import Trainer, TrainingArguments
from datasets import load_dataset, load_from_disk
import mindspore as ms
import os
# Set execution mode and device
ms.set_context(mode=ms.PYNATIVE_MODE, device_target="Ascend")
# Point HF downloads at the mirror
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
# Load the model and tokenizer
print("Loading model and tokenizer")
model_name = "facebook/blenderbot_small-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_name)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_name)
print("Model and tokenizer loaded")
# Check the original model's output
input = "Nice to meet you too. What are you interested in?"
print("input question:", input)
input_tokens = tokenizer([input], return_tensors="ms")
output_tokens = model.generate(**input_tokens)
print("output answer:", tokenizer.batch_decode(output_tokens, skip_special_tokens=True)[0])
# Set a padding token (BlenderbotSmall has no pad_token by default)
# tokenizer.pad_token = tokenizer.eos_token  # use eos_token as the padding token
# model.config.pad_token_id = tokenizer.eos_token_id
print("Loading dataset")
# Load the Synthetic-Persona-Chat dataset
# Path where the preprocessed dataset is cached
dataset_path = "./dataset_valid_preprocessed"
# Reuse the preprocessed dataset if it already exists
if os.path.exists(dataset_path):
    # Load the preprocessed dataset
    dataset_train = load_from_disk("./dataset_train_preprocessed")
    dataset_valid = load_from_disk("./dataset_valid_preprocessed")
else:
    dataset = load_dataset("google/Synthetic-Persona-Chat")
print("dataset finished")

print("dataset:", dataset)
print("dataset['train'][0]:", dataset["train"][0])
dataset_train = dataset["train"]
dataset_valid = dataset["validation"]
print("dataset_train:", dataset_train)
print("dataset_train['Best Generated Conversation'][0]:\n",
dataset_train["Best Generated Conversation"][0])
print("dataset_train['user 1 personas'][0]:",
dataset_train["user 1 personas"][0])
print("dataset_train['user 2 personas'][0]:",
dataset_train["user 2 personas"][0])
print("dataset_train.column_names:",
dataset_train.column_names)
    # Preprocessing: turn each conversation into (context, reply) pairs
    def format_dialogue(examples):
        inputs, targets = [], []
        for conversation in examples["Best Generated Conversation"]:
            # Split the conversation into lines
            lines = conversation.split("\n")
            # Build context/reply pairs
            # print("lines_range:", len(lines) - 1)
            for i in range(len(lines) - 1):
                context = "\n".join(lines[:i+1])  # context: the current line and everything before it
                reply = lines[i+1]  # the next line is the reply
                context = context.replace("User 1: ", "")
                inputs.append(context.strip())
                context = context.replace("User 2: ", "")
                targets.append(reply.strip())
        # print(f"Best Generated Conversation: {len(examples['Best Generated Conversation'])}")
        # print(f"user 1 personas: {len(examples['user 1 personas'])}")
        # print(f"inputs length: {len(inputs)}, targets length: {len(targets)}")
        return {"input": inputs, "target": targets}

    # Apply the preprocessing function
dataset_train = dataset_train.map(format_dialogue, batched=True
, remove_columns=["user 1 personas"
, "user 2 personas"
, "Best Generated Conversation"])
dataset_valid = dataset_valid.map(format_dialogue, batched=True
, remove_columns=["user 1 personas"
, "user 2 personas"
, "Best Generated Conversation"])
    # Save the preprocessed dataset
dataset_train.save_to_disk("./dataset_train_preprocessed")
dataset_valid.save_to_disk("./dataset_valid_preprocessed")
print("tokenizer数据集")
# 定义数据集保存路径
dataset_path = "./datasetTokenized_train_preprocessed"
# Reuse the tokenized dataset if it already exists
if os.path.exists(dataset_path):
    # Load the tokenized dataset
dataset_train_tokenized = load_from_disk("./datasetTokenized_train_preprocessed")
dataset_valid_tokenized= load_from_disk("./datasetTokenized_valid_preprocessed")
else:
    # Tokenization
def tokenize_function(examples):
model_inputs = tokenizer(
examples["input"],
max_length=128,
truncation=True,
padding="max_length",
)
with tokenizer.as_target_tokenizer():
labels = tokenizer(
examples["target"],
max_length=128,
truncation=True,
padding="max_length",
)
model_inputs["labels"] = labels["input_ids"]#获得"labels" "input_ids" "attention_mask"
return model_inputs

dataset_train_tokenized = dataset_train.map(tokenize_function, batched=True)
dataset_valid_tokenized = dataset_valid.map(tokenize_function, batched=True)
    # Save the tokenized dataset
dataset_train_tokenized.save_to_disk("./datasetTokenized_train_preprocessed")
dataset_valid_tokenized.save_to_disk("./datasetTokenized_valid_preprocessed")
# Training hyperparameters
TOKENS = 20
EPOCHS = 10
BATCH_SIZE = 4
# Define training arguments
training_args = TrainingArguments(
output_dir='./Mindsporeblenderbot_persona_finetuned',
overwrite_output_dir=True,
num_train_epochs=EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,

save_steps=500, # Save checkpoint every 500 steps
save_total_limit=2, # Keep only the last 2 checkpoints
logging_dir="./mindsporelogs", # Directory for logs
logging_steps=100, # Log every 100 steps
logging_strategy="epoch",
evaluation_strategy="epoch",
eval_steps=500, # Evaluation frequency
warmup_steps=100,
learning_rate=5e-5,
weight_decay=0.01, # Weight decay
)

# Trainer setup
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset_train_tokenized,
eval_dataset=dataset_valid_tokenized
)
# Start training
trainer.train()
eval_results = trainer.evaluate()
print(f"Evaluation results: {eval_results}")
# Save the fine-tuned model
model.save_pretrained("./blenderbot_dialogue_finetuned")
tokenizer.save_pretrained("./blenderbot_dialogue_finetuned")
fine_tuned_model = BlenderbotSmallForConditionalGeneration.from_pretrained("./blenderbot_dialogue_finetuned")
fine_tuned_tokenizer = BlenderbotSmallTokenizer.from_pretrained("./blenderbot_dialogue_finetuned")
# Test the dialogue again
print("Testing the dialogue again")
input = "Nice to meet you too. What are you interested in?"
print("input question:", input)
input_tokens = fine_tuned_tokenizer([input], return_tensors="ms")
output_tokens = fine_tuned_model.generate(**input_tokens)
print("output answer:", fine_tuned_tokenizer.batch_decode(output_tokens, skip_special_tokens=True)[0])
138 changes: 138 additions & 0 deletions llm/finetune/blenderbot_small/mindNLPCopaLog.txt

Large diffs are not rendered by default.

74 changes: 74 additions & 0 deletions llm/finetune/blenderbot_small/mindNLPPersonaTenLog.txt
@@ -0,0 +1,74 @@
(MindSpore) [ma-user work]$python OldmindNLPBlenderbotsmall.py
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 1.252 seconds.
Prefix dict has been built successfully.
Loading model and tokenizer
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindnlp/transformers/tokenization_utils_base.py:1526: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted, and will be then set to `False` by default.
warnings.warn(
BlenderbotSmallForConditionalGeneration has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`.`PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
Model and tokenizer loaded
input question: Nice to meet you too. What are you interested in?
output answer: i ' m not really sure . i ' ve always wanted to go back to school , but i don ' t know what i want to do yet .
Loading dataset
Tokenizing dataset
dataset_train_tokenized: Dataset({
features: ['input', 'target', 'input_ids', 'attention_mask', 'labels'],
num_rows: 23589
})
dataset_valid_tokenized: Dataset({
features: ['input', 'target', 'input_ids', 'attention_mask', 'labels'],
num_rows: 2687
})
Starting training
0%| | 0/8847 [00:00<?, ?it/s] 6%|█▌ | 500/8847 [04:00<1:05:14, 2.13it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
11%|███ | 1000/8847 [08:29<1:05:43, 1.99it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
17%|████▉ | 1500/8847 [12:52<57:16, 2.14it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
23%|██████▌ | 2000/8847 [17:22<56:59, 2.00it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
28%|████████▏ | 2500/8847 [21:44<53:00, 2.00it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
{'loss': 0.1737, 'learning_rate': 3.371441637132731e-05, 'epoch': 1.0}
33%|█████████▋ | 2949/8847 [25:40<49:59, 1.97it/s]{'eval_loss': 0.16312436759471893, 'eval_runtime': 25.1526, 'eval_samples_per_second': 13.358, 'eval_steps_per_second': 1.67, 'epoch': 1.0}
34%|█████████▊ | 3000/8847 [26:30<48:38, 2.00it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
40%|███████████▍ | 3500/8847 [30:47<43:32, 2.05it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
45%|█████████████ | 4000/8847 [35:04<39:51, 2.03it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
51%|██████████████▊ | 4500/8847 [39:22<42:30, 1.70it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
57%|████████████████▍ | 5000/8847 [43:37<31:08, 2.06it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
62%|██████████████████ | 5500/8847 [47:54<27:22, 2.04it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
{'loss': 0.1336, 'learning_rate': 1.6857208185663656e-05, 'epoch': 2.0}
{'eval_loss': 0.15773458778858185, 'eval_runtime': 22.8097, 'eval_samples_per_second': 14.731, 'eval_steps_per_second': 1.841, 'epoch': 2.0}
68%|███████████████████▋ | 6000/8847 [52:33<23:08, 2.05it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
73%|█████████████████████▎ | 6500/8847 [56:50<19:13, 2.03it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
79%|█████████████████████▎ | 7000/8847 [1:01:21<19:25, 1.58it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
85%|███████████████████████████████████████████████████████████████████████████████████████▎ | 7500/8847 [1:07:00<15:30, 1.45it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
90%|█████████████████████████████████████████████████████████████████████████████████████████████▏ | 8000/8847 [1:12:37<08:42, 1.62it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
96%|██████████████████████████████████████████████████████████████████████████████████████████████████▉ | 8500/8847 [1:18:12<03:45, 1.54it/s]Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
{'loss': 0.1099, 'learning_rate': 0.0, 'epoch': 3.0}
{'eval_loss': 0.15398454666137695, 'eval_runtime': 32.5334, 'eval_samples_per_second': 10.328, 'eval_steps_per_second': 1.291, 'epoch': 3.0}
{'train_runtime': 4966.7027, 'train_samples_per_second': 14.25, 'train_steps_per_second': 1.781, 'train_loss': 0.1390861510368098, 'epoch': 3.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 8847/8847 [1:22:46<00:00, 1.78it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 336/336 [00:29<00:00, 11.44it/s]
Evaluation results: {'eval_loss': 0.15398454666137695, 'eval_runtime': 29.6095, 'eval_samples_per_second': 11.348, 'eval_steps_per_second': 1.418, 'epoch': 3.0}
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file instead.
Non-default generation parameters: {'max_length': 128, 'min_length': 20, 'num_beams': 10, 'length_penalty': 0.65, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
Testing the dialogue again
input question: Nice to meet you too. What are you interested in?
output answer: user 2: i'm interested in a lot of things, but my main interests are music, art, and music. i also like to play video games, go to the movies, and spend time with my friends and family. my favorite video games are the legend of zelda series, and my favorite game is the witcher 3. name) what breath my his their i they ] include yes when philip boarity