Traceback (most recent call last):
  File "run.py", line 28, in <module>
    train_data, dev_data, test_data = build_dataset(config)
  File "D:\PYTHON_PROGRAMME\Bert-Chinese-Text-Classification\utils.py", line 36, in build_dataset
    train = load_dataset(config.train_path, config.pad_size)
  File "D:\PYTHON_PROGRAMME\Bert-Chinese-Text-Classification\utils.py", line 20, in load_dataset
    token = config.tokenizer.tokenize(content)
AttributeError: 'NoneType' object has no attribute 'tokenize'

How do I fix this error?
I'm getting this error too. Have you solved it?
Not yet. Have you solved it by now?
@wangling6666 @Cgetier520990 I just pulled the code and ran into the same problem. Downloading vocab.txt from huggingface and putting it in the pretrained-model folder fixes it.
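For anyone else hitting this: the AttributeError means config.tokenizer is None, which typically happens when the BERT vocab file cannot be found, since older tokenizer libraries such as pytorch_pretrained_bert return None from BertTokenizer.from_pretrained() instead of raising. Below is a minimal sketch for checking the setup; it assumes the project loads its tokenizer that way and uses an illustrative ./bert_pretrain path, so adjust the names to whatever config.bert_path points to.

```python
# Minimal check sketch, assuming a pytorch_pretrained_bert-style BertTokenizer.
# "./bert_pretrain" is illustrative; use the directory your config.bert_path references.
import os

from pytorch_pretrained_bert import BertTokenizer

bert_path = './bert_pretrain'
vocab_file = os.path.join(bert_path, 'vocab.txt')

# vocab.txt must exist here, e.g. the bert-base-chinese vocab downloaded from huggingface.
if not os.path.exists(vocab_file):
    raise FileNotFoundError(f'missing {vocab_file}; download vocab.txt from huggingface '
                            'and place it in the model folder')

tokenizer = BertTokenizer.from_pretrained(bert_path)
# Older libraries return None (after logging an error) when the vocab cannot be resolved,
# which is what later produces "'NoneType' object has no attribute 'tokenize'".
if tokenizer is None:
    raise RuntimeError('tokenizer failed to load; check the files under bert_path')

print(tokenizer.tokenize('今天天气不错'))
```

If the repo vendors its own copy of the library (e.g. a local pytorch_pretrained package), swap the import accordingly; the check itself is the same.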