Unable to finetune transformer based ner model after initial tuning #13394
-
Hi! It would be helpful if you could provide some more information so we can actually replicate what you're doing. Can you share the config files, the commands you ran, and the output you got, including the full error stack trace? (Please copy/paste rather than screenshot.) That will help us try to help you ;-)
-
Hello, here is my config:

```ini
[system]
[nlp]
[components]
[components.ner]
[components.ner.model]
[components.ner.model.tok2vec]
[components.transformer]
[components.transformer.model]
[components.transformer.model.get_spans]
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
[corpora.train]
[training]
[training.batcher]
[training.logger]
[training.optimizer]
[training.optimizer.learn_rate]
[training.score_weights]
[pretraining]
[initialize]
[initialize.components]
[initialize.tokenizer]
```

Here's the CLI output:

```
=========================== Initializing pipeline ===========================
```
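Only the section headers of the config come through above. For reference, a minimal sketch of what the transformer and NER component blocks usually contain in a spaCy v3 config of this shape is shown below; the base model name and all values are illustrative assumptions, not the poster's actual settings.

```ini
# Sketch of the component blocks in a spaCy v3 transformer + NER config.
# All values are illustrative defaults, not the poster's actual settings.
[components.transformer]
factory = "transformer"

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"   # assumed base model

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.transformer.model.tokenizer_config]
use_fast = true

[components.ner]
factory = "ner"

[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false
nO = null

[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0

[components.ner.model.tok2vec.pooling]
@layers = "reduce_mean.v1"
```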
-
How to reproduce the behaviour
1. Create a transformer-based NER model.
2. Train it on your data using the config and the CLI, which auto-saves the trained pipeline.
3. Create a new config file that points to the existing model (see the sketch after this list).
4. Trigger training again using the CLI.
5. You will get a missing config.json error.
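As a sketch of steps 3 and 4, assuming the first run saved its pipeline to training/model-best (the actual paths and filenames are not given in the thread), the second config would source the already-trained components from that directory, and training would be triggered again with spacy train:

```ini
# config_resume.cfg -- assumed filename; only the blocks relevant to sourcing
# are shown. Components are loaded from the pipeline saved by the first run.
[paths]
train = null
dev = null

[components.transformer]
source = "training/model-best"

[components.ner]
source = "training/model-best"
```

```bash
# Assumed command and paths, not copied from the thread.
python -m spacy train config_resume.cfg \
  --output training_resume \
  --paths.train corpus/train.spacy \
  --paths.dev corpus/dev.spacy
```

One guess, given the symptom: if the new config instead points components.transformer.model.name at the saved spaCy pipeline directory, the underlying transformers library will look for a Hugging Face config.json there and fail, since a spaCy pipeline directory is not a Hugging Face model directory. That matches the error described above, but it is an assumption rather than something confirmed in the thread.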
Your Environment