Morning,
I used your Speech Emotion Recognition (Wav2Vec 2.0) notebook with another dataset and got an error during training.
Could you help me, please? The code and error are below.
The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForSpeechClassification.forward` and have been ignored: language, audio_name, path.
***** Running training *****
Num examples = 10769
Num Epochs = 50
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 1
Total optimization steps = 134650
/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /opt/conda/conda-bld/pytorch_1623448278899/work/aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
Attempted to log scalar metric loss:
0.6984
Attempted to log scalar metric learning_rate:
9.999257333828444e-05
Attempted to log scalar metric epoch:
0.0
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForSpeechClassification.forward` and have been ignored: language, audio_name, path.
***** Running Evaluation *****
Num examples = 5799
Batch size = 4
Attempted to log scalar metric eval_loss:
0.4974852204322815
Attempted to log scalar metric eval_accuracy:
0.8134161233901978
Attempted to log scalar metric eval_runtime:
296.3331
Attempted to log scalar metric eval_samples_per_second:
19.569
Attempted to log scalar metric eval_steps_per_second:
4.893
Attempted to log scalar metric epoch:
0.0
Saving model checkpoint to MODEL/wav2vec2-xlsr-speech-emotion-recognition_dropout-0.5_3/checkpoint-10
Configuration saved in MODEL/wav2vec2-xlsr-speech-emotion-recognition_dropout-0.5_3/checkpoint-10/config.json
Model weights saved in MODEL/wav2vec2-xlsr-speech-emotion-recognition_dropout-0.5_3/checkpoint-10/pytorch_model.bin
Configuration saved in MODEL/wav2vec2-xlsr-speech-emotion-recognition_dropout-0.5_3/checkpoint-10/preprocessor_config.json
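A note on the log itself: the "columns ... have been ignored" lines near the top are informational, not the error. Trainer drops dataset columns that the model's forward() signature does not accept. If you prefer to drop them explicitly, a minimal sketch (remove_columns is the standard datasets method; the train_dataset variable name is an assumption from context):

# Optional: strip metadata columns before handing the dataset to Trainer,
# mirroring what Trainer already does with remove_unused_columns=True.
train_dataset = train_dataset.remove_columns(["language", "audio_name", "path"])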
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-32-3435b262f1ae> in <module>
----> 1 trainer.train()

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1330                     tr_loss_step = self.training_step(model, inputs)
   1331                 else:
-> 1332                     tr_loss_step = self.training_step(model, inputs)
   1333
   1334                 if (

<ipython-input-29-878b4353167f> in training_step(self, model, inputs)
     43         if self.use_amp:
     44             with autocast():
---> 45                 loss = self.compute_loss(model, inputs)
     46         else:
     47             loss = self.compute_loss(model, inputs)

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
   1921         else:
   1922             labels = None
-> 1923         outputs = model(**inputs)
   1924         # Save past state if it exists
   1925         # TODO: this needs to be fixed and made cleaner later.

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-16-dd9fe3ea0f13> in forward(self, input_values, attention_mask, output_attentions, output_hidden_states, return_dict, labels)
     70         ):
     71         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
---> 72         outputs = self.wav2vec2(
     73             input_values,
     74             attention_mask=attention_mask,

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values, attention_mask, mask_time_indices, output_attentions, output_hidden_states, return_dict)
   1285
   1286         hidden_states, extract_features = self.feature_projection(extract_features)
-> 1287         hidden_states = self._mask_hidden_states(
   1288             hidden_states, mask_time_indices=mask_time_indices, attention_mask=attention_mask
   1289         )

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in _mask_hidden_states(self, hidden_states, mask_time_indices, attention_mask)
   1228             hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype)
   1229         elif self.config.mask_time_prob > 0 and self.training:
-> 1230             mask_time_indices = _compute_mask_indices(
   1231                 (batch_size, sequence_length),
   1232                 mask_prob=self.config.mask_time_prob,

/anaconda/envs/azureml_py38_pytorch/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in _compute_mask_indices(shape, mask_prob, mask_length, attention_mask, min_masks)
    240
    241     # get random indices to mask
--> 242     spec_aug_mask_idx = np.random.choice(
    243         np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False
    244     )

mtrand.pyx in numpy.random.mtrand.RandomState.choice()

ValueError: Cannot take a larger sample than population when 'replace=False'
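For context on where this fails: the call at the bottom of the traceback, np.random.choice(np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False), raises exactly this ValueError whenever the pool of candidate start indices, input_length - (mask_length - 1), is smaller than num_masked_span. That typically means some clips in the new dataset are so short that, after the CNN feature extractor, they have fewer frames than SpecAugment's time masking needs. A minimal sketch reproducing the condition (the numeric values are illustrative assumptions; mask_time_length defaults to 10 in Wav2Vec2Config):

import numpy as np

# Illustrative values for a very short clip (assumptions, not from the log):
input_length = 10      # feature frames after the CNN feature extractor
mask_length = 10       # config.mask_time_length (Wav2Vec2 default)
num_masked_span = 2    # spans derived from config.mask_time_prob

# Only input_length - (mask_length - 1) = 1 candidate start index exists,
# but 2 spans are sampled without replacement -> the ValueError above.
try:
    np.random.choice(
        np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False
    )
except ValueError as e:
    print(e)  # Cannot take a larger sample than population when 'replace=False'

If that is the cause, two workarounds seem worth trying (both are assumptions, not confirmed fixes from the notebook): filter very short clips out of the training set, or disable time masking, since the traceback shows the masking branch only runs when self.config.mask_time_prob > 0:

from transformers import AutoConfig

# Hypothetical checkpoint id; substitute the one the notebook actually loads.
config = AutoConfig.from_pretrained("facebook/wav2vec2-large-xlsr-53")
config.mask_time_prob = 0.0  # skips the time-masking branch in _mask_hidden_states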