
Add masked MSE loss #245

Status: Open. Wants to merge 1 commit into base: master.

Conversation

@njellinas (Author)

I noticed that you already mask the outputs of the decoder, so there was no need to apply any more masking code. In the MSE calculation I simply added a counter of the non-padded elements, changed the loss reduction to 'sum', and then divided by that counter.

Signed-off-by: njellinas <[email protected]>
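The description above can be sketched as follows. This is a minimal illustration of the idea (sum reduction over masked elements, divided by the valid-element count), not the PR's actual code; the names `MaskedMSELoss`, `mel_out`, `mel_target`, and `output_lengths` and the tensor layout `(batch, n_mels, max_len)` are assumptions.

```python
import torch
import torch.nn as nn

class MaskedMSELoss(nn.Module):
    """Sketch of a masked MSE loss: sum the squared error over
    non-padded frames only, then divide by the number of valid elements."""

    def forward(self, mel_out, mel_target, output_lengths):
        # mel_out, mel_target: (batch, n_mels, max_len)
        max_len = mel_target.size(2)
        # mask[b, t] is True for valid (non-padded) time steps
        ids = torch.arange(max_len, device=mel_target.device)
        mask = ids[None, :] < output_lengths[:, None]          # (batch, max_len)
        mask = mask[:, None, :].expand_as(mel_target).float()  # (batch, n_mels, max_len)
        # 'sum' reduction over the masked tensors, then divide by the
        # counter of non-padded elements
        sq_err = nn.functional.mse_loss(mel_out * mask, mel_target * mask,
                                        reduction='sum')
        return sq_err / mask.sum()
```

Padded positions contribute zero to both the numerator and the denominator, so the result is the mean squared error over real frames only.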
@hadaev8 commented Jul 26, 2019

Just tried it and got the error: TypeError: forward() missing 1 required positional argument: 'output_lengths'

@njellinas (Author)

At which point did it produce this error? I added an extra argument to the Tacotron2Loss forward function, but I pass it as x[-1], which is the output_lengths from parse_batch.
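A minimal sketch of the call-site change being described: the criterion gains a third positional argument, fed from the last element of the parsed batch. The `criterion` body here is only a stand-in for illustration, not the repo's Tacotron2Loss, and the tensor shapes are assumptions.

```python
import torch

def criterion(y_pred, y, output_lengths):
    # Stand-in for Tacotron2Loss.forward with the new third argument;
    # a real implementation would mask by output_lengths.
    return ((y_pred - y) ** 2).sum() / output_lengths.sum()

# x stands in for the tuple returned by parse_batch; its last element
# is assumed to be the mel output_lengths.
x = (torch.zeros(2, 80, 10), torch.tensor([8, 10]))
y = torch.zeros(2, 80, 10)
y_pred = torch.zeros(2, 80, 10)

# old call: loss = criterion(y_pred, y)
#   -> TypeError: forward() missing 1 required positional argument: 'output_lengths'
loss = criterion(y_pred, y, x[-1])  # pass output_lengths as x[-1]
```

Any caller still using the old two-argument form will hit exactly the TypeError reported above.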

@hadaev8 commented Jul 26, 2019

Trace

TypeError                                 Traceback (most recent call last)

<ipython-input-12-1564558022d7> in <module>()
    307 create_mels()
    308 train(output_directory, log_directory, checkpoint_path,
--> 309       n_gpus, rank, group_name, hparams, log_directory2, checkpoint_path_vanilla)

1 frames

<ipython-input-12-1564558022d7> in train(output_directory, log_directory, checkpoint_path, n_gpus, rank, group_name, hparams, log_directory2, checkpoint_path_vanilla)
    244             y_pred = model(x)
    245 
--> 246             loss = criterion(y_pred, y)
    247             if hparams.distributed_run:
    248                 reduced_loss = reduce_tensor(loss.data, n_gpus).item()

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

TypeError: forward() missing 1 required positional argument: 'output_lengths'

@njellinas (Author)

I guess you have a different codebase, because that line 307, create_mels(), does not exist in the master branch. And wherever the criterion function is called, I have added the last argument.
Or should I pull from a branch other than master?

@hadaev8 commented Jul 27, 2019

Yes, I have my own (minor) modification of this repo
(and I'm not a contributor or anything, I just tried to use your loss function).
Maybe it's because I use fp16 training.
Here is the code:

https://colab.research.google.com/drive/1tBOXMBNbAkS-zHDIvP3hfFEUNf-kGosd

@njellinas (Author)

Oh, OK, then that makes sense. Look at the changes in the diffs and apply them to your code. You are missing the last argument to criterion, which must be the mel lengths.

@hadaev8 commented Jul 27, 2019

Oh, sorry, somehow I missed that I had copy-pasted the train file and only pulled your other changes into my repo.

@rafaelvalle (Contributor) commented Dec 12, 2019

@njellinas Does adding the masked MSE loss improve the model?
For example, does it reduce the time to learn attention or lower the validation loss?
