Question on the code of masked cross entropy loss #11

Open
LuckyJinging opened this issue Jul 13, 2022 · 0 comments

Comments

@LuckyJinging

When I use the Player2Vec algorithm, I am confused by the masked cross-entropy loss. Multiplying the loss by mask / tf.reduce_sum(mask) already averages it over the entries where the mask equals 1. Why does the code then take another global average (tf.reduce_mean(loss)) instead of summing (tf.reduce_sum(loss))?

import tensorflow as tf


def masked_softmax_cross_entropy(preds: tf.Tensor, labels: tf.Tensor,
                                 mask: tf.Tensor) -> tf.Tensor:
    """
    Softmax cross-entropy loss with masking.
    :param preds: the last-layer logits of the input data
    :param labels: the labels of the input data
    :param mask: the mask for train/val/test data
    :return: the scalar masked loss
    """
    # Per-example cross-entropy over all N examples.
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=preds, labels=labels)
    mask = tf.cast(mask, dtype=tf.float32)
    # Normalize the mask so its entries sum to 1 (guarding against an empty mask).
    mask /= tf.maximum(tf.reduce_sum(mask), tf.constant([1.]))
    # Zero out examples outside the split and weight the rest by 1/num_masked.
    loss *= mask
    return tf.reduce_mean(loss)
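
For concreteness, here is a minimal sketch of the discrepancy described above (toy values, TensorFlow 2 eager mode assumed): after normalizing the mask by its sum, tf.reduce_sum recovers the mean loss over the masked entries, while tf.reduce_mean divides that mean again by the total number of examples N.

import tensorflow as tf

# Hypothetical toy batch: 4 examples, only the first two belong to the split.
loss = tf.constant([2.0, 4.0, 8.0, 16.0])
mask = tf.constant([1.0, 1.0, 0.0, 0.0])

mask /= tf.reduce_sum(mask)  # each masked entry is now weighted by 1/2
loss *= mask                 # [1.0, 2.0, 0.0, 0.0]

print(tf.reduce_sum(loss).numpy())   # 3.0  = (2 + 4) / 2, the masked average
print(tf.reduce_mean(loss).numpy())  # 0.75 = 3.0 / 4, scaled down again by N

So with the sum-normalized mask, tf.reduce_mean introduces an extra constant factor of 1/N relative to the average over the masked examples; since the factor is constant for a fixed dataset it does not change the gradient direction, but it does rescale the reported loss and the effective learning rate.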