
[QST] Questions about topk_model.evaluate() #1243

Closed
ZhanqiuHu opened this issue Jun 6, 2024 · 2 comments

Comments

@ZhanqiuHu

❓ Questions & Help

Details

I was looking at this tutorial and came across these two lines of code. I'm curious about what evaluation and sampling methods are used to compute the top-k metrics, and how the evaluation batch size and shuffling affect the computed metrics.

eval_loader = mm.Loader(valid, batch_size=1024).map(mm.ToTarget(schema, "item_id"))
metrics = topk_model.evaluate(eval_loader, return_dict=True)

Thanks!

@sararb
Contributor

sararb commented Jul 4, 2024

The top-k model uses brute-force evaluation: each user query is scored against the entire catalog of items. By default, shuffling is set to False in evaluation and inference mode.
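To illustrate what brute-force evaluation means here, below is a minimal NumPy sketch, not the Merlin Models implementation: the function name, the embedding shapes, and the choice of recall@k as the metric are assumptions for illustration only.

import numpy as np

def recall_at_k(query_emb, item_emb, target_item_ids, k=10):
    # query_emb: (n_queries, d) query/user embeddings
    # item_emb: (n_items, d) embeddings for the whole item catalog
    # target_item_ids: (n_queries,) ground-truth item index per query
    # Score every query against the whole catalog (brute force).
    scores = query_emb @ item_emb.T                          # (n_queries, n_items)
    # Indices of the k highest-scoring items per query
    # (their internal order does not matter for recall@k).
    topk = np.argpartition(-scores, kth=k - 1, axis=1)[:, :k]
    # A query is a hit if its ground-truth item appears in its top-k list.
    hits = (topk == target_item_ids[:, None]).any(axis=1)
    return hits.mean()

# Toy usage: 1000 queries, a 5000-item catalog, 32-dim embeddings.
rng = np.random.default_rng(0)
queries = rng.normal(size=(1000, 32))
catalog = rng.normal(size=(5000, 32))
targets = rng.integers(0, 5000, size=1000)
print("recall@10:", recall_at_k(queries, catalog, targets, k=10))

Since every query is scored against the full catalog and the metric is aggregated per query, the evaluation batch size should mainly affect memory and throughput rather than the metric values, and with shuffling disabled the ordering of the evaluation data should not change the aggregate result either.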

@rnyak
Contributor

rnyak commented Jul 15, 2024

Closing due to low activity; please reopen if you have further questions.

rnyak closed this as completed Jul 15, 2024