
[💡SUG] Performance at different levels of activity: "cold-start" vs "hot" #671

Open
deklanw opened this issue Jan 11, 2021 · 4 comments

deklanw (Contributor) commented Jan 11, 2021

It would be convenient to disentangle performance for "cold-start" and "hot" situations for both users and items.

An interesting example for the user case appears in "Noise Contrastive Estimation for One-Class Collaborative Filtering" by Ga Wu et al.:

[figure from the paper]

deklanw added the enhancement (New feature or request) label on Jan 11, 2021
batmanfly (Member) commented

It is a nice suggestion. I think a simple improvement would be to report results broken down by activity level (in addition to the overall performance); we can schedule this when we have more hands later. As for a model-level improvement, we will read the recommended paper first.

deklanw (Contributor, author) commented Jan 13, 2021

> It is a nice suggestion. I think a simple improvement would be to report results broken down by activity level (in addition to the overall performance); we can schedule this when we have more hands later.

Sounds good.

To be clear, I was just using the paper as an example of this kind of metric appearing in a paper. I have already implemented the model from that paper, by the way: #670

Of course, cold-start performance metrics are common in papers.
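For concreteness, here is a rough sketch of the kind of breakdown I mean: bucket users by their training-interaction count, then average any per-user metric (e.g. Recall@10) within each bucket. The thresholds, function names, and IDs below are made up for illustration; this is plain Python, not tied to any existing RecBole API:

```python
from collections import defaultdict

def split_users_by_activity(train_interactions, thresholds=(5, 20)):
    """Bucket users into 'cold'/'warm'/'hot' by training-interaction count.

    train_interactions: iterable of (user_id, item_id) pairs.
    thresholds: (cold_max, warm_max) interaction counts; illustrative values.
    """
    counts = defaultdict(int)
    for user, _item in train_interactions:
        counts[user] += 1
    cold_max, warm_max = thresholds
    buckets = {"cold": set(), "warm": set(), "hot": set()}
    for user, c in counts.items():
        if c <= cold_max:
            buckets["cold"].add(user)
        elif c <= warm_max:
            buckets["warm"].add(user)
        else:
            buckets["hot"].add(user)
    return buckets

def per_bucket_mean(metric_per_user, buckets):
    """Average a per-user metric within each activity bucket."""
    out = {}
    for name, users in buckets.items():
        vals = [metric_per_user[u] for u in users if u in metric_per_user]
        out[name] = sum(vals) / len(vals) if vals else float("nan")
    return out
```

The same idea applies symmetrically to items (bucket items by how often they appear in training data) for item cold-start analysis.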

batmanfly (Member) commented

> > It is a nice suggestion. I think a simple improvement would be to report results broken down by activity level (in addition to the overall performance); we can schedule this when we have more hands later.
>
> Sounds good.
>
> To be clear, I was just using the paper as an example of this kind of metric appearing in a paper. I have already implemented the model from that paper, by the way: #670
>
> Of course, cold-start performance metrics are common in papers.

Got it! That would involve both new metrics and new ways of presenting results. We will schedule this point.

2017pxy self-assigned this on Jan 15, 2021
deklanw (Contributor, author) commented Jan 21, 2021

Another example of these metrics in a paper. See #696

[figure from the paper]
