
Citation Request and Contribution Acknowledgment #1

Open

yueyang130 opened this issue Oct 26, 2023 · 0 comments

I would like to extend my sincere appreciation for the remarkable work you have conducted in your paper.

Upon careful review of your work, I noticed the relevance of our previous research to the topic you explore. Our studies [1, 2] focus primarily on scenarios where the dataset consists predominantly of sub-optimal trajectories. In such cases, a straightforward application of the policy might inadvertently lead to the imitation of suboptimal actions. To address this, we proposed a sampling strategy designed as a plug-in mechanism: it effectively constrains the policy so that it is guided by "good" data rather than by uniform sampling.
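
To make the idea concrete, here is a minimal sketch of this kind of return-prioritized sampling. The softmax-over-returns weighting, function name, and parameters below are illustrative assumptions, not the exact scheme used in [1, 2]:

```python
import numpy as np

def return_weighted_indices(trajectory_returns, num_samples, temperature=1.0, seed=0):
    """Draw trajectory indices with probability proportional to
    softmax(return / temperature), so high-return ("good") trajectories
    are sampled more often than under uniform sampling.

    Illustrative sketch only; the actual weighting in [1, 2] may differ."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(trajectory_returns, dtype=np.float64)
    # Subtract the max return before exponentiating for numerical stability.
    logits = (returns - returns.max()) / max(temperature, 1e-8)
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(returns), size=num_samples, p=probs)

# Example: the high-return trajectory (index 2, return 300) dominates the batch.
batch = return_weighted_indices([120.0, 5.0, 300.0, 40.0], num_samples=64)
```

Because such a sampler only changes which trajectories are drawn from the dataset, it can be plugged into an existing offline RL training loop without modifying the loss.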

Given the apparent synergy between our works, I kindly request that you include citations to our papers in your publication. I believe acknowledging these references will enrich the context of your work and give your readers a more complete perspective.

[1] Yue, Yang, et al. "Boosting Offline Reinforcement Learning via Data Rebalancing." NeurIPS 2022, Offline RL Workshop.
[2] Yue, Yang, et al. "Offline Prioritized Experience Replay." arXiv preprint arXiv:2306.05412 (2023).
