I would like to extend my sincere appreciation for the remarkable work you have conducted in your paper.
Upon careful review of your work, I noticed that our previous research is closely related to the topic you explore. In our studies [1, 2], we primarily focus on scenarios where the dataset consists predominantly of sub-optimal trajectories. In such cases, straightforward policy learning might inadvertently imitate suboptimal actions. To address this, we proposed a sampling strategy designed as a plug-in mechanism: it effectively constrains the policy so that it is guided by "good data" rather than by uniform sampling.
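To make the idea concrete, here is a minimal, hypothetical sketch of return-prioritized trajectory sampling as a drop-in replacement for uniform minibatch sampling. The function names, the softmax-over-returns weighting, and the `temperature` parameter are illustrative assumptions for exposition, not the exact mechanism from [1, 2]:

```python
import numpy as np

def return_weights(returns, temperature=1.0):
    """Softmax weights over per-trajectory returns (higher return -> higher probability).

    NOTE: illustrative weighting scheme, not the papers' exact formulation.
    """
    logits = np.asarray(returns, dtype=np.float64) / temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def sample_batch(trajectories, returns, batch_size, rng=None):
    """Plug-in replacement for uniform sampling: draw trajectories
    in proportion to their return-based weights."""
    rng = rng or np.random.default_rng()
    probs = return_weights(returns)
    idx = rng.choice(len(trajectories), size=batch_size, p=probs)
    return [trajectories[i] for i in idx]

# Example: with mostly low-return data, sampling concentrates
# on the few high-return ("good") trajectories.
trajs = ["bad_0", "bad_1", "bad_2", "good"]
rets = [1.0, 1.5, 0.5, 10.0]
print(sample_batch(trajs, rets, batch_size=5))
```

In this sketch the `temperature` controls how far the sampling distribution departs from uniform, and the raw returns could plausibly be replaced with advantage or value estimates, as discussed in the papers.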
Given the apparent synergy between our works, I kindly request the inclusion of citations to our papers in your publication. I believe that acknowledging these references will enrich the context of your work and provide a comprehensive perspective to your readers.
[1] Yue, Yang, et al. "Boosting Offline Reinforcement Learning via Data Rebalancing." NeurIPS 2022 Offline RL Workshop.
[2] Yue, Yang, et al. "Offline Prioritized Experience Replay." arXiv preprint arXiv:2306.05412 (2023).