Device does not exist / is not supported error with neuralchat deploy_chatbot_on_xpu notebook #1276
Comments
Thanks for reporting this issue; it can be reproduced. "Device does not exist" is due to this line: https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/models/model_utils.py#L441. "Device is not supported" is raised when setting device='xpu:0' because only "xpu" is considered. Will fix both issues ASAP!
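For illustration, the failure mode is roughly the following. This is a hypothetical sketch of the pattern, not the actual model_utils.py code; comparing the parsed device type instead of the raw string would accept both forms:

```python
# Hypothetical sketch of the bug, NOT the actual model_utils.py code:
# a strict membership test on the raw device string rejects "xpu:0"
# even though it names a valid device.
import torch

SUPPORTED = ("cpu", "cuda", "xpu")

def validate_device(device: str) -> torch.device:
    """Parse a device string such as 'xpu' or 'xpu:0' and validate its type."""
    # Buggy pattern: `if device not in SUPPORTED` fails for "xpu:0".
    # Tolerant fix: parse first, then compare only the device *type*.
    dev = torch.device(device)  # 'xpu:0' -> type='xpu', index=0
    if dev.type not in SUPPORTED:
        raise ValueError(f"Device {device} is not supported.")
    return dev
```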
@brent-elliott It seems that removing libstdc++* from the conda environment resolves this.
Thank you. Removing these files resolved the issue of torch.xpu.is_available() returning False. Do you know if the need to remove these files is already captured somewhere in the documentation or notebook that I missed?
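For anyone else hitting this, a quick diagnostic along these lines can confirm whether the conda-bundled libstdc++ is the cause. This is a sketch assuming a standard conda layout ($CONDA_PREFIX), not something from the repo docs:

```python
# Sketch: check whether the conda env ships its own libstdc++ while the
# XPU backend reports unavailable. Assumes a standard conda layout.
import glob
import os

import torch
import intel_extension_for_pytorch  # noqa: F401  (registers torch.xpu)

conda_prefix = os.environ.get("CONDA_PREFIX", "")
conda_libs = glob.glob(os.path.join(conda_prefix, "lib", "libstdc++*"))

if not torch.xpu.is_available() and conda_libs:
    print("torch.xpu.is_available() is False and conda bundles libstdc++:")
    print("\n".join(conda_libs))
    print("Removing or renaming these files so the OS copy is used may help.")
```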
Thank you! I have switched to using xpu (rather than xpu:0) in my scripts in the meantime.
The conflict between the libstdc++.so version in the conda environment and the one shipped with the OS is a known issue; see the IPEX documentation: https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/performance_tuning/known_issues.html. This time the issue appears in a different way, though; still figuring out the root cause...
Problem Summary and Status of Similar Tests
I am having trouble getting neuralchat to work with my Intel Data Center GPU Flex 170. Below is my procedure for the build_chatbot_on_xpu Jupyter notebook in a clean environment. I have tried this procedure multiple times, and have also followed different instructions from different sources, with the same outcome each time. When I reach the inference step, I get either "Device does not exist" when I keep the default device reference xpu, or "Device is not supported" if I use xpu:0. I have tried this with several different Python versions, but use 3.9 below.
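For reference, the inference step I am running is roughly the following. This is my paraphrase of the notebook cell, not a verbatim copy, so the predict signature and defaults are approximations:

```python
# Rough repro of the notebook's inference cell (paraphrased, not verbatim).
from intel_extension_for_transformers.neural_chat import PipelineConfig, build_chatbot

config = PipelineConfig(device="xpu")      # -> "Device does not exist"
# config = PipelineConfig(device="xpu:0")  # -> "Device is not supported"

chatbot = build_chatbot(config)
print(chatbot.predict("Tell me about the Intel Data Center GPU Flex Series."))
```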
I have BigDL operational on this XPU and system (in a separate environment, not running during the tests below). I have also successfully used the deploy_chatbot_on_icx notebook (again in a separate environment, not running at the same time), with similar tweaks as outlined below to address missing dependencies in requirements.txt in my environment.
I also tried to get deploy_chatbot_on_xpu working (below I focus on build_chatbot_on_xpu). As long as I bring over the code from deploy_chatbot_on_cpu (to address the error relating to asyncio), I can successfully run the server, but I again get "Device does not exist" with device='xpu' and "Device is not supported" with device='xpu:0'.
I am hoping to get feedback on what I am doing wrong so that I can run neuralchat and successfully use the OpenAI APIs.
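For context, the end state I am after looks roughly like this. The host, port, and model name below are placeholders chosen for illustration, not values from my setup:

```python
# Sketch of the goal: querying the neuralchat server through its
# OpenAI-compatible chat endpoint. URL and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "Intel/neural-chat-7b-v3-1",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json())
```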
Installation Procedure