TypeError: object of type 'NoneType' has no len() #946
Comments
I have had similar errors before, especially with Blablador. It seems that the AI either did not respond at all or did not respond in the correct format. Please do some checks:
Then we can go from there.
I am really excited to use the AI feature and will be promoting it at my workplace. Thanks for working on this.
This is a little bit of a mystery: The first step of the AI topic search seems to work, as shown by the screenshot ("Found 99 chunks of data..."). The interaction with the AI also works, as you have tested with the general chat. But putting both together in the AI topic chat seems to fail. Langchain must be installed correctly, otherwise the general chat would not work. Knowing the exact versions would still be interesting if the problem persists. Running …
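In case it helps, here is a minimal sketch (not part of QualCoder itself) for printing the relevant package versions from inside the virtual environment; the package names are my assumption, taken from the modules that appear in the traceback below:

```python
# Sketch: print the versions of the LangChain-related packages in the
# environment that runs QualCoder. The package list is an assumption
# based on the modules shown in the traceback; adjust as needed.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("langchain", "langchain-core", "langchain-openai", "openai"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```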
I think you are right - when I shifted to using OpenAI, the Topic Chat started working. So I guess it is an issue with the Blablador server, since the error is still there with Blablador. Langchain is installed - I checked without activating the virtual environment. Here is the list of versions:
Thanks for troubleshooting this for me.
Your langchain packages are actually newer than mine. I have updated my system to see if the new versions break anything, but they work perfectly fine (on Windows, in my case). I am running out of ideas of what to test, especially since it works with GPT-4 now. We'll have to see if other people on Linux also run into problems.

I still think it might have to do with temporary glitches on the Blablador server. What I have seen over the last couple of days was that not all the models were up and running all the time. Maybe Alexandre, the maintainer of Blablador, is testing out some new configurations. If the server needs restarting, it can take up to 40 minutes until all the models are loaded into the GPU memory and are ready to be used. Unfortunately, there is no redundancy; the whole project runs on a single server with 8 GPUs.

If you want to check whether the system is up and running, you can test this with the Blablador chat here: https://helmholtz-blablador.fz-juelich.de/ Select the model "2 - Mixtral-8x7B-Instruct-v0.1 Slower with higher quality"; this is the one used by QualCoder.

In general, however, I would recommend using GPT-4. The results are so much better compared with the rather small models running on Blablador. I am still looking for other options to access larger open source models on academic hardware.
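If you prefer a programmatic check over the web chat, a sketch like the following could work against Blablador's OpenAI-compatible API; note that the base URL shown here is an assumption on my part, so please compare it with the endpoint configured in your QualCoder AI settings:

```python
# Sketch: see whether the Blablador server answers at all by listing the
# models it currently serves. The base URL is an assumed value and the
# API key is a placeholder -- take both from your QualCoder AI settings.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_BLABLADOR_API_KEY",  # placeholder
    base_url="https://api.helmholtz-blablador.fz-juelich.de/v1",  # assumed endpoint
)

# If this call succeeds, the server is reachable and has models loaded.
for model in client.models.list():
    print(model.id)
```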
Maybe this could be described as a limitation of Blablador on the instructions page 07b.
Yes, it was a server issue. When I tried yesterday, Blablador worked. I am trying the same queries with GPT-4 and Blablador; the responses from GPT-4 are more detailed and at times include additional information that Blablador does not provide.
Great, happy to hear that. I try to catch errors returned by the server. But in this case, the server did not return an error but an empty or somehow malformed response. It is very hard to account for such temporary problems because they are so difficult to reproduce.
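As an illustration only (not the actual ai_llm.py code), a guard around the streaming loop could turn such a malformed response into a readable error message; this sketch assumes a configured LangChain chat model `llm` and a prompt list `messages`:

```python
# Sketch of defensively consuming a streamed LLM response. If the server
# sends a malformed chunk, langchain_openai can raise a bare TypeError
# (as in the traceback below); this wrapper converts it into a clearer,
# user-facing error instead.
def stream_with_guard(llm, messages):
    collected = ""
    try:
        for chunk in llm.stream(messages):
            # chunk.content may be an empty string for keep-alive chunks
            collected += chunk.content or ""
    except TypeError as err:
        # e.g. "object of type 'NoneType' has no len()" raised inside
        # langchain_openai when a streaming chunk lacks its "choices" list
        raise RuntimeError(
            "The AI server returned an empty or malformed response. "
            "Please try again later."
        ) from err
    return collected
```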
Yes, that's my experience too. Also, the interpretations of the empirical sources are more nuanced and also account for more implicit meaning (to some extent).
Describe the bug:
Error while trying the AI chat feature with Topic Chat (see below)
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/qualcoder/ai_llm.py", line 278, in _ai_async_error
raise exception_type(value).with_traceback(tb_obj) # Re-raise
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/qualcoder/ai_async_worker.py", line 115, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/qualcoder/ai_llm.py", line 318, in _ai_async_stream
for chunk in llm.stream(messages):
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 420, in stream
raise e
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 400, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 640, in _stream
generation_chunk = _convert_chunk_to_generation_chunk(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 297, in _convert_chunk_to_generation_chunk
if len(choices) == 0:
^^^^^^^^^^^^
TypeError: object of type 'NoneType' has no len()
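For context, the last frame indicates that the streamed chunk arrived with its `choices` field set to None instead of a list, so the length check inside langchain_openai fails; a minimal illustration (the chunk shape is assumed):

```python
# What a malformed streaming chunk presumably looks like: "choices" is
# null/None rather than a (possibly empty) list, so len() raises the
# exact error reported above.
chunk = {"choices": None}          # assumed shape of the bad chunk
choices = chunk.get("choices")
len(choices)                       # TypeError: object of type 'NoneType' has no len()
```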
To Reproduce:
While using the Topic Chat within the AI Chat, the error shows up after the initial part (loading of chunks) is completed.
Expected behavior:
Not sure; I am trying the feature for the first time.
Screenshots:
Desktop (please complete the following information):
Additional context:
Using Blablador