TypeError: object of type 'NoneType' has no len() #946

Open

amru39 opened this issue Oct 15, 2024 · 8 comments

amru39 commented Oct 15, 2024

Describe the bug:

Error while trying the AI chat feature with Topic Chat (see traceback below):

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/qualcoder/ai_llm.py", line 278, in _ai_async_error
raise exception_type(value).with_traceback(tb_obj) # Re-raise
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/qualcoder/ai_async_worker.py", line 115, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/qualcoder/ai_llm.py", line 318, in _ai_async_stream
for chunk in llm.stream(messages):

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 420, in stream
raise e

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 400, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 640, in _stream
generation_chunk = _convert_chunk_to_generation_chunk(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/home/rahul/Downloads/QualCoder/qualcoder/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 297, in _convert_chunk_to_generation_chunk
if len(choices) == 0:
^^^^^^^^^^^^

TypeError: object of type 'NoneType' has no len()

To Reproduce:

While using the Topic Chat within AI Chat, the error shows up after the initial part (loading of chunks) is completed.

Expected behavior:

Not sure - trying the feature for the first time.

Screenshots:

[screenshot of the error message attached]

Desktop (please complete the following information):

[screenshot of system information attached]

Additional context:

Using Blablador.

kaixxx (Collaborator) commented Oct 15, 2024

I have had similar errors before, especially with Blablador. It seems that the AI did not respond at all, or not in the correct format. Please do some checks:

  • Go to "AI Chat > New > New general chat" and ask anything, just to see if you can interact with the AI at all.
  • Please check the versions of the langchain packages you have installed. On Linux, you can do pip list | grep langchain (or use the Python snippet below).
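If grep isn't available or you're unsure which Python environment is in play, a small script does the same check. This is just an illustrative snippet, run with the same Python that runs QualCoder; the package names listed are the usual langchain distributions:

```python
# Illustrative version check for the langchain packages.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("langchain", "langchain-core", "langchain-openai", "langchain-community"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")
```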

Then we can go from there.

amru39 (Author) commented Oct 16, 2024

  1. The General Chat feature is working - it queries the project memo and gives a response.
  2. I think langchain has not been installed - when I try pip list | grep langchain, nothing shows up. Maybe that is the issue?

I am really excited to use the AI feature and will be promoting it at my workplace. Thanks for working on this.

kaixxx (Collaborator) commented Oct 16, 2024

This is a little bit of a mystery: the first step of the AI topic search seems to work, as shown by the screenshot ("Found 99 chunks of data..."). The interaction with the AI also works, as you have tested with the general chat. But putting both together in the AI topic chat seems to fail.
Maybe it was a temporary problem with the Blablador server? Can you test the topic search again?

Langchain must be installed correctly, otherwise the general chat would not work. Knowing the exact versions would still be interesting if the problem persists. Running pip list should give you a list of all the Python packages installed. If you have created a virtual environment for QualCoder, make sure to run pip list within this environment.
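A quick way to confirm which environment is active (just an illustrative check, not something QualCoder provides) is to compare the interpreter path against the venv path visible in your traceback:

```python
# Prints the interpreter and environment currently in use; compare the
# prefix with /home/rahul/Downloads/QualCoder/qualcoder from the traceback.
import sys

print(sys.executable)  # path of the running Python interpreter
print(sys.prefix)      # root of the active (virtual) environment
```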

amru39 (Author) commented Oct 16, 2024

I think you are right - when I shifted to using OpenAI, the Topic Chat started working. So I guess it is an issue with the Blablador server, since the error is still there with Blablador.

Langchain is installed - I checked without activating the virtual environment. Here is the list of versions:
langchain 0.3.3
langchain-chroma 0.1.4
langchain-community 0.3.2
langchain-core 0.3.10
langchain-openai 0.2.2
langchain-text-splitters 0.3.0
langsmith 0.1.135

Thanks for troubleshooting this for me.

kaixxx (Collaborator) commented Oct 18, 2024

Your langchain packages are actually newer than mine. I have updated my system to see if the new versions break anything, but they work perfectly fine (on Windows, in my case).

I am running out of ideas for what to test, especially since it works with GPT-4 now. We'll have to see if other people on Linux run into problems as well.

I still think it might have to do with temporary glitches on the Blablador server. What I have seen over the last couple of days is that not all the models were up and running all the time. Maybe Alexandre, the maintainer of Blablador, is testing out some new configurations. If the server needs restarting, it can take up to 40 minutes until all the models are loaded into GPU memory and ready to be used. Unfortunately, there is no redundancy: the whole project runs on a single server with 8 GPUs. If you want to check whether the system is up and running, you can test it with the Blablador chat at https://helmholtz-blablador.fz-juelich.de/ - select the model "2 - Mixtral-8x7B-Instruct-v0.1 Slower with higher quality", which is the one QualCoder uses.
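You can also check availability from code, since Blablador is accessed through an OpenAI-compatible API (that is how QualCoder talks to it via langchain-openai). A rough sketch - the base URL and key below are placeholders/assumptions, so take the real endpoint and key from your QualCoder AI settings:

```python
# Rough availability check against Blablador's OpenAI-compatible API.
# NOTE: base_url is an assumption; use the endpoint from your QualCoder settings.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helmholtz-blablador.fz-juelich.de/v1",  # assumed endpoint
    api_key="YOUR_BLABLADOR_API_KEY",  # placeholder
)

# If the server is up, this lists the models it currently serves.
for model in client.models.list():
    print(model.id)
```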

In general, however, I would recommend using GPT-4. The results are so much better compared with the rather small models running on Blablador. I am still looking for other options to access larger open-source models on academic hardware.

ccbogel (Owner) commented Oct 18, 2024

Maybe add a note on instructions page 07b - this could be described as a limitation of Blablador.

amru39 (Author) commented Oct 21, 2024


Yes, it was a server issue. When I tried yesterday, Blablador worked. I am trying the same queries with GPT-4 and Blablador - responses from GPT-4 are more detailed and at times include additional information that Blablador does not provide.

kaixxx (Collaborator) commented Oct 21, 2024

> Yes, it was a server issue. When I tried yesterday, Blablador worked.

Great, happy to hear that. I try to catch errors returned by the server, but in this case the server did not return an error - just an empty or somehow malformed response. It is very hard to account for such temporary problems because they are so difficult to reproduce.
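For what it's worth, the traceback pins down the mechanism: the streamed chunk's "choices" field came back as null instead of a list, so len(choices) raised. Here is a minimal sketch of that failure mode plus a defensive guard - an illustration only, not the actual langchain code, and the chunk shape is assumed from the OpenAI streaming format:

```python
# Sketch of the failure from the traceback: a malformed streaming chunk
# whose "choices" field is null instead of a list (shape assumed).
chunk = {"object": "chat.completion.chunk", "choices": None}

try:
    if len(chunk["choices"]) == 0:
        pass
except TypeError as err:
    print(err)  # object of type 'NoneType' has no len()

# Defensive variant: treat a missing/null "choices" as an empty list and
# skip the malformed chunk instead of crashing the stream.
choices = chunk.get("choices") or []
if len(choices) == 0:
    pass  # ignore this chunk and wait for the next one
```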

> responses from GPT-4 are more detailed

Yes, that's my experience too. Also, the interpretations of the empirical sources are more nuanced and account for more implicit meaning (to some extent).
