Commit 0a0ede3

Ollama LLM: Added TypeError exception to _get_response_token_counts (run-llama#17150)

* added TypeError exception handling in Ollama LLM's _get_response_token_counts
* vbump

Co-authored-by: Logan Markewich <[email protected]>

1 parent e9c69ce · commit 0a0ede3

File tree

2 files changed: +3 −1 lines changed

llama-index-integrations/llms/llama-index-llms-ollama/llama_index/llms/ollama/base.py

Lines changed: 2 additions & 0 deletions

```diff
@@ -203,6 +203,8 @@ def _get_response_token_counts(self, raw_response: dict) -> dict:
             total_tokens = prompt_tokens + completion_tokens
         except KeyError:
             return {}
+        except TypeError:
+            return {}
         return {
             "prompt_tokens": prompt_tokens,
             "completion_tokens": completion_tokens,
```

llama-index-integrations/llms/llama-index-llms-ollama/pyproject.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
 license = "MIT"
 name = "llama-index-llms-ollama"
 readme = "README.md"
-version = "0.4.1"
+version = "0.4.2"

 [tool.poetry.dependencies]
 python = ">=3.9,<4.0"
```

0 commit comments