shashank-v-gowda changed the title from "Unable to run a function calling prompt" to "vertexai.generative_models._generative_models.ResponseValidationError: The model response did not complete successfully." on Aug 27, 2024
Description of the bug:
I'm getting the error below whenever I try to analyze any code using the Gemini API. I've tried different prompts and different code samples, and it throws the same error every time.
raise ResponseValidationError(
vertexai.generative_models._generative_models.ResponseValidationError: The model response did not complete successfully.
Finish reason: 2.
Finish message: .
Safety ratings: [category: HARM_CATEGORY_HATE_SPEECH
probability: NEGLIGIBLE
probability_score: 0.388671875
severity: HARM_SEVERITY_NEGLIGIBLE
severity_score: 0.188476562
, category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: NEGLIGIBLE
probability_score: 0.3359375
severity: HARM_SEVERITY_LOW
severity_score: 0.28125
, category: HARM_CATEGORY_HARASSMENT
probability: NEGLIGIBLE
probability_score: 0.388671875
severity: HARM_SEVERITY_LOW
severity_score: 0.2734375
, category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: NEGLIGIBLE
probability_score: 0.255859375
severity: HARM_SEVERITY_LOW
severity_score: 0.249023438
].
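For reference, the numeric finish reason in the traceback maps to the Vertex AI `Candidate.FinishReason` enum. To my understanding of that enum, code 2 corresponds to MAX_TOKENS (response cut off at the output-token limit) rather than SAFETY (code 3) — worth double-checking against the SDK version you have installed, since the mapping below is my assumption, not something stated in the traceback:

```python
# Finish-reason codes as I understand them from the Vertex AI
# Candidate.FinishReason proto enum (assumption -- verify against your SDK).
FINISH_REASONS = {
    0: "FINISH_REASON_UNSPECIFIED",
    1: "STOP",        # natural stop point
    2: "MAX_TOKENS",  # output hit the max_output_tokens limit
    3: "SAFETY",      # blocked by safety filters
    4: "RECITATION",  # blocked for reciting training data
    5: "OTHER",
}

def explain_finish_reason(code: int) -> str:
    """Translate a numeric finish reason into its enum name."""
    return FINISH_REASONS.get(code, f"UNKNOWN ({code})")

print(explain_finish_reason(2))  # MAX_TOKENS
```

If this mapping is right, the NEGLIGIBLE/LOW safety ratings in the traceback are consistent with the response not being safety-blocked at all, which would explain why changing the safety settings doesn't help.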
I've added the safety settings below, but I'm still facing the same issue.
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}
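For completeness, here is a minimal sketch of how those settings are typically wired into a `generate_content` call with the Vertex AI SDK. The model name, prompt wrapper, and `max_output_tokens` value are placeholders I've assumed, not from the original report; raising `max_output_tokens` is worth trying, since a low output budget can also end a response before it completes:

```python
def analyze_code(snippet: str) -> str:
    """Hypothetical helper: send code to Gemini with permissive safety settings.

    A sketch under assumptions, not the reporter's actual code: the model name
    ("gemini-1.5-pro") and the 8192-token output budget are placeholders.
    """
    # Lazy import so this sketch can be loaded without the SDK installed.
    from vertexai.generative_models import (
        GenerationConfig,
        GenerativeModel,
        HarmBlockThreshold,
        HarmCategory,
    )

    safety_settings = {
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    }

    model = GenerativeModel("gemini-1.5-pro")  # placeholder model name
    response = model.generate_content(
        f"Analyze this code:\n{snippet}",
        safety_settings=safety_settings,
        # Larger output budget, in case the error is the response hitting the
        # token limit rather than a safety block.
        generation_config=GenerationConfig(max_output_tokens=8192),
    )
    return response.text
```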