I've been using the "MobileVLMv2-1.7B" model for a task where I need to classify images as either "NSFW" or "SFW".
However, the model consistently classifies all images as NSFW, regardless of their actual content.
This behavior persists even with various prompt modifications and different image inputs.
Reproduce:
prompt = "Is this picture sfw or nsfw?\nAnswer the question using a single word of nsfw or sfw"
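For a minimal reproduction, I call the model the same way as the usage example in the repo README. This is a sketch: the image path is a placeholder, and the Hugging Face model id `mtgv/MobileVLM_V2-1.7B` is my assumption for the 1.7B V2 checkpoint.

```python
# Minimal reproduction sketch, following the usage pattern from the MobileVLM README.
# Placeholders/assumptions: the image path and the HF model id.
from scripts.inference import inference_once

model_path = "mtgv/MobileVLM_V2-1.7B"
image_file = "assets/samples/demo.jpg"  # replace with any clearly SFW test image
prompt_str = "Is this picture sfw or nsfw?\nAnswer the question using a single word of nsfw or sfw"

# Ad-hoc args namespace, as in the README example.
args = type("Args", (), {
    "model_path": model_path,
    "image_file": image_file,
    "prompt": prompt_str,
    "conv_mode": "v1",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
    "load_8bit": False,
    "load_4bit": False,
})()

inference_once(args)  # prints "nsfw" even for harmless images
```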
Expected Behavior:
The model should classify images accurately as SFW or NSFW based on the content.
Actual Behavior:
All images are classified as NSFW, even those that are clearly SFW.
I made some changes to the script, including encapsulating the functionality within a class and ..., but the overall workflow and logic of the code remain consistent with the original version:
https://github.com/Meituan-AutoML/MobileVLM/blob/main/scripts/inference.py
My code:
Output:
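For context, the wrapper follows the same flow as the reproduction above. Below is a simplified outline of what I mean by "encapsulating within a class", not my exact code; it just reuses the repo's `inference_once` per call.

```python
# Illustrative sketch only: wraps the repo's inference_once in a class.
# My actual script differs in details but follows the same workflow.
from scripts.inference import inference_once


class NSFWClassifier:
    """Keeps the model path, prompt, and generation settings in one place."""

    def __init__(self, model_path="mtgv/MobileVLM_V2-1.7B"):
        self.model_path = model_path
        self.prompt = ("Is this picture sfw or nsfw?\n"
                       "Answer the question using a single word of nsfw or sfw")

    def classify(self, image_file):
        # Same ad-hoc args namespace as in the README usage example.
        args = type("Args", (), {
            "model_path": self.model_path,
            "image_file": image_file,
            "prompt": self.prompt,
            "conv_mode": "v1",
            "temperature": 0,
            "top_p": None,
            "num_beams": 1,
            "max_new_tokens": 16,
            "load_8bit": False,
            "load_4bit": False,
        })()
        inference_once(args)  # prints the model's answer; it is "nsfw" for every image I try


# Example:
# NSFWClassifier().classify("samples/landscape.jpg")
```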