Precision and Recall and F1-Score > 1 ? #41
Comments
Hi @hachreak, thank you for posting the issue. Are the returned values also bigger than 1?
Hi @ybubnov, thanks for the reply.

```python
model.compile(
    optimizer=opt.Adam(lr=1e-4),
    loss=losses,
    metrics=[km.binary_f1_score()]
)
```

It works well until the end of the epoch... it's very strange. 😄
@hachreak, I see. If possible, could you show a runnable sample of the code and the data you feed to the model? That would help the troubleshooting a lot. The most common reason this happens is an issue with the data being fed to the model.
I made a new CNN and ran a new training.
I was checking the code of precision and recall.
There are two possible ways to get a negative false-positive counter:

```python
class false_positive(layer):
    # ...
    def __call__(self, y_true, y_pred):
        y_true, y_pred = self.cast(y_true, y_pred)
        neg_y_true = 1 - y_true  # <- if y_true is out of the [0, 1] range, this can be negative
        fp = K.sum(neg_y_true * y_pred)
        # ...
```
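To make the failure mode above concrete, here is a hypothetical NumPy sketch (not the library's actual code) of the same `fp = sum((1 - y_true) * y_pred)` computation, showing how labels outside `[0, 1]` drive the counter negative:

```python
import numpy as np

def false_positives(y_true, y_pred):
    # Mirrors the snippet above: each true-negative slot contributes
    # (1 - y_true) * y_pred; valid binary labels keep every term >= 0.
    return np.sum((1 - y_true) * y_pred)

# Well-formed binary labels: the counter is non-negative.
y_true = np.array([0.0, 1.0, 0.0, 1.0])
y_pred = np.array([1.0, 1.0, 0.0, 1.0])
print(false_positives(y_true, y_pred))  # 1.0

# A label out of range (here 2.0) makes (1 - y_true) negative,
# so the summed counter can go below zero.
y_true_bad = np.array([0.0, 2.0, 0.0, 2.0])
print(false_positives(y_true_bad, y_pred))  # -1.0
```

A negative false-positive count then feeds into precision = tp / (tp + fp), which can push the reported metric above 1.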
Hi everybody,
I was using the library in my training and everything looks good.
This is an example:
Until it reaches the end of the epoch, where it shows some weird behavior:
The precision / recall / f1-score for the validation look good, but for the training they have a value bigger than 1.
They should always stay less than or equal to 1, shouldn't they?
Thanks
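The bound the poster expects does hold by definition: with non-negative confusion counts, precision, recall, and F1 are all ratios of a part to a whole, so none can exceed 1. A minimal plain-Python sketch (not library code):

```python
def prf1(tp, fp, fn):
    # Precision, recall and F1 from confusion counts. With tp, fp, fn >= 0
    # each ratio is at most 1, so any reported value above 1 indicates a
    # bug in the counters (e.g. a negative false-positive count).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f = prf1(tp=8, fp=2, fn=4)
print(p, r, f)  # roughly 0.8, 0.667, 0.727 -- all within [0, 1]
```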