
Precision and Recall and F1-Score > 1 ? #41

Open
hachreak opened this issue Jun 25, 2019 · 5 comments


@hachreak

Hi everybody,
I'm using the library during training and everything looks good.
Here is an example:

22262/39088 [================>.............] - ETA: 1:02:06 - loss: 5.7241 - acc: 0.9064 - precision: 0.7208 - recall: 0.9263 - f1_score: 0.8108

Until it reaches the end of the epoch, where some weird behavior appears:

Epoch 1/100
39088/39088 [==============================] - 8798s 225ms/step - loss: 9.6921 - acc: 0.8581 - precision: 2.9728 - recall: 1.2057 - f1_score: 1.7156 - val_loss: 7.8221 - val_acc: 0.8764 - val_precision: 0.6613 - val_recall: 0.9001 - val_f1_score: 0.7624

The precision/recall/F1-score for the validation set look good, but for the training set they are bigger than 1.
They should always stay between 0 and 1, shouldn't they?
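
As a sanity check, here is a minimal NumPy sketch (not this library's implementation) of how these metrics are defined; each is a ratio of a count to a larger-or-equal count, so none of them can exceed 1:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    # Confusion counts for binary labels (arrays of 0/1).
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)  # tp <= tp + fp, so precision <= 1
    recall = tp / (tp + fn)     # tp <= tp + fn, so recall <= 1
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of two values <= 1
    return precision, recall, f1

p, r, f1 = binary_metrics(np.array([0, 1, 0, 1]), np.array([1, 1, 0, 1]))
print(p, r, f1)  # 0.666..., 1.0, 0.8
```
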
Thanks

@ybubnov
Member

ybubnov commented Jun 25, 2019

Hi, @hachreak, thank you for posting the issue. Are the returned values also bigger than 1?

@hachreak
Author

Hi @ybubnov, thanks for the reply.
What do you mean by the returned values?
I have only configured:

```python
from keras import optimizers as opt
import keras_metrics as km

model.compile(
    optimizer=opt.Adam(lr=1e-4),
    loss=losses,  # `losses` is defined elsewhere in my code
    metrics=[km.binary_f1_score()],
)
```

It works well until the end of the epoch... it's very strange. 😄

@ybubnov
Member

ybubnov commented Jun 25, 2019

@hachreak, I see. If possible, could you show a runnable sample of code and the data you feed to the model? That would help the troubleshooting a lot.

The most common reason this happens is an issue with the data being fed to the model.
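
For example, a quick sanity check like this (a hypothetical helper, not part of the library) would catch non-binary labels before they ever reach the metrics:

```python
import numpy as np

def assert_binary_labels(y):
    # Hypothetical helper: fail fast if any label is not exactly 0 or 1.
    values = np.unique(y)
    if not np.isin(values, [0, 1]).all():
        raise ValueError("labels must be 0 or 1, found: %s" % values)

assert_binary_labels(np.array([0, 1, 1, 0]))  # passes silently
```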

@hachreak
Author

I made a new CNN and ran a new training.
This time, after ~200 images, it's the precision and F1-score that go negative!

269/39088 [..............................]   conv2d_1_acc: 0.6118 - conv2d_1_precision: -0.0089 - conv2d_1_recall: 0.5345 - conv2d_1_f1_score: -0.0181 - conv2d_1_false_positive: -1186967987.0000

I was checking the code of precision and recall.
The only difference between them is the use of false positives instead of false negatives.
From the code of false_positive, the only way for it to go negative seems to be when y_true is bigger than 1.
But I checked my code and that doesn't look like the case, because I force the labels to be 0 or 1.
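
To be precise, the forcing step looks roughly like this (a simplified sketch with made-up values, not my actual preprocessing):

```python
import numpy as np

# Hypothetical soft mask produced by preprocessing.
mask = np.array([0.0, 0.2, 0.7, 1.0], dtype="float32")

# Threshold so every label is exactly 0.0 or 1.0.
y_true = (mask > 0.5).astype("float32")
assert set(np.unique(y_true)) <= {0.0, 1.0}  # holds: values are {0.0, 1.0}
```
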
Am I doing something wrong?
Any suggestion is really appreciated. Thanks 😄

@ybubnov
Member

ybubnov commented Jun 26, 2019

There are two possible ways to get a negative false-positive counter:

  1. The training data you feed has an output (actual y value) that is not binary, so this code strikes (see the sketch after this list):

```python
class false_positive(layer):
    # ...

    def __call__(self, y_true, y_pred):
        y_true, y_pred = self.cast(y_true, y_pred)
        neg_y_true = 1 - y_true  # <- if y_true is outside the [0, 1] range, this can be negative

        fp = K.sum(neg_y_true * y_pred)
        # ...
```

  2. There is some issue with data conversion that causes self.cast(y_true, y_pred) to return an incorrect result.
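
To illustrate the first case, here is a minimal sketch (plain NumPy standing in for the backend ops) showing how out-of-range labels drive the counter negative:

```python
import numpy as np

y_pred = np.array([1.0, 1.0, 1.0, 1.0])

# Binary labels: the false-positive count stays non-negative.
y_true = np.array([0.0, 1.0, 0.0, 1.0])
print(np.sum((1 - y_true) * y_pred))  # 2.0

# Labels outside [0, 1] (e.g. class indices 0, 1, 2): 1 - y_true
# goes negative, so the accumulated count can drop below zero.
y_true = np.array([0.0, 2.0, 2.0, 1.0])
print(np.sum((1 - y_true) * y_pred))  # -1.0
```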
