
"FailedPreconditionError" upon using precision and recall metrics #21

Open
michael-ziedalski opened this issue Dec 12, 2018 · 6 comments

Comments

@michael-ziedalski

I wanted to run a simple model that reports precision and recall via keras-metrics, since Keras itself removed those metrics, but adding them triggers a serious error. Below is the code I am using; am I somehow using precision and recall incorrectly?

%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'

import numpy as np
import pandas as pd

import tensorflow as tf
from tensorflow import keras
import sklearn as sk

import keras_metrics as km

import seaborn as sns
sns.set_style("darkgrid")

## Spiral data generation
def twospirals(n_points, noise=.5):
    """
     Returns the two spirals dataset.
    """
    n = np.sqrt(np.random.rand(n_points,1)) * 780 * (2*np.pi)/360
    d1x = -np.cos(n)*n + np.random.rand(n_points,1) * noise
    d1y = np.sin(n)*n + np.random.rand(n_points,1) * noise
    return (np.vstack((np.hstack((d1x,d1y)),np.hstack((-d1x,-d1y)))), 
            np.hstack((np.zeros(n_points),np.ones(n_points))))

x, y = twospirals(1000)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(2,)),
    keras.layers.Dense(3, activation='sigmoid'),
    keras.layers.Dense(3, activation='sigmoid'),
    keras.layers.Dense(1, activation=keras.activations.sigmoid)
])

## Choosing my optimizer algorithm and loss function
grad_opt = tf.train.GradientDescentOptimizer(learning_rate=.003)
mse = keras.losses.mean_squared_error

model.compile(optimizer=grad_opt, 
              loss=mse,
              metrics=[km.recall(), km.precision()])

model.fit(x=x, y=y, epochs=300)
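(For reference, a quick sanity check of the data that twospirals produces; the function is copied from the snippet above.)

```python
import numpy as np

def twospirals(n_points, noise=.5):
    """Returns the two-spirals dataset (copied from the snippet above)."""
    n = np.sqrt(np.random.rand(n_points, 1)) * 780 * (2 * np.pi) / 360
    d1x = -np.cos(n) * n + np.random.rand(n_points, 1) * noise
    d1y = np.sin(n) * n + np.random.rand(n_points, 1) * noise
    return (np.vstack((np.hstack((d1x, d1y)), np.hstack((-d1x, -d1y)))),
            np.hstack((np.zeros(n_points), np.ones(n_points))))

x, y = twospirals(1000)
print(x.shape, y.shape)  # (2000, 2) (2000,)
print(int(y.sum()))      # 1000 -> the two classes are balanced
```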
@michael-ziedalski michael-ziedalski changed the title FailedPreconditionError upon using precision() and recall() as metrics "FailedPreconditionError" upon using precision and recall metrics Dec 12, 2018
@ybubnov ybubnov added this to the 0.0.6 version milestone Dec 21, 2018
@ybubnov ybubnov modified the milestones: 0.0.6, 0.0.7 Jan 12, 2019
@ybubnov
Member

ybubnov commented Jan 24, 2019

Hi @michael-ziedalski, thank you for the question. I can't reproduce the error with keras-metrics version 0.0.7; which version are you using?

In the provided code I don't see any incorrect usage of the metrics; everything looks fine.

@proever

proever commented Feb 4, 2019

Just wanted to say that I'm running into the same error on 0.0.7 with Keras 2.2.4 and TensorFlow 1.12.0. I was able to reproduce it with the code from the initial issue after commenting out the IPython magic lines. The exact error is:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Variable_1 [[{{node Variable_1/read}} = Identity[T=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_1)]]

I've attached a requirements.txt. I was running Python 3.6.7 in a clean virtualenv.


@aronhoff
Contributor

aronhoff commented Feb 6, 2019

I had the same issue. The problem is that you are using the TensorFlow version of Keras (from tensorflow import keras), while the library uses the standalone version (import keras). They are incompatible in several respects (internal type checks fail, for example). Importing both versions in my own code and setting the same tf.Session as their session was not sufficient.

I need to use the TensorFlow version, so I got past the error by tricking keras-metrics into importing that:

from tensorflow.python import keras
import sys
sys.modules['keras'] = keras
sys.modules['keras.backend'] = keras.backend
import keras_metrics
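This works because Python consults sys.modules before searching for a module on disk, so pre-seeding the cache makes any later import keras resolve to the TensorFlow version. A minimal stand-alone illustration of the mechanism, using the stdlib json module as a stand-in for keras:

```python
import sys
import json

# Pre-seed the import cache: any later `import fake_keras` will get `json`,
# even though no module named fake_keras exists on disk.
sys.modules['fake_keras'] = json

import fake_keras

print(fake_keras is json)  # True
```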

Still, all of the metrics are 0.0 when I run them; possibly the variables are not being updated for some reason.

@ybubnov
Member

ybubnov commented Feb 6, 2019

Ah, you are all using the keras package from tensorflow; I didn't notice that initially.
So basically, the fix is to wrap the fit call in a session that initializes the global variables, like this:

with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    model.fit(x=x, y=y, epochs=300)

@ybubnov
Member

ybubnov commented Feb 6, 2019

And I get non-zero metrics in the result:

...
Epoch 297/300
2000/2000 [==============================] - 0s 29us/sample - loss: 0.2461 - recall: 0.5432 - precision: 0.6111
Epoch 298/300
2000/2000 [==============================] - 0s 31us/sample - loss: 0.2461 - recall: 0.5369 - precision: 0.6035
Epoch 299/300
2000/2000 [==============================] - 0s 29us/sample - loss: 0.2461 - recall: 0.5329 - precision: 0.6066
Epoch 300/300

@aronhoff
Contributor

aronhoff commented Feb 7, 2019

That works for me too. I realised that I was getting 0.0 because my label tensor was sparse (class indices) rather than one-hot encoded.
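For anyone hitting the same thing, a minimal sketch of the sparse-to-one-hot conversion in plain NumPy (keras.utils.to_categorical does the equivalent):

```python
import numpy as np

y_sparse = np.array([0, 2, 1, 2])   # class indices ("sparse" labels)
num_classes = y_sparse.max() + 1

# One-hot encode: row i has a 1 in column y_sparse[i], zeros elsewhere.
y_onehot = np.eye(num_classes)[y_sparse]
print(y_onehot.shape)  # (4, 3)
```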

@ybubnov ybubnov removed this from the 0.0.8 milestone Feb 25, 2019