
No OpKernel was registered to support Op 'Slice' with these attrs exception #3

Open
lankastersky opened this issue Jul 25, 2018 · 1 comment

@lankastersky

The app crashes with the frozen graph mobilenetv2_coco_voc_trainaug from the DeepLab model zoo (https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md):

FATAL EXCEPTION: ModernAsyncTask #3
Process: com.dailystudio.deeplab, PID: 4546
java.lang.RuntimeException: An error occurred while executing doInBackground()
at android.support.v4.content.ModernAsyncTask$3.done(ModernAsyncTask.java:161)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
at java.util.concurrent.FutureTask.run(FutureTask.java:271)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Slice' with these attrs. Registered devices: [CPU], Registered kernels:
device='CPU'; T in [DT_BOOL]
device='CPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_INT32]

	 [[Node: SemanticPredictions = Slice[Index=DT_INT32, T=DT_INT64](ArgMax, SemanticPredictions/begin, SemanticPredictions/size)]]
    at org.tensorflow.Session.run(Native Method)
    at org.tensorflow.Session.access$100(Session.java:48)
    at org.tensorflow.Session$Runner.runHelper(Session.java:298)
    at org.tensorflow.Session$Runner.runAndFetchMetadata(Session.java:260)
    at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:220)
    at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:197)
    at com.dailystudio.deeplab.ml.DeeplabModel.segment(DeeplabModel.java:117)
    at com.dailystudio.deeplab.SegmentBitmapsLoader.loadInBackground(SegmentBitmapsLoader.java:92)
    at com.dailystudio.deeplab.SegmentBitmapsLoader.loadInBackground(SegmentBitmapsLoader.java:27)
    at android.support.v4.content.AsyncTaskLoader.onLoadInBackground(AsyncTaskLoader.java:306)
    at android.support.v4.content.AsyncTaskLoader$LoadTask.doInBackground(AsyncTaskLoader.java:59)
    at android.support.v4.content.AsyncTaskLoader$LoadTask.doInBackground(AsyncTaskLoader.java:47)
    at android.support.v4.content.ModernAsyncTask$2.call(ModernAsyncTask.java:138)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	... 3 more
@dailystudio
Owner

Yes, the official pre-trained models cannot be used directly on the device; that is why I created this project.

As described in step 3 of the "Preparing the models" section, you need to modify the export script to cast the INT64 output to INT32. The mobile TensorFlow runtime only registers the Slice kernel for DT_BOOL, DT_FLOAT, and DT_INT32 (as listed in the exception above), while tf.argmax produces DT_INT64 by default:

semantic_predictions = tf.slice(
    tf.cast(predictions[common.OUTPUT_TYPE], tf.int32),
    [0, 0, 0],
    [1, resized_image_size[0], resized_image_size[1]])
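
If you want to confirm the re-exported graph is correct before pushing it to the device, a minimal sketch like the following (assuming TensorFlow 1.x and a frozen graph file named frozen_inference_graph.pb; adjust the path to your export) prints the dtype of the SemanticPredictions output. It should report int32 after the change:

import tensorflow as tf

# Hypothetical path to the re-exported frozen graph; adjust as needed.
GRAPH_PATH = 'frozen_inference_graph.pb'

graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    # 'SemanticPredictions' is the output node named in the exception above.
    output = graph.get_tensor_by_name('SemanticPredictions:0')
    print(output.dtype)  # expect <dtype: 'int32'> after the cast; int64 otherwise

Checking the dtype on the desktop is much faster than rebuilding the APK and waiting for the crash to reproduce.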
