Predict on images that are different size than training data #29

Open · PranavMaddula opened this issue Nov 14, 2019 · 5 comments
Labels: enhancement (New feature or request), question (Further information is requested)

@PranavMaddula commented Nov 14, 2019

Following the steps in the 'DeepPoseKit Step 3 - Train a model' notebook, I have trained a model for locust pose mapping. However, the images in the annotation set are 160x160. When following the steps in the 'DeepPoseKit Step 4b - Predict on new data' notebook, I get the following error:

reader = VideoReader(HOME + '/Data/crop.mp4', batch_size=50, gray=True)
predictions = model.predict(reader, verbose=1)
reader.close()

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in
1 reader = VideoReader(HOME + '/Data/crop.mp4', batch_size=50, gray=True)
----> 2 predictions = model.predict(reader, verbose=1)
3 reader.close()

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
907 max_queue_size=max_queue_size,
908 workers=workers,
--> 909 use_multiprocessing=use_multiprocessing)
910
911 def reset_metrics(self):

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_generator.py in predict(self, model, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
646 max_queue_size=max_queue_size,
647 workers=workers,
--> 648 use_multiprocessing=use_multiprocessing)
649
650

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_generator.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
263
264 is_deferred = not model._is_compiled
--> 265 batch_outs = batch_function(*batch_data)
266 if not isinstance(batch_outs, list):
267 batch_outs = [batch_outs]

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_generator.py in predict_on_batch(x, y, sample_weights)
533 # 1, 2, or 3-tuples from generator
534 def predict_on_batch(x, y=None, sample_weights=None): # pylint: disable=unused-argument
--> 535 return model.predict_on_batch(x)
536
537 f = predict_on_batch

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in predict_on_batch(self, x)
1142 # Validate and standardize user data.
1143 inputs, _, _ = self._standardize_user_data(
-> 1144 x, extract_tensors_from_dataset=True)
1145 # If self._distribution_strategy is True, then we are in a replica context
1146 # at this point.

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2470 feed_input_shapes,
2471 check_batch_axis=False, # Don't enforce the batch size.
-> 2472 exception_prefix='input')
2473
2474 # Get typespecs for the input data and sanitize it if necessary.

~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
572 ': expected ' + names[i] + ' to have shape ' +
573 str(shape) + ' but got array with shape ' +
--> 574 str(data_shape))
575 return data
576

ValueError: Error when checking input: expected input_1 to have shape (160, 160, 1) but got array with shape (512, 512, 1)

I assume this is because the model was trained on 160x160 pixel images, while I would like to predict on 512x512 images. I have tried to resize the DenseNet model with Keras using the following methods:

from kerassurgeon.operations import delete_layer, insert_layer
my_input_tensor = Input(input_shape=(512, 512, 1))
from tfkerassurgeon import delete_layer, insert_layer
model = delete_layer(model.layers[0])
model = insert_layer(model.layers[0], my_input_tensor)
along with keras.add, model.layers, and model.inputs to no avail.

Is there an easy way to resize the model for inference/prediction? Or will I have to re-train the model on 512x512 images?

Thanks!

@jgraving (Owner)

Hi, currently the API only supports predicting on images that are the same size as the training images, but this will be changed in an upcoming update. However, this is not hard to do using the Keras API. Here is some code that should accomplish what you want:

from deepposekit.models import load_model
import numpy as np
import tensorflow as tf

model = load_model("/path/to/saved/model.h5")
predict_model = model.predict_model
predict_model.layers.pop(0)  # remove current input layer

# attach a new input layer with the desired image size
inputs = tf.keras.layers.Input((512, 512, 1))
outputs = predict_model(inputs)
predict_model = tf.keras.Model(inputs, outputs)

# test on a random batch of 512x512 images
x = np.random.randint(0, 255, (16, 512, 512, 1), dtype=np.uint8)
prediction = predict_model.predict(x, verbose=True)
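
The resized predict_model can then be used with the VideoReader just like before. A minimal sketch, assuming the same video path and batch size from your example:

from deepposekit.io import VideoReader

reader = VideoReader(HOME + '/Data/crop.mp4', batch_size=50, gray=True)
predictions = predict_model.predict(reader, verbose=1)
reader.close()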

@jgraving reopened this Nov 15, 2019
@jgraving changed the title from 'Changing model training/prediction size' to 'Predict on images that are different size than training data' Nov 15, 2019
@jgraving added the enhancement (New feature or request), pinned (Pinned as important), and question (Further information is requested) labels and removed the pinned label Nov 15, 2019
@PranavMaddula (Author) commented Nov 15, 2019

Thanks!
Setting predict_model = model.predict_model worked perfectly!

Another question, however:
Is there an easy way to change the dimensions of the posture overlay video generation step? Right now it only appears to work with frames that are the same size as the training set (the frame size of the DataGenerator).

I have tried writer = VideoWriter(HOME + '/Data/posture.mp4', (512*2, 512*2), 'MP4V', 30.0) to get the correct dimensions for the video I am aiming to process (in my case 512x512).
Doing this lets me successfully write a viewable video; however, the posture overlay does not appear.

I have also tried scaling the predictions with predictions *= 4.2*2, as my training data is 160x160 and my video is 512x512. I also changed resized_shape to resized_shape = (int(data_generator.image_shape[0]*3.2*2), int(data_generator.image_shape[1]*3.2*2)); however, this also does not generate the posture overlay on the video.

Any ideas or advice would be much appreciated!
Thanks!

@PranavMaddula (Author)

Update: I got it working now.
Scaling resized_shape and updating the writer parameters to the video size appears to have done it. Not sure why it did not work the first time I ran it, but it works now.
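
For reference, roughly what the working setup looks like in my case (160x160 training images, 512x512 video; the exact values are just my setup, not a general recipe):

scale = 512 / 160  # factor from the training image size to the video frame size (3.2)

# resize the overlay frames to the video resolution
resized_shape = (int(data_generator.image_shape[0] * scale),
                 int(data_generator.image_shape[1] * scale))

# writer dimensions match the resized frames, i.e. the video size
writer = VideoWriter(HOME + '/Data/posture.mp4', resized_shape, 'MP4V', 30.0)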

@PranavMaddula (Author)

This brings me to a new question: is it possible to scale already annotated images, so that images categorized as outliers can be added to the initial annotation set and the model can be easily retrained?

Thanks again!

@jgraving (Owner)

Hi, I'm not sure exactly what you mean, but currently all of the images in the training set must be the same size.
