
Image segmentation - artifacts on the output classification image #76

Open
HardRock4Life opened this issue Feb 18, 2022 · 3 comments
Labels
question Further information is requested

Comments

@HardRock4Life

Hello!

I'm stuck with the image segmentation.

I've preprocessed the images, then extracted patches like this:

[screenshots of the preprocessing and patch extraction steps]

The CNN model training runs fine:

[screenshot of the training run]

For inference, I give a normalized image as the input layer. The final result I get is the following:

[screenshots of the output classification, showing artifacts]

As I understand it, the U-Net model is supposed to remove those artifacts in this piece of code?

def myModel(x):

  depth = 16
  
  # Encoding
  conv1   = _conv(x,        1*depth)         #  64 x 64 --> 32 x 32 (31 x 31)
  conv2   = _conv(conv1,    2*depth)         #  32 x 32 --> 16 x 16 (15 x 15)
  conv3   = _conv(conv2,    4*depth)         #  16 x 16 -->  8 x  8 ( 7 x  7)
  conv4   = _conv(conv3,    4*depth)         #   8 x  8 -->  4 x  4 ( 3 x  3)
  
  # Decoding (with skip connections)
  deconv1 = _dconv(conv4,           4*depth) #  4  x  4 -->  8 x  8 ( 5 x  5)
  deconv2 = _dconv(deconv1 + conv3, 2*depth) #  8  x  8 --> 16 x 16 ( 9 x  9)
  deconv3 = _dconv(deconv2 + conv2, 1*depth) # 16  x 16 --> 32 x 32 (17 x 17)
  deconv4 = _dconv(deconv3 + conv1, 1*depth) # 32  x 32 --> 64 x 64 (33 x 33)
  
  # Neurons for classes
  estimated = tf.layers.dense(inputs=deconv4, units=nclasses, activation=None)
  
  return estimated

Or should it be done differently? Thank you!
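For reference, the `_conv` / `_dconv` helpers aren't shown above. Here is a minimal sketch of what they might look like in this TF 1.x style; the kernel size, activation, and padding are my assumptions, and it is written against the `tf.compat.v1` API so it also runs under TF 2:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def _conv(inp, depth):
    # 3x3 convolution with stride 2 and ReLU: halves the spatial size,
    # matching the 64 -> 32 -> 16 -> 8 -> 4 progression in the comments above
    return tf.layers.conv2d(inp, depth, 3, strides=2,
                            activation=tf.nn.relu, padding="same")

def _dconv(inp, depth):
    # 3x3 transposed convolution with stride 2 and ReLU: doubles it back
    return tf.layers.conv2d_transpose(inp, depth, 3, strides=2,
                                      activation=tf.nn.relu, padding="same")

# Quick shape check on a 64x64 single-band patch
x = tf.placeholder(tf.float32, [None, 64, 64, 1])
y = _dconv(_conv(x, 16), 16)
print(y.shape.as_list())  # [None, 64, 64, 16]
```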

@HardRock4Life
Author

HardRock4Life commented Feb 18, 2022

It works.

Solution: don't forget to delete the existing models and re-run the model generation command!
Models created earlier are never overwritten, even when you give them the same name.
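A minimal sketch of that cleanup step (the directory name is a placeholder, not the tutorial's actual path): delete the stale model directory before re-running the generation command.

```python
import os
import shutil

# Hypothetical path: wherever your model generation command writes its output.
model_dir = "model1"

# Remove any previously generated model; otherwise the old one is kept
# even when the generation command is re-run with the same name.
if os.path.isdir(model_dir):
    shutil.rmtree(model_dir)

# ...then re-run the model generation command.
```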

@HardRock4Life
Author

OK, I'm reopening this issue, mostly to improve the current result.

Here is the preprocessed label image:
[preprocessed label image]

And an output classification from U-Net:

[U-Net output classification]

You can see that the network has classified the road as a building. Could this mean the model is overfitting?

@HardRock4Life HardRock4Life reopened this Feb 18, 2022
@remicres
Owner

Hi @HardRock4Life ,

Overfitting is when the network learns the training data perfectly but performs poorly on validation data. This can happen, for instance, when you don't have enough training data.
You can check the classification metrics of your network on the training and validation datasets to see whether overfitting is the problem (I cannot tell from the provided illustration whether that is the case).
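To make that check concrete, here is an illustrative sketch (the labels and threshold are made-up numbers, not from this issue): compute the same metric on both splits and look at the gap.

```python
import numpy as np

def accuracy(y_true, y_pred):
    # Fraction of pixels (or samples) classified correctly
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

# Hypothetical per-pixel labels: perfect on training, poor on validation.
train_acc = accuracy([0, 1, 1, 0], [0, 1, 1, 0])  # 1.0
val_acc   = accuracy([0, 1, 1, 0], [1, 1, 0, 0])  # 0.5

# A large gap between the two is a typical sign of overfitting
# (the 0.2 threshold is just an arbitrary illustration).
if train_acc - val_acc > 0.2:
    print("large train/val gap: possible overfitting")
```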

Generally, the more training samples you have, the better the result (provided the data is of good quality). The tutorial in the book is an introduction to semantic segmentation; in real life you will definitely want a large amount of training data to train the best model.

Maybe the model has mapped roads as "buildings" because it hasn't seen enough training examples containing roads?

@remicres remicres added the question Further information is requested label Feb 22, 2022