Repository for my thesis work
"Improving Image‐sensor performance by developing on‐chip machine‐learning algorithms"
This repository holds all the code used for testing, prototyping, and creating the architecture and datasets for the project.
Images and graphs were created using the following sources.
- Illustrations: https://github.com/HarisIqbal88/PlotNeuralNet
- Images used for training and inference were taken from the CelebA dataset: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html (citation provided in the thesis)
For the project, an encoder-decoder style fully convolutional network was created and trained in a GAN arrangement.
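
The exact architecture is defined in the repository code; as a rough orientation, a minimal PyTorch sketch of an encoder-decoder generator paired with a convolutional discriminator is shown below. Layer counts, channel widths, and class names here are illustrative assumptions, not the thesis architecture itself.

```python
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Illustrative encoder-decoder FCN; depths and widths are assumptions."""
    def __init__(self, ch=3, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),          # H/2
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),    # H/4
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.ReLU(inplace=True) # H/8, bottleneck
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, ch, 4, stride=2, padding=1), nn.Tanh()
        )

    def forward(self, x):
        z = self.encoder(x)            # latent (bottleneck) feature map
        return self.decoder(z), z

class Discriminator(nn.Module):
    """Simple convolutional discriminator used in the GAN arrangement."""
    def __init__(self, ch=3, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1)   # per-patch real/fake scores
        )

    def forward(self, x):
        return self.net(x)
```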
Two objective functions were created for the project. The first was a local defect loss, which compared and penalized the model's output in the local area where the defect was located. This was necessary because the defects are small enough that the model often scores better by learning to ignore them when there is no extra local penalty. The second was a latent loss, intended to make the model retain high-frequency detail: the inpainter was penalized if its latent (bottleneck) layer differed from that of the same network trained as an autoencoder. This helped reduce the blurring sometimes introduced by regularization schemes such as spectral or batch normalization.
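
A minimal sketch of how these two objectives could be written in PyTorch is shown below. The function names, the mask convention, and the weighting term are assumptions for illustration; the thesis code defines the actual formulation.

```python
import torch
import torch.nn.functional as F

def local_defect_loss(output, target, defect_mask, weight=10.0):
    """L1 reconstruction error with an extra penalty inside the defect region.

    defect_mask: same shape as output, 1.0 where a synthetic defect was injected.
    The relative weighting is an illustrative assumption.
    """
    per_pixel = torch.abs(output - target)
    global_term = per_pixel.mean()
    local_term = (per_pixel * defect_mask).sum() / defect_mask.sum().clamp(min=1)
    return global_term + weight * local_term

def latent_loss(inpainter_latent, autoencoder_latent):
    """Penalize the inpainter when its bottleneck activations drift from those of
    the same network trained as an autoencoder on the clean input."""
    return F.mse_loss(inpainter_latent, autoencoder_latent.detach())
```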

For the project I developed my own algorithm to add synthetic defects onto almost any image. The intention was to create defects similar to those found in image sensors: defects could be white-out or black-out, they had a random gradient to simulate non-linearity, and they varied randomly in size and number. The results look like the image below.
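
The actual defect-injection algorithm lives in this repository; the NumPy sketch below only illustrates the idea described above (random number of defects, random size, white- or black-out polarity, and a random strength to mimic non-linearity). The parameter ranges and function name are assumptions.

```python
import numpy as np

def add_synthetic_defects(img, max_defects=20, max_size=4, rng=None):
    """Inject hot (white) or dead (black) pixel clusters into an image.

    img: float array in [0, 1], shape (H, W) or (H, W, C).
    Returns the defective image and a binary mask of the injected defects.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    h, w = out.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    for _ in range(int(rng.integers(1, max_defects + 1))):
        size = int(rng.integers(1, max_size + 1))
        y = int(rng.integers(0, h - size))
        x = int(rng.integers(0, w - size))
        polarity = 1.0 if rng.random() < 0.5 else 0.0   # white-out or black-out
        strength = rng.uniform(0.5, 1.0)                # random gradient / non-linearity
        out[y:y + size, x:x + size] = (
            (1 - strength) * out[y:y + size, x:x + size] + strength * polarity
        )
        mask[y:y + size, x:x + size] = 1.0
    return out, mask
```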

The project used the SSIM and PSNR metrics to give a quantitative measure of model performance. Below is a table showing several model prototypes and their performance, averaged over 500 images and compared to a more conventional averaging filter.
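
For reference, averaged SSIM and PSNR over a set of image pairs can be computed with scikit-image roughly as sketched below; the evaluation script in this repository may differ in details such as data range and color handling.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(model_outputs, ground_truths):
    """Average SSIM and PSNR over pairs of float images in [0, 1] (H, W, C)."""
    ssim_scores, psnr_scores = [], []
    for out, gt in zip(model_outputs, ground_truths):
        ssim_scores.append(structural_similarity(gt, out, data_range=1.0, channel_axis=-1))
        psnr_scores.append(peak_signal_noise_ratio(gt, out, data_range=1.0))
    return float(np.mean(ssim_scores)), float(np.mean(psnr_scores))
```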

Below are some subjective (visual) results of the different models compared to the ground truth.


