diff --git a/guide/14-deep-learning/how-superresolution-works.ipynb b/guide/14-deep-learning/how-superresolution-works.ipynb index 4238341924..20d9b50545 100644 --- a/guide/14-deep-learning/how-superresolution-works.ipynb +++ b/guide/14-deep-learning/how-superresolution-works.ipynb @@ -1 +1,145 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# How SuperResolution works?\n", "\n", "SuperResolution is an image transformation technique with the help of which we can improve the quality of image and recover high resolution image from a given low resolution image as shown in Figure 1. It allows us to remove the compression artifacts and transform the blurred images to sharper images by modifying the pixels."]}, {"cell_type": "markdown", "metadata": {}, "source": [""]}, {"cell_type": "markdown", "metadata": {"toc": true}, "source": ["

Table of Contents

\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["
Figure 1. Recovering high resolution image from low resolution
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["This model uses deep learning to add texture and detail to low resolution satellite imagery and turn it into higher resolution imagery. The model training requires pairs of high and low resolution imagery of the same area. In order to train the model, we only require high resolution imagery, and `prepare_data` in `arcgis.learn` will degrade the high resolution imagery in order to simulate low resolution image for training the model. "]}, {"cell_type": "markdown", "metadata": {}, "source": ["## Model Architecture\n", "\n", "\n", "\n", "
Figure 2. Overview of SuperResolution architecture [1]
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\n", "We are using Unet as our image transformation network and VGG-16 as our network for feature loss.\n", "\n", "1. **Image transformation network (Unet)**: This network is parameterized by weights and takes the input images, transforms them by modifying pixels and generate the output image. To learn about Unet, you can refer to our guide [How Unet works?](https://developers.arcgis.com/python/guide/how-unet-works/).\n", "\n", "2. **Loss Network (VGG-16)**: This network is pretrained on ImageNet data in which weights remain fixed during the training process. We use feature layers of this network to generate loss, which is known as perceptual loss.\n", " \n", "#### Perceptual Loss\n", "The model with per pixel loss alone try to match exactly each pixel of the generated and the target image. The two images look similar in perspective, but they might have different per-pixels values hence it gives a blurry kind of image. To improve on that, we use **Perceptual Loss**. It combines per pixel loss and the feature loss from the different layers of Loss Network, which captures both per pixel difference and high-level image feature representations extracted from pretrained CNN.\n", "\n", "In the whole process, the low resolution image is fed into the image transformation network, which does the prediction $\\hat{y}$ as a high resolution image. 
The predicted images $\\hat{y}$ and the ground truth images $y$ are then fed into the loss network, where the perceptual loss between the two images is calculated.\n"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## SuperResolution implementation in `arcgis.learn`"]}, {"cell_type": "markdown", "metadata": {}, "source": ["First, we have to create a databunch with prepare_data function in arcgis.learn\n", "\n", " data = arcgis.learn.prepare_data(path=r\"path/to/exported/data\", downsample_factor=4, dataset_type=\"superres\")\n", "\n", "The important parameters to be passed are:\n", "\n", "* The `path` to the Data directory. The directory should contain high resolution or both high and low resolution paired images.\n", "* The `downsample factor` to generate labels for training. It takes high resolution images and uses methods such as bilinear interpolation to reduce the size and degrade the quality of the image. For example: Image of dimensions 256x256 is converted to 64x64 with downsample factor of 4.\n", "\n", "We can then continue with basic arcgis.learn workflow. To learn more about the workflow of SuperResolution model, you can refer to the [sample notebook](https://developers.arcgis.com/python/sample-notebooks/increase-image-resolution-using-superresolution/).\n", "\n", "For more information about the API & model, please go to the [API reference](https://developers.arcgis.com/python/api-reference/arcgis.learn.toc.html)."]}, {"cell_type": "markdown", "metadata": {}, "source": ["## References"]}, {"cell_type": "markdown", "metadata": {}, "source": ["[1] J. Johnson, A. Alahi, and L. 
Fei-Fei, \u201cPerceptual losses for realtime style transfer and super-resolution\u201d, 2016; [arXiv:1603.08155](https://arxiv.org/abs/1603.08155)."]}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2"}, "toc": {"base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": {}, "toc_section_display": true, "toc_window_display": true}}, "nbformat": 4, "nbformat_minor": 4} \ No newline at end of file +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# How SuperResolution works?\n", + "\n", + "SuperResolution is an image transformation technique with the help of which we can improve the quality of image and recover high resolution image from a given low resolution image as shown in Figure 1. It allows us to remove the compression artifacts and transform the blurred images to sharper images by modifying the pixels." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "toc": true + }, + "source": [ + "

Table of Contents

\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
Figure 1. Recovering high resolution image from low resolution
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This model uses deep learning to add texture and detail to low resolution satellite imagery and turn it into higher resolution imagery. The model training requires pairs of high and low resolution imagery of the same area. In order to train the model, we only require high resolution imagery, and `prepare_data` in `arcgis.learn` will degrade the high resolution imagery in order to simulate low resolution image for training the model. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Model Architecture\n", + "\n", + "\n", + "\n", + "
Figure 2. Overview of SuperResolution architecture [1]
" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "We use Unet as our image transformation network and VGG-16 as the loss network for computing feature loss.\n", + "\n", + "1. **Image transformation network (Unet)**: This network is parameterized by weights; it takes the input images, transforms them by modifying pixels, and generates the output images. To learn about Unet, you can refer to our guide [How Unet works?](https://developers.arcgis.com/python/guide/how-unet-works/).\n", + "\n", + "2. **Loss Network (VGG-16)**: This network is pretrained on ImageNet, and its weights remain fixed during the training process. We use the feature layers of this network to compute the loss, known as perceptual loss.\n", + " \n", + "#### Perceptual Loss\n", + "A model trained with per pixel loss alone tries to match each pixel of the generated image exactly to the corresponding pixel of the target image. The two images may look perceptually similar while still differing in individual pixel values, so per pixel loss alone tends to produce blurry results. To improve on this, we use **Perceptual Loss**, which combines per pixel loss with feature losses from different layers of the loss network, capturing both per pixel differences and the high-level image feature representations extracted by the pretrained CNN.\n", + "\n", + "In the whole process, the low resolution image is fed into the image transformation network, which predicts $\\hat{y}$, a high resolution image. 
The predicted images $\\hat{y}$ and the ground truth images $y$ are then fed into the loss network, where the perceptual loss between the two images is calculated.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## SuperResolution implementation in `arcgis.learn`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, we create a databunch with the `prepare_data` function in `arcgis.learn`:\n", + "\n", + "    data = arcgis.learn.prepare_data(path=r\"path/to/exported/data\", downsample_factor=4)\n", + "\n", + "The important parameters to be passed are:\n", + "\n", + "* The `path` to the data directory. The directory should contain either high resolution images alone, or paired high and low resolution images.\n", + "* The `downsample_factor` used to generate labels for training. High resolution images are reduced in size and degraded in quality using methods such as bilinear interpolation. For example, an image of dimensions 256x256 is converted to 64x64 with a `downsample_factor` of 4.\n", + "\n", + "We can then continue with the basic `arcgis.learn` workflow. To learn more about the workflow of the SuperResolution model, you can refer to the [sample notebook](https://developers.arcgis.com/python/sample-notebooks/increase-image-resolution-using-superresolution/).\n", + "\n", + "For more information about the API and model, please go to the [API reference](https://developers.arcgis.com/python/api-reference/arcgis.learn.toc.html)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[1] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution”, 2016; [arXiv:1603.08155](https://arxiv.org/abs/1603.08155)." 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.8" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": true, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/samples/04_gis_analysts_data_scientists/increase-image-resolution-using-superresolution.ipynb b/samples/04_gis_analysts_data_scientists/increase-image-resolution-using-superresolution.ipynb index 702dfb09a5..163bff7fdb 100644 --- a/samples/04_gis_analysts_data_scientists/increase-image-resolution-using-superresolution.ipynb +++ b/samples/04_gis_analysts_data_scientists/increase-image-resolution-using-superresolution.ipynb @@ -281,8 +281,7 @@ "outputs": [], "source": [ "data = prepare_data(data_path, \n", - " batch_size=8, \n", - " dataset_type=\"superres\", \n", + " batch_size=8, \n", " downsample_factor=8)" ] }, @@ -839,7 +838,7 @@ "notebookRuntimeVersion": "" }, "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -853,7 +852,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.11" + "version": "3.11.8" }, "toc": { "base_numbering": 1,
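The perceptual loss described in the guide above combines a per pixel loss with a feature loss computed by a fixed loss network. As a rough intuition aid only (this is not the `arcgis.learn` or VGG-16 implementation; the "feature extractor" here is a toy adjacent-pixel-difference stand-in for edge-sensitive CNN layers, and all names are illustrative), the combination can be sketched on a 1-D "image" in plain Python:

```python
def pixel_loss(a, b):
    # Mean squared error over raw pixel values (the "per pixel" term).
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def feature_map(img):
    # Toy stand-in for a CNN feature layer: adjacent-pixel differences,
    # which respond to edges rather than absolute intensities.
    return [img[i + 1] - img[i] for i in range(len(img) - 1)]

def perceptual_loss(pred, target, w_pixel=1.0, w_feat=1.0):
    # Perceptual loss = weighted sum of the per pixel loss and the
    # loss between feature representations of the two images.
    return (w_pixel * pixel_loss(pred, target)
            + w_feat * pixel_loss(feature_map(pred), feature_map(target)))

target = [0, 0, 10, 10]   # sharp edge
pred = [0, 2, 8, 10]      # smoothed-out edge
print(pixel_loss(pred, target))       # 2.0
print(perceptual_loss(pred, target))  # 10.0
```

Note how the smoothed prediction has a modest per pixel error (2.0) but the feature term adds a much larger penalty (8.0) for blurring the edge, which is why training against the combined loss favors sharper outputs.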
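The `downsample_factor` degradation that `prepare_data` performs (e.g. 256x256 to 64x64 at a factor of 4) can likewise be illustrated with a minimal sketch. This is a toy block-average downsample on a plain 2-D list, not the actual `arcgis.learn` code, which may use bilinear interpolation and other degradations:

```python
def downsample(image, factor):
    """Shrink a 2-D grayscale image by averaging each factor x factor
    block, mimicking how a high resolution image is degraded to
    simulate a low resolution training input."""
    h, w = len(image), len(image[0])
    assert h % factor == 0 and w % factor == 0
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [image[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 256x256 image with downsample_factor=4 would become 64x64;
# here a small 8x8 gradient becomes 2x2.
hi_res = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
lo_res = downsample(hi_res, 4)
print(len(lo_res), len(lo_res[0]))  # 2 2
```

The model is then trained to invert this degradation, mapping the small, averaged image back to something close to the original.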