Cloud Segmentation on satellite imagery from the Sentinel-2 mission.

Problem Description

To obtain reliable analytical results from multi-spectral satellite imagery, clouds must be detected precisely and masked out, because they obscure important ground-level features and complicate the use of the imagery in a wide variety of applications, from disaster management and recovery to agriculture to military intelligence. Improving cloud-identification methods can therefore unlock an enormous range of satellite imagery use cases, enabling faster, more efficient, and more accurate image-based research.

Dataset

  • The challenge used publicly available satellite data from the Sentinel-2 mission, which provides wide-swath, high-resolution, multi-spectral imagery. Four images are associated with each chip, and each image captures light from a different range of wavelengths, or "band" (a loading sketch follows the band table below).
Band | Description          | Center wavelength
B02  | Blue visible light   | 497 nm
B03  | Green visible light  | 560 nm
B04  | Red visible light    | 665 nm
B08  | Near infrared light  | 835 nm
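
The sketch below shows one way to stack a chip's four bands into a single 4-channel array. The directory layout (one single-band GeoTIFF per band, named B02.tif through B08.tif) and the load_chip helper are assumptions for illustration, not the repository's actual data loader.

    from pathlib import Path

    import numpy as np
    import rasterio

    BANDS = ["B02", "B03", "B04", "B08"]  # blue, green, red, near infrared

    def load_chip(chip_dir):
        """Stack the four Sentinel-2 bands of one chip into a (4, H, W) float array."""
        layers = []
        for band in BANDS:
            with rasterio.open(Path(chip_dir) / f"{band}.tif") as src:
                layers.append(src.read(1).astype(np.float32))
        return np.stack(layers, axis=0)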

Getting Started

  • Create a new conda environment and install the dependencies:

      conda env create -n cloudncloud -f environment.yml
      conda activate cloudncloud
      pip install segmentation_models_pytorch
  • Set the number of epochs, batch size, optimizer, loss function, model, and the transformations applied to the data by editing config.py (see the training sketch after this list for how these pieces fit together).

  • To use Unet with inceptionv4 as the backbone:

    import segmentation_models_pytorch as smp

    model = smp.Unet(
        encoder_name="inceptionv4",
        in_channels=4,   # the four Sentinel-2 bands: B02, B03, B04, B08
        classes=2,
    )
  • To use DeepLabV3 with resnet101 as the backbone:

    model = smp.DeepLabV3(
        encoder_name="resnet101",
        in_channels=4,   # the four Sentinel-2 bands
        classes=2,
    )
  • The training and validation loops can be customised by editing the training code directly; a minimal sketch of such a loop is given below.
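
The sketch below shows how the pieces above (a segmentation_models_pytorch model, a loss function, an optimizer, and the epoch/batch settings from config.py) typically fit together in a training and validation loop. The toy random dataset, the CrossEntropyLoss choice for the 2-class output, and the hyperparameter values are illustrative assumptions; the repository's actual loop and config.py values may differ.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    import segmentation_models_pytorch as smp

    # Toy stand-ins for the real chip dataset so the sketch runs end to end.
    train_ds = TensorDataset(torch.randn(8, 4, 128, 128), torch.randint(0, 2, (8, 128, 128)))
    val_ds = TensorDataset(torch.randn(4, 4, 128, 128), torch.randint(0, 2, (4, 128, 128)))
    train_loader = DataLoader(train_ds, batch_size=2, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=2)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = smp.Unet(
        encoder_name="inceptionv4",
        encoder_weights=None,   # skip the pretrained-weight download for this toy sketch
        in_channels=4,
        classes=2,
    ).to(device)
    criterion = torch.nn.CrossEntropyLoss()   # illustrative; the actual loss is set in config.py
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(2):                    # the real epoch count also lives in config.py
        model.train()
        for images, masks in train_loader:    # images: (B, 4, H, W), masks: (B, H, W)
            images, masks = images.to(device), masks.to(device).long()
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(
                criterion(model(x.to(device)), y.to(device).long()).item()
                for x, y in val_loader
            ) / len(val_loader)
        print(f"epoch {epoch}: validation loss {val_loss:.4f}")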

Results

Model Name                               | Public mIoU | Private mIoU
DeepLabV3Plus with ResNet101 as backbone | 0.8805      | 0.8775
Unet with InceptionV4 as backbone        | 0.8776      | 0.8749
DeepLabV3 with ResNet101 as backbone     | 0.8299      | 0.8340

The best score was achieved by DeepLabV3Plus with a ResNet101 backbone.
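
The scores above are mean intersection-over-union (mIoU). The sketch below shows how IoU can be computed for a single predicted binary cloud mask; how the leaderboard aggregates IoU across chips is not specified here, so the averaging comment is only one common choice.

    import numpy as np

    def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
        """Intersection over union of two binary masks (1 = cloud, 0 = clear)."""
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        intersection = np.logical_and(pred, true).sum()
        union = np.logical_or(pred, true).sum()
        return float(intersection) / float(union) if union > 0 else 1.0

    # One common aggregation: average the per-chip IoU over the evaluation set.
    # scores = [iou(p, t) for p, t in zip(predictions, labels)]
    # miou = sum(scores) / len(scores)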

Each example figure shows one channel of the satellite image (first), the true label (second), and the prediction (third).

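The sketch below shows one way to produce such a side-by-side figure with matplotlib. The random placeholder arrays stand in for a loaded chip, its label mask, and the model's predicted mask.

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholders; in practice these come from a loaded chip, its ground-truth
    # mask, and the model's prediction.
    chip = np.random.rand(4, 512, 512)
    label = np.random.randint(0, 2, (512, 512))
    prediction = np.random.randint(0, 2, (512, 512))

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].imshow(chip[3], cmap="gray")   # one band, e.g. near infrared (B08)
    axes[0].set_title("Satellite image (one band)")
    axes[1].imshow(label, cmap="gray")
    axes[1].set_title("True label")
    axes[2].imshow(prediction, cmap="gray")
    axes[2].set_title("Prediction")
    for ax in axes:
        ax.axis("off")
    plt.tight_layout()
    plt.show()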

People

Vidit Agarwal: https://github.com/Viditagarwal7479
Vedant Kaushik: https://github.com/vedantk-b
Utkarsh Pandey: https://github.com/Kratos-is-here

About

This repository contains the first model tried on the Sentinel-2 cloud dataset for cloud segmentation. The model was a U-Net, the loss function was binary cross-entropy with logits, and IoU was used for validation. This model achieved an accuracy of 81.39% on the validation set.
