# Detecting Looted Archaeological Sites from Satellite Image Time Series

Elliot Vincent, Mehraïl Saroufim, Jonathan Chemla, Yves Ubelmann, Philippe Marquis, Jean Ponce, Mathieu Aubry
Official PyTorch implementation of *Detecting Looted Archaeological Sites from Satellite Image Time Series*. Check out our webpage for more details!
We introduce the DAFA Looted Sites dataset (DAFA-LS), a labeled multi-temporal remote sensing dataset containing 55,480 images acquired monthly over 8 years across 675 Afghan archaeological sites, including 135 sites looted during the acquisition period. DAFA-LS is an interesting playground to assess the performance of satellite image time series (SITS) classification methods on a real and important use case.
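To make the setting concrete: each site in DAFA-LS is a time series of roughly 96 monthly acquisitions (8 years), and a SITS classifier maps the whole series to a binary label (looted during the period or not). The sketch below is purely illustrative and is not the repository's data loader — the array shape, channel count, and threshold are assumptions made up for the example:

```python
import numpy as np

# Illustrative only: fake one site's series as a (T, C, H, W) array,
# i.e. T monthly images of C channels each (shapes are assumptions).
T, C, H, W = 96, 3, 64, 64
rng = np.random.default_rng(0)
series = rng.random((T, C, H, W), dtype=np.float32)

# Crude baseline for intuition: looting leaves ground disturbance, so
# threshold the mean absolute month-to-month change. Real methods in
# this repo (L-TAE, TSViT, ...) learn this mapping instead.
temporal_change = float(np.abs(np.diff(series, axis=0)).mean())
looted_pred = temporal_change > 0.3  # arbitrary threshold, illustration only

print(series.shape, looted_pred)
```

Any of the baselines listed below consumes data of this series-in, label-out form; they differ in how they encode the temporal and spatial dimensions.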
If you find this code useful, don't forget to star the repo ⭐.
```shell
git clone git@github.com:ElliotVincent/DAFA-LS.git --recursive
```
You can download the dataset using the commands below or by following this link (426M):

```shell
cd DAFA-LS
mkdir datasets
cd datasets
gdown 16v7_AcRwNeRhCacmQuX2477VYs51f4fU
unzip DAFA_LS.zip
cd ..
```

Pretrained backbone weights for DOFA [1], SatMAE [10] and Scale-MAE [11] can be downloaded with:

```shell
mkdir weights
cd weights
wget -O DOFA_ViT_base_e100.pth 'https://huggingface.co/XShadow/DOFA/resolve/main/DOFA_ViT_base_e100.pth?download=true'
wget https://zenodo.org/record/7369797/files/fmow_pretrain.pth
wget https://github.com/bair-climate-initiative/scale-mae/releases/download/base-800/scalemae-vitlarge-800.pth
cd ..
```
This implementation uses PyTorch. Create a virtual environment and install the requirements:

```shell
python3 -m venv dafals
source dafals/bin/activate
python3 -m pip install -r requirements.txt
```
If you are using the repository for the first time, create a `results` folder:

```shell
mkdir results
```
Now you can run the following command, replacing `<config_name>` by either `resnet` [2], `dofa` [1], `satmae` [10], `scalemae` [11], `ltae` [3], `tempcnn` [4], `duplo` [5], `transformer` [6], `utae` [7], `tsvit_cls` [8], `tsvit_seg` [8], `pse_ltae` [9], `dofa_ltae` [1,3], `satmae_ltae` [10,3] or `scalemae_ltae` [11,3], and replacing `<exp_name>` by the experiment name of your choice. Output files will be located at `results/<exp_name>/`.

```shell
PYTHONPATH=$PYTHONPATH:./src python src/trainer.py -t <exp_name> -c <config_name>.yaml
```
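To see how the placeholders expand, the small helper below (not part of the repository — `make_cmd` and the experiment names are made up for illustration) prints the concrete training command for a given experiment name and config:

```shell
# Illustrative helper: substitute <exp_name> and <config_name> into the
# command template above and print the result.
make_cmd () {
  local exp_name=$1 config_name=$2
  printf 'PYTHONPATH=$PYTHONPATH:./src python src/trainer.py -t %s -c %s.yaml\n' \
    "$exp_name" "$config_name"
}

make_cmd ltae_baseline ltae       # train the L-TAE baseline [3]
make_cmd my_tsvit_run tsvit_cls   # train TSViT classification [8]
```

Each printed command would write its outputs under `results/<exp_name>/`, e.g. `results/ltae_baseline/` for the first one.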
If you use our work in your project, please cite:

```
@article{vincent2024detecting,
  title = {Detecting Looted Archaeological Sites from Satellite Image Time Series},
  author = {Vincent, Elliot and Saroufim, Mehraïl and Chemla, Jonathan and Ubelmann, Yves and Marquis, Philippe and Ponce, Jean and Aubry, Mathieu},
  journal = {arXiv},
  year = {2024},
}
```
And if you use our dataset, please give proper attribution to Planet Labs:

```
@article{planet2024planet,
  author = {{Planet Team}},
  title = {{Planet Application Program Interface: In Space for Life on Earth (San Francisco, CA)}},
  journal = {\url{https://api.planet.com}},
  year = {2024}
}
```
[1] Z. Xiong et al. Neural plasticity-inspired foundation model for observing the Earth crossing modalities. (2024)
[2] K. He et al. Deep residual learning for image recognition. (2016)
[3] V. S. F. Garnot et al. Lightweight temporal self-attention for classifying satellite images time series. (2020)
[4] C. Pelletier et al. Temporal convolutional neural network for the classification of satellite image time series. (2019)
[5] R. Interdonato et al. DuPLO: A dual view point deep learning architecture for time series classification. (2019)
[6] M. Rußwurm et al. Self-attention for raw optical satellite time series classification. (2020)
[7] V. S. F. Garnot et al. Panoptic segmentation of satellite image time series with convolutional temporal attention networks. (2021)
[8] M. Tarasiou et al. ViTs for SITS: Vision transformers for satellite image time series. (2023)
[9] V. S. F. Garnot et al. Satellite image time series classification with pixel-set encoders and temporal self-attention. (2020)
[10] Y. Cong et al. SatMAE: Pre-training transformers for temporal and multi-spectral satellite imagery. (2022)
[11] C. Reed et al. Scale-MAE: A scale-aware masked autoencoder for multiscale geospatial representation learning. (2023)