The dataset contains thermal images acquired in controlled indoor (c-indoor), semi-controlled indoor (s-indoor), and uncontrolled outdoor (u-outdoor) environments. The c-indoor dataset was constructed from our previously published SpeakingFaces dataset. The s-indoor and u-outdoor datasets were collected using the same FLIR T540 thermal camera with a resolution of 464×348 pixels, a waveband of 7.5–14 μm, a 24° field of view, and the iron color palette. The dataset was manually annotated with face bounding boxes and five-point facial landmarks (the centers of the right and left eyes, the tip of the nose, and the right and left outer corners of the mouth).
Environment | Subjects | Images | Labeled faces | Visual pair |
---|---|---|---|---|
c-indoor | 142 | 5,112 | 5,112 | yes |
s-indoor | 9 | 780 | 1,748 | yes |
u-outdoor | 15 | 4,090 | 9,649 | no |
combined | 147 | 9,982 | 16,509 | yes & no |
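The five-point annotation scheme described above can be modeled as a small record. Below is a minimal sketch in Python, assuming a whitespace-separated label line of 4 bounding-box values followed by 5 landmark pairs; the actual on-disk label format may differ, so adapt the field order to the files shipped with the dataset.

```python
# Hypothetical parser for one TFW annotation record: a face bounding box
# plus five facial landmarks (right eye, left eye, nose tip, right and
# left outer mouth corners). The field order here is an assumption.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FaceAnnotation:
    bbox: Tuple[float, float, float, float]  # x1, y1, x2, y2 in pixels
    landmarks: List[Tuple[float, float]]     # five (x, y) points

def parse_annotation(line: str) -> FaceAnnotation:
    """Parse 'x1 y1 x2 y2 lx1 ly1 ... lx5 ly5' into a FaceAnnotation."""
    vals = [float(v) for v in line.split()]
    assert len(vals) == 14, "expected 4 bbox values + 5 landmark pairs"
    bbox = tuple(vals[:4])
    landmarks = [(vals[i], vals[i + 1]) for i in range(4, 14, 2)]
    return FaceAnnotation(bbox=bbox, landmarks=landmarks)
```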
Examples of annotated images:
TFW: Annotated Thermal Faces in the Wild Dataset (Open Access)
TFW: Annotated Thermal Faces in the Wild Dataset (IEEE)
To clone the repository:

$ git clone https://github.com/IS2AI/TFW.git
You can download the dataset directly from Google Drive or send a request to get access to our server.
- To visualize the outdoor dataset:
python visualize_dataset.py --dataset PATH_TO_DATASET/TFW/train/ --set outdoor
python visualize_dataset.py --dataset PATH_TO_DATASET/TFW/test/ --set outdoor
python visualize_dataset.py --dataset PATH_TO_DATASET/TFW/val/ --set outdoor
- To visualize the indoor dataset:
python visualize_dataset.py --dataset PATH_TO_DATASET/TFW/train/ --set indoor
python visualize_dataset.py --dataset PATH_TO_DATASET/TFW/test/ --set indoor
python visualize_dataset.py --dataset PATH_TO_DATASET/TFW/val/ --set indoor
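Conceptually, the visualization script overlays each annotation on its image. The sketch below illustrates that idea with plain NumPy (a 1-pixel bounding-box outline and 3×3 landmark dots); it is an illustration only, not the actual internals of visualize_dataset.py.

```python
# Draw a face bounding box and landmark dots on an HxWx3 image array.
import numpy as np

def draw_annotation(img, bbox, landmarks, color=(255, 0, 0)):
    """Draw a 1-px bbox outline and 3x3 landmark dots in-place."""
    x1, y1, x2, y2 = [int(v) for v in bbox]
    img[y1:y2 + 1, [x1, x2]] = color          # left/right edges
    img[[y1, y2], x1:x2 + 1] = color          # top/bottom edges
    for lx, ly in landmarks:
        lx, ly = int(lx), int(ly)
        img[max(ly - 1, 0):ly + 2, max(lx - 1, 0):lx + 2] = color
    return img

canvas = np.zeros((348, 464, 3), dtype=np.uint8)   # FLIR T540 resolution
draw_annotation(canvas, (100, 80, 220, 240),
                [(140, 140), (180, 140), (160, 180), (145, 210), (175, 210)])
```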
First, convert the TFW dataset to the YOLO format using the dataset2yolo.ipynb notebook.
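The core of that conversion is mapping absolute corner coordinates (x1, y1, x2, y2) to YOLO's normalized (class, x_center, y_center, width, height). A minimal sketch, using the FLIR T540 frame size as the default; the function name is illustrative, not taken from the notebook:

```python
# Convert an absolute corner-format bbox to YOLO's normalized format.
def to_yolo(bbox, img_w=464, img_h=348, cls=0):
    x1, y1, x2, y2 = bbox
    xc = (x1 + x2) / 2 / img_w   # normalized box center
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w        # normalized box size
    h = (y2 - y1) / img_h
    return cls, round(xc, 6), round(yc, 6), round(w, 6), round(h, 6)

# A face box spanning the full frame maps to (0, 0.5, 0.5, 1.0, 1.0)
print(to_yolo((0, 0, 464, 348)))
```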
Then, follow these steps to train the YOLOv5 models on the TFW dataset:
- Clone the repository from GitHub and install the necessary packages:
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
- Copy our yolov5_tfw.yaml file into /yolov5/data and update the paths to the training and validation sets.
- Train the model on the TFW dataset (change --img-size to 832 for models with the P6 output block):
python train.py --data data/yolov5_tfw.yaml --cfg models/yolov5s.yaml --weights 'pretrained weights' --batch-size 64 --epochs 250 --img-size 800
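The data config follows the standard YOLOv5 dataset-YAML layout. A plausible sketch of yolov5_tfw.yaml, assuming a single face class; check the file shipped in this repository for the exact paths:

```yaml
# Sketch of a YOLOv5 data config for TFW -- paths are placeholders.
train: PATH_TO_DATASET/TFW/train/images
val: PATH_TO_DATASET/TFW/val/images

nc: 1            # one class
names: ['face']
```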
To train the YOLO5Face models on the TFW dataset:
- Clone the repository from GitHub and install the necessary packages:
$ git clone https://github.com/deepcam-cn/yolov5-face.git
$ cd yolov5-face
- Copy our yolov5_tfw.yaml file into /yolov5-face/data and update the paths to the training and validation sets.
- Train the model on the TFW dataset (change --img-size to 832 for models with the P6 output block):
python train.py --data data/yolov5_tfw.yaml --cfg models/yolov5s.yaml --weights 'pretrained weights' --batch-size 64 --epochs 250 --img-size 800
Model | Backbone | c-indoor AP50 | u-outdoor AP50 | Speed (ms), V100 b1 | Params (M) | FLOPs (G) @512×384 |
---|---|---|---|---|---|---|
YOLOv5n | CSPNet | 100 | 97.29 | 6.16 | 1.76 | 0.99 |
YOLOv5n6 | CSPNet | 100 | 95.79 | 8.18 | 3.09 | 1.02 |
YOLOv5s | CSPNet | 100 | 96.82 | 7.20 | 7.05 | 3.91 |
YOLOv5s6 | CSPNet | 100 | 96.83 | 9.05 | 12.31 | 3.88 |
YOLOv5m | CSPNet | 100 | 97.16 | 9.59 | 21.04 | 12.07 |
YOLOv5m6 | CSPNet | 100 | 97.10 | 12.11 | 35.25 | 11.76 |
YOLOv5l | CSPNet | 100 | 96.68 | 12.39 | 46.60 | 27.38 |
YOLOv5l6 | CSPNet | 100 | 96.29 | 15.73 | 76.16 | 110.2 |
YOLOv5n-Face | ShuffleNetv2 | 100 | 95.93 | 10.12 | 1.72 | 1.36 |
YOLOv5n6-Face | ShuffleNetv2 | 100 | 95.59 | 13.30 | 2.54 | 1.38 |
YOLOv5s-Face | CSPNet | 100 | 96.73 | 8.29 | 7.06 | 3.67 |
YOLOv5s6-Face | CSPNet | 100 | 96.36 | 10.86 | 12.37 | 3.75 |
YOLOv5m-Face | CSPNet | 100 | 95.32 | 11.01 | 21.04 | 11.58 |
YOLOv5m6-Face | CSPNet | 100 | 96.32 | 13.97 | 35.45 | 11.84 |
YOLOv5l-Face | CSPNet | 100 | 96.18 | 13.57 | 46.59 | 25.59 |
YOLOv5l6-Face | CSPNet | 100 | 95.76 | 17.29 | 76.67 | 113.2 |
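AP50 in the table above counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. For reference, a minimal IoU implementation for axis-aligned (x1, y1, x2, y2) boxes:

```python
# Intersection-over-union of two axis-aligned corner-format boxes.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Identical boxes -> IoU 1.0; a half-overlapping box -> IoU of 1/3
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))
```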
To use the pre-trained YOLOv5 models:
- Download the pre-trained models from Google Drive and unzip them in the yolov5 repository folder.
- Run detect.py in a terminal:
python detect.py --weights PATH_TO_MODEL --source PATH_TO_IMAGE --img-size 800
- The results are saved in runs/detect/exp.
To use the pre-trained YOLO5Face models:
- Download the pre-trained models from Google Drive and unzip them in the yolov5-face repository folder.
- Run detect_face.py in a terminal:
python detect_face.py --weights PATH_TO_MODEL --image PATH_TO_IMAGE --img-size 800
- The result is saved as result.jpg in ./yolov5-face/.
If you use the TFW dataset in your research, please cite our paper:

@ARTICLE{9781417,
author={Kuzdeuov, Askat and Aubakirova, Dana and Koishigarina, Darina and Varol, Huseyin Atakan},
journal={IEEE Transactions on Information Forensics and Security},
title={TFW: Annotated Thermal Faces in the Wild Dataset},
year={2022},
volume={17},
number={},
pages={2084-2094},
doi={10.1109/TIFS.2022.3177949}}