Comparative Analysis of Lightweight Face Detection Models for Edge Devices

This repository contains the codebase and documentation for a comparative analysis of various lightweight face detection models optimized for real-time detection on edge devices. The study evaluates the models based on their accuracy, speed, and performance across different poses and environments, using the WIDER FACE dataset.

Overview

Face detection is a fundamental task in computer vision, with applications in security, robotics, and human-computer interaction. Deploying these solutions on edge devices requires models that are both efficient and accurate. This project investigates and benchmarks several lightweight face detection models to determine the best options for such constrained environments.

Models Evaluated

The following models are included in this benchmark:

  • Haar Cascade - A classic machine learning-based approach using handcrafted features. Although lightweight, it struggles with pose variations and complex lighting.

  • MediaPipe BlazeFace - An SSD-based model optimized for mobile and edge devices, trained specifically on selfie images for fast and accurate detection.

  • MediaPipe Holistic - A comprehensive solution integrating facial landmarks, pose estimation, and hand tracking, offering more context but at a higher computational cost.

  • MobileNet SSD - A deep learning model leveraging depthwise separable convolutions for efficient detection on mobile platforms.

  • YOLOv8 Nano - A compact version of the YOLO framework designed for real-time, high-accuracy detection on resource-constrained devices.
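
Because the benchmarked models come from different libraries, a comparison like this typically runs them behind one common interface. The sketch below is a minimal illustration of that idea; the `FaceDetector` class and method names are assumptions for illustration, not the repository's actual API, and a dummy detector stands in for a real model:

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

# A bounding box as (x, y, width, height) in pixels.
Box = Tuple[int, int, int, int]

class FaceDetector(ABC):
    """Common interface each benchmarked model could implement."""

    @abstractmethod
    def detect(self, image) -> List[Box]:
        """Return all face bounding boxes found in the image."""

class DummyDetector(FaceDetector):
    """Stand-in detector that always reports one fixed box."""

    def detect(self, image) -> List[Box]:
        return [(10, 10, 50, 50)]

# The benchmark loop then only depends on the interface,
# so models can be swapped without changing evaluation code.
detector: FaceDetector = DummyDetector()
boxes = detector.detect(image=None)
print(boxes)  # [(10, 10, 50, 50)]
```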

Dataset

The evaluation uses the WIDER FACE validation subset, consisting of 3,226 images with faces in varied poses, lighting, and environments. This dataset provides a diverse range of conditions for testing model robustness and performance. The dataset is available at http://shuoyang1213.me/WIDERFACE/.
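
The WIDER FACE ground-truth files list, for each image, a relative path, a face count, and one line of numbers per face whose first four values are x, y, w, h. A minimal parser along those lines might look like the sketch below; the format details here are assumptions to verify against the downloaded files (in particular, entries with zero faces may still carry a dummy box line and would need extra handling):

```python
def parse_wider_annotations(text: str) -> dict:
    """Parse WIDER FACE-style annotations into {image_path: [(x, y, w, h), ...]}."""
    lines = [ln.strip() for ln in text.strip().splitlines()]
    annotations = {}
    i = 0
    while i < len(lines):
        path = lines[i]
        count = int(lines[i + 1])
        boxes = []
        for j in range(count):
            # Only the first four values are the bounding box;
            # the rest are per-face attributes (blur, occlusion, ...).
            x, y, w, h = map(int, lines[i + 2 + j].split()[:4])
            boxes.append((x, y, w, h))
        annotations[path] = boxes
        i += 2 + count
    return annotations

sample = """0--Parade/0_Parade_marchingband_1_465.jpg
2
345 211 4 4 2 0 0 0 0 0
331 126 5 8 0 0 0 0 0 0
"""
print(parse_wider_annotations(sample))
```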

Metrics

The models are evaluated based on the following metrics:

  • Average IoU (Intersection over Union): Measures the accuracy of the detected bounding box compared to the ground truth.

  • Mean Average Precision (mAP): Assesses detection accuracy across different IoU thresholds (0.5, 0.75, etc.).

  • Average Inference Time: The time taken for each model to process an image and output results, crucial for determining suitability for real-time applications.
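
For reference, IoU between two boxes in (x, y, width, height) form can be computed as below. This is a generic sketch of the standard formula, not necessarily the exact implementation used in this repository:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Corners of the intersection rectangle.
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 10, 10)))  # 25 / 175 ≈ 0.1429
```

A detection counts as correct at a given threshold (e.g. 0.5) when its IoU with a ground-truth box meets that threshold, which is how the mAP thresholds above come into play.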

Installation

To set up the environment and run the code, follow these steps:

1. Clone the repository:

   git clone https://github.com/AkinduID/Face-Detection-Model-Benchmark.git

2. Navigate to the project directory:

   cd Face-Detection-Model-Benchmark

3. Install the required dependencies:

   pip install -r requirements.txt

4. Download the validation dataset from the link in the Dataset section and place it in the dataset folder:

   +-- dataset
   |   +-- WIDER_val
   |   +-- wider_face_split

5. Run the evaluation:

   python main.py

Results

The results of the face detection model comparison are stored in the results folder. The key findings are summarized in the following comparison graphs, generated using Matplotlib:

  • Average IoU vs Model
  • Average Inference Time vs Model
  • Mean Average Precision vs Model

The current results were obtained on a laptop with the following specifications:

  • Processor - Intel i5-1135G7
  • RAM - 8GB
  • GPU - Intel Iris Xe 4GB
  • Operating System - Windows 11

Total execution of the main script took 45 minutes.

Contributing

Contributions are welcome! If you would like to add new models or improve the existing evaluation metrics, feel free to fork this repository and submit a pull request.

Future Work

  • Model Expansion: Incorporate additional face detection models to broaden the comparative analysis.
  • Metric Enrichment: Introduce further evaluation metrics, such as F1 score and confusion matrix, for a more comprehensive assessment.
  • Dataset Optimization: Explore techniques to reduce the dataset size while preserving its diversity, thereby improving computational efficiency.
  • User Experience Enhancement: Enhance the user-friendliness of the main program for easier operation and accessibility.
  • Platform Testing: Extend testing to include more platforms, such as high-end PCs and resource-constrained devices like Raspberry Pi, to understand the models' performance in various environments.
  • Hardware Resource Monitoring: Evaluate the usage of hardware resources (RAM, GPU, processor) for each model to provide insights into their computational efficiency and suitability for different devices
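
As a starting point for the hardware-monitoring item, wall-clock time and Python-level memory can already be tracked with the standard library alone; full RAM/GPU/CPU telemetry would need a package such as psutil or vendor tools. The `profile` helper below is a hypothetical sketch, not part of the current codebase:

```python
import time
import tracemalloc

def profile(fn, *args, **kwargs):
    """Run fn and report elapsed seconds and peak Python heap bytes."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    # Peak traced allocation size since tracemalloc.start().
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Example: profile a dummy "inference" workload.
result, elapsed, peak = profile(lambda n: sum(range(n)), 100_000)
print(f"{elapsed:.4f}s, peak {peak} bytes")
```

Note that tracemalloc only sees allocations made by the Python interpreter, so native buffers inside model runtimes would not be counted.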

References

This codebase builds upon initial work from the following repository: https://github.com/nodefluxio/face-detector-benchmark. Modifications were made to adapt the code to this project's needs, including the selection of evaluated models, evaluation metrics, and output formatting.
