Object tracking using OpenCV feature detector (detector) and descriptor extractor (descriptor) algorithms, with a GUI, for fun, tests and education.
Only detectors, descriptors and detector-descriptors integrated into OpenCV are used. Neural network detector-descriptors (such as R2D2, D2NET, SUPERPOINT, ORB-SLAM2, DELF, CONTEXTDESC, LFNET, KEYNET, DISK, etc.) and descriptors (such as TFEAT, HARDNET, GEODESC, SOSNET, L2NET, LOGPOLAR, etc.) are not considered.
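For example, ORB is one such integrated detector-descriptor. A minimal sketch of how such an algorithm is used (the file name is only a placeholder):

```python
import cv2

# Load a test image in grayscale (placeholder file name).
img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("object.png")

# ORB is both a keypoint detector and a descriptor extractor,
# so a single call returns keypoints and their binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

print(len(keypoints), "keypoints, descriptor array shape:", descriptors.shape)
```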
The object trackers in the application are listed in decreasing order of efficiency, and all of them are implemented rotation- and scale-invariant, except for “StarDetector + DAISY”, which is included for education and fun. This does not mean that the lower-ranked methods are always ineffective; they simply performed worse for this particular task (there is no "silver bullet" method that suits all tasks).
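For reference, this is roughly what the “StarDetector + DAISY” pairing looks like. Both classes live in the xfeatures2d module, so opencv-contrib-python is assumed; the file name is a placeholder:

```python
import cv2

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# StarDetector only finds keypoints and DAISY only computes descriptors,
# so the two stages are chained by hand.
detector = cv2.xfeatures2d.StarDetector_create()
extractor = cv2.xfeatures2d.DAISY_create()

keypoints = detector.detect(img, None)
keypoints, descriptors = extractor.compute(img, keypoints)
print(len(keypoints), "keypoints described by DAISY")
```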
All feature detector-descriptor logic is in the logic_extractor.py file. Snapshots, logs and configuration parameters are saved in the temp directory of this folder.
In general, the source code of the GUI is not as elegant as I would like, but it works :-).
A previous, simpler script is here: SIFT object tracking. The SIFT algorithm became patent-free in March 2020. The SURF algorithm is still patented and is excluded from current OpenCV builds; SURF is available only in OpenCV package version 3.4.2.16 and older.
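As a quick check that SIFT really is usable without the contrib ("non-free") package in recent OpenCV releases (4.4 and newer):

```python
import cv2

# SIFT moved out of the non-free module after the patent expired,
# so this works with a plain opencv-python install (4.4 or newer).
sift = cv2.SIFT_create()
print(cv2.__version__, type(sift).__name__)
```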
Tested on Windows 10 with Python 3.11.
External libraries:
- OpenCV to process images.
- NumPy to support arrays.
- Pillow to open images of various formats.
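A rough sketch of how these libraries fit together (the file name is a placeholder): Pillow opens an image of an arbitrary format, NumPy turns it into an array, and OpenCV processes it:

```python
import cv2
import numpy as np
from PIL import Image

# Pillow reads many image formats; convert to a NumPy array so OpenCV can use it.
pil_img = Image.open("snapshot.webp").convert("RGB")        # placeholder file name
frame = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)  # Pillow is RGB, OpenCV is BGR

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
print("color:", frame.shape, "gray:", gray.shape)
```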
To start:
# Install additional libraries
pip install -r requirements.txt
# Run the application
python runme.py
Usage:
- Open the GUI:
python runme.py
- Place the object in front of the web camera so that it takes up all the visible space.
- Press the Get snapshot button. The application will take a snapshot of the object to track.
- After the snapshot is taken, a red rectangle is drawn around the tracked object and green lines connect matching keypoints of the image (see the sketch below).
A rectangular object, such as a book, is tracked better than a face.
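Roughly, the red rectangle and the green lines come from feature matching plus a homography. A minimal sketch of that idea, using ORB and a brute-force matcher as stand-ins (the real application lets you pick any of the bundled detectors/descriptors; the file names are placeholders):

```python
import cv2
import numpy as np

# Placeholder file names: the stored snapshot and a later camera frame.
snapshot = cv2.imread("snapshot.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(snapshot, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Match binary descriptors and keep the 50 closest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

# Estimate a snapshot-to-frame homography with RANSAC ...
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# ... project the snapshot border into the frame (the "red rectangle") ...
h, w = snapshot.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
box = cv2.perspectiveTransform(corners, H)

# ... and draw green lines between matched keypoints in a side-by-side view.
vis = cv2.drawMatches(snapshot, kp1, frame, kp2, matches, None, matchColor=(0, 255, 0))
cv2.polylines(vis, [np.int32(box + (w, 0))], True, (0, 0, 255), 3)  # frame sits right of snapshot
cv2.imwrite("matches.png", vis)
```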
An MS PowerPoint presentation of the application is in the data subdirectory.