Welcome to the SoccerNet Development Kit for the Action Spotting and Ball Action Spotting Tasks and Challenges. This kit helps you get started with the SoccerNet data and the proposed tasks. More information about the dataset can be found on our official website.
SoccerNet Action Spotting is part of the SoccerNet-v2 dataset, which is an extension of SoccerNet-v1 with new and challenging tasks including action spotting, camera shot segmentation with boundary detection, and a novel replay grounding task.
[New] In 2024, we renew the Ball Action Spotting Challenge with 12 classes: Pass, Drive, Header, High Pass, Out, Cross, Throw In, Shot, Ball Player Block, Player Successful Tackle, Free Kick, Goal. These events are much denser and require a higher level of spotting precision. Their density, as well as the subtle underlying movement of the ball and players, makes this new task even more challenging. For this challenge, you only have access to 7 annotated games, so you may want to explore different training paradigms such as transfer learning, unsupervised learning, or semi-supervised learning. Remember that you still have access to the 500 SoccerNet videos to help you.
[New] A Baseline code to get started on the 2024 Ball Action spotting Challenge is available here: https://github.com/recokick/ball-action-spotting. It is a fork of the amazing work of last year's Ball Action Spotting Winner, Ruslan Baikulov, adapted to the new classes. Go check out and star his original repo: https://github.com/lRomul/ball-action-spotting.
The Action Spotting dataset consists of 500 complete soccer games including:
- Full untrimmed broadcast videos in both low and high resolution.
- Pre-computed features such as ResNet-152.
- Annotations of actions among 17 classes (Labels-v2.json).
The new Ball Action Spotting dataset consists of 7 complete soccer games including:
- Full untrimmed broadcast videos in both low and high resolution.
- Annotations of actions among 12 classes (Labels-ball.json).
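For reference, a label file can be parsed with a few lines of Python. This is only a minimal sketch: the field names below (`annotations`, `gameTime`, `label`, `position`) follow the usual Labels-v2.json layout, but you should double-check them against a downloaded file.

```python
import json

def load_annotations(path):
    """Read a SoccerNet label file and return a list of event dicts."""
    with open(path) as f:
        labels = json.load(f)
    events = []
    for ann in labels.get("annotations", []):
        # "gameTime" looks like "1 - 00:31": half number, then clock time.
        half, _, clock = ann["gameTime"].partition(" - ")
        events.append({
            "half": int(half),
            "clock": clock,                       # "MM:SS" within the half
            "label": ann["label"],                # e.g. "Goal" or "Pass"
            "position_ms": int(ann["position"]),  # timestamp in milliseconds
        })
    return events
```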
Participate in our upcoming challenges at the CVSports Workshop at CVPR 2024! All details are provided on our evaluation server for the ball action spotting task, or on the main SoccerNet page.
The participation deadline is May 30, 2024. The official rules and guidelines are available in ChallengeRules.md.
Team | Average-mAP (tight) | Shown only (tight) | Unshown only (tight) | Average-mAP (loose) | Shown only (loose) | Unshown only (loose) |
---|---|---|---|---|---|---|
SDU_VSISLAB | 71.31 | 76.29 | 54.09 | 78.56 | 81.67 | 69.13 |
mt_player | 71.1 | 77.22 | 58.5 | 78.79 | 82.02 | 77.62 |
ASTRA (ASBY193) | 70.1 | 75.0 | 57.98 | 79.21 | 81.69 | 75.36 |
team_ws_action | 69.17 | 75.18 | 59.12 | 76.95 | 80.39 | 75.92 |
CEA LVA | 68.38 | 74.79 | 47.68 | 73.98 | 78.57 | 61.75 |
Baseline (Yahoo) | 68.33 | 73.22 | 60.88 | 78.06 | 80.58 | 78.32 |
DVP | 66.95 | 74.68 | 53.81 | 73.61 | 79.15 | 67.38 |
JAMY2 (AF_GRU) | 51.97 | 58.05 | 44.29 | 63.12 | 65.98 | 61.66 |
tyru (GRU_CALF) | 51.38 | 57.5 | 41.82 | 62.88 | 66.3 | 56.57 |
JAMY (LocPoint) | 45.83 | 49.68 | 45.71 | 61.8 | 64.23 | 63.48 |
test_YYQ | 12.73 | 14.13 | 11.21 | 54.21 | 58.75 | 48.55 |
Team | tight Avg-mAP (challenge) | tight Avg-mAP visible (challenge) | tight Avg-mAP unshown (challenge) | Avg-mAP (challenge) | Avg-mAP visible (challenge) | Avg-mAP unshown (challenge) |
---|---|---|---|---|---|---|
Yahoo Research | 67.81 | 72.84 | 60.17 | 78.05 | 80.61 | 78.05 |
PTS | 66.73 | 74.84 | 53.21 | 73.62 | 79.16 | 67.42 |
AS&RG | 64.88 | 70.31 | 53.03 | 72.83 | 76.08 | 72.35 |
mt_sdu_action | 62.26 | 67.48 | 45.04 | 69.86 | 73.81 | 59.15 |
Rkrystal | 61.84 | 67.39 | 48.71 | 74.75 | 78.29 | 69.02 |
arturxe | 60.56 | 65.75 | 53.00 | 71.72 | 75.15 | 69.91 |
cihe | 59.97 | 64.51 | 53.80 | 72.95 | 76.29 | 71.95 |
GUC | 58.71 | 63.70 | 51.86 | 70.49 | 73.46 | 70.11 |
abcdefg | 56.07 | 62.97 | 46.51 | 67.88 | 72.54 | 66.37 |
intro-and inter | 53.97 | 60.04 | 47.52 | 67.75 | 71.16 | 70.12 |
memory | 53.03 | 57.94 | 43.16 | 67.15 | 69.20 | 68.28 |
stargazer | 52.04 | 60.18 | 32.06 | 60.86 | 66.64 | 48.46 |
heaven | 51.85 | 59.85 | 31.62 | 60.88 | 66.67 | 48.45 |
lczazu | 49.56 | 56.82 | 31.60 | 60.86 | 66.56 | 48.51 |
Baseline* | 49.56* | 54.42 | 45.42 | 74.84 | 78.58 | 71.52 |
zqing | 47.54 | 51.75 | 41.65 | 66.66 | 69.06 | 67.17 |
welkin | 42.74 | 49.91 | 20.67 | 50.90 | 56.48 | 35.38 |
DUT | 40.65 | 43.87 | 43.10 | 68.40 | 71.68 | 68.53 |
sshinde5 | 36.71 | 39.33 | 21.26 | 51.36 | 55.29 | 35.34 |
SIT | 21.60 | 26.55 | 16.83 | 29.92 | 34.92 | 25.22 |
This table summarizes the current performances on the 2021 Action Spotting Challenge. For the leaderboard on the 2022 challenge, please visit the EvalAI test and challenge leaderboards.
Model | tight Avg-mAP (challenge) | Avg-mAP (challenge) | tight Avg-mAP (test) | Avg-mAP (test) |
---|---|---|---|---|
Baidu Research | 49.56% | 74.84% | 47.05% | 73.77% |
OPPO | 46.17% | 64.73% | NA | NA |
NetVLAD++ with Baidu features | 43.99% | 74.63% | NA | NA |
AImageLab-RMS | 27.69% | 60.92% | 28.83% | 63.49% |
IdealCat | 26.47% | 54.24% | NA | NA |
CALF-calibration | 15.83% | 46.39% | NA | 46.80% |
CALF | 15.33% | 42.22% | 14.10% | 41.61% |
NetVLAD++ | 9.91% | 52.54% | 11.51% | 53.40% |
straw | 7.39% | 51.65% | 5.92% | 49.78% |
NetVLAD | 4.31% | 30.74% | 4.20% | 31.37% |
This table summarizes the current performances of published methods only. Last update January 2022.
Model | tight Avg-mAP (challenge) | Avg-mAP (challenge) | tight Avg-mAP (test) | Avg-mAP (test) |
---|---|---|---|---|
Baidu Research | 49.56% | 74.84% | 47.05% | 73.77% |
NetVLAD++ with Baidu features | 43.99% | 74.63% | NA | NA |
AImageLab-RMS | 27.69% | 60.92% | 28.83% | 63.49% |
CALF-calibration | 15.83% | 46.39% | NA | 46.80% |
CALF | 15.33% | 42.22% | NA | NA |
NetVLAD++ | 9.91% | 52.54% | 11.51% | 53.40% |
NetVLAD | 4.31% | 30.74% | 4.20% | 31.37% |
AudioVid | NA | NA | NA | 39.9% |
MaxPool | NA | NA | NA | 18.6% |
Team | mAP@1 | mAP@2 | mAP@3 | mAP@4 | mAP@5 | Average-mAP (tight) |
---|---|---|---|---|---|---|
Ruslan Baikulov | 86.47 | 87.98 | 88.28 | 88.18 | 87.95 | 87.91 |
FDL@ZLab | 83.39 | 85.19 | 85.81 | 86.0 | 86.19 | 85.45 |
BASIK | 82.06 | 83.39 | 83.86 | 84.04 | 83.91 | 83.57 |
FC Pixel Nets | 81.89 | 83.22 | 83.97 | 83.85 | 84.02 | 83.5 |
play | 79.74 | 82.58 | 84.06 | 84.49 | 84.34 | 83.29 |
Baseline (PTS) | 62.72 | 69.24 | 72.57 | 74.29 | 74.8 | 71.21 |
A SoccerNet pip package to easily download the data and the annotations is available.
To install the pip package simply run:
pip install SoccerNet
Then use the API to download the data of interest:
from SoccerNet.Downloader import SoccerNetDownloader
mySoccerNetDownloader = SoccerNetDownloader(LocalDirectory="/path/to/SoccerNet")
mySoccerNetDownloader.downloadGames(files=["Labels-v2.json"], split=["train","valid","test"])
If you want to download the videos, you will need to fill in an NDA to get the password.
mySoccerNetDownloader.password = input("Password for videos?:\n")
mySoccerNetDownloader.downloadGames(files=["1_224p.mkv", "2_224p.mkv"], split=["train","valid","test","challenge"])
mySoccerNetDownloader.downloadGames(files=["1_720p.mkv", "2_720p.mkv", "video.ini"], split=["train","valid","test","challenge"])
We provide several pre-computed features, including ResNet (used for our benchmarks) and the features from Baidu Research, last year's winners. Check out our pip package documentation for more features.
mySoccerNetDownloader.downloadGames(files=["1_ResNET_TF2_PCA512.npy", "2_ResNET_TF2_PCA512.npy"], split=["train","valid","test","challenge"])
mySoccerNetDownloader.downloadGames(files=["1_baidu_soccer_embeddings.npy", "2_baidu_soccer_embeddings.npy"], split=["train","valid","test","challenge"])
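Once downloaded, each half's features load as a plain NumPy array. The sketch below assumes a `(num_frames, feature_dim)` layout sampled at 2 frames per second (check the SoccerNet documentation for the exact frame rate of each feature type); the array here is synthetic, standing in for a real `np.load("1_ResNET_TF2_PCA512.npy")`.

```python
import numpy as np

def frame_to_seconds(frame_idx, fps=2.0):
    """Map a feature-frame index back to seconds into the half."""
    return frame_idx / fps

def clip_features(features, start_s, end_s, fps=2.0):
    """Slice the feature matrix for a [start_s, end_s) window of the half."""
    return features[int(start_s * fps):int(end_s * fps)]

# Stand-in for np.load("1_ResNET_TF2_PCA512.npy") on a real game:
half1 = np.random.rand(5400, 512)       # 45 minutes at 2 fps
window = clip_features(half1, 60, 90)   # features for minute 1:00-1:30
print(window.shape)                     # (60, 512)
```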
The easiest way to download and prepare the data is to use the following Python script.
python Download/download_ball_data.py --dataset_dir path/to/SoccerNet --password_videos "password_received_from_NDA"
Alternatively, you may use the same pip package as for SoccerNet-v2.
The following code will download the videos and annotations (except annotations on the challenge set) in separate zip files.
from SoccerNet.Downloader import SoccerNetDownloader as SNdl
mySNdl = SNdl(LocalDirectory="path/to/SoccerNet")
mySNdl.downloadDataTask(task="spotting-ball-2024", split=["train", "valid", "test", "challenge"], password=<PW_FROM_NDA>)
Note that you may have to extract and merge the zip files yourself if you choose the second option.
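If you go that route, a small helper can extract every archive into one directory tree. This is only a sketch: the glob pattern is an assumption, so adjust it to match the archive names the downloader actually produced.

```python
import zipfile
from pathlib import Path

def extract_all(download_dir, target_dir):
    """Extract every zip in download_dir into a single directory tree."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    # Adjust the pattern to the archive names you actually received.
    for archive in sorted(Path(download_dir).glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            # The archives share a game/half layout, so extracting them
            # into the same target merges them.
            zf.extractall(target)
```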
As this was one of the most requested features for SoccerNet-v1, this repository provides functions to automatically extract ResNet-152 features and compute the PCA on your own broadcast videos. These functions let you test pre-trained action spotting, camera segmentation, or replay grounding models on your own games.
The functions to extract the video features can be found in the Features folder.
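To illustrate the PCA step, here is a minimal NumPy sketch of projecting high-dimensional frame features down to a compact representation. The real pipeline lives in the Features folder; the sizes and component counts here are only examples.

```python
import numpy as np

def fit_pca(features, n_components=512):
    """Fit PCA via SVD of the centered feature matrix."""
    mean = features.mean(axis=0)
    # Rows of vt are the principal directions, sorted by variance.
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:n_components].T  # (dim, n_components) projection

def apply_pca(features, mean, components):
    """Project features onto the fitted principal components."""
    return (features - mean) @ components

# Example: reduce 2048-d ResNet activations to 512-d.
train = np.random.rand(600, 2048)
mean, comps = fit_pca(train, n_components=512)
reduced = apply_pca(train, mean, comps)
print(reduced.shape)  # (600, 512)
```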
This repository contains several benchmarks for action spotting, presented in the SoccerNet-v2 paper or subsequent papers. You can use this code to build upon our methods and improve their performance.
The benchmark to beat this year is the E2E-Spot method published at ECCV 2022 by J. Hong et al., trained with default parameters and 25fps videos. This method arrived second in the 2022 Action Spotting Challenge. The code will be updated on their GitHub repository for anyone to reproduce the baseline results and improve on them.
This repository and the pip package provide evaluation functions for the proposed tasks based on predictions saved in the JSON format. See the Evaluation folder of this repository for more details.
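As an illustration, predictions can be dumped per game as a JSON file. The field names below (`UrlLocal`, `predictions`, `gameTime`, `position`, `confidence`) follow the usual SoccerNet results format, but verify them against the Evaluation folder before submitting.

```python
import json

def save_predictions(game_url, spots, out_path):
    """Write spotting predictions for one game.

    spots: list of (half, position_ms, label, confidence) tuples.
    """
    preds = []
    for half, position_ms, label, conf in spots:
        seconds = position_ms // 1000
        preds.append({
            # "gameTime" mirrors the label format: "half - MM:SS".
            "gameTime": f"{half} - {seconds // 60:02d}:{seconds % 60:02d}",
            "label": label,
            "position": str(position_ms),
            "confidence": str(conf),
        })
    with open(out_path, "w") as f:
        json.dump({"UrlLocal": game_url, "predictions": preds}, f, indent=4)
```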
Finally, this repository provides the annotation tool used to annotate the actions. This tool can also be used to visualize the annotations. Please follow the instructions in the dedicated folder for more details.
Check out our other challenges related to SoccerNet!
- Action Spotting
- Replay Grounding
- Calibration
- Re-Identification
- Tracking
- Dense Video Captioning
- Jersey Number Recognition
For further information check out the paper and supplementary material: https://openaccess.thecvf.com/content/CVPR2021W/CVSports/papers/Deliege_SoccerNet-v2_A_Dataset_and_Benchmarks_for_Holistic_Understanding_of_Broadcast_CVPRW_2021_paper.pdf
Please cite our work if you use our dataset:
@InProceedings{Deliege2020SoccerNetv2,
title={SoccerNet-v2 : A Dataset and Benchmarks for Holistic Understanding of Broadcast Soccer Videos},
author={Adrien Deliège and Anthony Cioppa and Silvio Giancola and Meisam J. Seikavandi and Jacob V. Dueholm and Kamal Nasrollahi and Bernard Ghanem and Thomas B. Moeslund and Marc Van Droogenbroeck},
year={2021},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
}
For further information about the challenge, check out the paper and supplementary material: https://arxiv.org/abs/2210.02365
@inproceedings{Giancola_2022,
doi = {10.1145/3552437.3558545},
url = {https://doi.org/10.1145%2F3552437.3558545},
year = 2022,
month = {oct},
publisher = {{ACM}},
author = {Silvio Giancola and Anthony Cioppa and Adrien Deli{\`{e}}ge and Floriane Magera and Vladimir Somers and Le Kang and Xin Zhou and Olivier Barnich and Christophe De Vleeschouwer and Alexandre Alahi and Bernard Ghanem and Marc Van Droogenbroeck and Abdulrahman Darwish and Adrien Maglo and Albert Clap{\'{e}}s and Andreas Luyts and Andrei Boiarov and Artur Xarles and Astrid Orcesi and Avijit Shah and Baoyu Fan and Bharath Comandur and Chen Chen and Chen Zhang and Chen Zhao and Chengzhi Lin and Cheuk-Yiu Chan and Chun Chuen Hui and Dengjie Li and Fan Yang and Fan Liang and Fang Da and Feng Yan and Fufu Yu and Guanshuo Wang and H. Anthony Chan and He Zhu and Hongwei Kan and Jiaming Chu and Jianming Hu and Jianyang Gu and Jin Chen and Jo{\~{a}}o V. B. Soares and Jonas Theiner and Jorge De Corte and Jos{\'{e}} Henrique Brito and Jun Zhang and Junjie Li and Junwei Liang and Leqi Shen and Lin Ma and Lingchi Chen and Miguel Santos Marques and Mike Azatov and Nikita Kasatkin and Ning Wang and Qiong Jia and Quoc Cuong Pham and Ralph Ewerth and Ran Song and Rengang Li and Rikke Gade and Ruben Debien and Runze Zhang and Sangrok Lee and Sergio Escalera and Shan Jiang and Shigeyuki Odashima and Shimin Chen and Shoichi Masui and Shouhong Ding and Sin-wai Chan and Siyu Chen and Tallal El-Shabrawy and Tao He and Thomas B. Moeslund and Wan-Chi Siu and Wei Zhang and Wei Li and Xiangwei Wang and Xiao Tan and Xiaochuan Li and Xiaolin Wei and Xiaoqing Ye and Xing Liu and Xinying Wang and Yandong Guo and Yaqian Zhao and Yi Yu and Yingying Li and Yue He and Yujie Zhong and Zhenhua Guo and Zhiheng Li},
title = {{SoccerNet} 2022 Challenges Results},
booktitle = {Proceedings of the 5th International {ACM} Workshop on Multimedia Content Analysis in Sports}
}