Important dates
- Challenge announcement: Jan 10, 2022
- Release of testing data: March 08, 2022 (extended from Feb 10, 2022)
- Leaderboard open: March 10, 2022 (extended from Feb 25, 2022)
- Challenge submission deadline [paper track]: March 28, 2022 (extended from March 10, 2022)
- Challenge submission deadline: May 31, 2022
- Winner announcement: June 05, 2022
Challenge winners
Challenge winner based on overall rank for all three datasets:
- USTC-IAT-United (1st place) [Video] [Report] [Presentation]
Affiliations: University of Science and Technology of China, PAII Inc.
Team Members: Jun Yu, Zhihong Wei, Mohan Jing, Zepeng Liu, Xiaohua Qi, Keda Lu, Liwen Zhang, Hao Chang, Hang Zhou
- Inyang (2nd place) [Video] [Report]
Affiliations: Xiamen University
Team Members: Liting Liu, Shenshen Du, Zhongpeng Cai, Shuoping Yang
- SIS Lab (3rd place) [Video] [Report]
Affiliations: University of South Florida
Team Members: Keval Doshi, Yasin Yilmaz
- HAVPR (Honorable mention, 2nd place in Kinetics-400P) [Video] [Report] [Presentation]
Affiliations: Wuhan University
Team Members: Zitao Gao, Yuwei Yin, Yuanzhong Liu, Zhigang Tu, Juefeng Xiao, Xiangyue Zhang
Challenge overview
This challenge invites participants from both academia and industry to develop robust activity recognition models, which will be tested for robustness against various perturbations.
Robustness will be evaluated based on each model's performance on test sets with natural corruptions and perturbations, including spatial corruptions, temporal corruptions, camera-related perturbations, and compression-related perturbations.
We will use a public leaderboard for this challenge, where participants can submit their solutions for automatic evaluation.
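The perturbation categories above can be illustrated with a minimal Python sketch. The function names, severity values, and the NumPy array representation of a video clip are illustrative assumptions, not the challenge's actual benchmark code:

```python
import numpy as np

def spatial_noise(clip: np.ndarray, severity: float = 0.1) -> np.ndarray:
    """Spatial corruption: add Gaussian pixel noise to every frame."""
    noisy = clip.astype(np.float64) + np.random.normal(0.0, severity * 255.0, clip.shape)
    return np.clip(noisy, 0, 255).astype(clip.dtype)

def temporal_freeze(clip: np.ndarray, freeze_prob: float = 0.3) -> np.ndarray:
    """Temporal corruption: randomly freeze frames (repeat the previous frame)."""
    out = clip.copy()
    for t in range(1, len(out)):
        if np.random.rand() < freeze_prob:
            out[t] = out[t - 1]
    return out

# A dummy 16-frame RGB clip at 112x112 resolution (frames, H, W, channels).
clip = np.random.randint(0, 256, (16, 112, 112, 3), dtype=np.uint8)
perturbed = temporal_freeze(spatial_noise(clip))
print(perturbed.shape)  # (16, 112, 112, 3)
```

Camera-related and compression-related perturbations (e.g. motion blur, codec artifacts) would follow the same pattern: a transform that maps a clean clip to a corrupted clip of the same shape.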
Task details
The challenge focuses on developing solutions that reduce the performance gap between the training set and real-world testing scenarios. The goal of this challenge is to promote methods that can handle the various types of perturbations and corruptions observed in real-world data.
The task involves recognition of activities on three different datasets: Kinetics-400, UCF-101, and HMDB-51. Participants will develop robust activity recognition models on these three datasets. These models will be evaluated on perturbed and corrupted samples based on the above-mentioned criteria, with the goal of testing a model's robustness against various natural, camera-related, and compression-related perturbations and corruptions. Participants can train using the training sets from the three datasets mentioned. We will provide a test set for each dataset, containing a full-set as well as a mini-set for faster evaluation.
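Since the overall winner is decided by rank across all three datasets, the aggregation could look like the sketch below. The mean-of-per-dataset-ranks scheme and all names here are assumptions for illustration; the organizers' exact aggregation may differ:

```python
# Hypothetical per-team accuracies on the three perturbed test sets.
accuracies = {
    "team_a": {"Kinetics-400P": 0.62, "UCF-101P": 0.81, "HMDB-51P": 0.55},
    "team_b": {"Kinetics-400P": 0.65, "UCF-101P": 0.78, "HMDB-51P": 0.57},
}
datasets = ["Kinetics-400P", "UCF-101P", "HMDB-51P"]

def overall_ranks(acc):
    """Assumed scheme: rank teams per dataset (1 = best), then average the ranks."""
    ranks = {team: 0.0 for team in acc}
    for d in datasets:
        ordered = sorted(acc, key=lambda t: acc[t][d], reverse=True)
        for r, team in enumerate(ordered, start=1):
            ranks[team] += r / len(datasets)
    return ranks

print(overall_ranks(accuracies))  # lower mean rank = better overall
```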
Dataset download
Training datasets can be downloaded from their original sources here:
- Kinetics-400 [Original]: Click here
- UCF-101 [Original]: Click here
- HMDB-51 [Original]: Click here
The perturbed test sets can be downloaded here:
- Kinetics-400P: Full set (724G). (Smaller Chunks)
- UCF-101P: Full set (22G)
- HMDB-51P: Full set (4.8G)
Evaluation
We will use existing activity recognition benchmark datasets for the evaluation: Kinetics-400, UCF-101, and HMDB-51. For testing, we will release a mini-set and a full-set for each of the three datasets, comprising modified data that includes perturbations and corruptions. Both the mini-set and the full-set can be submitted to the leaderboard for evaluation. The winners will be decided using the accuracy metric on the full-set.
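The accuracy metric itself is straightforward; below is a minimal sketch of top-1 clip accuracy (the function name and the toy scores are illustrative, not part of the official evaluation code):

```python
import numpy as np

def top1_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of clips whose highest-scoring class matches the ground-truth label."""
    return float(np.mean(np.argmax(scores, axis=1) == labels))

# Toy example: 3 clips, 3 classes.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 0, 0])
print(top1_accuracy(scores, labels))  # 2 of 3 correct
```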
Leaderboard is live HERE
The evaluation process and submission format is explained in detail in the leaderboard evaluation tab.
Challenge paper submission guidelines
Participants who wish to submit a paper for consideration under the challenge track should follow the guidelines below. Failure to adhere to these guidelines will result in rejection of the paper; however, the evaluation scores will still be considered for the non-paper track.
- The manuscript should follow the CVPR 2022 paper template. The paper can have a maximum of 8 pages (excluding references) on the above-mentioned topics. We encourage authors to submit 4-page papers. Please refer to the CVPR 2022 author kit for detailed formatting instructions.
- Submitted manuscripts should follow the double-blind policy of CVPR 2022.
- Dual submission to CVPR 2022 and ROSE 2022 is allowed; however, the manuscript must contain substantial original content that has not been submitted to any other workshop, conference, or journal.
- Submissions will be desk-rejected without review if they:
- violate the double-blind or dual-submission policy
- have more than 8 pages (excluding references)
- Submitted manuscripts will be peer reviewed under the double-blind policy of CVPR 2022. Submissions should be made online through the CMT submission system.
- Accepted papers will be available at the workshop webpage. They will also be made available in the main conference proceedings if the authors agree (only for full-length papers not published at CVPR 2022).
Tentative schedule
- March 08, 2022 (extended from Feb 10, 2022): Test data release
We will release the mini-set for the challenge track. The full-set will also be released shortly.
- March 10, 2022 (extended from Feb 25, 2022): Leaderboard open HERE
The evaluation server and leaderboard will be open to the public for challenge participation.
- March 28, 2022 (extended from March 10, 2022): Challenge submission deadline for paper track
Challenge participants who wish to submit a paper to the workshop should submit their final manuscript with scores by this date. Participants may continue to submit for the non-paper track until May 31, 2022, but those evaluations will not be considered for the paper.
- April 12, 2022 (extended from April 01, 2022): Notification to authors
Authors of submitted papers will be notified.
- April 18, 2022 (extended from April 08, 2022): Camera-ready deadline
Authors of selected papers should submit a well-formatted camera-ready paper by this deadline.
- May 31, 2022: Challenge submission deadline (non-paper track)
Participants in the non-paper track can submit final evaluation files by this date for challenge consideration.
- June 05, 2022: Challenge winner announcement
The highest-ranking participants will be asked to send their solutions and reports to the organizers.
- June 20, 2022: Workshop
Winners will present their solutions. Presentation duration will be announced later.
Join our mailing list for updates.
For any questions, please contact Yogesh Rawat [yogesh@crcv.ucf.edu] and
Vibhav Vineet [Vibhav.Vineet@microsoft.com].