Challenge

Challenge winners

Challenge winner based on overall rank across all three datasets:

Challenge overview

This challenge invites participants from both academia and industry to develop activity recognition models whose robustness will be tested against various perturbations.

Robustness will be evaluated based on the model's performance on a test set containing natural corruptions and perturbations. We will test model robustness against natural perturbations, including spatial corruptions, temporal corruptions, camera-related perturbations, and compression-related perturbations.
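
To make these perturbation families concrete, the sketch below applies one illustrative example from each of three families (spatial, temporal, and compression) to a video clip stored as a numpy array. The specific functions and parameter values are assumptions made for illustration, not the challenge's official corruption suite.

```python
# Illustrative (unofficial) examples of the perturbation families named above,
# applied to a uint8 video clip of shape (T, H, W, C).
import io

import numpy as np
from PIL import Image

def add_gaussian_noise(clip, sigma=15.0):
    """Spatial corruption: additive per-pixel Gaussian noise."""
    noisy = clip.astype(np.float32) + np.random.normal(0.0, sigma, clip.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def drop_frames(clip, drop_prob=0.25):
    """Temporal corruption: randomly freeze frames by repeating the previous one."""
    out = clip.copy()
    for t in range(1, len(out)):
        if np.random.rand() < drop_prob:
            out[t] = out[t - 1]
    return out

def jpeg_compress(clip, quality=10):
    """Compression perturbation: re-encode every frame as a low-quality JPEG."""
    frames = []
    for frame in clip:
        buf = io.BytesIO()
        Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        frames.append(np.array(Image.open(buf)))
    return np.stack(frames)

# Example: chain all three perturbations on a random 16-frame clip.
clip = np.random.randint(0, 256, size=(16, 112, 112, 3), dtype=np.uint8)
perturbed = jpeg_compress(drop_frames(add_gaussian_noise(clip)))
```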

We will use a public leaderboard for this challenge, where participants can submit their solutions for automatic evaluation.

Task details

The challenge focuses on developing solutions that reduce the performance gap between the training set and real-world testing scenarios. Its goal is to promote methods that can handle the various types of perturbations and corruptions observed in real-world data. The task involves recognition of activities on three datasets: Kinetics-400, UCF-101, and HMDB-51. Participants will develop robust activity recognition models on these three datasets. These models will be evaluated on perturbed and corrupted samples according to the criteria above, with the goal of testing a model's robustness against various natural, camera-related, and compression-related perturbations and corruptions. Participants can train using the training sets from the three datasets mentioned. We will provide a test set for each dataset, with both a full-set and a mini-set for faster evaluation.
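
As one illustration of the kind of method the task aims to promote, a model can be exposed to corrupted inputs during training through perturbation-based augmentation. The sketch below is one possible strategy, not an official baseline or a required approach; it reuses the illustrative perturbation functions from the earlier sketch.

```python
# A sketch of perturbation-based training augmentation (an assumed strategy,
# not an official baseline). Reuses the illustrative functions defined above.
import random

# Hypothetical pool built from the perturbations in the earlier sketch.
PERTURBATIONS = [add_gaussian_noise, drop_frames, jpeg_compress]

def augment_clip(clip, apply_prob=0.5):
    """With probability apply_prob, corrupt a training clip with a randomly
    chosen perturbation so the model learns corruption-tolerant features."""
    if random.random() < apply_prob:
        clip = random.choice(PERTURBATIONS)(clip)
    return clip
```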

Dataset download

Training datasets can be downloaded from their original sources here. The testing datasets will be used for evaluation purposes only and should not be used for training. All testing datasets can be downloaded from here:
Full testing dataset:
Mini testing dataset:

Evaluation

We will use existing activity recognition benchmark datasets for the evaluation: Kinetics-400, UCF-101, and HMDB-51. We will release a mini-set and a full-set for testing, each comprising modified data for all three datasets with perturbations and corruptions applied. Both the mini-set and the full-set can be submitted to the leaderboard for evaluation. The winners will be decided by the accuracy metric on the full-set.
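
For orientation only, the sketch below shows how top-1 accuracy could be aggregated over perturbation types; the model callable and the layout of the test data are assumptions, and the official procedure is the one described in the leaderboard evaluation tab.

```python
# A minimal sketch of top-1 accuracy aggregated over perturbation types.
# The `model` callable and the data layout are assumptions, not the
# challenge's official evaluation code.
import numpy as np

def top1_accuracy(model, clips, labels):
    """Fraction of clips whose highest-scoring class matches the true label."""
    preds = [int(np.argmax(model(clip))) for clip in clips]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))

def mean_robust_accuracy(model, perturbed_sets):
    """Average top-1 accuracy across all perturbation types in the test set."""
    accs = {name: top1_accuracy(model, clips, labels)
            for name, (clips, labels) in perturbed_sets.items()}
    return accs, float(np.mean(list(accs.values())))
```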

The leaderboard is live HERE.
The evaluation process and submission format are explained in detail in the leaderboard evaluation tab.

Challenge paper submission guidelines

Participants who wish to submit a paper for consideration in the challenge track should follow the given guidelines. Failure to adhere to these guidelines will result in rejection of the paper; however, the evaluation scores will still be considered for the non-paper track.

Tentative schedule

Join our mailing list for updates.

For any questions, please contact Yogesh Rawat [yogesh@crcv.ucf.edu] and
Vibhav Vineet [Vibhav.Vineet@microsoft.com].