Video Saliency Prediction Challenge 2024

Terms and Conditions

Video Saliency Prediction Challenge

These are the official rules (terms and conditions) that govern how the AIM 2024 Video Saliency Prediction Challenge will operate. This challenge will be referred to simply as the “challenge” or the “contest” throughout the rest of these rules, and may be referred to elsewhere (our webpage, our documentation, other publications) as the “AIM” benchmark, challenge, or contest.

In these rules, “we”, “our”, and “us” refer to the organizers (Artem Borisov alxbrc0@gmial.com) of the ECCV challenge, and “you” and “yourself” refer to an eligible contest participant.

Note that these official rules may change during the contest, up until the start of the final phase. If at any point during the contest a registered participant considers that they can no longer meet the eligibility criteria, or does not agree with changes to the official terms and conditions, it is the participant's responsibility to email the organizers to be removed from all records. Once the contest is over, no change is possible in the status of the registered participants and their entries.

Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

The goal of the contest is to predict saliency maps of an input video and the challenge is called Video Saliency Prediction Challenge.

Competition focus: a dataset customized to the specific needs of the competition will be provided. The videos broadly cover the use cases of saliency prediction methods. We will refer to this dataset and related materials as the AIM Dataset. The dataset is divided into training and test data. The goal is to achieve predictions with the best metric score against the reference (Ground Truth) saliency maps, which were collected from observers via crowdsourcing. Participants will not have access to the maps for the test data. Participants must provide an archive with the maps produced by their method on the test data, along with the code of their method, according to the template we provide (details in the “Participate” section); they will be ranked according to their performance on this data.
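The exact ranking metric is not specified in these terms, but scoring a prediction against a Ground Truth map typically uses standard saliency metrics such as the linear correlation coefficient (CC). As an illustration only (the function name and the choice of CC are our assumptions, not the official evaluation code), a per-frame score could be computed like this:

```python
import numpy as np

def pearson_cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson linear correlation coefficient (CC) between a predicted
    saliency map and a Ground Truth map of the same shape.

    Returns a value in [-1, 1]; 1.0 means a perfect linear match.
    NOTE: illustrative sketch only -- the official challenge metric and
    evaluation code may differ.
    """
    p = pred.astype(np.float64).ravel()
    g = gt.astype(np.float64).ravel()
    # Normalize each map to zero mean and unit variance; the small
    # epsilon guards against constant (zero-variance) maps.
    p = (p - p.mean()) / (p.std() + 1e-12)
    g = (g - g.mean()) / (g.std() + 1e-12)
    return float(np.mean(p * g))
```

A video-level score would then typically be the average of the per-frame scores over all frames of the test set.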

Tentative contest schedule

The registered participants will be notified by email if any changes are made to the schedule. The schedule is available on the “Overview” page.

Eligibility

You are eligible to register and compete in this contest only if you meet all the following requirements:

  • you are an individual or a team of people willing to contribute to the open tasks, and you agree to follow the rules of this contest

  • you are not an AIM challenge organizer or an employee of ECCV challenge organizers

  • you are not involved in any part of the administration and execution of this contest

  • you are not a first-degree relative, partner, household member of an employee or of an organizer of the ECCV challenge or a person involved in any part of the administration and execution of this contest

This contest is void wherever it is prohibited by law.

NOTE: industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, to be officially ranked on the final test leaderboard and to be eligible for awards, reproducibility of the results is a must; therefore, the participants need to make their codes or executables available and submit them. Since we do not retain the code after it is executed, it will need to be submitted again after the competition is over. All the top entries will be checked for reproducibility and marked accordingly.

Entry

To be eligible for judging, an entry must meet all the following requirements:

Entry contents: The participants are required to submit saliency prediction results on all videos from the test set. To be eligible for judging, the top-ranking participants must publicly release their code or executables under a license of their choice, chosen from among the popular OSI-approved licenses, and keep their code or executables accessible online for at least one year following the end of the challenge (this applies only to the top ten ranked participants of the competition). All participants are also invited (but not required) to submit a paper for peer review and publication at the ECCV Workshop.

Use of data provided: All data provided by AIM are freely available to the participants from the website of the challenge under license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. ECCV and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify AIM and the organizers, including their employees, trustees, officers, and agents, against any claims arising from your use of the data. You agree not to redistribute the data without this notice.

Test data: The organizers will use the test data for the final evaluation and ranking of the entries. The Ground Truth test data will not be made available to the participants during the contest.

Training and validation data: The organizers will make available to the participants a training dataset with Ground Truth video saliency maps. At the start of the final phase, the test data without Ground Truth saliency maps will be made available to the registered participants.

Post-challenge analyses: The organizers may also perform additional post-challenge analyses using extra data, but without effect on the challenge ranking.

Submission: Entries will be submitted online via the videoprocessing.ai web platform. During the development phase, while the validation server is online, participants will receive immediate feedback on validation data. The final perceptual evaluation will be computed on the test data submissions; the final scores will be released after the challenge is over.

Original work, permissions: In addition, by submitting your entry into this contest you confirm that to the best of your knowledge:

  • your entry is your original work;
  • your entry only includes material that you own, or that you have permission to use.

Submission of entries

The participants will follow the instructions on the videoprocessing.ai website to submit entries (details in the “Participate” section).

The participants will be registered as mutually exclusive teams. Each team is allowed to submit only one single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.

The participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.

Judging the entries

We will also be the judges of the contest; all of us are forbidden from entering the contest and are experts in causality, statistics, machine learning, computer vision, or a related field. We will review all eligible entries received and select three winners based on the prediction score on the test data. We will verify that the winners have complied with the rules, including documenting their method by filling out a fact sheet.

Our decisions are final and binding. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

09 May 2024