MSU Video Super-Resolution Quality Assessment Challenge 2024

Terms and Conditions

Video Super-Resolution Quality Assessment Challenge

These are the official rules (terms and conditions) that govern how the AIM 2024 Video Super-Resolution Quality Assessment Challenge will operate. This challenge will be referred to simply as the “challenge” or the “contest” throughout the remainder of these rules, and may be named an “AIM” benchmark, challenge, or contest elsewhere (our webpage, our documentation, other publications).

In these rules, “we”, “our”, and “us” refer to the organizers (Artem Borisov artem.borisov@graphics.cs.msu.ru and Ivan Molodetskikh ivan.molodetskikh@graphics.cs.msu.ru) of this ECCV challenge, and “you” and “yourself” refer to an eligible contest participant.

Note that these official rules can change during the contest, up until the start of the final phase. If at any point during the contest a registered participant finds that they can no longer meet the eligibility criteria, or does not agree with changes to the official terms and conditions, it is the participant's responsibility to email the organizers and request removal from all records. Once the contest is over, no change is possible in the status of the registered participants and their entries.

Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

The goal of the contest is to predict the perceptual quality of an input video to which a Super-Resolution method has been applied; accordingly, the challenge is called Video Super-Resolution Quality Assessment.

Competition focus: a dataset customized to the specific needs of the competition will be provided. The videos are characterized by broad coverage of the use cases of Super-Resolution methods. We will refer to this dataset, its subsets, and related materials as the AIM Dataset. The dataset is divided into training, validation, and test data. We focus on the perceptual quality of the results; the goal is to achieve predictions with the best accuracy/correlation with the reference (Ground-Truth) scores obtained from subjective video comparisons. Participants will not have access to the benchmark scores for the test data. Participants will be ranked according to the performance of their methods on the test data. Participants must provide an archive with working method code written according to the template we provide (details on the “Participate” page). Participants should also provide the name and type of their metric (Full-Reference/No-Reference, Image/Video Quality Assessment), as well as the team name and composition.
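
The sketch below is only an illustration of the correlation-based evaluation idea described above, not the official scoring code: it computes Spearman (SRCC) and Pearson (PLCC) correlation between a metric's predicted scores and the Ground-Truth subjective scores. The per-video values, score ranges, and the exact correlation measures used for the official ranking are assumptions.

    # Illustrative sketch only; the data format and the exact correlation
    # measures used for the official ranking are assumptions.
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical per-video scores; in practice these would come from the
    # submitted metric results and the (hidden) Ground-Truth subjective scores.
    predicted    = [0.71, 0.45, 0.88, 0.30, 0.62]  # metric outputs
    ground_truth = [0.68, 0.50, 0.91, 0.25, 0.60]  # subjective comparison scores

    srcc, _ = spearmanr(predicted, ground_truth)   # rank-order agreement
    plcc, _ = pearsonr(predicted, ground_truth)    # linear correlation
    print(f"SRCC = {srcc:.3f}, PLCC = {plcc:.3f}")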

Tentative contest schedule

The registered participants will be notified by email if any changes are made to the schedule. The schedule is available on the “Overview” page.

Eligibility

You are eligible to register and compete in this contest only if you meet all the following requirements:

  • you are an individual or a team of people willing to contribute to the open tasks, and you agree to follow the rules of this contest

  • you are not an AIM challenge organizer or an employee of the ECCV challenge organizers

  • you are not involved in any part of the administration and execution of this contest

  • you are not a first-degree relative, partner, or household member of an employee or organizer of the ECCV challenge, or of a person involved in any part of the administration and execution of this contest

This contest is void wherever it is prohibited by law.

NOTE: industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, in order to be officially ranked on the final test leaderboard and to be eligible for awards, the results must be reproducible; therefore, the participants need to make their code or executables available and submit them. Because we do not keep the code after it has been executed, this submission will need to be repeated after the competition is over. All the top entries will be checked for reproducibility and marked accordingly.

Entry

In order to be eligible for judging, an entry must meet all the following requirements:

Entry contents: the participants are required to submit metric results on all videos from the test set. To be eligible for judging, the top-ranking participants must publicly release their code or executables under a license of their choice, chosen among the popular OSI-approved licenses (http://opensource.org/licenses), and keep their code or executables accessible online for a period of not less than one year following the end of the challenge (this applies only to the top ten ranked participants of the competition). All participants are also invited (but not required) to submit a paper for peer review and publication at the ECCV Workshop.

Use of data provided: all data provided by AIM are freely available to the participants from the website of the challenge under the license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. ECCV and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the videos remains the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify AIM and the organizers, including their employees, Trustees, officers, and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.

Test data: The organizers will use the test data for the final evaluation and ranking of the entries. The Ground-Truth test data will not be made available to the participants during the contest.

Training and validation data: The organizers will make available to the participants a training dataset with Ground-Truth video scores and a validation (public test) dataset without Ground-Truth video scores. At the start of the final phase, the test data without Ground-Truth video scores will be made available to the registered participants.

Post-challenge analyses: the organizers may also perform additional post-challenge analyses using extra data, but without effect on the challenge ranking.

Submission: entries will be submitted online via the videoprocessing.ai web platform. During the development phase, while the validation server is online, the participants will receive immediate feedback on the validation data. The final perceptual evaluation will be computed on the test data submissions; the final scores will be released after the challenge is over.

Original work, permissions: in addition, by submitting your entry into this contest you confirm that, to the best of your knowledge:

  • your entry is your own original work; and

  • your entry only includes material that you own, or that you have permission to use.

Submission of entries

The participants will follow the instructions on the videoprocessing.ai website to submit entries (details on the “Participate” page).

The participants will be registered as mutually exclusive teams. Each team is allowed to submit only one final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but that do not work properly.

The participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.

Judging the entries

We will also be the judges of the contest; none of us may enter the contest, and all of us are experts in causality, statistics, machine learning, computer vision, or a related field. We will review all eligible entries received and select (three) winners based on the prediction scores on the test data. We will verify that the winners have complied with the rules, including that they documented their method by filling out a fact sheet.

Our decisions are final and binding. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

07 May 2024