Video Saliency Prediction Challenge 2024

Participate

Here you can register your team and submit your predicted saliency maps and code to take part in this challenge. Metric results will be sent to your email.

Submitting

For deep-learning solutions, only PyTorch-based implementations are sought. Other solutions will be accepted but will not be officially ranked.

  1. Process the input images and keep the same name for each output image and its parent video directory (e.g., for an input file “video-0001/083.png” the output file should be “video-0001/083.png”);
  2. Prepare the code and training checkpoints of your neural-network models;
  3. Create a ZIP archive containing all the output images named as above, the code, and the training checkpoints;
  4. Send this archive to the submission acceptance form by clicking the “Upload File” button.
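The packaging steps above can be sketched as follows. This is a hypothetical helper, not the organizers' tooling: the directory layout and the `code/` and `checkpoints/` archive folders are assumptions; the only hard requirement stated above is that each output map keeps its frame name and parent video directory.

```python
# Hypothetical packaging sketch for a challenge submission.
# Assumes predictions are saved under out_dir/<video-dir>/<frame>.png,
# mirroring the input layout (e.g. "video-0001/083.png").
import zipfile
from pathlib import Path


def make_submission(out_dir: str, code_dir: str, ckpt: str, archive: str) -> None:
    """Bundle predicted maps, code, and a checkpoint into one ZIP archive."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        # Keep "video-XXXX/frame.png" relative paths inside the archive
        for png in sorted(Path(out_dir).rglob("*.png")):
            zf.write(png, str(Path(png).relative_to(out_dir)))
        # Source code goes under an assumed "code/" folder
        for src in sorted(Path(code_dir).rglob("*.py")):
            zf.write(src, str(Path("code") / Path(src).relative_to(code_dir)))
        # Training checkpoint under an assumed "checkpoints/" folder
        zf.write(ckpt, str(Path("checkpoints") / Path(ckpt).name))
```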

Evaluation

The evaluation compares the saliency maps predicted by the participants' models against ground-truth saliency maps collected from real people via crowdsourcing.

The comparison is carried out using four popular metrics for the saliency prediction task:

  • Linear Correlation Coefficient (CC),
  • Similarity (SIM),
  • Normalized Scanpath Saliency (NSS),
  • Area Under the Curve — Judd (AUC Judd)

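The four metrics above have widely used reference formulations, sketched below with NumPy. These are the standard textbook definitions (CC as Pearson correlation, SIM as histogram intersection, NSS as normalized map values at fixations, AUC-Judd thresholded at fixated values), not the organizers' exact evaluation code; `pred` and `gt` are 2-D saliency maps and `fix` is a binary fixation map.

```python
# Standard saliency-metric definitions, sketched with NumPy.
# Not the challenge's official evaluation code.
import numpy as np

EPS = 1e-8  # guards against division by zero


def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear Correlation Coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    g = (gt - gt.mean()) / (gt.std() + EPS)
    return float((p * g).mean())


def sim(pred: np.ndarray, gt: np.ndarray) -> float:
    """Similarity: intersection of the two maps normalized to sum to 1."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float(np.minimum(p, g).sum())


def nss(pred: np.ndarray, fix: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean z-scored value at fixations."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    return float(p[fix > 0].mean())


def auc_judd(pred: np.ndarray, fix: np.ndarray) -> float:
    """AUC-Judd: threshold at each fixated value, integrate TPR over FPR."""
    fixated = pred[fix > 0]
    n_fix, n_pix = fixated.size, pred.size
    thresholds = np.sort(fixated)[::-1]
    tpr, fpr = [0.0], [0.0]
    for i, t in enumerate(thresholds, start=1):
        above = int((pred >= t).sum())          # all pixels above threshold
        tpr.append(i / n_fix)                    # fixated pixels recovered
        fpr.append((above - i) / (n_pix - n_fix))  # non-fixated false alarms
    tpr.append(1.0)
    fpr.append(1.0)
    # Trapezoidal integration of the ROC curve
    area = 0.0
    for k in range(1, len(tpr)):
        area += (fpr[k] - fpr[k - 1]) * (tpr[k] + tpr[k - 1]) / 2.0
    return float(area)
```

For identical `pred` and `gt`, CC and SIM both approach 1; a map whose maximum sits exactly on the fixation scores an AUC-Judd of 1.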
The final score for a participant is calculated as the average rank across all four metrics. If two methods have equal final scores, the higher place is awarded to the earlier submission.
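The ranking scheme can be illustrated with a short sketch. This is an assumed implementation of the rule as stated (per-metric ranks, averaged, ties broken by submission time); the data layout and field names are illustrative, not from the challenge.

```python
# Illustrative sketch of average-rank scoring with an earlier-submission
# tie-break. Team records and field names are hypothetical.
def final_ranking(teams):
    """teams: list of dicts with "name", "metrics" (one value per metric,
    higher is better), and "submitted" (sortable timestamp)."""
    n_metrics = len(teams[0]["metrics"])
    ranks = {t["name"]: [] for t in teams}
    for m in range(n_metrics):
        # Rank 1 goes to the best value on this metric
        ordered = sorted(teams, key=lambda t: t["metrics"][m], reverse=True)
        for rank, t in enumerate(ordered, start=1):
            ranks[t["name"]].append(rank)
    avg = {name: sum(r) / n_metrics for name, r in ranks.items()}
    # Lower average rank wins; equal scores favor the earlier submission
    return sorted(teams, key=lambda t: (avg[t["name"]], t["submitted"]))
```

For example, if one team wins two metrics and another team wins the other two, both average a rank of 1.5 and the earlier submission places higher.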

09 May 2024