CrowdSAL: Crowdsourced Video Saliency Prediction
Dataset and Benchmark

Explore the best video saliency prediction (VSP) dataset

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: 
Alexey Bryncev,
Andrey Moskalenko,
Ivan Kosmynin,
Kira Shilovskaya

Our dataset is a high-resolution, multi-type collection of videos with saliency annotations gathered from observers using mouse-based saliency tracking. It underpins our benchmark of the best video saliency prediction methods for identifying the most important regions of a video.

5000 High-Resolution Test Clips

including movie fragments,
sports streams, and live-caption clips

Reliable Data Collection

using mouse-based saliency tracking
with 90+ observers

An Open Visual Comparison

with source fragments
available for reference

10 Models Tested

from 8 different works
with different weights/architectures

Domain Adaptation

with brightness change
and Center Prior blending
for prediction generalization

Speed/Quality Scatter Plots

and tables with objective metrics
for a comprehensive comparison

What’s New

  • March 27th, 2023: Release

Introduction

We evaluate video saliency prediction methods using various objective metrics. We also measure the average FPS (frames per second) to compare the speed of the algorithms.
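This page does not list the metrics themselves. As an illustration, two metrics commonly used in saliency evaluation, the linear Correlation Coefficient (CC) and Similarity (SIM), can be sketched in NumPy (function names and the epsilon constant are ours, not the benchmark's):

```python
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear Correlation Coefficient between two saliency maps.

    Both maps are standardized to zero mean / unit variance, then the
    mean of their elementwise product gives the Pearson correlation.
    """
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def sim(pred: np.ndarray, gt: np.ndarray) -> float:
    """Similarity (SIM): histogram intersection of the two maps
    after each is normalized to sum to 1. Equals 1 for identical maps."""
    p = pred / (pred.sum() + 1e-8)
    g = gt / (gt.sum() + 1e-8)
    return float(np.minimum(p, g).sum())
```

Both scores approach 1 when the predicted map matches the ground truth; CC is additionally negative when the maps are anti-correlated.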

To generalize the models' outputs, we apply domain adaptation involving transformations such as brightness correction and blending with a Center Prior. See the “Methodology” section for details.
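A minimal sketch of these two transformations, assuming a Gaussian-shaped Center Prior and a fixed blend weight `alpha` (the benchmark's actual prior shape and parameters are described in its Methodology section, not here):

```python
import numpy as np

def center_prior(h: int, w: int, sigma_frac: float = 0.3) -> np.ndarray:
    """Isotropic Gaussian centered on the frame; sigma is a fraction
    of the frame size (an assumed parameterization)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    g = np.exp(-(((ys - cy) / (sigma_frac * h)) ** 2 +
                 ((xs - cx) / (sigma_frac * w)) ** 2) / 2)
    return g / g.max()

def adapt(saliency: np.ndarray, gamma: float = 1.0,
          alpha: float = 0.2) -> np.ndarray:
    """Brightness (gamma) correction followed by Center Prior blending.

    saliency: predicted map with values in [0, 1];
    gamma:    brightness-correction exponent;
    alpha:    blend weight of the Center Prior.
    """
    s = np.clip(saliency, 0.0, 1.0) ** gamma
    prior = center_prior(*s.shape)
    return (1 - alpha) * s + alpha * prior
```

Blending pulls every prediction slightly toward the frame center, which tends to help models whose training data had a different center bias than the test set.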

Scroll below for comparison charts, tables, and interactive visual comparisons of saliency model results.

Visualizations

Leaderboards

Charts

Cite Us

@inproceedings{moskalenko2024aim,
  title={AIM 2024 challenge on video saliency prediction: Methods and results},
  author={Moskalenko, A and Bryncev, A and Vatolin, D and Timofte, R and Zhan, G and Yang, Li and Tang, Y and Liao, Y and Lin, J and Huang, B and others},
  booktitle={The 18th European Conference on Computer Vision ECCV 2024, Advances in Image Manipulation workshop},
  year={2024},
}
@inproceedings{gitman2014semiautomatic,
  title={Semiautomatic visual-attention modeling and its application to video compression},
  author={Gitman, Yury and Erofeev, Mikhail and Vatolin, Dmitriy and Bolshakov, Andrey and Fedorov, Alexey},
  booktitle={2014 IEEE International Conference on Image Processing (ICIP)},
  pages={1105--1109},
  year={2014},
  organization={IEEE}
}

Further Reading

Check the “Methodology” section to learn how we prepare our dataset.


MSU Video Quality Measurement Tool


    The tool for performing video/image quality analysis using reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & Classical Metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
  • No-reference analysis & video characteristics:
    Blurring, Blocking, Noise, Scene change detection, NIQE, and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measurement
  • Unlimited file size

  • Main MSU VQMT page on compression.ru

27 Mar 2026
See Also
Real-World Stereo Color and Sharpness Mismatch Dataset
Download new real-world video dataset of stereo color and sharpness mismatches
MSU CVQAD – Compressed VQA Dataset
During our work we have created the database for video quality assessment with subjective scores
Forecasting of viewers’ discomfort
How do distortions in a stereo movie affect the discomfort of viewers?
SAVAM - the database
During our work we have created the database of human eye-movements captured while viewing various videos
Super-Resolution Quality Metrics Benchmark
Discover 50 Super-Resolution Quality Metrics and choose the most appropriate for your videos