MSU Video Saliency Prediction Benchmark

Explore the best methods of video saliency prediction (VSP) algorithms

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: 
Alexey Bryncev,
Andrey Moskalenko

Our benchmark identifies the best video saliency prediction methods for detecting the most important areas of a video, using our compact yet comprehensive dataset.

Everyone is welcome to participate! Run your favorite video saliency prediction method on our dataset and send us the result to see how well it performs. Check the “Submitting” section to learn the details.

41 High-Resolution Test Clips

including movie fragments,
sports streams, and live-caption clips

Reliable Data Collection

using a 500 Hz eye tracker
with 50 observers

An Open Visual Comparison

with source fragments
available for reference

28 Models Tested

from 15 different works
with different weights/architectures

Domain Adaptation

with brightness correction
and Center Prior blending
for prediction generalization

Speed/Quality Scatter Plots

and tables with objective metrics
for a comprehensive comparison

What’s new

  • May 25th, 2023: Beta-version Release


We use various objective metrics to evaluate video saliency prediction methods. We also calculate the average FPS (frames per second) to compare the speed of the algorithms.
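The exact metric set is listed in the benchmark tables; as an illustration, two metrics commonly used in saliency evaluation, CC (linear correlation coefficient) and NSS (normalized scanpath saliency), can be computed as follows. This is a minimal NumPy sketch with our own function names, not the benchmark's evaluation code:

```python
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear correlation coefficient between a predicted and a ground-truth saliency map."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred: np.ndarray, fixations: np.ndarray) -> float:
    """Normalized scanpath saliency: mean of the normalized map at fixated pixels.

    `fixations` is a binary mask with 1 at recorded fixation locations.
    """
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixations > 0].mean())
```

Higher is better for both: CC compares the full maps, while NSS evaluates the prediction only at the pixels observers actually fixated.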

To generalize the models’ output, we apply domain adaptation involving transformations such as brightness correction and blending with the Center Prior. Check the “Methodology” section to learn the details.
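Conceptually, the two transformations can be sketched like this. This is our illustrative NumPy code, not the benchmark's implementation; the gamma value, blend weight, and Gaussian width are placeholder assumptions:

```python
import numpy as np

def center_prior(h: int, w: int, sigma_frac: float = 0.3) -> np.ndarray:
    """Isotropic Gaussian centered in the frame — a classic center-prior map."""
    ys = np.linspace(-1, 1, h)[:, None]
    xs = np.linspace(-1, 1, w)[None, :]
    g = np.exp(-(xs**2 + ys**2) / (2 * sigma_frac**2))
    return g / g.max()

def adapt(pred: np.ndarray, gamma: float = 1.2, alpha: float = 0.2) -> np.ndarray:
    """Brightness (gamma) correction followed by blending with the center prior."""
    p = np.clip(pred, 0, 1) ** gamma           # brightness correction
    cp = center_prior(*p.shape)                # same-size center-prior map
    out = (1 - alpha) * p + alpha * cp         # convex blend with the prior
    return out / (out.max() + 1e-8)            # renormalize to [0, 1]
```

Such adaptation reduces systematic differences (overall brightness, missing center bias) between models, so the comparison reflects where each model predicts attention rather than how its output happens to be scaled.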

Scroll below for comparison charts, tables, and interactive visual comparisons of saliency model results.





To add your video saliency prediction method to the benchmark, follow these steps:

1. Download the dataset
2. Apply your video saliency prediction method to the dataset
3. Send us the result, including the following information:
  • The full and short names of the video saliency prediction method that we will specify in our benchmark.
  • A compressed archive with saliency maps. The link must remain valid for a month after we receive it. Please ensure that we have permission to download the file. By submitting these saliency maps, you agree that third parties may use them.
  • The exact commands, options, versions of the programs used, etc. Each submission must contain the results of exactly one model with fixed settings. Please do not fine-tune model parameters by hand for each video segment.
  • If you want us to verify your method, send us the executable, and we will run it ourselves. Then we will mark the FPS of your algorithm as “Verified” in charts and tables. The executable’s arguments must include the path to the folder with input PNG images and the path to the folder for output PNG images. By submitting this executable, you agree that third parties may use it.
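For verification, the expected interface is just two folder paths. A minimal wrapper matching that contract might look like the hypothetical sketch below; `run_model` stands in for your own inference code:

```python
import sys
from pathlib import Path

def run_model(frame_path: Path) -> bytes:
    """Placeholder for your saliency model: takes one input frame, returns PNG bytes."""
    return frame_path.read_bytes()  # identity stub for illustration only

def main(in_dir: str, out_dir: str) -> None:
    """Read every PNG frame from in_dir and write a same-named saliency map to out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(in_dir).glob("*.png")):
        (out / frame.name).write_bytes(run_model(frame))

if __name__ == "__main__" and len(sys.argv) == 3:
    main(sys.argv[1], sys.argv[2])
```

Keeping the frame filenames identical between the input and output folders makes the per-frame correspondence unambiguous when the results are evaluated.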

If you have any suggestions or questions, please contact us:

Get Notifications About the Updates of This Benchmark

Do you want to be the first to discover the best new video saliency prediction algorithm? We can notify you about this benchmark’s updates: simply submit your preferred email address using the form below. We promise not to send you unrelated information.

Cite Us

@inproceedings{gitman2014semiautomatic,
  title={Semiautomatic visual-attention modeling and its application to video compression},
  author={Gitman, Yury and Erofeev, Mikhail and Vatolin, Dmitriy and Bolshakov, Andrey and Fedorov, Alexey},
  booktitle={2014 IEEE International Conference on Image Processing (ICIP)},
  year={2014},
  organization={IEEE}
}

Further Reading

Check the “Methodology” section to learn how we prepare our dataset.

Check the “Participants” section to learn which video saliency prediction method implementations we use.

25 May 2023
See Also
MSU CVQAD – Compressed VQA Dataset
While working on our benchmarks, we created a database for video quality assessment with subjective scores
Video Upscalers Benchmark
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality