MSU Video Saliency Prediction Benchmark

Explore the best video saliency prediction (VSP) algorithms

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: Alexey Bryncev, Andrey Moskalenko

Our benchmark compares the best video saliency prediction methods for identifying the most important areas of a video. It is based on a high-resolution, multi-type dataset of eye-tracking data collected from human observers.

Everyone is welcome to participate! Run your favorite video saliency prediction method on our dataset and send us the result to see how well it performs. Check the “Submitting” section to learn the details.

41 High-Resolution Test Clips

including movie fragments,
sports streams, and live-caption clips

Reliable Data Collection

using a 500 Hz eye tracker
with 50 observers

An Open Visual Comparison

with source fragments
available for reference

28 Models Tested

from 15 different works
with various weights/architectures

Domain Adaptation

with brightness correction
and Center Prior blending
for prediction generalization

Speed/Quality Scatter Plots

and tables with objective metrics
for a comprehensive comparison

What’s New

  • May 25th, 2023: Beta-version Release
  • September 20th, 2023: v1.0 Release

Introduction

We evaluate video saliency prediction methods using several objective metrics. We also measure the average FPS (frames per second) of each algorithm to compare their speed.
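The page does not list the exact metrics, but saliency benchmarks commonly report CC (linear correlation), SIM (histogram intersection), and NSS (Normalized Scanpath Saliency). A minimal NumPy sketch of these standard metrics, not necessarily this benchmark's exact implementation:

```python
import numpy as np

def normalize(m):
    """Scale a map to zero mean and unit standard deviation."""
    return (m - m.mean()) / (m.std() + 1e-8)

def cc(pred, gt):
    """Pearson linear correlation between predicted and ground-truth maps."""
    p, g = normalize(pred).ravel(), normalize(gt).ravel()
    return float(np.mean(p * g))

def sim(pred, gt):
    """Similarity: sum of per-pixel minima of the two maps as distributions."""
    p = pred / (pred.sum() + 1e-8)
    g = gt / (gt.sum() + 1e-8)
    return float(np.minimum(p, g).sum())

def nss(pred, fixations):
    """Normalized Scanpath Saliency: mean normalized value at fixated pixels."""
    return float(normalize(pred)[fixations > 0].mean())
```

A perfect prediction scores approximately 1.0 on CC and SIM, while NSS is unbounded and higher is better.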

To generalize the models' output, we apply domain adaptation involving transformations such as brightness correction and blending with the Center Prior. Check the “Methodology” section to learn the details.
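The two transformations above can be sketched as follows; the Gaussian prior, the blending weight `alpha`, and the `gamma` brightness correction are illustrative assumptions, not the benchmark's actual settings:

```python
import numpy as np

def center_prior(h, w, sigma=0.3):
    """Isotropic Gaussian centered in the frame; sigma is relative to frame size."""
    ys = (np.arange(h) - h / 2) / h
    xs = (np.arange(w) - w / 2) / w
    d2 = ys[:, None] ** 2 + xs[None, :] ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def adapt(pred, alpha=0.2, gamma=1.0):
    """Brightness-correct a predicted saliency map, then blend it
    with the Center Prior. alpha and gamma are hypothetical defaults."""
    p = pred.astype(float)
    p = (p / (p.max() + 1e-8)) ** gamma        # brightness (gamma) correction
    prior = center_prior(*p.shape)
    return (1 - alpha) * p + alpha * prior     # Center Prior blending
```

Blending with a center prior rewards the empirical bias of human gaze toward the frame center, which helps models generalize across content domains.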

Scroll below for comparison charts, tables, and interactive visual comparisons of saliency model results.

Visualizations

Leaderboards

Charts

Submitting

To add your video saliency prediction method to the benchmark, follow these steps:

1. Download the dataset
2. Apply your video saliency prediction method to the dataset
3. Send the following information to video-saliency-prediction-benchmark@videoprocessing.ai:
  • The full and short names of the video saliency prediction method that we will specify in our benchmark.
  • A compressed archive with your saliency maps. The download link must remain valid for a month after we receive it. Please ensure that we have permission to download the file. By submitting these saliency maps, you agree that third parties may use them.
  • The exact commands, options, versions of used programs, etc. Each submission must contain results of exactly one model with fixed settings. Please do not fine-tune model parameters by hand for each video segment.
  • If you want us to verify your method, send us the executable, and we will run it ourselves. We will then mark the FPS of your algorithm as “Verified” in charts and tables. The executable's arguments must include paths to the folder with input PNG images and the folder for output PNG images. By submitting this executable, you agree that third parties may use it.

If you have any suggestions or questions, please contact us: video-saliency-prediction-benchmark@videoprocessing.ai

Get Notifications About the Updates of This Benchmark

Do you want to be the first to discover the best new video saliency prediction algorithm? We can notify you about this benchmark’s updates: simply submit your preferred email address using the form below. We promise not to send you unrelated information.

Cite Us

@inproceedings{gitman2014semiautomatic,
  title={Semiautomatic visual-attention modeling and its application to video compression},
  author={Gitman, Yury and Erofeev, Mikhail and Vatolin, Dmitriy and Bolshakov, Andrey and Fedorov, Alexey},
  booktitle={2014 IEEE International Conference on Image Processing (ICIP)},
  pages={1105--1109},
  year={2014},
  organization={IEEE}
}

Further Reading

Check the “Methodology” section to learn how we prepare our dataset.

Check the “Participants” section to learn which video saliency prediction method implementations we use.

