MSU Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Andrey Moskalenko
Our benchmark identifies the best video saliency prediction methods for detecting the most important regions of a video, using our compact yet comprehensive dataset.
Everyone is welcome to participate! Run your favorite video saliency prediction method on our dataset and send us the result to see how well it performs. Check the “Submitting” section to learn the details.
41 High-Resolution Test Clips
including movie fragments,
sports streams, and live-caption clips
Reliable Data Collection
using a 500 Hz eye tracker
with 50 observers
An Open Visual Comparison
with source fragments
available for reference
28 Models Tested
from 15 different works
with different weights/architectures
Domain Adaptation
with brightness correction
and Center Prior blending
for prediction generalization
Speed/Quality Scatter Plots
and tables with objective metrics
for a comprehensive comparison
What’s new
- May 25th, 2023: Beta-version Release
Introduction
We use various objective metrics to evaluate video saliency prediction methods. We also measure average FPS (frames per second) to compare the algorithms' speed.
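As an illustration, here is a minimal sketch of two widely used saliency metrics, CC (linear Correlation Coefficient) and NSS (Normalized Scanpath Saliency), in NumPy. These are the standard definitions from the saliency literature; the benchmark's exact implementations may differ.

```python
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear Correlation Coefficient between a predicted saliency map
    and a ground-truth fixation density map (both HxW floats)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean of the normalized prediction
    at fixated pixels (fixation_map is a binary HxW mask)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixation_map > 0].mean())

def avg_fps(total_frames: int, total_seconds: float) -> float:
    """Average processing speed over a whole run."""
    return total_frames / total_seconds
```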
To generalize the models' output, we apply domain adaptation involving transformations such as brightness correction and blending with the Center Prior. Check the “Methodology” section to learn the details.
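The sketch below shows what such an adaptation can look like. The Gaussian Center Prior shape, the blend weight `alpha`, and the mean/std brightness matching are illustrative assumptions, not the benchmark's fitted procedure (see “Methodology” for the latter).

```python
import numpy as np

def center_prior(h: int, w: int, sigma_frac: float = 0.25) -> np.ndarray:
    """Isotropic Gaussian centered in the frame -- a common Center Prior model."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_frac * min(h, w)
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.max()

def adapt(pred: np.ndarray, target_mean: float, target_std: float,
          alpha: float = 0.2) -> np.ndarray:
    """Brightness correction toward target statistics, then Center Prior
    blending. target_mean, target_std, and alpha are placeholder values;
    the benchmark fits its own transformation per model."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)        # zero mean, unit std
    p = np.clip(p * target_std + target_mean, 0.0, 1.0)   # match target brightness
    return (1.0 - alpha) * p + alpha * center_prior(*pred.shape)
```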
Scroll below for comparison charts, tables, and interactive visual comparisons of saliency model results.
Visualizations
Leaderboards
Charts
Submitting
To add your video saliency prediction method to the benchmark, follow these steps:
| Step | Description |
| --- | --- |
| 1. Download the input | Download the dataset |
| 2. Apply your method | Apply your video saliency prediction method to the dataset (see the sketch after this table) |
| 3. Send us the result | Send the result to video-saliency-prediction-benchmark@videoprocessing.ai |
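For step 2, a minimal sketch of running a model over the dataset is shown below. The `model` callable and the per-clip directory of 8-bit grayscale PNG maps are assumptions for illustration; check the submission rules for the exact output format we expect.

```python
import os
import cv2  # OpenCV for video I/O

def run_on_dataset(video_dir: str, out_dir: str, model) -> None:
    """Apply a saliency model frame by frame and save grayscale maps.
    `model` is a hypothetical callable: BGR frame -> HxW float map in [0, 1].
    The per-frame PNG layout is an assumption, not a required format."""
    for name in sorted(os.listdir(video_dir)):
        cap = cv2.VideoCapture(os.path.join(video_dir, name))
        clip_out = os.path.join(out_dir, os.path.splitext(name)[0])
        os.makedirs(clip_out, exist_ok=True)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            sal = model(frame)
            cv2.imwrite(os.path.join(clip_out, f"{idx:05d}.png"),
                        (255 * sal).astype("uint8"))
            idx += 1
        cap.release()
```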
If you have any suggestions or questions, please contact us: video-saliency-prediction-benchmark@videoprocessing.ai
Get Notifications About the Updates of This Benchmark
Do you want to be the first to discover the best new video saliency prediction algorithm? We can notify you about this benchmark’s updates: simply submit your preferred email address using the form below. We promise not to send you unrelated information.
Cite Us
@inproceedings{gitman2014semiautomatic,
  title={Semiautomatic visual-attention modeling and its application to video compression},
  author={Gitman, Yury and Erofeev, Mikhail and Vatolin, Dmitriy and Bolshakov, Andrey and Fedorov, Alexey},
  booktitle={2014 IEEE International Conference on Image Processing (ICIP)},
  pages={1105--1109},
  year={2014},
  organization={IEEE}
}
Further Reading
Check the “Methodology” section to learn how we prepare our dataset.
Check the “Participants” section to learn which video saliency prediction method implementations we use.
MSU Benchmark Collection
- Video Upscalers Benchmark
- Video Deblurring Benchmark
- Video Frame Interpolation Benchmark
- HDR Video Reconstruction Benchmark
- Super-Resolution for Video Compression Benchmark
- No-Reference Video Quality Metrics Benchmark
- Full-Reference Video Quality Metrics Benchmark
- Video Alignment and Retrieval Benchmark
- Mobile Video Codecs Benchmark
- Video Super-Resolution Benchmark
- Shot Boundary Detection Benchmark
- Deinterlacer Benchmark
- The VideoMatting Project
- Video Completion
- Codecs Comparisons & Optimization
- VQMT
- MSU Datasets Collection
- Metrics Research
- Video Quality Measurement Tool 3D
- Video Filters
- Other Projects