MSU Video Quality Metrics Benchmark

Discover the newest metrics and find the most appropriate method for your tasks

G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis:
  Aleksandr Gushchin
  Anastasia Antsiferova
  Maxim Smirnov
  Eugene Lyapustin

Diverse dataset

  • 40 different video codecs of 10 compression standards
  • 2500+ compressed streams
  • 780,000+ subjective scores
  • 10,000+ viewers
  • User-generated content

VQA and IQA metrics

  • 20+ metrics, not counting their variations
  • The biggest leaderboard of neural-network-based video quality metrics
  • Calculations over U and V planes
  • Metric variations with different weighted averages over the planes (see the sketch after this list)
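As an illustration of such plane-weighted variations, here is a minimal sketch (not the benchmark's actual code) of combining per-plane scores with a weighted average; the psnr_plane helper and the 6:1:1 weights are assumptions chosen purely for the example.

import numpy as np

def psnr_plane(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
    """PSNR for a single Y, U, or V plane (hypothetical helper)."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def weighted_yuv_metric(ref_yuv, dist_yuv, weights=(6, 1, 1)) -> float:
    """Weighted average of a per-plane metric over the Y, U, and V planes.

    weights=(6, 1, 1) is only an example; the benchmark lists several
    weighting schemes as separate metric variations.
    """
    scores = [psnr_plane(r, d) for r, d in zip(ref_yuv, dist_yuv)]
    w = np.asarray(weights, dtype=np.float64)
    return float(np.dot(w, scores) / w.sum())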

Various charts

  • Bar chart with the overall metric performance
  • Comparison on different compression standards with 95% confidence intervals
  • Speed-Quality chart

What’s new

  • 08.11.2024 Added 20 new metrics!
  • 27.02.2024 Added 20 new metrics. Added link to new Metrics Robustness Benchmark.
  • 07.02.2023 Added 5 submissions and 11 new metrics. Added Citation section. Fixed bugs.
  • 09.10.2022 Added 3 new metrics. Updated Leaderboard and Charts.
  • 09.09.2022 Updated visualisations and structure of the benchmark.
  • 12.03.2022 Benchmark Release!

Important note

This is the Home page. It contains only a small part of the results and graphs. For a comprehensive analysis of metric behavior, visit the leaderboard page.

Results

The chart below shows the correlation of metrics with subjective scores on our dataset. You can choose the correlation type and the compression standard of the codecs used for compression. We recommend focusing on Spearman’s rank correlation coefficient.

[Interactive chart: select correlation type and compression standard]
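For reference, here is a minimal sketch of how such correlations can be computed with SciPy once paired metric scores and subjective (mean opinion) scores are available; the arrays below are illustrative placeholders, not benchmark data.

from scipy.stats import spearmanr, pearsonr, kendalltau

# Illustrative placeholder data: one metric score and one subjective
# (mean opinion) score per compressed stream.
metric_scores = [0.91, 0.85, 0.78, 0.64, 0.52]
subjective_scores = [4.6, 4.1, 3.8, 3.0, 2.4]

srcc, _ = spearmanr(metric_scores, subjective_scores)   # rank correlation (recommended)
plcc, _ = pearsonr(metric_scores, subjective_scores)    # linear correlation
krcc, _ = kendalltau(metric_scores, subjective_scores)  # Kendall's rank correlation

print(f"SRCC={srcc:.3f}  PLCC={plcc:.3f}  KRCC={krcc:.3f}")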

The results of the comparison on different compression standards and different bitrate ranges, as well as a detailed analysis of full-reference and no-reference metrics, are presented on the leaderboard page.

Methodology and dataset

To see all steps of metric evaluation and a description of our dataset, visit the methodology page.

How to submit your method

Find out the strengths and weaknesses of your method and compare it to the best commercial and free methods.
We kindly invite you to participate in our benchmark. To do this, follow the steps below:

Send us an email to vqa@videoprocessing.ai with the following information:
    A. Your method name that will be specified in our benchmark
    B. Your method's launch script with the following options (or their analogs); a minimal example skeleton is shown after this list
      -ref — path to reference video (for full-reference metrics)
      -dist — path to distorted video
      -output — path to output of your algorithm
      -t — threshold, if it's required in your algorithm
    C. (Optional) Any additional information about the method:
      1. The parameters set that you want us to use
      2. A link to the paper about your model
      3. Any characteristics of your model's architecture
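For reference, a minimal sketch of what such a launch script might look like (the compute_score placeholder stands in for your metric; the argument names follow the options listed above):

#!/usr/bin/env python3
"""Example launch-script skeleton for a benchmark submission (illustrative only)."""
import argparse

def compute_score(ref_path, dist_path, threshold=None) -> float:
    """Placeholder: run your metric here and return a single quality score."""
    raise NotImplementedError

def main():
    parser = argparse.ArgumentParser(description="Video quality metric runner")
    parser.add_argument("-ref", help="path to reference video (full-reference metrics only)")
    parser.add_argument("-dist", required=True, help="path to distorted video")
    parser.add_argument("-output", required=True, help="path to write the metric output")
    parser.add_argument("-t", type=float, default=None,
                        help="optional threshold, if your algorithm requires one")
    args = parser.parse_args()

    score = compute_score(args.ref, args.dist, args.t)
    with open(args.output, "w") as f:
        f.write(f"{score}\n")

if __name__ == "__main__":
    main()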
You can verify the results of current participants or estimate the performance of your method on public samples
of our dataset. Just send us an email with a request to share them with you.

Our policy:

  • We won't publish the results of your private method without your permission.
  • You can estimate the performance of your method on the open part of our dataset.
  • You can also verify the results of your method, tested by us.

Information about all other participants can be found on the participants page.

Cite us

@inproceedings{NEURIPS2022_59ac9f01,
author = {Antsiferova, Anastasia and Lavrushkin, Sergey and Smirnov, Maksim and Gushchin, Aleksandr and Vatolin, Dmitriy and Kulikov, Dmitriy},
booktitle = {Advances in Neural Information Processing Systems},
editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
pages = {13814--13825},
publisher = {Curran Associates, Inc.},
title = {Video compression dataset and benchmark of learning-based video-quality metrics},
url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/59ac9f01ea2f701310f3d42037546e4a-Paper-Datasets_and_Benchmarks.pdf},
volume = {35},
year = {2022}
}

You can find the full text of our paper via the link above.

Contacts

We would greatly appreciate any suggestions and ideas on how to improve our benchmark. Please contact us via email: vqa@videoprocessing.ai.

You can also subscribe to updates on our benchmark.


MSU Video Quality Measurement Tool

The tool for performing video/image quality analysis using reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & classical metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
  • No-reference analysis & video characteristics:
    blurring, blocking, noise, scene-change detection, NIQE, and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measurement
  • Unlimited file size

  • Main MSU VQMT page on compression.ru

See Also
PSNR and SSIM: application areas and criticism
Learn about limits and applicability of the most popular metrics
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Video Colorization Benchmark
Explore the best video colorization algorithms
Defenses for Image Quality Metrics Benchmark
Explore defenses against adversarial attacks on image quality metrics
Learning-Based Image Compression Benchmark
The first extensive comparison of learned image compression algorithms
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos