MSU No-Reference Video Quality Metrics Benchmark 2022

Discover the newest metrics and find the most appropriate method for your tasks

Powered by G&M Lab
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: Maxim Smirnov, Eugene Lyapustin

Diverse dataset

  • 40 different video codecs of 10 compression standards
  • 2500+ compressed streams
  • 780,000+ subjective scores
  • 10,000+ viewers
  • User-generated content

VQA and IQA metrics

  • 20+ metrics, not counting their variations
  • The biggest leaderboard of neural-network-based video quality metrics
  • Calculations over U and V planes
  • Metrics with different weighted averages of the planes
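To illustrate plane weighting: one common convention in codec comparisons combines per-plane scores (e.g., per-plane PSNR) with a 6:1:1 weighting for Y, U, and V. A minimal sketch; the weights and the function name are assumptions, not necessarily what the benchmark uses:

```python
def weighted_plane_score(y, u, v, weights=(6, 1, 1)):
    # Weighted average of per-plane metric values.
    # The 6:1:1 default is a common YUV convention, assumed here for illustration.
    wy, wu, wv = weights
    return (wy * y + wu * u + wv * v) / (wy + wu + wv)

# Example: Y-plane score of 40, chroma planes at 42
combined = weighted_plane_score(40.0, 42.0, 42.0)  # -> 40.5
```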

Various charts

  • Bar chart with the overall performance of the metrics
  • Comparison on different compression standards with 95% confidence intervals
  • Speed-Quality chart


This page is part of the MSU Video Quality Benchmark, which you can find here.


The chart below shows the correlation of the metrics with subjective scores on our dataset. You can choose the correlation type and the compression standard of the codecs. We recommend focusing on Spearman's rank correlation coefficient.


The results of the comparison on different compression standards and bitrate ranges, as well as a detailed analysis of full-reference and no-reference metrics, are presented on the leaderboard page.

Methodology and dataset

To see all steps of the metrics evaluation and a description of our dataset, visit the methodology page.

How to submit your method

Discover the strengths and weaknesses of your method and compare it with the best commercial and free methods.
We kindly invite you to participate in our benchmark. To do so, follow the steps below:

Send us an email with the following information:
    A. The name of your method as it should appear in our benchmark
    B. Your method's launch script with the following options (or their analogs):
      -ref — path to the reference video (for full-reference metrics)
      -dist — path to the distorted video
      -output — path for your algorithm's output
      -t — threshold, if your algorithm requires one
    C. (Optional) Any additional information about the method:
      1. The parameter set that you want us to use
      2. A link to the paper about your model
      3. Any characteristics of your model's architecture
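As a sketch of such a launch script's command-line interface (the option names come from the list above; the script name, defaults, and body are placeholders, not the benchmark's actual harness):

```python
import argparse

def build_parser():
    # Options mirror those requested above. argparse accepts
    # single-dash long options like "-ref" as ordinary optionals.
    parser = argparse.ArgumentParser(description="Metric launch script (sketch)")
    parser.add_argument("-ref", help="path to the reference video (full-reference metrics)")
    parser.add_argument("-dist", required=True, help="path to the distorted video")
    parser.add_argument("-output", required=True, help="path for the algorithm's output")
    parser.add_argument("-t", type=float, help="threshold, if the algorithm requires one")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    # ... run the metric on args.dist (and args.ref, if given),
    # then write the score to args.output ...
```

A no-reference metric would be invoked as, e.g., `python run_metric.py -dist dist.mp4 -output scores.txt`, omitting `-ref`.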
You can verify the results of current participants or estimate the performance of your method on public samples
of our dataset. Just send us an email with a request to share them with you.

Our policy:

  • We won't publish the results of your method without your permission.
  • We share only public samples of our dataset, as the full dataset is private.

You can find information about all other participants on the participants page.

Cite us

@inproceedings{antsiferova2022video,
  title={Video compression dataset and benchmark of learning-based video-quality metrics},
  author={Anastasia Antsiferova and Sergey Lavrushkin and Maksim Smirnov and Aleksandr Gushchin and Dmitriy S. Vatolin and Dmitriy Kulikov},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
}

You can find the full text of our paper through the link.


We would greatly appreciate any suggestions or ideas on how to improve our benchmark. Please contact us via email:

You can also subscribe to updates on our benchmark:

12 Mar 2022
See Also
MSU CVQAD – Compressed VQA Dataset
During our work, we created a database for video quality assessment with subjective scores
MSU Video Upscalers Benchmark 2022
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality
MSU Video Deblurring Benchmark 2022
Learn about the best video deblurring methods and choose the best model