MSU Video Quality Metrics Benchmark 2022

Discover the newest metrics and find the most appropriate method for your tasks

Powered by the G&M Lab

G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: Aleksandr Gushchin, Anastasia Antsiferova, Maxim Smirnov, Eugene Lyapustin

Diverse dataset

  • 40 different video codecs of 10 compression standards
  • 2500+ compressed streams
  • 780,000+ subjective scores
  • 10,000+ viewers
  • User-generated content

VQA and IQA metrics

  • 20+ metrics, not counting their variations
  • The biggest leaderboard of neural-network-based video quality metrics
  • Calculations over U and V planes
  • Metrics with different weighted averages over the planes
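The plane-weighted variants can be sketched as follows. This is a minimal illustration: the 6:1:1 Y:U:V weighting is one common convention for combining per-plane scores, not necessarily the weighting used in this benchmark.

```python
def weighted_plane_score(y, u, v, weights=(6, 1, 1)):
    # Combine per-plane metric values (e.g. per-plane PSNR) into one score
    # via a weighted average. The default 6:1:1 Y:U:V weighting is a common
    # convention, used here purely for illustration.
    wy, wu, wv = weights
    return (wy * y + wu * u + wv * v) / (wy + wu + wv)

# Made-up per-plane values:
print(weighted_plane_score(40.0, 44.0, 44.0))  # → 41.0
```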

Various charts

  • Bar chart of overall metric performance
  • Comparison on different compression standards with 95% confidence intervals
  • Speed-Quality chart

What’s new

  • 12.03.2022 Benchmark Release!

Results

The chart below shows the correlation of metrics with subjective scores on our dataset. You can choose the correlation type and the compression standard of the codecs used. We recommend that you focus on Spearman’s rank correlation coefficient.
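As a quick illustration of how Spearman’s rank correlation relates metric outputs to subjective scores: it is the Pearson correlation of the ranks. The pure-Python helper and the data values below are illustrative only, not part of the benchmark’s evaluation code, and assume tie-free data.

```python
def spearman_rho(x, y):
    # Spearman's rank correlation. For tie-free data it reduces to the
    # closed form 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    # difference between the ranks of x_i and y_i.
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Made-up metric scores and mean opinion scores that happen to rank identically:
metric_scores = [0.91, 0.85, 0.78, 0.60, 0.95]
subjective_mos = [4.5, 4.1, 3.6, 2.8, 4.8]
print(spearman_rho(metric_scores, subjective_mos))  # → 1.0
```

A rank-based coefficient is preferred here because it rewards a metric for ordering videos by quality correctly, even if its raw scores are on a different scale than the subjective ones.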


The results of the comparison on different compression standards and different bitrate ranges, as well as a detailed analysis of full-reference and no-reference metrics, are presented on the leaderboard page.

Methodology and dataset

To see all steps of metric evaluation and a description of our dataset, visit the methodology page.

How to submit your method

Find out the strengths and weaknesses of your method and compare it to the best commercial and free methods.
We kindly invite you to participate in our benchmark. To do this, follow the steps below:

Send an email to vqa@videoprocessing.ai with the following information:
    A. Your method's name, which will be specified in our benchmark
    B. Your method's launch script with the following options (or their analogs)
      -ref — path to reference video (for full-reference metrics)
      -dist — path to distorted video
      -output — path to output of your algorithm
      -t — threshold, if it's required in your algorithm
    C. (Optional) Any additional information about the method:
      1. The parameters set that you want us to use
      2. A link to the paper about your model
      3. Any characteristics of your model's architecture
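For reference, the option list above could be exposed by a launch script along the following lines. This is a hypothetical Python/argparse skeleton: only the option names come from the list above; the file structure and help texts are assumptions.

```python
import argparse

def build_parser():
    # Hypothetical skeleton exposing the options requested by the benchmark.
    parser = argparse.ArgumentParser(
        description="Launch script for a video quality metric (illustrative)")
    parser.add_argument("-ref", help="path to reference video (full-reference metrics)")
    parser.add_argument("-dist", required=True, help="path to distorted video")
    parser.add_argument("-output", required=True, help="path to the algorithm's output")
    parser.add_argument("-t", type=float, help="threshold, if the algorithm requires one")
    return parser

# Example: parsing a no-reference invocation (file names are made up).
args = build_parser().parse_args(["-dist", "in.mp4", "-output", "scores.json"])
print(args.dist, args.output)  # → in.mp4 scores.json
```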
You can verify the results of current participants or estimate the performance of your method on public samples
of our dataset. Just send us an email with a request to share them with you.

Our policy:

  • We won't publish the results of your method without your permission.
  • Because the full dataset is private, we share only its public samples.

You can find information about all other participants on the participants page.

Contacts

We would highly appreciate any suggestions and ideas on how to improve our benchmark. Please contact us via email: vqa@videoprocessing.ai.

You can also subscribe to updates on our benchmark.

See Also
MSU HDR Video Reconstruction Benchmark 2022
The most comprehensive comparison of HDR video reconstruction methods
MSU Super-Resolution for Video Compression Benchmark 2022
Learn about the best SR methods for compressed videos and choose the best model to use with your codec