MSU Video Super-Resolution Quality Metrics Benchmark 2023

Here you can find 66 Super-Resolution metrics and choose the one most suitable for your videos

G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis:
Artem Borisov,
Evgeney Bogatyrev,
Egor Kashkarov


Did you know that PSNR isn't the best metric?

Complex Content

  • A lot of distorted videos with different types of content
  • 46 Super-Resolution methods were used to process videos
  • RealSR, iSeeBetter, waifu2x, VRT, and Real-ESRGAN were among them

A lot of Metrics!

  • 66 Super-Resolution Metrics for different tasks
  • Comparison on 1187 videos
  • Regular leaderboard updates

Large Leaderboard

  • Convenient charts of metric runtimes and results
  • A leaderboard with 3 types of correlation between humans' subjective scores and metrics' results

What’s new

  • 16.06.2023 The alpha version of the benchmark page was released
  • 05.12.2023 Benchmark Release!
  • 05.05.2024 Benchmark Release on GitHub and PapersWithCode, added Q-Align (VQA, IQA and IAA), added LIQE and LIQE-MIX

Introduction

Our benchmark provides a ranking of Super-Resolution quality evaluation metrics.

This ranking is based on the correlation of metric values with subjective scores. All subjective scores were obtained from pairwise comparisons of different distorted videos against a single reference video (ground truth, GT).
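
As an illustration, here is a minimal sketch of how such a correlation can be computed, assuming per-video metric values paired with subjective scores. Pearson (PLCC), Spearman (SROCC), and Kendall (KROCC) are the usual choices for this kind of evaluation; the numbers below are made up.

    # Minimal sketch: correlating hypothetical metric values with subjective scores.
    from scipy import stats

    metric_values    = [32.1, 30.5, 28.9, 35.2, 31.0]  # hypothetical metric outputs
    subjective_score = [3.8, 3.1, 2.4, 4.5, 3.3]       # hypothetical subjective scores

    plcc, _  = stats.pearsonr(metric_values, subjective_score)    # linear correlation
    srocc, _ = stats.spearmanr(metric_values, subjective_score)   # rank correlation
    krocc, _ = stats.kendalltau(metric_values, subjective_score)  # rank correlation
    print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}  KROCC={krocc:.3f}")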

Scroll down for comparison charts, tables, and interactive visual comparisons of metric performance.

Leaderboard Table

In this section, you can see the leaderboard of the metrics. The metric values were calculated on videos from our four datasets (see the "Methodology" section for details).
You can find information about all participants here.


[Leaderboard table: Rank, Name, Full Dataset, SR Dataset, SR+Codecs, VSR Benchmark, VUB Benchmark, FPS, with a selectable correlation type. The best metrics on each dataset are highlighted.]


Leaderboard Chart

This is the same leaderboard, presented as a bar chart with 95% confidence intervals.
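
For illustration, here is a minimal sketch of how such an interval could be obtained, assuming a bootstrap over the set of videos; the benchmark's exact procedure is not stated here, so this resampling scheme is an assumption.

    # Minimal sketch: 95% bootstrap confidence interval for a correlation.
    import numpy as np

    def bootstrap_ci(metric_values, subjective, n_boot=1000, seed=0):
        rng = np.random.default_rng(seed)
        m = np.asarray(metric_values, dtype=float)
        s = np.asarray(subjective, dtype=float)
        corrs = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(m), len(m))            # resample videos with replacement
            corrs.append(np.corrcoef(m[idx], s[idx])[0, 1])  # Pearson correlation
        # Assumes enough distinct videos that resamples are not degenerate.
        return np.percentile(corrs, [2.5, 97.5])             # 2.5th and 97.5th percentiles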


Runtime Comparison

In this section, you can compare each metric's runtime with its correlation with subjective scores. For clarity, the chart shows the Pareto front.
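
For illustration, here is a minimal sketch of how the front can be extracted from (speed, correlation) pairs: a metric stays on the front if no other metric is both faster and better correlated. The values below are made up.

    # Minimal sketch: Pareto front over (fps, correlation); higher is better for both.
    def pareto_front(points):
        front = []
        for fps, corr in points:
            dominated = any(f >= fps and c >= corr and (f, c) != (fps, corr)
                            for f, c in points)
            if not dominated:
                front.append((fps, corr))
        return sorted(front)

    print(pareto_front([(120, 0.62), (30, 0.90), (60, 0.85), (15, 0.88)]))
    # (15, 0.88) is dropped: (30, 0.90) is both faster and better correlated.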


How to participate

Find out the strengths and weaknesses of your metric and compare it with the best commercial and free metrics. We kindly invite you to participate in our benchmark. To do so, follow the steps below:

1. Download the dataset from Google Drive or here (for a description of the dataset format, see the following link)

2. Run your metric on it

3. Send an email to artem.borisov@graphics.cs.msu.ru with the following information:
    A. Results of the metric on this dataset in the format shown here
    B. The name of your metric that will be specified in our benchmark
    C. Your metric and a script to run it
      The launch script must support the following (or similar) options; a minimal sketch is given after this list
        --ref — path to reference video/image (for full-reference metrics)
        --dist — path to distorted video/image
        --output — path to output of your metric
    D. (Optional) Any additional information about the metric:
      1. The parameter set that you want us to use
      2. A link to the paper about your model
      3. Any characteristics of your model's architecture
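
For reference, here is a minimal Python sketch of such a launch script; compute_score is a hypothetical placeholder for your metric, not part of the benchmark's requirements.

    # Minimal sketch of a launch script exposing the requested options.
    import argparse

    def compute_score(ref_path, dist_path):
        # Hypothetical placeholder: replace with your metric's actual computation.
        return 0.0

    def main():
        parser = argparse.ArgumentParser(description="Run a quality metric")
        parser.add_argument("--ref", help="path to reference video/image (full-reference metrics)")
        parser.add_argument("--dist", required=True, help="path to distorted video/image")
        parser.add_argument("--output", required=True, help="path to output of the metric")
        args = parser.parse_args()

        score = compute_score(args.ref, args.dist)
        with open(args.output, "w") as out:
            out.write(f"{score}\n")

    if __name__ == "__main__":
        main()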

If you have any suggestions or questions, please contact us.


Contacts

We would highly appreciate any suggestions and ideas on how to improve our benchmark. Please contact us via e-mail: artem.borisov@graphics.cs.msu.ru.
You can also subscribe to updates on our benchmark:


MSU Video Quality Measurement Tool

    A tool for performing video/image quality analysis using full-reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & classical metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
  • No-reference analysis & video characteristics:
    blurring, blocking, noise, scene-change detection, NIQE, and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measurement
  • Unlimited file size

  • Main MSU VQMT page on compression.ru

Crowd-sourced subjective quality evaluation platform

  • Conduct comparisons of video codecs and/or encoding parameters

What is it?

Subjectify.us is a web platform for conducting fast crowd-sourced subjective comparisons.

The service is designed for the comparison of images, video, and sound processing methods.

Main features

  • Pairwise comparison
  • Detailed report
  • Providing all of the raw data
  • Filtering out answers from cheating respondents

  • Subjectify.us
See Also
Video Colorization Benchmark
Explore the best video colorization algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Defenses for Image Quality Metrics Benchmark
Explore defenses of image quality metrics against adversarial attacks
Learning-Based Image Compression Benchmark
The first extensive comparison of learned image compression algorithms
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Metrics Robustness Benchmark
Check your image or video quality metric for robustness to adversarial attacks