MSU Super-Resolution for Video Compression Benchmark

Discover SR methods for compressed videos and choose the best model to use with your codec

G&M Lab head: Dr. Dmitriy Vatolin
Project adviser: Dr. Dmitriy Kulikov
Measurements, analysis: 
Evgeney Zimin,
Egor Sklyarov

What’s new

Key features of the Benchmark

The pipeline of our benchmark

Below you can see a comparison of several SR+codec pairs. A Full HD video was downscaled by a factor of two and compressed at an approximate bitrate of 600 kbps. Super-Resolution models were then applied to the compressed input. The picture shows cropped regions of the models' outputs.
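As a rough illustration, the preparation steps (downscale by two, then compress at ~600 kbps) can be expressed as two FFmpeg invocations, built here as Python command lists. File names, the target resolution, and encoder flags are illustrative, not the benchmark's exact settings:

```python
SRC = "input_fullhd.mp4"          # illustrative file names
DOWNSCALED = "input_960x540.mp4"
COMPRESSED = "input_600k.mp4"

# 1. Downscale the Full HD source by a factor of two
downscale = ["ffmpeg", "-y", "-i", SRC,
             "-vf", "scale=960:540", DOWNSCALED]

# 2. Compress at an approximate bitrate of 600 kbps (x264 shown as an example)
compress = ["ffmpeg", "-y", "-i", DOWNSCALED,
            "-c:v", "libx264", "-b:v", "600k", COMPRESSED]

# Pass each list to subprocess.run(cmd, check=True) to actually execute it
for cmd in (downscale, compress):
    print(" ".join(cmd))
```

The SR model is then run on the frames of the compressed clip, and its output is compared against the original Full HD source.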


In this section, you can see RD curves and bar charts. RD curves show the bitrate/quality trade-off of each SR+codec pair. Bar charts show the BSQ-rate computed for objective metrics and subjective scores.
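As a rough illustration of what BSQ-rate measures, here is a simplified sketch: interpolate both RD curves onto a common quality grid and average the ratio of bitrates needed to reach the same quality. The benchmark's exact BSQ-rate procedure is described in the methodology; the function, the curve format, and the use of plain linear interpolation are assumptions of this sketch:

```python
import numpy as np

def bsq_rate(ref_curve, test_curve, n=100):
    """Simplified BSQ-rate: mean ratio of the bitrate the test method needs
    to the bitrate the reference needs for the same quality.
    Each curve is a list of (bitrate, quality) points with quality
    increasing monotonically along the curve (an assumption of this sketch)."""
    ref = np.array(sorted(ref_curve), dtype=float)
    test = np.array(sorted(test_curve), dtype=float)
    # Quality range covered by both curves
    q_lo = max(ref[:, 1].min(), test[:, 1].min())
    q_hi = min(ref[:, 1].max(), test[:, 1].max())
    qs = np.linspace(q_lo, q_hi, n)
    # Interpolate bitrate as a function of quality on the common grid
    ref_rate = np.interp(qs, ref[:, 1], ref[:, 0])
    test_rate = np.interp(qs, test[:, 1], test[:, 0])
    return float(np.mean(test_rate / ref_rate))
```

For example, a test curve that reaches every quality level at 80% of the reference bitrate yields a BSQ-rate of 0.8 (lower is better).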

Read about the participants here. Read about the "only compressed" method here.
You can see the information about codecs in the methodology.


Objective metrics

You can choose a test sequence, the codec that was used to compress it, and a metric.




Subjective assessment


*You can see the information about extrapolation and subjective BSQ-rate calculation here.


Correlation of metrics with subjective assessment

We calculated objective metrics on the crops used for the subjective comparison and computed the correlation between the subjective and objective results.

ERQAv1.1 is a newer version of the ERQAv1.0 metric.
Y-VMAF (clipped) is Y-VMAF clipped to the range from 0 to 100.
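For reference, Pearson and Spearman correlation, the standard choices for comparing metric scores with subjective scores, can be computed with plain numpy. This is a sketch: the benchmark's exact correlation type is described in its methodology, and the sample values below are made up:

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman rank correlation = Pearson on ranks (no tie handling here)."""
    rank = lambda a: np.argsort(np.argsort(a))
    return pearson(rank(x), rank(y))

subjective = [3.1, 4.0, 2.5, 4.6]   # made-up subjective scores
vmaf       = [70.0, 85.0, 60.0, 92.0]  # made-up metric values
print(round(spearman(subjective, vmaf), 3))  # 1.0: identical ordering
```

A high rank correlation means the metric orders the methods the same way viewers do, even if its absolute values are on a different scale.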

Correlation of metrics with subjective assessment via extrapolated BSQ-rate

We computed each metric's correlation with subjective assessment through the correlation of BSQ-rates evaluated on that metric's values.

Seconds per iteration chart

Read about seconds per iteration (s/it) calculation here.
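The exact s/it protocol is defined in the methodology; as a rough illustration, a wall-clock per-frame timing might look like the following. The helper name and the warm-up count are assumptions of this sketch, not the benchmark's procedure:

```python
import time

def seconds_per_iteration(fn, frames, warmup=2):
    """Average wall-clock time to process one frame with fn."""
    for f in frames[:warmup]:        # warm-up runs are excluded from timing
        fn(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        fn(f)
    elapsed = time.perf_counter() - start
    return elapsed / max(1, len(frames) - warmup)

# Usage with a trivial stand-in for an SR model:
frames = list(range(100))
print(f"{seconds_per_iteration(lambda f: f * 2, frames):.2e} s/it")
```

Warm-up iterations matter in practice because the first frames often include model loading and GPU initialization, which would otherwise skew the average.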


In this section, you can choose a sequence and see a cropped piece of a frame from it, along with the shifted Y-PSNR visualization and the ERQAv1.0 visualization for this crop. For shifted Y-PSNR, we find the shift that maximizes Y-PSNR and apply MSU VQMT PSNR to the frames with this shift. See the methodology for more information.
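The shifted Y-PSNR idea can be sketched with numpy: try small spatial shifts of the distorted frame and keep the best PSNR. The benchmark itself uses MSU VQMT for the final measurement; the search range and border cropping below are assumptions of this illustration:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Plain PSNR between two equally sized frames."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def shifted_psnr(ref, dist, max_shift=3):
    """Best PSNR over all integer shifts within +/- max_shift pixels."""
    best = -np.inf
    m = max_shift
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            shifted = np.roll(dist, (dy, dx), axis=(0, 1))
            # Crop the borders invalidated by the wrap-around of np.roll
            best = max(best, psnr(ref[m:-m, m:-m], shifted[m:-m, m:-m]))
    return best
```

This compensates for the small global displacements that SR models often introduce, which would otherwise unfairly penalize an output that is sharp but shifted by a pixel.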









The table below shows a comparison of Super-Resolution algorithms paired with different codecs by subjective score and a few objective metrics.

You can choose a metric and a method of comparison that we used to calculate the BSQ-rate. Read the information about the methods of comparison here.
Click on the model's name in the table to read information about the method.
You can see information about all participants here.
Click on the labels to sort the table.


Model x264 x265 aomenc VVenC uavs3e

*To Be Published

Best sequence for each model

The table below shows the best sequence for each Super-Resolution algorithm based on the subjective score and a few objective metrics. The best video for a given algorithm is the one on which it achieves its highest metric value. Sequences are represented by different colors. You can see the list of videos in the methodology.


Model x264 x265 aomenc VVenC uavs3e

*To Be Published

SR+codec pairs leaderboard

The table below shows a comparison of all pairs of Super-Resolution algorithms and codecs. The table also includes "only compressed" methods (e.g. "only x264"). You can choose a metric for the table.



SR + codec rel. x264 rel. self max rel. self

SR+Codec Benchmark Roadmap

Feature: 4 more test sequences
What it achieves: We will extend our dataset to make it more diverse and cover more use cases. We expect it to contain 7×7×5 = 245 Full HD videos.

Feature: Extensive subjective comparison
What it achieves: Right now, we have made a subjective comparison for only one codec. We plan to make a subjective comparison for all codecs to get a more accurate ranking for more SR+codec pairs. A subjective comparison with that many video pairs will be very expensive. If you want to support our benchmark, please contact us.
Release date: Q4 2021

Feature: More state-of-the-art Super-Resolution methods
What it achieves: New Super-Resolution methods are constantly being developed. We will add new high-quality SR methods to our benchmark as they appear. We also expect developers to submit their methods to us. You can submit your method here.
Release date: Q1 2022

Feature: A new metric to measure compressed-video restoration quality
What it achieves: Our subjective comparison showed that the most popular video-quality metrics, PSNR and SSIM, are not well suited to the Super-Resolution task. We are researching our own metric for compressed-video restoration quality that will correlate well with subjective assessment.
Release date: Q1 2022

Feature: “Real-time” and “restoration” categories
What it achieves: Some Super-Resolution models work faster than others, while slower methods can achieve results of much better quality. We plan to divide the leaderboard of our benchmark into 2 categories:
  • The “real-time” category will contain fast SR methods that can be used to enhance videos in real time
  • The “restoration” category will contain SR methods that can produce high-quality results with no time constraint
Release date: Q2 2022

Submit your method

Verify your method’s ability to restore compressed videos and compare it with other algorithms.
You can go to the page with information about other participants.
Submission of participants’ methods has been temporarily suspended.

1. Download input data. Download the input low-resolution videos as sequences of frames in PNG format.
There are 2 available options:
  1. Download one folder with all videos joined into a single sequence here.
    Neighboring videos are separated by 5 black frames, which are skipped during evaluation.

  2. If you are concerned that this strategy may degrade your method's performance, you can
    download 245 separate folders, one per video, here.

2. Apply your algorithm. Apply your super-resolution algorithm to the downloaded frames.
You can also send us your method's code or an executable file, and we will run it ourselves.
If your SR model can only upscale by a factor of 4, you can downscale the result
by a factor of 2 using the following FFmpeg command:
ffmpeg -i {4x_path} -vf "scale=1920:1080" -sws_flags gauss {2x_path}

3. Send us the result. Send us an email with the following:
    A. Name of your method that will be specified in our benchmark
    B. A way for us to download your method's output frames (e.g. link to the cloud drive)
    C. (Optional) Any additional information about the method:
      1. Full name of your model
      2. The parameter set that was used
      3. A link to the code of your model, if it is available
      4. A link to the paper about your model
      5. Execution time of your algorithm and information about the GPU used
      6. Any other additional information

Download the report

Download the report
(free download)

PDF report (25.5 MB)

Released on October 12
75 SR+codec pairs
x264, x265, aomenc, VVenC, uavs3 codecs
3 Full HD video sequences
6 different objective metrics and subjective comparison
50+ pages with plots

Videos used in the report:


For questions and suggestions, please contact us:

31 Aug 2021
See Also
MSU 3D-video Quality Analysis. Report 12
MSU Video Upscalers Benchmark 2021
The most comprehensive comparison of video super resolution (VSR) algorithms by subjective quality
MSU Video Upscalers Benchmark Participants
The list of the participants of the MSU Video Upscalers Benchmark
MSU Video Upscalers Benchmark Methodology
The methodology of the MSU Video Upscalers Benchmark
MSU Video Alignment and Retrieval Benchmark
Explore the best algorithms in different video alignment tasks
MSU Video Alignment and Retrieval Benchmark Suite Participants
List of participants of MSU Video Alignment and Retrieval Benchmark Suite