MSU Super-Resolution for Video Compression Benchmark

Discover SR methods for compressed videos and choose the best model to use with your codec

G&M Lab head: Dr. Dmitriy Vatolin
Project adviser: Dr. Dmitriy Kulikov
Measurements, analysis: Evgeniy Bogatyrev, Egor Sklyarov



Diverse dataset

  • H.264, H.265, H.266, AV1, AVS3 codec standards
  • More than 260 test videos
  • 6 different bitrates

Various charts

  • Visual comparison for more than
    80 SR+codec pairs
  • RD curves and bar charts
    for 5 objective metrics
  • SR+codec pairs ranked by BSQ-rate

Extensive report

  • 80+ pages with different plots
  • 15 SOTA SR methods
    and 6 objective metrics
  • Extensive subjective comparison
    with 5300+ valid participants
  • Powered by Subjectify.us


The pipeline of our benchmark

What’s new

  • 21.07.2022 Added SR codecs to the benchmark. Added MDTVSFA correlation to the correlation chart.
  • 12.04.2022 Uploaded the results of extensive subjective comparison. See “Subjective score” in Charts section.
  • 25.03.2022 Added VRT, BasicVSR, RBPN, and COMISR. Updated Leaderboards and Visualizations sections.
  • 14.03.2022 Uploaded a new dataset. Updated the Methodology.
  • 26.10.2021 Updated the Methodology.
  • 12.10.2021 Published the October report. Added 2 new videos to the dataset. Updated the Charts section and Visualizations.
  • 28.09.2021 Made the Leaderboards section more user-friendly, updated the Methodology, and added the ERQAv1.1 metric.
  • 21.09.2021 Added 2 new videos to the dataset, new plots to the Charts section, and new Visualizations.
  • 14.09.2021 Released the public beta version.
  • 31.08.2021 Released the alpha version.

Charts

In this section, you can see RD curves, which show the bitrate/quality trade-off of each SR+codec pair, and bar charts, which show the BSQ-rate calculated for objective metrics and for subjective scores.

Read about the participants here.
You can find information about the codecs in the methodology.


Charts with metrics

You can choose the test sequence, the codec that was used to compress it, and the metric.

If the BSQ-rate of a method equals 0, this method should be considered much better than the reference codec (the codec without SR).
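To make this convention concrete, below is a minimal sketch of the idea behind BSQ-rate under a simplified definition: the average ratio of the tested pair's bitrate to the reference codec's bitrate at equal quality, over the quality range both RD curves cover. The benchmark's exact procedure is described in the methodology; the function name and sample numbers here are illustrative assumptions.

    # Minimal sketch of the BSQ-rate idea (simplified; see the methodology for
    # the exact definition used by the benchmark). Names and data are assumed.
    import numpy as np

    def bsq_rate(ref_rates, ref_quality, test_rates, test_quality, n=1000):
        # Mean test/reference bitrate ratio over the quality range both RD
        # curves cover; curves are (bitrate, quality) arrays sorted by quality.
        lo = max(min(ref_quality), min(test_quality))
        hi = min(max(ref_quality), max(test_quality))
        if lo >= hi:
            return float("inf")  # no overlap: tested pair never reaches this quality
        q = np.linspace(lo, hi, n)
        # Interpolate log-bitrate as a function of quality (BD-rate-style).
        ref_log = np.interp(q, ref_quality, np.log(ref_rates))
        test_log = np.interp(q, test_quality, np.log(test_rates))
        return float(np.mean(np.exp(test_log - ref_log)))

    # The tested pair reaches equal quality at half the bitrate -> BSQ-rate ~0.5.
    print(bsq_rate([1000, 2000, 4000], [70, 80, 90], [500, 1000, 2000], [70, 80, 90]))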

Highlight the plot region where you want to zoom in.



Correlation of metrics with subjective assessment

We calculated objective metrics on the crops used for the subjective comparison and computed the correlation between the subjective and objective results. Below you can see each metric's average correlation over all test cases.

* ERQA-MDTVSFA is calculated by multiplying the ERQA and MDTVSFA values of a video.
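As a rough illustration of this averaging, the sketch below computes a per-video Spearman correlation between objective and subjective scores and averages it over videos, with the ERQA-MDTVSFA combination formed as a per-pair product. The data layout, method names, and numbers are invented for the example; this is not the benchmark's code.

    # Sketch: average per-video Spearman correlation between a metric and
    # subjective scores. All data below is made up for illustration.
    import numpy as np
    from scipy.stats import spearmanr

    subjective = {"video1": {"A": 3.1, "B": 4.2, "C": 2.5},
                  "video2": {"A": 2.9, "B": 3.8, "C": 3.0}}
    erqa       = {"video1": {"A": 0.61, "B": 0.72, "C": 0.55},
                  "video2": {"A": 0.58, "B": 0.69, "C": 0.60}}
    mdtvsfa    = {"video1": {"A": 0.52, "B": 0.66, "C": 0.49},
                  "video2": {"A": 0.50, "B": 0.63, "C": 0.51}}

    # ERQA-MDTVSFA: per-pair product of the two metrics.
    erqa_mdtvsfa = {v: {p: erqa[v][p] * mdtvsfa[v][p] for p in erqa[v]} for v in erqa}

    def mean_corr(metric, subj):
        corrs = []
        for video, scores in subj.items():
            pairs = sorted(scores)
            rho, _ = spearmanr([metric[video][p] for p in pairs],
                               [scores[p] for p in pairs])
            corrs.append(rho)
        return float(np.mean(corrs))

    for name, m in [("ERQA", erqa), ("MDTVSFA", mdtvsfa), ("ERQA-MDTVSFA", erqa_mdtvsfa)]:
        print(name, mean_corr(m, subjective))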

Speed/BSQ-rate trade-off

Read about Frames per Second (FPS) calculation here. Read about BSQ-rate over Subjective score here.

Visualization

In this section, you can choose the sequence, see a cropped part of a frame from it, a shifted Y-PSNR visualization, and an ERQAv2.0 visualization for this crop. For shifted Y-PSNR, we find the spatial shift that maximizes Y-PSNR and apply MSU VQMT PSNR to the frames with this shift. See the methodology for more information.
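As a rough stand-in for that procedure (the benchmark uses MSU VQMT for the actual measurement), the sketch below searches integer shifts within an assumed ±3-pixel window and keeps the shift that maximizes Y-PSNR:

    # Stand-in for shifted Y-PSNR: try small integer shifts of the distorted
    # luma plane against the reference and keep the best Y-PSNR. The shift
    # range and function names are assumptions for illustration.
    import numpy as np

    def y_psnr(a, b):
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    def shifted_y_psnr(ref_y, dist_y, max_shift=3):
        """Best Y-PSNR over integer shifts of the distorted frame within +/-max_shift."""
        best, best_shift = -np.inf, (0, 0)
        h, w = ref_y.shape
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                # Crop both frames to the region where they overlap after the shift.
                ref_crop = ref_y[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
                dist_crop = dist_y[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
                p = y_psnr(ref_crop, dist_crop)
                if p > best:
                    best, best_shift = p, (dy, dx)
        return best, best_shift

    # Usage: best_psnr, (dy, dx) = shifted_y_psnr(gt_luma, sr_luma)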


Drag the red rectangle to the area that you want to see zoomed in.


Leaderboards

SR+codec pairs leaderboard

The table below compares all pairs of Super-Resolution algorithms and codecs. Each column shows the BSQ-rate over a specific metric. You can sort the table by any column.
All methods that took part in the subjective comparison are ranked by BSQ-rate over the subjective score. Other methods are ranked by BSQ-rate over ERQA.
If the BSQ-rate of a method equals 0, the method should be considered much better than the reference codec.
If the BSQ-rate of a method tends to infinity (marked '∞'), the method should be considered much worse than the reference codec.
"TBP" means that the SR+codec pair did not take part in the subjective comparison.


Table columns: Rank, SR + codec, Y-VMAF, ERQAv2.0, Y-PSNR, Y-MS-SSIM, LPIPS.

SR codecs

You can find information about SR codecs on the participants page.

You can choose the test sequence and the metric.

Highlight the plot region where you want to zoom in.


SR+codec Benchmark Roadmap

  • More test sequences (released 14.03.2022). We will extend our dataset to make it more diverse and cover more use cases. We expect it to contain 9×6×5 = 270 Full HD videos.
  • Extensive subjective comparison (released 12.04.2022). We plan to conduct a bigger subjective comparison for all codecs to get a more accurate ranking for more SR+codec pairs. A subjective comparison with that many video pairs is very expensive. If you want to support our benchmark, please contact us: sr-codecs-benchmark@videoprocessing.ai
  • More state-of-the-art Super-Resolution methods (Q4 2022). New Super-Resolution methods are constantly being developed, and we will add new high-quality SR methods to our benchmark as they appear. We also expect developers to submit their methods to us. You can submit your method here.
  • Enterprise report (Q1 2023). We will compile an enterprise report with more test sequences and different objective and subjective metrics. You can see an example in the Report section.
  • A new metric to measure compressed-video restoration quality (Q1 2023). The subjective comparison showed that the most popular video quality metrics, PSNR and SSIM, are not applicable to the Super-Resolution task. We are researching a metric for compressed-video restoration quality that correlates well with subjective assessment.
  • "Real-time" and "restoration" categories (Q2 2023). Some Super-Resolution models work faster than others, while slower methods can achieve much higher quality. We plan to divide the leaderboard of our benchmark into two categories: "real-time" for fast SR methods that can enhance videos in real time, and "restoration" for SR methods that can produce high-quality results regardless of runtime.

Submit your method

Verify your method’s ability to restore compressed videos and compare it with other algorithms.
You can also visit the page with information about the other participants.

1. Download input data. Download the low-resolution input videos as sequences of frames in PNG format.
There are two available options:
  1. Download one folder with all videos joined into a single sequence here.
    Neighboring videos are separated by 5 black frames, which are skipped
    during evaluation (see the splitting sketch after this list).

  2. If you are concerned that this strategy could affect your method's performance,
    you can download 269 folders, one per video, here.
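If you go with the joined sequence, a hedged sketch for splitting it back into individual videos by detecting the black separator frames might look like this (the brightness threshold and the PNG file layout are assumptions, not a specification of the data):

    # Split the joined sequence into per-video frame lists by detecting runs
    # of (near-)black separator frames. Threshold and layout are assumptions.
    from pathlib import Path
    import numpy as np
    from PIL import Image

    def split_by_black_frames(frames_dir, threshold=8.0):
        """Group frame files into per-video lists, dropping black separators."""
        videos, current = [], []
        for path in sorted(Path(frames_dir).glob("*.png")):
            frame = np.asarray(Image.open(path).convert("L"))
            if frame.mean() < threshold:  # separator frame: close the current video
                if current:
                    videos.append(current)
                    current = []
            else:
                current.append(path)
        if current:
            videos.append(current)
        return videos

    videos = split_by_black_frames("all_videos")
    print(f"Found {len(videos)} videos")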


2. Apply your algorithm. Apply your Super-Resolution algorithm to upscale the frames to 1920×1080 resolution.
You can also send us your method's code or an executable file
with instructions on how to run it, and we will run it ourselves.
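For reference, a minimal placeholder for this step, with a bicubic resize standing in for an actual SR model and assumed directory names; replace the resize call with your model's inference:

    # Placeholder for step 2: read each low-resolution PNG and write a
    # 1920x1080 PNG with the same file name. Directory names are assumptions.
    from pathlib import Path
    from PIL import Image

    IN_DIR, OUT_DIR = Path("input_frames"), Path("output_frames")
    OUT_DIR.mkdir(exist_ok=True)

    for path in sorted(IN_DIR.glob("*.png")):
        frame = Image.open(path)
        # Replace this bicubic resize with your Super-Resolution model's inference.
        upscaled = frame.resize((1920, 1080), Image.BICUBIC)
        upscaled.save(OUT_DIR / path.name)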

3. Send us the result. Send an email to sr-codecs-benchmark@videoprocessing.ai with the following
information:
    A. The name of your method as it should appear in our benchmark
    B. A way for us to download your method's output frames (e.g. a link
    to a cloud drive)
    C. (Optional) Any additional information about the method:
      1. The full name of your model
      2. The parameters that were used
      3. A link to your model's code, if it is available
      4. A link to the paper about your model, if it is available
      5. The execution time of your algorithm and information about the GPU, if one was used
      6. Any other additional information

You can verify the results of the current participants or estimate the performance of your method on public samples of our dataset. Just send an email to sr-codecs-benchmark@videoprocessing.ai with a request to share them with you.

Our policy:

  • We won't publish the results of your method without your permission.
  • We share only public samples of our dataset, as the full dataset is private.

Download the Report

Download the report (free download, PDF, 25.5 MB)



Released on October 12, 2021:

  • 75 SR+codec pairs
  • x264, x265, aomenc, VVenC, uavs3 codecs
  • 3 Full HD video sequences
  • 6 objective metrics (Y-PSNR, YUV-MS-SSIM, Y-VMAF, Y-VMAF NEG, LPIPS, ERQA) and a subjective comparison
  • 80+ pages with plots

Videos used in the report:

Acknowledgements:

Contact Us

For questions and suggestions, please contact us: sr-codecs-benchmark@videoprocessing.ai

You can subscribe to updates on our benchmark.

See Also
MSU CVQAD – Compressed VQA Dataset
During our work, we created a database for video quality assessment with subjective scores
MSU Video Upscalers Benchmark 2022
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality
MSU Video Deblurring Benchmark 2022
Learn about the best video deblurring methods and choose the best model