MSU Super-Resolution for Video Compression Benchmark

Discover SR methods for compressed videos and choose the best model to use with your codec

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: 
Evgeney Bogatyrev,
Ivan Molodetskikh,
Egor Sklyarov



Diverse dataset

  • H.264, H.265, H.266, AV1, AVS3 codec standards
  • More than 260 test videos
  • 6 different bitrates

Various charts

  • Visual comparison for more than
    80 SR+codec pairs
  • RD curves and bar charts
    for 5 objective metrics
  • SR+codec pairs ranked by BSQ-rate

Extensive report

  • 80+ pages with different plots
  • 15 SOTA SR methods
    and 6 objective metrics
  • Extensive subjective comparison
    with 5300+ valid participants
  • Powered by Subjectify.us
arxiv.org paper

Introduction

We present a comprehensive SR benchmark to test the ability of SR models to upscale and restore videos compressed by video codecs of different standards. We evaluate the perceptual quality of the restored videos by conducting a crowd-sourced subjective comparison. For every tested codec, we show which SR methods provide the most video bitrate reduction with the least quality loss.

Everyone is welcome to participate! Run your favorite super-resolution method on our compact set of test videos and send us the result to see how well it performs. Check the “Submit your method” section to learn the details.


The pipeline of our benchmark

What’s new

  • 20.08.2024 Our paper “SR+Codec: a Benchmark of Super-Resolution for Video Compression Bitrate Reduction” was accepted to BMVC 2024.
  • 21.07.2023 Added E-MoEVRT.
  • 12.06.2023 Added RKPQ-4xSR.
  • 21.07.2022 Added MDTVSFA correlation to the correlation chart.
  • 12.04.2022 Uploaded the results of extensive subjective comparison. See “Subjective score” in Charts section.
  • 25.03.2022 Added VRT, BasicVSR++, RBPN, and COMISR. Updated Leaderboards and Visualizations sections.
  • 14.03.2022 Uploaded new dataset. Updated the Methodology.
  • 26.10.2021 Updated the Methodology.
  • 12.10.2021 Published October Report. Added 2 new videos to the dataset. Updated Charts section and Visualizations.
  • 28.09.2021 Improved the Leaderboards section to make it more user-friendly, updated the Methodology and added ERQAv1.1 metric.
  • 21.09.2021 Added 2 new videos to the dataset, new plots to the Charts section, and new Visualizations.
  • 14.09.2021 Public beta-version Release.
  • 31.08.2021 Alpha-version Release.

Charts

In this section, you can see RD curves, which show the bitrate/quality distribution of each SR+codec pair, and bar charts, which show the BSQ-rate calculated for objective metrics and subjective scores.

Read about the participants here.
You can find information about the codecs in the methodology and more details about the metrics here.

Charts with metrics

You can choose the test sequence, the codec that was used to compress it, and the metric.

If the BSQ-rate of a method equals 0, that method should be considered much better than the reference codec (the codec without SR).
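For intuition, a BSQ-rate-style score can be approximated as the mean ratio between a method's bitrate and the reference codec's bitrate at equal quality, averaged over the overlapping quality range. A minimal sketch under that assumption (the exact definition is in our methodology and paper; the function and variable names here are illustrative, not the benchmark's):

```python
import numpy as np

def bsq_rate(ref_bitrates, ref_quality, test_bitrates, test_quality, n=200):
    """Approximate BSQ-rate: mean ratio of the test method's bitrate to the
    reference bitrate at equal quality over the shared quality range.
    Assumes both RD curves are sampled with increasing quality values."""
    lo = max(min(ref_quality), min(test_quality))
    hi = min(max(ref_quality), max(test_quality))
    q = np.linspace(lo, hi, n)
    # Interpolate bitrate as a function of quality for both RD curves.
    r_ref = np.interp(q, ref_quality, ref_bitrates)
    r_test = np.interp(q, test_quality, test_bitrates)
    return float(np.mean(r_test / r_ref))
```

A value below 1 means the SR+codec pair needs less bitrate than the reference codec for the same quality.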

Highlight the plot region where you want to zoom in.



Correlation of metrics with subjective assessment

We computed objective metrics on the crops used for the subjective comparison and calculated the correlation between the subjective and objective results. Below you can see each metric's correlation averaged over all test cases.

* ERQA-MDTVSFA is calculated by multiplying MDTVSFA and ERQA values over the video.
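A sketch of how a per-case correlation can be computed and averaged (the benchmark's exact correlation type and aggregation are described in the methodology; this NumPy version computes Spearman as Pearson on ranks and ignores rank ties):

```python
import numpy as np

def pearson(x, y):
    # Pearson linear correlation coefficient.
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Spearman = Pearson on ranks (no tie handling, for brevity).
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

def mean_correlation(per_case_pairs, corr=spearman):
    """Average a correlation over test cases:
    per_case_pairs is a list of (subjective_scores, metric_values)."""
    return float(np.mean([corr(np.asarray(s), np.asarray(m))
                          for s, m in per_case_pairs]))
```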

Visualization

In this section, you can choose a sequence and see a cropped part of one of its frames, along with a shifted Y-PSNR visualization and an ERQAv2.0 visualization for that crop. For shifted Y-PSNR, we find the optimal shift for Y-PSNR and apply MSU VQMT PSNR to the frames with this shift. See the methodology for more information.
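The idea behind shifted Y-PSNR can be sketched as a search over small integer shifts, reporting the best Y-PSNR over the overlapping region. This is a hedged illustration only: the benchmark uses MSU VQMT with the optimal shift, and its exact search range and subpixel handling may differ.

```python
import numpy as np

def shifted_y_psnr(ref_y, dist_y, max_shift=3):
    """Best Y-PSNR over small integer (dy, dx) shifts of the distorted
    frame relative to the reference, compared on the overlapping region.
    Inputs are 2-D uint8 luma planes of equal shape."""
    best = -np.inf
    h, w = ref_y.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows: ref[i, j] vs dist[i + dy, j + dx].
            r = ref_y[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
            d = dist_y[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            mse = np.mean((r.astype(np.float64) - d.astype(np.float64)) ** 2)
            psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
            best = max(best, psnr)
    return best
```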



Drag the red rectangle to the area that you want to see zoomed in


Leaderboards

The table below lists all methods that took part in the subjective comparison: the SR methods for each codec plus an “only codec” baseline, which applies the video codec to the source video without downscaling or super-resolution.
You can choose the video and the codec that was used to compress it.
All sequences were compressed at an approximate bitrate of 2 Mbps.
You can sort the table by any column.


Table columns: SR, Subjective, ERQA, LPIPS, Y-PSNR, MDTVSFA, Y-VMAF, Y-MS-SSIM

Submit your method

Verify your method’s ability to restore compressed videos and compare it with other algorithms.
You can go to the page with information about other participants.

1. Download input data Download low-resolution input videos as sequences of frames in PNG format.
There are 2 available options:
  1. Download 1 folder with all videos joined into one sequence here.
    Neighboring videos are separated by 5 black frames, which are skipped
    during evaluation.

  2. If you are concerned that this joining could affect your method's
    performance, you can download 269 folders, one per video, here.
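If you use the joined download, the separator frames can be stripped programmatically. A hedged sketch (the 5-frame separator count comes from the description above; the brightness threshold and function name are our assumptions):

```python
import numpy as np

def split_joined_sequence(frames, sep_len=5, black_thresh=8):
    """Split a joined frame sequence back into individual videos by
    detecting runs of (near-)black separator frames."""
    videos, current = [], []
    black_run = 0
    for f in frames:
        if float(np.mean(f)) < black_thresh:  # near-black separator frame
            black_run += 1
            if black_run == sep_len and current:
                videos.append(current)
                current = []
        else:
            black_run = 0
            current.append(f)
    if current:  # last video has no trailing separator
        videos.append(current)
    return videos
```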


2. Apply your algorithm Apply your Super-Resolution algorithm to upscale the frames to 1920×1080 resolution.
You can also send us your method's code or an executable file
with instructions on how to run it, and we will run it ourselves.
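As a trivial baseline for this step, assuming one PNG per frame in a flat directory (our assumption, not the benchmark's required layout), bicubic upscaling with Pillow looks like:

```python
from pathlib import Path

from PIL import Image

def upscale_frames(in_dir, out_dir, size=(1920, 1080)):
    """Bicubic-upscale all PNG frames in in_dir to 1920x1080.
    A real submission would replace this with the SR model's output."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for png in sorted(Path(in_dir).glob("*.png")):
        img = Image.open(png).convert("RGB")
        img.resize(size, Image.BICUBIC).save(out / png.name)
```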

3. Send us the result Send us an email to sr-codecs-benchmark@videoprocessing.ai with the following
information:
    A. The name of your method as it should appear in our benchmark
    B. A way for us to download your method's output frames (e.g., a link
    to a cloud drive)
    C. (Optional) Any additional information about the method:
      1. The full name of your model
      2. The parameters that were used
      3. A link to your model's code, if it is available
      4. A link to the paper about your model, if it is available
      5. The execution time of your algorithm and information about the GPU, if one was used
      6. Any other additional information

You can verify the results of current participants or estimate the performance of your method on public samples of our dataset. Just send an email to sr-codecs-benchmark@videoprocessing.ai with a request to share them with you.

Our policy:

  • We won't publish the results of your method without your permission.
  • We share only public samples of our dataset, as the full dataset is private.

Download the Report

Download the report
(free download)


(PDF, 25.5 MB)



Released on October 12
75 SR+codec pairs
x264, x265, aomenc, VVenC, uavs3 codecs
3 Full HD video sequences
6 different objective metrics and subjective comparison
Y-PSNR, YUV-MS-SSIM, Y-VMAF, Y-VMAF NEG, LPIPS, ERQA
80+ pages with plots

Videos used in the report:

Acknowledgements:

Cite Us

To refer to our benchmark in your work, cite our paper:

@article{bogatyrev2023srcodec,
  author={Bogatyrev, Evgeney and Molodetskikh, Ivan and Vatolin, Dmitriy},
  journal={arXiv preprint arXiv:2305.04844},
  title={SR+Codec: a Benchmark of Super-Resolution for Video Compression Bitrate Reduction},
  year={2024},
}

Contact Us

For questions and suggestions, please contact us: sr-codecs-benchmark@videoprocessing.ai

You can subscribe to updates on our benchmark:


MSU Video Quality Measurement Tool


    The tool for performing video/image quality analyses using reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & Classical Metrics SSIM, MS-SSIM, PSNR, VMAF and 10+ more
  • Non-reference analysis & video characteristics
    Blurring, Blocking, Noise, Scene change detection, NIQE and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measure
  • Unlimited file size

  • Main MSU VQMT page on compression.ru

Crowd-sourced subjective
quality evaluation platform

  • Conduct comparison of video codecs and/or encoding parameters

What is it?

Subjectify.us is a web platform for conducting fast crowd-sourced subjective comparisons.

The service is designed for the comparison of images, video, and sound processing methods.

Main features

  • Pairwise comparison
  • Detailed report
  • Providing all of the raw data
  • Filtering out answers from cheating respondents

  • Subjectify.us
See Also
PSNR and SSIM: application areas and criticism
Learn about limits and applicability of the most popular metrics
Video Colorization Benchmark
Explore the best video colorization algorithms
Defenses for Image Quality Metrics Benchmark
Explore defenses against adversarial attacks on image quality metrics
Learning-Based Image Compression Benchmark
The first extensive comparison of Learned Image Compression algorithms
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Site structure