MSU Video Upscalers Benchmark 2022: Quality Enhancement

The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality

G&M Lab head: 
Dr. Dmitriy Vatolin
Measurements, analysis: 
Nikolai Karetin,
Ivan Molodetskikh

Our benchmark determines the best upscaling methods for increasing video resolution and improving visual quality using our compact yet comprehensive dataset.

Everyone is welcome to participate! Run your favorite super-resolution method on our compact test video and send us the result to see how well it performs. Check the “Submitting” section to learn the details.

Over 3700 People

have participated in the verified pairwise subjective comparison

30 Test Clips

with both camera-shot
and 2D-animated content

41 Upscalers Tested

with both 4× and 2× scaling on video with complex distortion

An Open Visual Comparison

with original high-resolution fragments available for reference

Structural Distortion Maps

with compensated pixel shifts
for easy artifact detection

Speed/Quality Scatter Plots

and tables with objective metrics
for a comprehensive comparison

What’s new

  • August 28th, 2022: Release of the Benchmark
  • November 9th, 2021: Beta-version Release

Introduction

Our benchmark ranks video upscalers using a crowd-sourced subjective comparison. Over 3700 valid participants have chosen the most visually appealing upscaling result in numerous pairwise comparisons.

To evaluate upscaling methods, we also compute various objective quality metrics. In addition, we calculate the average FPS (frames per second).

Scroll below for comparison charts, tables, and interactive visual comparisons of upscaling results.

4× Camera-Shot Leaderboards

4× Camera-Shot Visualizations

4× Camera-Shot Charts

4× 2D-Animated Leaderboards

4× 2D-Animated Visualizations

4× 2D-Animated Charts

2× Camera-Shot Leaderboards

2× Camera-Shot Visualizations

2× Camera-Shot Charts

2× 2D-Animated Leaderboards

2× 2D-Animated Visualizations

2× 2D-Animated Charts

Submitting

To add your upscaling method to the benchmark, follow these steps:

1. Download the test video
If your upscaler requires extracting frames,
we recommend using this command:
ffmpeg -i test.mp4 wrong_gamma/frame%04d.png &&
ffmpeg -i wrong_gamma/frame%04d.png test/frame%04d.png
2. Apply your 2× or 4× upscaling method to the video
Configure it for lossless (CRF 0/PNG) output, if possible.
If your upscaler outputs images, we recommend
using this command after upscaling:
ffmpeg -i result/frame%04d.png -crf 0 -pix_fmt yuv444p
-vf scale=out_range=tv:out_color_matrix=bt709:flags=full_chroma_int+accurate_rnd+full_chroma_inp
-color_primaries bt709 -color_trc bt709
-colorspace bt709 -color_range tv result.mp4
3. Send us the result
Email video-upscalers-benchmark@videoprocessing.ai
the following information:
  • The full and short names of the upscaling method, which we will display in our benchmark.
  • A link to an MP4 file with the upscaled frames. The link must remain valid for at least a month after we receive it. Please ensure that we have permission to download the file. By submitting this video, you agree that third parties may use it.
  • The exact upscaling commands, options, versions of the programs used, etc. Each submission must contain the results of exactly one model with fixed settings. Please do not fine-tune model parameters by hand for each video segment.
  • If you want us to verify your method, send us the upscaler’s executable, and we will run it ourselves. We will then mark the FPS of your algorithm as “Verified” in charts and tables. The executable’s arguments must include paths to the folder with input PNG images and the folder for output PNG images. By submitting this executable, you agree that third parties may use it.
  • You can verify the results of current participants or estimate the performance of your method on public samples of our dataset (clips “Fire” and “Pier”). Send us an email with a request to share them with you.
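The submission steps above can be sketched as a single shell dry run that prints each command instead of executing it (so it works before ffmpeg and the test video are in place). The `my_upscaler` invocation is a hypothetical placeholder for your own method; the ffmpeg commands are the ones recommended in the steps:

```shell
# Dry-run sketch of the submission workflow. Replace the echo-based
# helper with direct execution once ffmpeg and test.mp4 are available.
# "my_upscaler" is a placeholder, not a real tool.
submission_plan() {
    # Step 1: extract frames from the test video (two passes, as recommended).
    echo 'ffmpeg -i test.mp4 wrong_gamma/frame%04d.png'
    echo 'ffmpeg -i wrong_gamma/frame%04d.png test/frame%04d.png'
    # Step 2: run your upscaling method on the extracted frames (placeholder).
    echo 'my_upscaler --scale 4 --input test --output result'
    # Step 2, encoding: losslessly pack the upscaled frames into an MP4.
    echo 'ffmpeg -i result/frame%04d.png -crf 0 -pix_fmt yuv444p -vf scale=out_range=tv:out_color_matrix=bt709:flags=full_chroma_int+accurate_rnd+full_chroma_inp -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range tv result.mp4'
}

submission_plan
```

Note that the frame directories (wrong_gamma, test, result) must exist before the real commands run, since ffmpeg does not create output directories.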

If you have any suggestions or questions, please contact us: video-upscalers-benchmark@videoprocessing.ai

Get Notifications About the Updates of This Benchmark

Do you want to be the first to discover the best new upscalers? We can notify you about this benchmark’s updates: simply submit your preferred email address using the form below. We promise not to send you unrelated information.

Personal Upscalers Comparison

Choose the best upscaler for your content:

This benchmark prepares its test video with complex distortions to simulate real-world conditions. However, your content may contain a completely different combination of distortion, noise, and compression, leading to very different upscaler behavior.

Choose the best upscaler for your usage:

This benchmark covers only upscaling of a test video generated from sources downscaled four times: from 1920×1080 to 480×270. This setup is good for comparing the limits of different upscalers but may not correspond to your intended upscaling use case.
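As a sanity check on the numbers above, 1920/4 = 480 and 1080/4 = 270. A minimal shell sketch of such a 4× downscale follows; the bicubic scaling flag is an assumption for illustration, since the benchmark's actual preparation pipeline also adds distortions (see the Methodology section):

```shell
# Illustration only: compute the 4x-downscaled resolution and print a
# plain bicubic ffmpeg downscale command. Not the benchmark's actual
# preparation pipeline, which additionally applies distortions.
SRC_W=1920; SRC_H=1080; SCALE=4
DST_W=$((SRC_W / SCALE))    # 1920 / 4 = 480
DST_H=$((SRC_H / SCALE))    # 1080 / 4 = 270
echo "ffmpeg -i source.mp4 -vf scale=${DST_W}:${DST_H}:flags=bicubic low_res.mp4"
```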

See the whole picture and temporal artifacts:

In this benchmark, we provide only fragments of single frames to prevent participants from cheating. For your content, we can present the full results, allowing you to see the whole picture and detect temporal instability and artifacts (random noise, flicker, morphing).

Personal Upscalers Comparison is coming this year!

Subscribe to this benchmark's updates using the form above and be the first to learn about the release of Personal Upscalers Comparison.

We need your feedback!

How do upscalers help you with your work? Which of the features above are you interested in the most? Share your feedback before the release of Personal Upscalers Comparison and receive a 20% discount (tied to a specified email address).

Further Reading

Check the “Methodology” section to learn how we prepare our dataset.

Check the “Participants” section to learn which upscalers’ implementations we use.

See Also
MSU HDR Video Reconstruction Benchmark 2022
The most comprehensive comparison of HDR video reconstruction methods