Video Upscalers Benchmark: Quality Enhancement

The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis: 
Nikolai Karetin,
Ivan Molodetskikh

Our benchmark determines the best upscaling methods for increasing video resolution and improving visual quality using our compact yet comprehensive dataset.

Everyone is welcome to participate! Run your favorite super-resolution method on our compact test video and send us the result to see how well it performs. Check the “Submitting” section to learn the details.

Over 3700 People

have participated in the verified pairwise subjective comparison

30 Test Clips

with both camera-shot
and 2D-animated content

41 Upscalers Tested

with both 4× and 2× scaling on video with complex distortion

An Open Visual Comparison

with original high-resolution fragments available for reference

Structural Distortion Maps

with compensated pixel shifts
for easy artifact detection

Speed/Quality Scatter Plots

and tables with objective metrics
for a comprehensive comparison

What’s new

  • April 9th, 2023: Added LESRCNN, CFSRCNN and ACNet
  • November 13th, 2022: Added HGSRCNN and ESRGCNN
  • August 28th, 2022: Release of the Benchmark
  • November 9th, 2021: Beta-version Release

Introduction

Our benchmark presents the ranking of video upscalers using crowd-sourced subjective comparison. Over 3700 valid participants have selected the most visually appealing upscaling result in many pairwise comparisons.

For evaluating upscaling methods, we also use various metrics (objective quality measures). In addition, we calculate the average FPS (frames per second).
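As an illustration of such objective measures, PSNR (one of the metrics commonly reported alongside subjective scores) can be sketched in a few lines. This is a minimal pure-Python version for clarity; actual benchmark measurements use optimized tools such as MSU VQMT:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized frames.

    ref and test are flat sequences of pixel samples
    (e.g. 8-bit luma values of one frame)."""
    if len(ref) != len(test):
        raise ValueError("frames must have the same number of samples")
    # Mean squared error between reference and upscaled frame.
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the upscaled frame is numerically closer to the ground-truth high-resolution frame, though it does not always agree with subjective rankings.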

Scroll below for comparison charts, tables, and interactive visual comparisons of upscaling results.

4× Camera-Shot Leaderboards

4× Camera-Shot Visualizations

4× Camera-Shot Charts

4× 2D-Animated Leaderboards

4× 2D-Animated Visualizations

4× 2D-Animated Charts

2× Camera-Shot Leaderboards

2× Camera-Shot Visualizations

2× Camera-Shot Charts

2× 2D-Animated Leaderboards

2× 2D-Animated Visualizations

2× 2D-Animated Charts

Submitting

To add your upscaling method to the benchmark, follow these steps:

1. Download the test video
If your upscaler requires extracting frames,
we recommend using this command:
ffmpeg -i test.mp4 wrong_gamma/frame%04d.png &&
ffmpeg -i wrong_gamma/frame%04d.png test/frame%04d.png
2. Apply your 2× or 4× upscaling method to the video
Configure it for lossless (CRF0/PNG) output, if possible.
If your upscaler outputs images, we recommend
using this command after upscaling: ffmpeg -i result/frame%04d.png -crf 0 -pix_fmt yuv444p
-vf scale=out_range=tv:out_color_matrix=bt709:
flags=full_chroma_int+accurate_rnd+full_chroma_inp
-color_primaries bt709 -color_trc bt709
-colorspace bt709 -color_range tv result.mp4
3. Send us the result
Send the following information
to video-upscalers-benchmark@videoprocessing.ai:
  • The full and short names of the upscaling method that we will specify in our benchmark.
  • An MP4 file with upscaled frames. The link must remain valid for at least a month after we receive it. Please ensure that we have permission to download the file.
  • The exact upscaling commands, options, versions of used programs, etc. Each submission must contain results of exactly one model with fixed settings. Please do not fine-tune model parameters by hand for each video segment.
  • If you want us to verify your method, send us the upscaler’s executable, and we will run it ourselves. We will then mark your algorithm’s FPS as “Verified” in charts and tables. The executable’s arguments must include paths to the folder with input PNG images and the folder for output PNG images.
  • You can verify the results of current participants or estimate the performance of your method on public samples of our dataset (clips “Fire” and “Pier”). Send us an email with a request to share them with you.
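For FPS verification, the executable is expected to take an input-frames folder and an output-frames folder as arguments. The command-line contract can be sketched as follows (a hypothetical minimal wrapper; the copy step is a placeholder standing in for your model's per-frame inference):

```python
import shutil
from pathlib import Path

def run_upscaler(input_dir: str, output_dir: str) -> int:
    """Process every PNG frame in input_dir, write results to output_dir.

    Returns the number of frames processed. The real model call goes
    where the copy is; this sketch only demonstrates the expected
    folder-in, folder-out interface."""
    src = Path(input_dir)
    dst = Path(output_dir)
    dst.mkdir(parents=True, exist_ok=True)
    frames = sorted(src.glob("*.png"))
    for frame in frames:
        # Placeholder: replace with your model's inference on this frame.
        shutil.copy(frame, dst / frame.name)
    return len(frames)

if __name__ == "__main__":
    import sys
    if len(sys.argv) == 3:
        n = run_upscaler(sys.argv[1], sys.argv[2])
        print(f"processed {n} frames")
```

Invoked, for example, as `python upscale.py test/ result/`, producing `result/frame0001.png`, `result/frame0002.png`, and so on with the same names as the inputs.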

If you have any suggestions or questions, please contact us: video-upscalers-benchmark@videoprocessing.ai

Get Notifications About the Updates of This Benchmark

We can notify you about this benchmark’s updates: simply submit your preferred email address using the form below.

Further Reading

Check the “Methodology” section to learn how we prepare our dataset.

Check the “Participants” section to learn which upscalers’ implementations we use.

MSU Video Quality Measurement Tool

    The tool for performing video/image quality analysis using reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & Classical Metrics SSIM, MS-SSIM, PSNR, VMAF and 10+ more
  • No-reference analysis & video characteristics
    Blurring, Blocking, Noise, Scene change detection, NIQE and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measure
  • Unlimited file size

  • Main MSU VQMT page on compression.ru

Crowd-sourced subjective
quality evaluation platform

  • Conduct comparisons of video codecs and/or encoding parameters

What is it?

Subjectify.us is a web platform for conducting fast crowd-sourced subjective comparisons.

The service is designed for the comparison of images, video, and sound processing methods.

Main features

  • Pairwise comparison
  • Detailed report
  • Providing all of the raw data
  • Filtering out answers from cheating respondents

  • Subjectify.us
Last updated: 09 Apr 2023
See Also
Video Colorization Benchmark
Explore the best video colorization algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Defenses for Image Quality Metrics Benchmark
Explore defenses against adversarial attacks on image quality metrics
Learning-Based Image Compression Benchmark
The first extensive comparison of learned image compression algorithms
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms