MSU Video Super Resolution Benchmark 2021 — selecting the best upscaler

Discover the newest methods and find the one most appropriate for your tasks

What’s new

Key features of the Benchmark

The test-stand of the benchmark: the captured frame and the frame divided by content types.
See methodology for more information.

Leaderboard

The table below compares Video Super-Resolution methods by subjective score and several objective metrics on the bicubic degradation type. The default ranking is by subjective score.


Rank Model Subjective ERQAv1.0 PSNR-Y** SSIM-Y** QRCRv1.0 CRRMv1.0 FPS(s)
1 DBVSR 5.561 0.737 31.071 0.894 0.629 0.992 *
2 LGFN 5.040 0.740 31.291 0.898 0.629 0.996 1.499
3 DynaVSR-R 4.751 0.709 28.377 0.865 0.557 0.997 5.664
4 TDAN 4.036 0.706 30.244 0.883 0.557 0.994 *
5 DUF-28L 3.910 0.645 25.852 0.830 0.549 0.993 2.392
6 RRN-10L 3.887 0.627 24.252 0.790 0.557 0.989 0.390
7 RealSR 3.749 0.690 25.989 0.767 0.000 0.886 *
8 DUF-16L 3.677 0.641 24.606 0.828 0.000 0.968 1.653
9 RRN-5L 3.556 0.617 23.786 0.789 0.549 0.989 0.365
10 D3Dnet 3.525 0.674 29.703 0.876 0.549 0.975 24.238
11 SOF-VSR-BD 3.450 0.647 25.986 0.831 0.557 0.971 1.430
12 SOF-VSR-BI 3.311 0.660 29.381 0.872 0.557 0.974 1.750
13 DynaVSR-V 2.733 0.643 29.011 0.864 0.000 0.997 6.660
14 ESPCN 0.274 0.521 26.714 0.811 0.000 0.948 *

Here you can see the difference between metric values on bicubic and Gaussian degradation.


The table below shows the same metrics on data prepared with Gaussian degradation.


Rank Model Subjective ERQAv1.0 PSNR-Y** SSIM-Y** QRCRv1.0 CRRMv1.0 FPS(s)
1 DBVSR 5.561 0.589 27.664 0.843 0.557 0.964 *
2 LGFN 5.040 0.618 28.020 0.853 0.629 0.966 1.499
3 DynaVSR-R 4.751 0.690 30.182 0.882 0.619 0.977 5.664
4 TDAN 4.036 0.550 27.057 0.831 0.000 0.964 *
5 DUF-28L 3.910 0.534 26.365 0.823 0.000 0.976 2.392
6 RRN-10L 3.887 0.517 25.001 0.798 0.609 0.977 0.390
7 RealSR 3.749 0.588 25.406 0.751 0.000 0.948 *
8 DUF-16L 3.677 0.533 24.969 0.820 0.000 0.995 1.653
9 RRN-5L 3.556 0.505 24.382 0.793 0.549 0.977 0.365
10 D3Dnet 3.525 0.538 27.093 0.831 0.000 0.956 24.238
11 SOF-VSR-BD 3.450 0.536 26.808 0.826 0.549 0.954 1.430
12 SOF-VSR-BI 3.311 0.527 27.004 0.826 0.000 0.955 1.750
13 DynaVSR-V 2.733 0.666 29.724 0.875 0.629 0.986 6.660
14 ESPCN 0.274 0.411 25.672 0.784 0.000 0.940 *

*To be published

**Shifted version (see Methodology for more information)
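The "shifted" PSNR-Y and SSIM-Y account for the small geometric shifts that many VSR methods introduce between the output and the ground truth, by taking the best value over a search window of shifts. A minimal sketch of that idea for PSNR, assuming integer shifts within a ±2-pixel window on a single luma channel (the function names and window size are illustrative assumptions, not the benchmark's actual implementation — see Methodology for the exact procedure):

```python
import numpy as np

def psnr(a, b):
    """Plain PSNR for 8-bit arrays of the same shape."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(255.0 ** 2 / mse)

def shifted_psnr(gt, out, max_shift=2):
    """PSNR maximized over integer shifts of `out` within ±max_shift pixels.

    Borders of width max_shift are cropped so that every candidate shift
    is compared on the same valid region of the ground truth.
    """
    m = max_shift
    gt_crop = gt[m:gt.shape[0] - m, m:gt.shape[1] - m]
    best = -np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = out[m + dy:out.shape[0] - m + dy,
                       m + dx:out.shape[1] - m + dx]
            best = max(best, psnr(gt_crop, cand))
    return best
```

For an output that is a clean one-pixel translation of the ground truth, the plain PSNR is penalized while the shifted version recovers the exact match.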

Correlation of metrics with subjective assessment

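As an illustration of what such a correlation measures, here is a small pure-Python sketch computing the Spearman rank correlation between the subjective scores and the PSNR-Y values from the bicubic table above (the helper names are our own; the benchmark page computes correlations interactively, and may use a different correlation method):

```python
def ranks(values):
    """1-based ranks of values, smallest first (no ties in this data)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation via the classic d^2 formula (tie-free)."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Subjective score and PSNR-Y on bicubic degradation, taken from the
# leaderboard above in rank order (DBVSR, LGFN, ..., ESPCN).
subjective = [5.561, 5.040, 4.751, 4.036, 3.910, 3.887, 3.749,
              3.677, 3.556, 3.525, 3.450, 3.311, 2.733, 0.274]
psnr_y = [31.071, 31.291, 28.377, 30.244, 25.852, 24.252, 25.989,
          24.606, 23.786, 29.703, 25.986, 29.381, 29.011, 26.714]

rho = spearman(subjective, psnr_y)
```

On these numbers the correlation comes out modestly positive, which matches the motivation for the benchmark's perceptually oriented metrics: PSNR alone only partially predicts subjective quality.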

Charts

In this section you can see bar charts and speed-to-performance plots. You can choose the metric, motion, content, and degradation type, and see tests with or without noise. See Methodology for more information about the metrics.




Visualization

In this section you can choose a part of a frame with particular content and see the cropped piece, its MSU VQMT PSNR* visualization, and its ERQAv1.0 visualization. In the "QR-codes" part, codes that can be detected are surrounded by a blue rectangle. Drag the red rectangle over the area you want to crop.
*We visualize the shifted PSNR metric by applying MSU VQMT PSNR Visualization to frames with the optimal shift for PSNR (see Methodology for more information).



Your method submission

Verify the restoration ability of your VSR algorithm and compare it with state-of-the-art solutions.

1. Download input
Download the input low-resolution videos as sequences of frames in .png format.
There are two available options:
    a. Download one folder with all videos joined into one sequence here.
    Neighboring videos are separated by 10 black frames, which will be skipped during evaluation.
    b. If you are concerned that this strategy may decrease your method's performance, you can download 12 folders
    with 100 frames each here.
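For option (a), the ten black separator frames make it possible to split the joined sequence back into individual videos. A rough sketch of that idea, assuming a frame counts as "black" when its mean intensity falls below a small threshold (the threshold value and function names are our assumptions, not part of the benchmark):

```python
def is_black(frame, threshold=8):
    """Treat a grayscale frame (nested list of 0-255 values) as black
    if its mean intensity is below `threshold`."""
    total = sum(sum(row) for row in frame)
    pixels = len(frame) * len(frame[0])
    return total / pixels < threshold

def split_videos(frames):
    """Split a joined frame sequence into per-video lists,
    dropping the black separator frames."""
    videos, current = [], []
    for frame in frames:
        if is_black(frame):
            if current:
                videos.append(current)
                current = []
        else:
            current.append(frame)
    if current:
        videos.append(current)
    return videos
```

A run of consecutive black frames closes the current video once and is otherwise skipped, so any number of separator frames between videos works the same way.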


2. Apply your algorithm
Restore high-resolution frames with your algorithm.
You can also send us an executable file or the code of your method, so that you do not have to run the algorithm yourself.


3. Send us the result
Send an email to vsr-benchmark@videoprocessing.ai with the following information:
    A. The name of your method, as it should appear in our benchmark
    B. A link to a cloud drive (Google Drive, OneDrive, Dropbox, etc.) containing the output frames.
    Check that the number of output frames matches the number of input frames:
      1310 frames in one folder for download option (a)
      12 folders with 100 frames each for download option (b)
    C. (Optional) The execution time of your algorithm and information about the GPU used
    D. (Optional) Any additional information about the method:
      1. Information about your model's architecture (e.g. motion compensation, propagation, fusion)
      2. Full name of your model
      3. The parameter set that was used
      4. Any other additional information
      5. A link to the code of your model, if it is available
      6. A link to the paper about your model
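Before sending the link, it may be worth sanity-checking the output layout automatically. A small sketch for download option (b), using only the Python standard library; the expected counts come from the list above, while the function name and the assumption that folders may be named arbitrarily are ours:

```python
from pathlib import Path

def check_option_b(root, n_folders=12, frames_per_folder=100):
    """Return True if `root` contains exactly `n_folders` directories,
    each holding exactly `frames_per_folder` .png frames."""
    root = Path(root)
    folders = [p for p in root.iterdir() if p.is_dir()]
    if len(folders) != n_folders:
        return False
    return all(len(list(f.glob("*.png"))) == frames_per_folder
               for f in folders)
```

The same check with `n_folders=1, frames_per_folder=1310` would cover option (a).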

Contacts

For questions and suggestions, please contact us: vsr-benchmark@videoprocessing.ai

26 Apr 2021
See Also
MSU Video Quality Measurement Tool: Picture types
VQMT 13.1 Online help: List of all picture types available in VQMT and their aliases
MSU VSR Benchmark Participants
The list of participants of MSU Video Super Resolution Benchmark
MSU VSR Benchmark Methodology
The evaluation methodology of MSU Video Super Resolution Benchmark
MSU SBD Benchmark 2020
MSU SBD Benchmark 2020 Participants
MSU Deinterlacer Benchmark