MSU Video Super Resolution Benchmark 2021 — find the best upscaler

Discover the newest methods and find the one most appropriate for your tasks

Video group head: Dr. Dmitriy Vatolin
Measurements, analysis: 
Anastasia Kirillova,
Eugene Lyapustin

What’s new

Key features of the Benchmark

The test stand of the benchmark: the captured frame and the frame divided by content types.
See the methodology for more information.

Leaderboard

The table below compares Video Super Resolution methods by subjective score and several objective metrics on the Bicubic Interpolation (BI) degradation model. The default ranking is by subjective score.
You can click on a model's name in the table to read about the method. You can see information about all participants here.

Metrics: Subjective assessment, ERQAv1.0, PSNR-Y, SSIM-Y, QRCRv1.0, CRRMv1.0, and FPS (see the methodology to read about each metric).
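For illustration, here is what the simplest of these metrics computes: PSNR-Y is PSNR taken over the luma (Y) channel only. A minimal Python sketch, assuming 8-bit RGB frames and BT.601 luma weights (the benchmark itself uses a shifted PSNR-Y computed with MSU VQMT, as noted under the table):

```python
# Minimal, unshifted PSNR-Y sketch. Assumptions: 8-bit RGB frames and
# BT.601 luma weights; the benchmark's actual metric is a shifted PSNR-Y
# computed with MSU VQMT (see the methodology).
import numpy as np

def rgb_to_y(rgb: np.ndarray) -> np.ndarray:
    """HxWx3 uint8 RGB frame -> luma plane (BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr_y(gt: np.ndarray, pred: np.ndarray) -> float:
    """PSNR between the Y channels of two frames of identical shape."""
    diff = rgb_to_y(gt).astype(np.float64) - rgb_to_y(pred).astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```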


Rank  Model            Subjective  ERQAv1.0  QRCRv1.0  SSIM-Y**  CRRMv1.0  PSNR-Y**  FPS
   1  RBPN[1]               5.325     0.746     0.629     0.899     0.998    31.407  TBP*
   2  DBVSR[2]              5.247     0.737     0.629     0.894     0.992    31.071  TBP*
   3  iSeeBetter[3]         5.122     0.748     0.629     0.896     1.000    31.104  22.275¹
   4  LGFN[4]               4.715     0.740     0.629     0.898     0.996    31.291  1.499¹
   5  DynaVSR-R[5]          4.403     0.709     0.557     0.865     0.997    28.377  5.664¹
   6  TMNet[6]              4.298     0.712     0.549     0.885     0.997    30.364  TBP*
   7  RSDN[7]               3.845     0.667     0.619     0.826     0.985    25.321  TBP*
   8  TGA[8]                3.832     0.669     0.549     0.831     0.987    25.786  1.417¹
   9  TDAN[9]               3.718     0.706     0.609     0.883     0.994    30.244  TBP*
  10  ESRGAN[10]            3.674     0.735     0.000     0.808     0.998    27.330  0.996¹
  11  Real-ESRGAN[11]       3.652     0.663     0.000     0.774     0.942    24.441  0.991¹
  12  RRN-10L[12]           3.587     0.627     0.557     0.790     0.989    24.252  0.390¹
  13  DUF-28L[13]           3.571     0.645     0.549     0.830     0.993    25.852  2.392¹
  14  RealSR[14]            3.520     0.690     0.000     0.767     0.886    25.989  TBP*
  15  DUF-16L[15]           3.393     0.641     0.549     0.828     0.968    24.606  1.653¹
  16  D3Dnet[16]            3.283     0.674     0.549     0.876     0.975    29.703  24.238¹
  17  RRN-5L[17]            3.245     0.617     0.549     0.789     0.989    23.786  0.365¹
  18  SOF-VSR-BD[18]        3.110     0.647     0.557     0.831     0.971    25.986  1.430¹
  19  SOF-VSR-BI[19]        3.041     0.660     0.557     0.872     0.974    29.381  1.750¹
  20  DynaVSR-V[20]         2.554     0.643     0.549     0.864     0.997    29.011  6.660¹
  21  Real-ESRnet[21]       1.830     0.598     0.000     0.824     0.951    27.195  0.982¹
  22  ESPCN[22]             0.214     0.521     0.000     0.811     0.948    26.714  TBP*

* To be published
** Shifted version (see the methodology for more information)
¹ Measured by MSU
² Measured by participants

The absolute difference between metric values on BI and BD degradation models
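BI and BD are the two degradation models used to prepare the low-resolution inputs. In the VSR literature, BI usually means bicubic downsampling of the ground-truth frame, and BD means Gaussian blur followed by subsampling; see the methodology for the exact kernels used in this benchmark. A hedged sketch of both, assuming a 4x scale factor and a sigma = 1.6 blur (typical literature values, not confirmed here):

```python
# Sketch of the BI and BD degradation models. Assumptions: 4x scale factor,
# 13x13 Gaussian kernel with sigma = 1.6 for BD; the benchmark's exact
# kernels are described in its methodology.
import cv2
import numpy as np

SCALE = 4  # assumed upscaling factor

def degrade_bi(hr: np.ndarray) -> np.ndarray:
    """BI: bicubic downsampling of the high-resolution frame."""
    h, w = hr.shape[:2]
    return cv2.resize(hr, (w // SCALE, h // SCALE), interpolation=cv2.INTER_CUBIC)

def degrade_bd(hr: np.ndarray) -> np.ndarray:
    """BD: Gaussian blur, then keeping every SCALE-th pixel."""
    blurred = cv2.GaussianBlur(hr, ksize=(13, 13), sigmaX=1.6)
    return blurred[::SCALE, ::SCALE]
```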


The table below shows metrics on data prepared with the BD degradation model.


Rank  Model            Subjective  ERQAv1.0  QRCRv1.0  SSIM-Y**  CRRMv1.0  PSNR-Y**  FPS
   1  RBPN[1]               5.325     0.595     0.629     0.847     0.966    27.765  TBP*
   2  DBVSR[2]              5.247     0.589     0.557     0.843     0.964    27.664  TBP*
   3  iSeeBetter[3]         5.122     0.596     0.557     0.848     0.969    27.712  22.275¹
   4  LGFN[4]               4.715     0.618     0.629     0.853     0.966    28.020  1.499¹
   5  DynaVSR-R[5]          4.403     0.690     0.619     0.882     0.977    30.182  5.664¹
   6  TMNet[6]              4.298     0.537     0.000     0.827     0.964    26.918  TBP*
   7  RSDN[7]               3.845     0.554     0.557     0.826     0.978    25.767  TBP*
   8  TGA[8]                3.832     0.554     0.557     0.828     0.976    26.247  1.417¹
   9  TDAN[9]               3.718     0.550     0.000     0.831     0.964    27.057  TBP*
  10  ESRGAN[10]            3.674     0.506     0.000     0.797     0.964    26.115  0.996¹
  11  Real-ESRGAN[11]       3.652     0.656     0.000     0.764     0.950    24.239  0.991¹
  12  RRN-10L[12]           3.587     0.517     0.609     0.798     0.977    25.001  0.390¹
  13  DUF-28L[13]           3.571     0.534     0.000     0.823     0.976    26.365  2.392¹
  14  RealSR[14]            3.520     0.588     0.000     0.751     0.948    25.406  TBP*
  15  DUF-16L[15]           3.393     0.533     0.000     0.820     0.995    24.969  1.653¹
  16  D3Dnet[16]            3.283     0.538     0.000     0.831     0.956    27.093  24.238¹
  17  RRN-5L[17]            3.245     0.505     0.549     0.793     0.977    24.382  0.365¹
  18  SOF-VSR-BD[18]        3.110     0.536     0.549     0.826     0.954    26.808  1.430¹
  19  SOF-VSR-BI[19]        3.041     0.527     0.000     0.826     0.955    27.004  1.750¹
  20  DynaVSR-V[20]         2.554     0.666     0.629     0.875     0.986    29.724  6.660¹
  21  Real-ESRnet[21]       1.830     0.591     0.000     0.822     0.955    27.194  0.982¹
  22  ESPCN[22]             0.214     0.411     0.000     0.784     0.940    25.672  TBP*

* To be published
** Shifted version (see the methodology for more information)
¹ Measured by MSU
² Measured by participants

Correlation of metrics with subjective assessment

This section shows how strongly each objective metric correlates with the subjective assessment; you can choose the correlation method.
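As an illustration, such correlations can be computed with standard coefficients such as Pearson (linear) and Spearman (rank); whether these are the exact methods offered here is described in the methodology. The sketch below uses the top five models' subjective and ERQAv1.0 values from the leaderboard above:

```python
# Correlation between a metric and subjective scores, using the top-5
# (by rank) subjective and ERQAv1.0 values from the BI leaderboard above.
from scipy.stats import pearsonr, spearmanr

subjective = [5.325, 5.247, 5.122, 4.715, 4.403]  # RBPN .. DynaVSR-R
erqa = [0.746, 0.737, 0.748, 0.740, 0.709]        # same models, ERQAv1.0

print("Pearson:  %.3f" % pearsonr(subjective, erqa)[0])
print("Spearman: %.3f" % spearmanr(subjective, erqa)[0])
```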

Charts

In this section, you can see bar charts and speed-to-performance plots. You can choose the metric, motion, content, and degradation type, and view tests with or without noise.
You can see information about all participants here.
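A speed-to-performance plot like the interactive one can be rebuilt from the leaderboard data; the hypothetical matplotlib sketch below uses a few models whose FPS was measured by MSU:

```python
# Speed vs. subjective quality for a few models with published FPS,
# taken from the BI leaderboard above.
import matplotlib.pyplot as plt

models = ["iSeeBetter", "LGFN", "DynaVSR-R", "TGA", "ESRGAN"]
fps = [22.275, 1.499, 5.664, 1.417, 0.996]
subj = [5.122, 4.715, 4.403, 3.832, 3.674]

plt.scatter(fps, subj)
for name, x, y in zip(models, fps, subj):
    plt.annotate(name, (x, y))
plt.xscale("log")                      # FPS spans two orders of magnitude
plt.xlabel("FPS (measured by MSU)")
plt.ylabel("Subjective score")
plt.show()
```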


Visualization

In this section, you can choose a part of the frame with particular content and see the cropped piece of it, the MSU VQMT PSNR* visualization, and the ERQAv1.0 visualization for that crop. In the "QR codes" part, codes that can be detected are outlined with a blue rectangle. You can see information about all participants here.
*We visualize the shifted PSNR metric by applying MSU VQMT PSNR visualization to frames with the optimal shift for PSNR.
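The footnote implies an alignment step: PSNR is evaluated at the integer shift that best aligns the output with the ground truth. A minimal sketch of that idea (the search radius of 2 pixels and the wrap-around shifting are simplifying assumptions; the benchmark uses MSU VQMT for the actual visualization):

```python
# Find the integer (dy, dx) shift minimizing MSE against the ground truth
# and return the per-pixel squared-error map at that alignment. The search
# radius is an assumption; np.roll wraps at borders, which a real
# implementation would crop away.
import numpy as np

def best_shift(gt: np.ndarray, pred: np.ndarray, radius: int = 2):
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(pred, (dy, dx), axis=(0, 1))
            err = (gt.astype(np.float64) - shifted) ** 2
            mse = err.mean()
            if best is None or mse < best[0]:
                best = (mse, (dy, dx), err)
    return best[1], best[2]  # optimal shift, error map to visualize
```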

[Interactive comparison: choose a frame, its content type, and up to three models (D3Dnet, DBVSR, and DUF-16L by default) to compare crops against the GT frame.]

Your method submission

Verify the restoration ability of your VSR algorithm and compare it with state-of-the-art solutions.
You can see information about all other participants here.

1. Download input data
Download the input low-resolution videos as sequences of frames in .png format.
There are two available options:
    a. Download one folder with all videos joined into a single sequence here.
    Neighboring videos are separated by 10 black frames, which are skipped during
    evaluation (a sketch of splitting the sequence back into clips follows this list).
    b. If you are concerned that this strategy may hurt your method's performance,
    you can download 12 folders with 100 frames each here.
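For option (a), the clips can be recovered by detecting the black separator frames. A hedged sketch (the brightness threshold and .png naming are assumptions, not the benchmark's specification):

```python
# Split the joined sequence (option a) into clips at the black separators.
import glob
import numpy as np
from PIL import Image

def split_clips(folder: str, black_thresh: float = 2.0):
    clips, current = [], []
    for path in sorted(glob.glob(folder + "/*.png")):
        if np.asarray(Image.open(path)).mean() < black_thresh:  # separator
            if current:
                clips.append(current)
                current = []
        else:
            current.append(path)
    if current:
        clips.append(current)
    return clips  # lists of frame paths, one list per video
```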


2. Apply your algorithm
Restore the high-resolution frames with your algorithm.
You can also send us the code of your method or an executable file, and we will run it ourselves.
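A minimal driver sketch for this step; `upscale` is a placeholder for your VSR method (the 1:1 output naming is an assumption, but the output frame count must match the input, as step 3 notes):

```python
# Run a VSR method over a folder of input frames, writing one output
# frame per input frame under the same file name.
import glob
import os
from PIL import Image

def run(in_dir: str, out_dir: str, upscale):
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(in_dir + "/*.png")):
        hr = upscale(Image.open(path))  # your algorithm goes here
        hr.save(os.path.join(out_dir, os.path.basename(path)))
```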


3. Send us the result
Send an email to vsr-benchmark@videoprocessing.ai with the following information:
    A. The name of your method, as it should appear in our benchmark
    B. A link to a cloud drive (Google Drive, OneDrive, Dropbox, etc.) containing the output frames.
    Check that the number of output frames matches the number of input frames
    (a quick count check is sketched after this list):
      1310 frames in one folder for download option (a)
      12 folders with 100 frames each for download option (b)
    C. (Optional) The execution time of your algorithm and information about the GPU used
    D. (Optional) Any additional information about the method:
      1. The full name of your model
      2. The parameter set that was used
      3. A link to the code of your model, if it is available
      4. A link to the paper about your model
      5. Any characteristics of your model's architecture (e.g., motion compensation, propagation, fusion)
      6. Any other additional information
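A quick sanity check of the frame counts from item B might look like this (hypothetical helpers, assuming .png frames):

```python
# Verify the output layout before submitting.
import glob

def check_option_a(folder: str) -> bool:
    """Option (a): one folder with 1310 frames."""
    return len(glob.glob(folder + "/*.png")) == 1310

def check_option_b(root: str) -> bool:
    """Option (b): 12 folders with 100 frames each."""
    folders = sorted(glob.glob(root + "/*/"))
    return len(folders) == 12 and all(
        len(glob.glob(f + "*.png")) == 100 for f in folders)
```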

Contacts

For questions and suggestions, please contact us at vsr-benchmark@videoprocessing.ai.

26 Apr 2021
See Also
PSNR and SSIM: application areas and critics
Learn about limits and applicability of the most popular metrics
MSU Video Upscalers Benchmark 2021
The most comprehensive comparison of video super resolution (VSR) algorithms by subjective quality
MSU Video Upscalers Benchmark Participants
The list of the participants of the MSU Video Upscalers Benchmark
MSU Video Upscalers Benchmark Methodology
The methodology of the MSU Video Upscalers Benchmark
MSU Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
MSU Super-Resolution for Video Compression Benchmark Participants
The list of participants of MSU Super-Resolution for Video Compression Benchmark