MSU Super-Resolution for Video Compression Benchmark
Discover SR methods for compressed videos and choose the best model to use with your codec
- H.264, H.265, H.266, AV1, AVS3 codec standards
- More than 260 test videos
- 6 different bitrates
- Visual comparison for more than 80 SR+codec pairs
- RD curves and bar charts for 5 objective metrics
- SR+codec pairs ranked by BSQ-rate
- 80+ pages with different plots
- 15 SOTA SR methods and 6 objective metrics
- Extensive subjective comparison with 5300+ valid participants
- Powered by Subjectify.us
The pipeline of our benchmark
- 21.07.2022 Added SR codecs to the benchmark. Added MDTVSFA correlation to the correlation chart.
- 12.04.2022 Uploaded the results of extensive subjective comparison. See “Subjective score” in Charts section.
- 25.03.2022 Added VRT, BasicVSR, RBPN, and COMISR. Updated Leaderboards and Visualizations sections.
- 14.03.2022 Uploaded new dataset. Updated the Methodology.
- 26.10.2021 Updated the Methodology.
- 12.10.2021 Published October Report. Added 2 new videos to the dataset. Updated Charts section and Visualizations.
- 28.09.2021 Improved the Leaderboards section to make it more user-friendly, updated the Methodology and added ERQAv1.1 metric.
- 21.09.2021 Added 2 new videos to the dataset, new plots to the Charts section, and new Visualizations.
- 14.09.2021 Public beta version released.
- 31.08.2021 Alpha version released.
In this section, you can see RD curves, which show the bitrate/quality distribution of each SR+codec pair, and bar charts, which show the BSQ-rate calculated for objective metrics and subjective scores.
Read about the participants here.
You can see the information about codecs in the methodology.
- Subjective score — more details
- ERQAv2.0 — more details
- Y-PSNR — more details
- Y-MS-SSIM — more details
- Y-VMAF — more details
- LPIPS — more details
Charts with metrics
You can choose the test sequence, the codec that was used to compress it, and the metric.
If the BSQ-rate of a method equals 0, the method should be considered much better than the reference codec (the same codec with no SR applied).
Highlight the plot region where you want to zoom in.
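BSQ-rate compares the bitrate a method needs against the bitrate the reference codec needs for the same quality. A minimal sketch of that idea follows: it interpolates both RD curves over their overlapping quality range and averages the bitrate ratio. The exact BSQ-rate definition used by the benchmark is described in the Methodology; this illustrative version only conveys the intuition that values below 1 favor the SR+codec pair.

```python
import numpy as np

def bsq_rate(ref_rd, method_rd):
    """Illustrative BSQ-rate: mean ratio of the method's bitrate to the
    reference bitrate at equal quality, over the overlapping quality range.

    Each argument is a list of (bitrate, quality) points from an RD curve.
    Values < 1 mean the SR+codec pair needs less bitrate than the
    reference codec for the same quality; values > 1 mean it needs more.
    NOTE: a simplified sketch, not the benchmark's exact formula.
    """
    ref = sorted(ref_rd, key=lambda p: p[1])
    met = sorted(method_rd, key=lambda p: p[1])
    ref_q = [q for _, q in ref]; ref_b = [b for b, _ in ref]
    met_q = [q for _, q in met]; met_b = [b for b, _ in met]
    # overlapping quality interval of the two curves
    lo = max(min(ref_q), min(met_q))
    hi = min(max(ref_q), max(met_q))
    qs = np.linspace(lo, hi, 100)
    # interpolate bitrate as a function of quality on both curves
    b_ref = np.interp(qs, ref_q, ref_b)
    b_met = np.interp(qs, met_q, met_b)
    return float(np.mean(b_met / b_ref))
```

For example, a method whose RD curve sits at half the reference bitrate for every quality level gets a BSQ-rate of 0.5.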
Correlation of metrics with subjective assessment
We calculated objective metrics on the crops used for subjective comparison and calculated a correlation between the subjective and objective results. Below you can see the average correlation of metrics over all test cases.
* ERQA-MDTVSFA is calculated by multiplying the MDTVSFA and ERQA values for the video.
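A common way to measure such agreement is Spearman's rank correlation between a metric's values and the subjective scores on one test case. Below is a minimal, dependency-free sketch (ties are ignored; the benchmark's exact correlation measure and its averaging over test cases are not specified here):

```python
import numpy as np

def spearman(metric_values, subjective_scores):
    """Spearman rank correlation between an objective metric and
    subjective scores for one test case.

    Illustrative sketch: converts both series to ranks (no tie
    handling) and computes the Pearson correlation of the ranks.
    """
    x = np.asarray(metric_values, dtype=np.float64)
    y = np.asarray(subjective_scores, dtype=np.float64)
    # rank transform: position of each value in the sorted order
    rx = np.argsort(np.argsort(x)).astype(np.float64)
    ry = np.argsort(np.argsort(y)).astype(np.float64)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))
```

A metric that ranks the SR+codec pairs in the same order as viewers do gets a correlation of 1.0; a metric that ranks them in the opposite order gets -1.0.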
Read about Frames per Second (FPS) calculation here. Read about BSQ-rate over Subjective score here.
In this section, you can choose a sequence and see a cropped part of one of its frames, a shifted Y-PSNR visualization, and an ERQAv2.0 visualization for that crop. For shifted Y-PSNR, we find the optimal shift for Y-PSNR and apply MSU VQMT PSNR to the frames with this shift. See the methodology for more information.
Drag the red rectangle over the area you want to see zoomed in.
SR+codec pairs leaderboard
The table below shows a comparison of all pairs of Super-Resolution algorithms and codecs.
Each column shows BSQ-rate over a specific metric. You can sort the table by any column.
All methods that took part in the subjective comparison are ranked by BSQ-rate over the subjective score; other methods are ranked by BSQ-rate over ERQA.
If the BSQ-rate of a method equals 0, the method should be considered much better than the reference codec.
If the BSQ-rate of a method tends to infinity (marked '∞'), the method should be considered much worse than the reference codec.
"TBP" means that this SR+codec pair did not take part in the subjective comparison.
| Rank | SR + codec | Y-VMAF | ERQAv2.0 | Y-PSNR | Y-MS-SSIM | LPIPS |
You can find information about SR codecs on the participants page.
You can choose the test sequence and the metric.
Highlight the plot region where you want to zoom in.
SR+codec Benchmark Roadmap
| Feature | What it achieves | Release date |
|---|---|---|
| More test sequences | We will extend our dataset to make it more diverse and cover more use cases. We expect it to contain 9×6×5 = 270 Full HD videos. | |
| Bigger subjective comparison | We plan to conduct a bigger subjective comparison for all codecs to get a more accurate ranking for more SR+codec pairs. A subjective comparison with that many video pairs will be very expensive; if you want to support our benchmark, please contact us. | |
| New SR methods | New Super-Resolution methods are constantly being developed. We will add new high-quality SR methods to our benchmark as they appear. We also expect developers to submit their methods to us. You can submit your method here. | |
| Enterprise report | We will compile an enterprise report with more test sequences and different objective and subjective metrics. You can see an example in the Report section. | |
| A new metric to measure compressed video | Subjective comparison showed that the most popular video quality metrics, PSNR and SSIM, are not applicable to the Super-Resolution task. We are researching a metric for compressed-video restoration quality that will correlate well with subjective assessment. | |
| "Real-time" and "restoration" leaderboards | Some Super-Resolution models work faster than others, while slower methods can achieve much better quality. We plan to divide the leaderboard of our benchmark into two categories: "real-time" and "restoration". | |
Submit your method
Verify your method’s ability to restore compressed videos and compare it with other algorithms.
You can go to the page with information about other participants.
1. Download input data
Download low-resolution input videos as sequences of frames in PNG format. There are 2 available options:
2. Apply your algorithm
Apply your Super-Resolution algorithm to upscale the frames to 1920×1080 resolution. You can also send us the code of your method, or an executable file with instructions on how to run it, and we will run it ourselves.
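The I/O contract of this step is simple: a low-resolution frame in, a 1920×1080 frame out. A minimal sketch follows, with nearest-neighbor resampling standing in for a real SR model (the function name and shapes are illustrative; substitute your model's inference):

```python
import numpy as np

def upscale_frame(frame, out_h=1080, out_w=1920):
    """Placeholder for an SR model: maps a low-resolution frame of
    shape (H, W, C) to the benchmark's required 1920x1080 output.

    Uses nearest-neighbor index mapping purely for illustration;
    replace the body with your Super-Resolution model's inference.
    """
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return frame[rows][:, cols]
```

Loading each PNG into an array, applying the model, and saving the result as PNG again preserves the expected frame-sequence format.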
3. Send us the result
Send us an email to email@example.com with the results (or a link to them on a cloud drive).
You can verify the results of current participants or estimate the performance of your method on public samples of our dataset. Just send an email to firstname.lastname@example.org with a request to share them with you.
- We won't publish the results of your method without your permission.
- We share only public samples of our dataset, as the full dataset is private.
Download the Report
Videos used in the report:
- The Alberta Retired Teachers’ Association by SAVIAN
- First Day by Outside Adventure Media
- Subjective comparison was supported by Toloka Research Grants Program.
For questions and proposals, please contact us: email@example.com
You can subscribe to updates on our benchmark:
MSU Benchmark Collection
- MSU Video Upscalers Benchmark 2022
- MSU Video Deblurring Benchmark 2022
- MSU Video Frame Interpolation Benchmark 2022
- MSU HDR Video Reconstruction Benchmark 2022
- MSU Super-Resolution for Video Compression Benchmark 2022
- MSU No-Reference Video Quality Metrics Benchmark 2022
- MSU Full-Reference Video Quality Metrics Benchmark 2022
- MSU Video Alignment and Retrieval Benchmark
- MSU Mobile Video Codecs Benchmark 2021
- MSU Video Super-Resolution Benchmark
- MSU Shot Boundary Detection Benchmark 2020
- MSU Deinterlacer Benchmark
- The VideoMatting Project
- Video Completion
- Codecs Comparisons & Optimization
- MSU Datasets Collection
- Metrics Research
- Video Quality Measurement Tool 3D
- Video Filters
- Other Projects