MSU Video Super-Resolution Quality Assessment Challenge 2024


Here you can register a team and submit your own metric to take part in this challenge. Metric results will be sent to your email.


The “Metric Type” field must contain the metric type in the following format: “<NR/FR>_<Image/Video>”, e.g. “NR_Video”.

  • NR - No-Reference metric (does not require GT frames for evaluation)

  • FR - Full-Reference metric (requires GT frames for evaluation)

  • Image - if the metric requires image input

  • Video - if the metric requires video input

By clicking the “Upload File” button, you can upload a file with the results of the metric on the test set in the following format:

    <Video Basename> : <Metric Value>,

You can also find a solution template here.
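As a hedged illustration of the submission format above, the following sketch writes one “<Video Basename> : <Metric Value>,” line per video. The function name and the example scores are hypothetical, not part of the official template.

```python
def write_submission(metric_values, path):
    """Write metric results in the challenge submission format.

    metric_values: dict mapping video basename -> metric score.
    """
    with open(path, "w") as f:
        for basename, value in metric_values.items():
            # One line per video: "<Video Basename> : <Metric Value>,"
            f.write(f"{basename} : {value},\n")

# Example usage with dummy scores (illustrative only):
scores = {"video_001": 0.731, "video_002": 0.412}
write_submission(scores, "submission.txt")
```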


The evaluation compares the predictions with reference ground-truth scores obtained by pairwise subjective comparison.

We use the Spearman rank-order correlation coefficient (SROCC), as is often employed in the literature.

Implementations are available in most statistics and machine-learning toolboxes. For example, demo evaluation code in Python:

from scipy import stats

srocc = stats.spearmanr(ground_truth_scores, metric_values).correlation

This coefficient is computed separately for each video sequence (by a sequence we mean all videos, produced by the Super-Resolution methods, that correspond to the same source video and the same difficulty level).

The final score for a particular difficulty level is calculated as the average of these coefficients over all video sequences corresponding to that level.

The final metric score is equal to (0.3 * (score for “Easy” level) + 0.4 * (score for “Medium” level) + 0.5 * (score for “Hard” level)) / 1.2
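The scoring scheme above can be sketched in Python. This is an illustrative reconstruction, not the organizers' evaluation code: the data layout (a list of per-sequence score pairs per difficulty level) is an assumption.

```python
from scipy import stats

def level_score(sequences):
    """Average SROCC over the video sequences of one difficulty level.

    sequences: list of (ground_truth_scores, metric_values) pairs,
    one pair per video sequence (assumed layout).
    """
    sroccs = [stats.spearmanr(gt, pred).correlation for gt, pred in sequences]
    return sum(sroccs) / len(sroccs)

def final_score(easy, medium, hard):
    """Weighted combination of the per-level scores.

    The weights 0.3 + 0.4 + 0.5 sum to 1.2, hence the normalization.
    """
    return (0.3 * level_score(easy)
            + 0.4 * level_score(medium)
            + 0.5 * level_score(hard)) / 1.2
```

Note that a metric in perfect rank agreement with the subjective scores on every sequence gets SROCC = 1 on each level, and the division by 1.2 makes the final score 1 as well.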

07 May 2024