MSU Video Super-Resolution Quality Assessment Challenge 2024
Participate
Here you can register a team and submit your own metric to take part in this challenge. The metric results will be sent to your email.
Description
The “Metric Type” field must contain the metric type in the following format: “<NR/FR>_<Image/Video>” (for example, “NR_Video” for a no-reference metric that takes video input).
- NR - for a No-Reference metric (does not need GT frames for evaluation)
- FR - for a Full-Reference metric (needs GT frames for evaluation)
- Image - if the metric requires image input
- Video - if the metric requires video input
By clicking the “Upload File” button, you can upload a file with the results of your metric on the test set, in the following format:
{
    ...
    "<Video Basename>": <Metric Value>,
    ...
}
You can also find a solution template here.
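For illustration, here is a minimal sketch of producing such a results file in Python. The compute_metric function and the test_videos directory are hypothetical placeholders for your own metric and the downloaded test data, and we assume the basename is the file name without its extension:
import json
import os

def compute_metric(video_path):
    # Hypothetical placeholder: run your metric on one video here.
    return 0.0  # dummy value; replace with the real metric computation

results = {}
for filename in os.listdir("test_videos"):  # assumed location of the test videos
    basename = os.path.splitext(filename)[0]  # assumption: basename = name without extension
    results[basename] = compute_metric(os.path.join("test_videos", filename))

with open("results.json", "w") as f:
    json.dump(results, f, indent=4)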
Evaluation
The evaluation consists of comparing the submitted predictions with reference ground-truth scores obtained by pairwise subjective comparison.
We use the Spearman rank-order correlation coefficient (SROCC), as is often employed in the literature.
Implementations are available in most statistics/machine-learning toolboxes. For example, the demo evaluation code in Python:
import scipy.stats

# SROCC between the ground-truth subjective scores and the metric's predictions
srocc = scipy.stats.spearmanr(ground_truth_scores, metric_values).correlation
This coefficient is computed separately for each video sequence, where a sequence comprises all videos produced by the Super-Resolution methods from the same source video at the same difficulty level.
The score for a particular difficulty level is the average of these coefficients over all video sequences of that level.
The final metric score is a weighted average of the per-level scores:
final score = (0.3 * (score for “Easy” level) + 0.4 * (score for “Medium” level) + 0.5 * (score for “Hard” level)) / 1.2
The division by 1.2 normalizes the weights (0.3 + 0.4 + 0.5 = 1.2), so the final score stays on the same scale as the per-level scores.
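For illustration, a sketch of the full scoring procedure in Python; the sequences_by_level structure and its contents are hypothetical example data, not challenge files:
import numpy as np
import scipy.stats

# Hypothetical example data: for each difficulty level, a list of
# (ground_truth_scores, metric_values) pairs, one pair per video sequence.
sequences_by_level = {
    "Easy":   [([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])],
    "Medium": [([1, 2, 3, 4], [1.0, 2.5, 2.4, 4.1])],
    "Hard":   [([1, 2, 3, 4], [2.0, 1.5, 3.5, 3.0])],
}
weights = {"Easy": 0.3, "Medium": 0.4, "Hard": 0.5}

# Per-level score: average SROCC over that level's video sequences
level_scores = {
    level: np.mean([scipy.stats.spearmanr(gt, pred).correlation
                    for gt, pred in sequences])
    for level, sequences in sequences_by_level.items()
}

# Final score: weighted average of the per-level scores,
# normalized by the sum of the weights (0.3 + 0.4 + 0.5 = 1.2)
final_score = sum(weights[l] * level_scores[l] for l in weights) / sum(weights.values())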