MSU Metrics Robustness Benchmark
Aleksandr Gushchin, Kirill Malyshev,
Khaled Abud, Ekaterina Shumitskaya,
Vyacheslav Napadovsky,
Sergey Lavrushkin, Maksim Velikanov
Key features
- 40+ no-reference and full-reference image/video-quality metrics
- 9 adversarial attacks including gradient-based, Universal Adversarial Perturbation-based and Perceptual-aware attacks
- 6 training and testing datasets
- Automatic cloud-based pipeline for evaluations
GitHub repo | arxiv.org paper | AAAI 2024 paper
News
- 14.09.2024 Added 23 new metrics, including full-reference ones! Check the new results in the Leaderboard.
- 09.12.2023 Our paper was accepted for AAAI 2024, an A* conference in artificial intelligence!
- 06.07.2023 Updated visualisations and structure of the benchmark webpage
- 07.06.2023 Benchmark Release!
Main results
This chart compares metrics’ robustness at different levels of proxy metrics loss. We show the mean robustness score for two groups of adversarial attacks and the mean SSIM measured between original and attacked images. We averaged the resulting scores for images and frames from all test datasets. See Leaderboard for more results.
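The robustness scores above are reported alongside the mean SSIM between original and attacked images. As a rough illustration of that SSIM term, here is a minimal single-window SSIM sketch in NumPy; the benchmark itself presumably uses a standard windowed implementation, so this simplified global version is for illustration only:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM computed over the whole image as one window.

    A sketch of the SSIM formula; real implementations slide a small
    Gaussian window over the image and average the local scores.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images the score is exactly 1; a successful adversarial attack keeps this value high (the attacked image looks unchanged) while shifting the metric's quality prediction.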
How to submit your IQA/VQA metric
Compare the robustness of your method to adversarial attacks with that of existing quality-assessment metrics.
We kindly invite you to participate in our benchmark. To do so, follow the steps below:
Send us an email at mrb@videoprocessing.ai with a launchable version of your metric that accepts the following arguments:
- ref — path to the reference video (for full-reference metrics)
- dist — path to the distorted video
- output — path to the output of your algorithm
- t — threshold, if your algorithm requires one
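A minimal sketch of what a command-line wrapper matching this interface might look like. The argument names (ref, dist, output, t) follow the list above; everything else (the wrapper structure, defaults) is a hypothetical example, not the benchmark's required implementation:

```python
import argparse

def build_parser():
    """Hypothetical CLI wrapper exposing the arguments the benchmark expects."""
    p = argparse.ArgumentParser(description="IQA/VQA metric wrapper")
    p.add_argument("--ref",
                   help="path to the reference video (full-reference metrics only)")
    p.add_argument("--dist", required=True,
                   help="path to the distorted video")
    p.add_argument("--output", required=True,
                   help="path to write the metric's output")
    p.add_argument("--t", type=float, default=None,
                   help="optional threshold used by the algorithm")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args()
    # ... load videos, compute the metric score, write it to args.output ...
```

A no-reference metric would simply ignore the --ref argument.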
You can verify the results of current participants or estimate the performance of your method using the code provided on our GitHub page.
Cite us
@article{Antsiferova_Abud_Gushchin_Shumitskaya_Lavrushkin_Vatolin_2024,
title={Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks},
author={Antsiferova, Anastasia and Abud, Khaled and Gushchin, Aleksandr and Shumitskaya, Ekaterina and Lavrushkin, Sergey and Vatolin, Dmitriy},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
url={https://ojs.aaai.org/index.php/AAAI/article/view/27827},
DOI={10.1609/aaai.v38i2.27827},
number={2},
year={2024},
month={Mar.},
pages={700-708}
}
Contacts
We would highly appreciate any suggestions and ideas on improving our benchmark. Please get in touch with us via email: mrb@videoprocessing.ai
MSU Video Quality Measurement Tool
Widest Range of Metrics & Formats
- Modern & classical metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
- No-reference analysis & video characteristics: blurring, blocking, noise, scene-change detection, NIQE, and more
Fastest Video Quality Measurement
- GPU support: up to 11.7x faster metric calculation
- Real-time measurement
- Unlimited file size
Main MSU VQMT page on compression.ru
MSU Benchmark Collection
- Super-Resolution for Video Compression Benchmark
- Video Colorization Benchmark
- Defenses for Image Quality Metrics Benchmark
- Learning-Based Image Compression Benchmark
- Super-Resolution Quality Metrics Benchmark
- Video Saliency Prediction Benchmark
- Metrics Robustness Benchmark
- Video Upscalers Benchmark
- Video Deblurring Benchmark
- Video Frame Interpolation Benchmark
- HDR Video Reconstruction Benchmark
- No-Reference Video Quality Metrics Benchmark
- Full-Reference Video Quality Metrics Benchmark
- Video Alignment and Retrieval Benchmark
- Mobile Video Codecs Benchmark
- Video Super-Resolution Benchmark
- Shot Boundary Detection Benchmark
- The VideoMatting Project
- Video Completion
- Codecs Comparisons & Optimization
- VQMT
- MSU Datasets Collection
- Metrics Research
- Video Quality Measurement Tool 3D
- Video Filters
- Other Projects