MSU Metrics Robustness Benchmark
Check your IQA/VQA metric's robustness to adversarial attacks
Project head: Anastasia Antsiferova
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis:
Aleksandr Gushchin, Kirill Malyshev,
Khaled Abud, Ekaterina Shumitskaya,
Vyacheslav Napadovsky,
Sergey Lavrushkin, Maksim Velikanov
Key features of the Benchmark
- More than 15 no-reference image/video-quality metrics
- 9 adversarial attacks, including FGSM-based, Universal Adversarial Perturbation-based, and perceptual-aware attacks (see the sketch after this list)
- 6 training and testing datasets
- Automatic cloud-based pipeline for hacking each metric with each attack
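The FGSM-based attacks mentioned in the feature list perturb an image along the sign of the metric's gradient so that the metric's score rises while the visible change stays small. Below is a minimal sketch, assuming a differentiable PyTorch no-reference metric `metric` that returns a scalar quality score for an image tensor in [0, 1]; the names and step size are illustrative, not the benchmark's actual attack code.

```python
# Minimal FGSM-style attack sketch against a differentiable NR metric.
# Assumption: `metric` is a PyTorch module mapping a (1, 3, H, W) tensor
# in [0, 1] to a scalar quality score (higher = better quality).
import torch

def fgsm_attack(metric, image, epsilon=2 / 255):
    """Return a perturbed copy of `image` that inflates the metric's score."""
    image = image.detach().clone().requires_grad_(True)
    score = metric(image)            # predicted quality score (scalar)
    score.backward()                 # gradient of the score w.r.t. pixels
    # One signed gradient-ascent step, clipped back to the valid pixel range.
    attacked = image + epsilon * image.grad.sign()
    return attacked.clamp(0.0, 1.0).detach()
```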
What’s new
- 06.07.2023 Updated visualisations and structure of the benchmark webpage.
- 07.06.2023 Benchmark Release!
Results
This chart compares metrics' robustness at different levels of proxy metric loss. We show the mean robustness score for two types of adversarial attacks and the mean SSIM measured between the original and attacked images. The results are averaged over images/frames from all test datasets.
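As a point of reference, the SSIM between an original and an attacked frame can be computed with scikit-image; the snippet below is a minimal sketch assuming uint8 RGB frames and scikit-image >= 0.19, not the benchmark's evaluation code.

```python
# Sketch: mean SSIM between original and attacked frames.
# Assumes uint8 RGB NumPy arrays and scikit-image >= 0.19 (channel_axis arg).
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(original_frames, attacked_frames):
    """Average SSIM over pairs of original/attacked frames."""
    scores = [
        structural_similarity(orig, att, channel_axis=-1, data_range=255)
        for orig, att in zip(original_frames, attacked_frames)
    ]
    return float(np.mean(scores))
```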
How to submit your IQA/VQA metric
Compare your method's robustness to adversarial attacks with that of existing quality assessment metrics.
We kindly invite you to participate in our benchmark. To do so, follow the steps below:
Send us an email to mrb@videoprocessing.ai with the following information (a sketch of such a command-line interface follows this list):
- ref — path to reference video (for full-reference metrics)
- dist — path to distorted video
- output — path to output of your algorithm
- t — threshold, if it's required in your algorithm
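For illustration only, the parameters above could map to a command-line wrapper like the hypothetical sketch below; `run_metric` is a placeholder for the participant's own scoring code, not a benchmark API.

```python
# Hypothetical wrapper illustrating the parameters listed above.
# `run_metric` is a placeholder for the participant's own scoring code.
import argparse

def main():
    parser = argparse.ArgumentParser(description="Submitted IQA/VQA metric wrapper (illustrative)")
    parser.add_argument("--ref", default=None, help="path to reference video (full-reference metrics only)")
    parser.add_argument("--dist", required=True, help="path to distorted video")
    parser.add_argument("--output", required=True, help="path where the score is written")
    parser.add_argument("--t", type=float, default=None, help="threshold, if the algorithm requires one")
    args = parser.parse_args()

    # score = run_metric(args.ref, args.dist, threshold=args.t)  # participant's code
    # with open(args.output, "w") as f:
    #     f.write(str(score))

if __name__ == "__main__":
    main()
```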
You can verify the results of current participants or estimate the performance of your method using the code provided on our GitHub page.
Contacts
We would highly appreciate any suggestions and ideas on how to improve our benchmark. Please contact us via email: mrb@videoprocessing.ai.
MSU Video Quality Measurement Tool
Widest Range of Metrics & Formats
- Modern & Classical Metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
- No-reference analysis & video characteristics: blurring, blocking, noise, scene-change detection, NIQE, and more
Fastest Video Quality Measurement
- GPU support: up to 11.7x faster metric calculation
- Real-time measurement
- Unlimited file size
Main MSU VQMT page on compression.ru
See Also
MSU CVQAD – Compressed VQA Dataset
During our work, we created a database for video quality assessment with subjective scores
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Video Upscalers Benchmark
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality
Video Deblurring Benchmark
Learn about the best video deblurring methods and choose the best model
Video Frame Interpolation Benchmark
Discover the best algorithm to make high-quality and smooth slow motion videos
Site structure
- MSU Benchmark Collection
- Video Saliency Prediction Benchmark
- Super-Resolution for Video Compression Benchmark
- Metrics Robustness Benchmark
- Video Upscalers Benchmark
- Video Deblurring Benchmark
- Video Frame Interpolation Benchmark
- HDR Video Reconstruction Benchmark
- No-Reference Video Quality Metrics Benchmark
- Full-Reference Video Quality Metrics Benchmark
- Video Alignment and Retrieval Benchmark
- Mobile Video Codecs Benchmark
- Video Super-Resolution Benchmark
- Shot Boundary Detection Benchmark
- Deinterlacer Benchmark
- The VideoMatting Project
- Video Completion
- Codecs Comparisons & Optimization
- VQMT
- MSU Datasets Collection
- Metrics Research
- Video Quality Measurement Tool 3D
- Video Filters
- Other Projects