Project: Image- and video-quality metrics analysis
Discover the newest metrics and find the most appropriate method for your tasks
Aleksandr Gushchin, Maksim Smirnov, Georgiy Bychkov, Khaled Abud, Aleksandr Kostychev, Viktoria Leonenkova, Maksim Khrebtov, Andrey Lebedev, Aleksey Kirillov, Lev Borisovskiy, Mikhail Rakhmanov, Igor Meleshin, Georgiy Gotin, Andrey Dolgolenko, Egor Kovalyov, Egor Grinchenko, Daniil Konstantinov
Nowadays, most new image- and video-quality assessment (IQA/VQA) metrics are based on deep learning. However, traditional metrics such as SSIM and PSNR are still commonly used when developing new image- and video-processing and compression algorithms. One of the main reasons is the low reproducibility of new metrics' performance on real-life data. In this project, we investigate which new metrics are suitable for comparing and developing contemporary video-compression and video-processing methods. We measure a metric's performance as its correlation with perceptual quality scores collected in large-scale crowdsourced subjective tests on our Subjectify.us platform.
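As a concrete illustration of this evaluation protocol, the sketch below computes rank and linear correlation between a metric's outputs and mean opinion scores (MOS) using SciPy; the score arrays are hypothetical placeholders, not data from our subjective tests.

```python
# A minimal sketch of the evaluation protocol: Spearman (SROCC), Pearson (PLCC)
# and Kendall (KROCC) correlation between a metric's outputs and subjective
# mean opinion scores (MOS). The arrays are hypothetical placeholders.
import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

metric_scores = np.array([0.71, 0.64, 0.88, 0.42, 0.95])  # hypothetical metric outputs
mos = np.array([3.1, 2.8, 4.2, 1.9, 4.6])                 # hypothetical subjective scores

srocc, _ = spearmanr(metric_scores, mos)   # monotonic (rank) agreement
plcc, _ = pearsonr(metric_scores, mos)     # linear agreement
krocc, _ = kendalltau(metric_scores, mos)  # pairwise-ordering agreement
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  KROCC={krocc:.3f}")
```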
Learning-based methods are more vulnerable to input transformations than traditional approaches, and one of the most common such transformations is an adversarial attack. IQA/VQA metrics are already being attacked in several real-life scenarios (a minimal attack sketch follows the list below):
- Cheating in benchmarks. Developers of image- and video-processing methods can exploit metrics' vulnerabilities to achieve better competition results.
- Degrading visual quality when a vulnerable metric is used as a loss function.
- Manipulating image web-search results. Search engines use keywords, descriptions, and image-quality measurements to rank image search results.
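To make the threat model concrete, here is a minimal sketch of the simplest such manipulation: a white-box, ℓ∞-restricted gradient attack that inflates the score of a differentiable no-reference metric. It assumes PyTorch; `metric` stands for any differentiable NR model that maps a (1, 3, H, W) image tensor in [0, 1] to a scalar score, and the step sizes are illustrative values rather than settings from our experiments.

```python
# A minimal sketch of a white-box, l_inf-restricted attack: projected gradient
# ascent on the score of a differentiable no-reference metric, which inflates
# the predicted quality while keeping the perturbation visually small.
# `metric` is a placeholder for any differentiable NR-IQA model; eps, step and
# iters are illustrative values.
import torch

def inflate_score(metric, image: torch.Tensor,
                  eps: float = 4 / 255, step: float = 1 / 255, iters: int = 10):
    """Maximize metric(image + delta) subject to ||delta||_inf <= eps."""
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        metric(adv).backward()                              # d(score)/d(adv)
        with torch.no_grad():
            adv = adv + step * adv.grad.sign()              # ascend on the score
            adv = image + (adv - image).clamp(-eps, eps)    # project onto the l_inf ball
            adv = adv.clamp(0.0, 1.0)                       # stay a valid image
        adv = adv.detach()
    return adv
```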
The traditional approach to comparing metrics' performance is to evaluate their correlation with subjective quality, but existing comparisons do not consider metrics' robustness to adversarial attacks. Robustness benchmarks exist for image-classification models (RobustBench, Adversarial Robustness Benchmark, and RobustML), but none exist for image- and video-quality metrics.
Thus, we are developing new benchmarks for image- and video-quality metrics. Three benchmarks have been released so far:
- MSU Image Quality Metrics Benchmark
- MSU Metrics Robustness Benchmark
- MSU Video Super-Resolution Quality Metrics Benchmark
We also organise challenges for developing new metrics as part of the NTIRE (CVPR) and NeurIPS competitions.
Goals of this project
- The world’s best-known benchmarks
- The most relevant and valuable datasets
- Papers in top-tier conferences and journals (A*/Q1)
- Expertise in image/video-quality measurement, leading to collaborations with industry
Research directions
- Developing adversarial attacks and defences for IQA/VQA
  - Adversarial attacks: restricted (ℓp-bounded) and unrestricted (colourisation, filters, etc.), white-box and black-box, transferable, and temporally stable attacks for videos
  - Adversarial defences: adversarial purification, adversarial training, and certified (provably) robust metrics; a purification sketch appears after this list
- Developing benchmarks
  - Comparing task-specific metrics (video compression, super-resolution, neural image compression, deblurring, etc.)
  - Comparing IQA/VQA robustness to attacks
  - Comparing the efficiency of defences against attacks on IQA/VQA
- Developing new IQA/VQA metrics
  - Task-specific metrics: quality of video deblurring, denoising, image super-resolution, image restoration, etc.
  - Specific approaches to quality measurement: saliency-driven, just-noticeable-difference (JND)
  - Robust metrics that can be used as a loss component
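As an illustration of the adversarial-purification direction listed above, the hedged sketch below wraps a metric with a simple input purifier (downscale-upscale). The transform and its parameters are illustrative assumptions, not the defences evaluated in our benchmark; `metric` again stands for an arbitrary NR-IQA model.

```python
# A minimal sketch of one defence direction from the list above: adversarial
# purification. The input is lightly transformed (downscale-upscale here)
# before it reaches the metric, which tends to wash out small adversarial
# perturbations at some cost in fidelity. This is an illustrative baseline,
# not one of the defences evaluated in our benchmark; `metric` is a
# placeholder for any NR-IQA model taking a (1, 3, H, W) tensor in [0, 1].
import torch
import torch.nn.functional as F

def purify(image: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    """Downscale and restore the image to suppress high-frequency perturbations."""
    h, w = image.shape[-2:]
    small = F.interpolate(image, scale_factor=scale, mode="bilinear", align_corners=False)
    restored = F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)
    return restored.clamp(0.0, 1.0)

def defended_score(metric, image: torch.Tensor) -> torch.Tensor:
    """Score the purified input; compare with metric(image) to gauge an attack's effect."""
    return metric(purify(image))
```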
Our papers
- Ti-Patch: Tiled Physical Adversarial Patch for no-reference video quality metrics / Viktoria Leonenkova, Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // WAIT/AINL 2024.
- Adversarial purification for no-reference image-quality metrics: applicability study and new methods / Aleksandr Gushchin, Anna Chistyakova, Vladislav Minashkin, Anastasia Antsiferova, Dmitriy Vatolin // preprint, 2024.
- [A*] IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // ICML 2024.
- [A*] Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks / Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin, Ekaterina Shumitskaya, Sergey Lavrushkin, Dmitriy Vatolin // AAAI Conference on Artificial Intelligence 2024.
- [Q1] Towards adversarial robustness verification of no-reference image- and video-quality metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // Computer Vision and Image Understanding, 2024.
- [A*] Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // ICLR Tiny Papers 2023.
- Unveiling the Limitations of Novel Image Quality Metrics / Maksim Siniukov, Dmitriy Kulikov, Dmitriy Vatolin // IEEE MMSP 2023.
- [B] Applicability limitations of differentiable full-reference image-quality metrics / Maksim Siniukov, Dmitriy Kulikov, Dmitriy Vatolin // Data Compression Conference (DCC) 2023.
- [A] Universal perturbation attack on differentiable no-reference image- and video-quality metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // BMVC 2022.
- [A*] Video compression dataset and benchmark of learning-based video-quality metrics / Anastasia Antsiferova, Sergey Lavrushkin, Maksim Smirnov, Aleksandr Gushchin, Dmitriy Vatolin, Dmitriy Kulikov // NeurIPS 2022.
- Bit-depth enhancement detection for compressed video / Nickolay Safonov, Dmitriy Vatolin // IEEE MMSP 2022.
- Applying Objective Quality Metrics to Video-Codec Comparisons: Choosing the Best Metric for Subjective Quality Estimation / Anastasia Antsiferova, Alexander Yakovenko, Nickolay Safonov, Dmitriy Kulikov, Alexander Gushin, Dmitriy Vatolin // Proceedings of the 31st International Conference on Computer Graphics and Machine Vision 2021.
- ERQA: Edge-Restoration Quality Assessment for Video Super-Resolution / Anastasia Kirillova, Eugene Lyapustin, Anastasia Antsiferova, Dmitry Vatolin // VISIGRAPP 2021.
- Hacking VMAF and VMAF NEG: vulnerability to different preprocessing methods / Maksim Siniukov, Anastasia Antsiferova, Dmitriy Kulikov, Dmitriy Vatolin // Artificial Intelligence and Cloud Computing Conference 2021.
- Hacking VMAF with Video Color and Contrast Distortion / Anastasia Zvezdakova, Sergey Zvezdakov, Dmitriy Kulikov, Dmitriy Vatolin // GraphiCon (Conference on Computer Graphics and Vision) 2019.
MSU Benchmark Collection
- Video Colorization Benchmark
- Super-Resolution for Video Compression Benchmark
- Defenses for Image Quality Metrics Benchmark
- Learning-Based Image Compression Benchmark
- Super-Resolution Quality Metrics Benchmark
- Video Saliency Prediction Benchmark
- Metrics Robustness Benchmark
- Video Upscalers Benchmark
- Video Deblurring Benchmark
- Video Frame Interpolation Benchmark
- HDR Video Reconstruction Benchmark
- No-Reference Video Quality Metrics Benchmark
- Full-Reference Video Quality Metrics Benchmark
- Video Alignment and Retrieval Benchmark
- Mobile Video Codecs Benchmark
- Video Super-Resolution Benchmark
- Shot Boundary Detection Benchmark
- The VideoMatting Project
- Video Completion
- Codecs Comparisons & Optimization
- VQMT
- MSU Datasets Collection
- Metrics Research
- Video Quality Measurement Tool 3D
- Video Filters
- Other Projects