Project: Image- and video-quality metrics analysis

Nowadays, most new image- and video-quality assessment (IQA/VQA) metrics are based on deep learning. However, traditional metrics such as SSIM and PSNR are still commonly used in the development of new image- and video-processing and compression algorithms. One of the main reasons is that the performance of new metrics reproduces poorly on real-life data. In this project, we investigate which new metrics are suitable for comparing and developing contemporary video-compression and video-processing methods. We measure a metric’s performance as its correlation with perceptual quality scores collected through large-scale crowdsourced subjective tests on our platform.
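As a toy illustration of how such a comparison works: performance is typically reported as Spearman (SROCC) and Pearson (PLCC) correlation between metric outputs and subjective mean opinion scores (MOS). The arrays below are made-up numbers, not our actual data.

```python
# Sketch: correlating metric scores with subjective MOS.
# The values here are hypothetical, for illustration only.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([1.2, 2.5, 3.1, 4.0, 4.8])          # subjective scores (made up)
metric = np.array([0.31, 0.42, 0.55, 0.71, 0.90])  # metric outputs (made up)

srocc, _ = spearmanr(metric, mos)  # rank correlation: monotonic agreement
plcc, _ = pearsonr(metric, mos)    # linear correlation
print(f"SROCC={srocc:.3f}, PLCC={plcc:.3f}")
```

A high SROCC means the metric ranks videos in the same order as human viewers, which is usually what matters when choosing between codecs or processing methods.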

Learning-based methods are more vulnerable to input transformations than traditional approaches. One of the most common types of such transformations is adversarial attacks, and IQA/VQA metrics are already being attacked in various real-life scenarios.

The traditional way to compare metrics’ performance is to evaluate their correlation with subjective quality, but existing comparisons do not consider metrics’ robustness to adversarial attacks. Such robustness studies exist for image-classification models (RobustBench, Adversarial Robustness Benchmark, and RobustML), but none exist for image- and video-quality metrics.

Thus, we are developing new benchmarks for image- and video-quality metrics. Three benchmarks have been released so far:

  1. MSU Image Quality Metrics Benchmark
  2. MSU Metrics Robustness Benchmark
  3. MSU Video Super-Resolution Quality Metrics Benchmark

We also organise challenges for developing new metrics as part of the NTIRE (CVPR) and NeurIPS competitions.

Goals of this project

Research directions

  1. Developing adversarial attacks and defences for IQA/VQA
    1. Adversarial attacks: restricted (ℓp-norm-bounded) and unrestricted (colourisation, filters, etc.), white-box and black-box, transferable, and temporally stable attacks for videos
    2. Adversarial defences: adversarial purification, adversarial training, certified (provably) robust metrics
  2. Developing benchmarks
    1. Comparing task-specific metrics (video compression, super-resolution, neural image compression, deblurring, etc.)
    2. Comparing IQA/VQA robustness to attacks
    3. Comparing the efficiency of defences against attacks on IQA/VQA
  3. Developing new IQA/VQA metrics
    1. Task-specific metrics: quality of video deblurring, denoising, image super-resolution, image restoration, etc.
    2. Specific approaches for quality measurement: saliency-driven, just-noticeable-difference (JND)
    3. Robust metrics that remain usable as a loss component
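To give an idea of the "restricted" attack family above, here is a minimal sketch of a single signed-gradient (FGSM-style) step that inflates the score of a differentiable no-reference metric under an ℓ∞ bound. The linear "metric" and its weights below are stand-ins invented for illustration; real attacks target actual IQA networks and compute the gradient by backpropagation.

```python
# Toy sketch of a restricted (l-inf-bounded) white-box attack on a
# differentiable no-reference "metric". The metric here is a made-up
# linear model; the attack pattern (one gradient-sign step) is FGSM-style.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))  # hypothetical metric weights

def metric(img):
    # stand-in differentiable "quality score"
    return float((img * w).sum())

def metric_grad(img):
    # gradient of the score w.r.t. the input (analytic for this toy model)
    return w

def fgsm_boost(img, eps=2 / 255):
    # one signed-gradient step toward a higher score,
    # clipped so the result stays a valid image in [0, 1]
    adv = img + eps * np.sign(metric_grad(img))
    return np.clip(adv, 0.0, 1.0)

img = rng.uniform(size=(8, 8))
adv = fgsm_boost(img)
print("score gain:", metric(adv) - metric(img))  # positive: score inflated
```

The perturbation is bounded (every pixel changes by at most eps), so the attacked image looks essentially unchanged to a viewer while the metric reports higher quality; this is exactly why robustness must be part of any metric comparison.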

Our papers

  1. [A*] Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks / Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin, Ekaterina Shumitskaya, Sergey Lavrushkin, Dmitriy Vatolin / accepted for AAAI 2024.
  2. [Q1] Towards adversarial robustness verification of no-reference image- and video-quality metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // Computer Vision and Image Understanding, 2024.
  3. [A*] Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin / ICLR Tiny Papers 2023.
  4. Unveiling the Limitations of Novel Image Quality Metrics / Siniukov Maksim, Kulikov Dmitriy, Vatolin Dmitriy // IEEE MMSP 2023.
  5. [B] Applicability limitations of differentiable full-reference image-quality metrics / Siniukov Maksim, Kulikov Dmitriy, Vatolin Dmitriy // Data Compression Conference (DCC) 2023.
  6. [A] Universal perturbation attack on differentiable no-reference image- and video-quality metrics / Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin // BMVC 2022.
  7. [A*] Video compression dataset and benchmark of learning-based video-quality metrics / Anastasia Antsiferova, Sergey Lavrushkin, Maksim Smirnov, Aleksandr Gushchin, Dmitriy Vatolin, Dmitriy Kulikov // NeurIPS 2022.
  8. Bit-depth enhancement detection for compressed video / Safonov Nickolay, Vatolin Dmitriy // IEEE MMSP 2022.
  9. Applying Objective Quality Metrics to Video-Codec Comparisons: Choosing the Best Metric for Subjective Quality Estimation / Antsiferova Anastasia, Yakovenko Alexander, Safonov Nickolay, Kulikov Dmitriy, Gushin Alexander, Vatolin Dmitriy // Proceedings of the 31st International Conference on Computer Graphics and Machine Vision 2021.
  10. ERQA: Edge-Restoration Quality Assessment for Video Super-Resolution / Kirillova Anastasia, Lyapustin Eugene, Antsiferova Anastasia, Vatolin Dmitry // VISIGRAPP 2021.
  11. Hacking VMAF and VMAF NEG: vulnerability to different preprocessing methods / Maksim Siniukov, Anastasia Antsiferova, Dmitriy Kulikov, Dmitriy Vatolin // Artificial Intelligence and Cloud Computing Conference 2021.
  12. Hacking VMAF with Video Color and Contrast Distortion / Anastasia Zvezdakova, Sergey Zvezdakov, Dmitriy Kulikov, Dmitriy Vatolin / Graphicon-Conference on Computer Graphics and Vision 2019.
08 Feb 2024
See Also
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos
PSNR and SSIM: application areas and criticism
Learn about limits and applicability of the most popular metrics
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms