MSU Metrics Robustness Benchmark

Check the robustness of your IQA/VQA metric to adversarial attacks

Project head: Anastasia Antsiferova
G&M Lab head: Dr. Dmitriy Vatolin
Measurements, analysis:
Aleksandr Gushchin, Kirill Malyshev,
Khaled Abud, Ekaterina Shumitskaya,
Vyacheslav Napadovsky,
Sergey Lavrushkin, Maksim Velikanov

Key features of the Benchmark

  • More than 15 no-reference image/video-quality metrics
  • 9 adversarial attacks, including FGSM-based, Universal-Adversarial-Perturbation-based, and perceptually aware attacks
  • 6 training and testing datasets
  • Automatic cloud-based pipeline for hacking each metric with each attack
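To illustrate the simplest family listed above: an FGSM-style attack perturbs the input in the direction that inflates the metric's score. Below is a minimal NumPy sketch, assuming a toy stand-in score function (`toy_score` is hypothetical, not one of the benchmarked metrics) and a finite-difference gradient; real attacks backpropagate through the metric with an autograd framework.

```python
import numpy as np

def toy_score(img):
    # Hypothetical stand-in for a no-reference quality metric:
    # here simply the mean brightness of the image.
    return float(img.mean())

def numerical_grad(score_fn, img, h=1e-4):
    # Central-difference gradient of the score w.r.t. every pixel.
    # (Real attacks use autograd instead of finite differences.)
    grad = np.zeros_like(img)
    it = np.nditer(img, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        orig = img[idx]
        img[idx] = orig + h
        up = score_fn(img)
        img[idx] = orig - h
        down = score_fn(img)
        img[idx] = orig
        grad[idx] = (up - down) / (2 * h)
    return grad

def fgsm_attack(img, score_fn, eps=8 / 255):
    # One-step FGSM: shift every pixel by +/- eps in the direction
    # that increases the metric score, then clip to the valid range.
    grad = numerical_grad(score_fn, img)
    return np.clip(img + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(4, 4))
adv = fgsm_attack(img, toy_score)
print(toy_score(adv) > toy_score(img))  # True: the attacked score is inflated
```

The attacked image differs from the original by at most `eps` per pixel (an L-infinity budget), which is why such attacks can raise a metric's score while remaining nearly invisible.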

What’s new

  • 06.07.2023 Updated visualisations and structure of the benchmark webpage.
  • 07.06.2023 Benchmark Release!


This chart compares metrics' robustness at different levels of proxy-metric loss. We show the mean robustness score for two types of adversarial attacks and the mean SSIM measured between the original and attacked images. The results are averaged over images/frames from all test datasets.

How to submit your IQA/VQA metric

Compare the robustness of your method to adversarial attacks with existing quality assessment metrics.
We kindly invite you to participate in our benchmark. To do this follow the steps below:

Send us an email with the following information:
    A. Your method's name as it should appear in our benchmark
    B. Your method's launch script with the following options (or their analogs):
      -ref — path to reference video (for full-reference metrics)
      -dist — path to distorted video
      -output — path to output of your algorithm
      -t — threshold, if your algorithm requires one
    C. (Optional) Any additional information about the method:
      1. The parameters set that you want us to use
      2. A link to the paper about your model
      3. Any characteristics of your model's architecture
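The launch-script interface described in step B could be sketched with argparse roughly as follows. This is a hypothetical example, not the benchmark's actual code; `run_metric` is a placeholder for your model's inference.

```python
import argparse

def run_metric(dist_path, ref_path=None, threshold=None):
    # Placeholder for your IQA/VQA model; returns a quality score.
    return 0.0

def build_parser():
    parser = argparse.ArgumentParser(description="IQA/VQA metric launcher")
    parser.add_argument("-ref",
                        help="path to reference video (full-reference metrics only)")
    parser.add_argument("-dist", required=True,
                        help="path to distorted video")
    parser.add_argument("-output", required=True,
                        help="path to the output file for the score")
    parser.add_argument("-t", type=float, default=None,
                        help="threshold, if your algorithm requires one")
    return parser

# Demo: parse a sample command line instead of sys.argv.
args = build_parser().parse_args(
    ["-dist", "distorted.mp4", "-output", "score.txt", "-t", "0.5"]
)
score = run_metric(args.dist, ref_path=args.ref, threshold=args.t)
print(args.dist, args.output, score)
```

In a real submission the script would be invoked as, e.g., `python metric.py -dist dist.mp4 -output out.txt`, writing the resulting score to the output path.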
You can verify the results of current participants or estimate the performance of your method using the code provided on our GitHub page.

Cite us

title={Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks},
author={Anastasia Antsiferova and Khaled Abud and Aleksandr Gushchin and Sergey Lavrushkin and Ekaterina Shumitskaya and Maksim Velikanov and Dmitriy Vatolin},

You can find the full text of our paper through the link.


We would greatly appreciate any suggestions and ideas for improving our benchmark. Please contact us via email:

MSU Video Quality Measurement Tool


    A tool for performing video/image quality analysis using full-reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & classical metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
  • No-reference analysis & video characteristics:
    blurring, blocking, noise, scene-change detection, NIQE, and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measurement
  • Unlimited file size

  • Main MSU VQMT page

07 Jun 2023
See Also
Video Colorization Benchmark
Explore the best video colorization algorithms
Defenses for Image Quality Metrics Benchmark
Explore defenses against adversarial attacks
Learning-Based Image Compression Benchmark
The first extensive comparison of learned image compression algorithms
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec