MSU Learning-based Image Compression Benchmark 2024

Explore the Best Learned Image Compression Methods

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Project adviser: Dr. Dmitriy Kulikov
Measurements, analysis: Vitaly Rylov, Roman Kazantsev

Diverse dataset

  • Over 750 test images
  • HD, Full HD and 4K resolutions
  • Various content types
  • Processed over 1M images to create the dataset

Large comparison

  • 19 codecs tested
  • 13 IQA metrics
  • Subjective comparison (soon)
  • JPEG AI (soon)

Large leaderboard

  • BSQ-rate for codec ranking
  • Speed-Quality plots
  • Rate-Distortion curves
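Each point on a Rate-Distortion curve pairs the bitrate of a compressed image with a distortion score at one quality setting. A minimal sketch of how one such point can be computed, using bits per pixel for rate and PSNR for distortion (NumPy assumed; this is an illustration, not the benchmark's exact measurement code):

```python
import numpy as np

def rd_point(ref, dec, compressed_size_bytes):
    """Return one rate-distortion point: (bits per pixel, PSNR in dB).

    ref, dec: uint8 arrays of shape (H, W) or (H, W, C);
    compressed_size_bytes: size of the encoded file on disk.
    """
    # Rate: total compressed bits divided by the number of pixels.
    bpp = compressed_size_bytes * 8 / (ref.shape[0] * ref.shape[1])
    # Distortion: PSNR over all samples (255 is the peak value for 8-bit data).
    mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return bpp, psnr
```

Sweeping the codec's quality parameter and plotting the resulting (bpp, PSNR) pairs yields the RD curve; the same template works with any of the IQA metrics in place of PSNR.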

What’s new

  • 14.04.2024 Benchmark Release!



Speed-Quality trade-off


RD-curve examples


How to participate

Compare your codec with the best conventional and learning-based codecs. We kindly invite you to participate in our benchmark. To do this, follow the steps below:

Send us an email with the following information:
    A. The name of your codec that will be specified in our benchmark
    B. Your codec and a script to run it
        The launch script must support the following (or similar) options:
        --type — encode / decode
        --ref — path to reference image / path to compressed file
        --out — path to store compressed file / path to store decompressed image
        --qp — quality parameter / target bpp (for encoding mode only)
    C. (Optional) Any additional information about the codec:
      1. The parameter set that you want us to use
      2. A link to the paper about your model or GitHub page
      3. Any characteristics of your model's architecture
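A minimal sketch of such a launch script in Python, exposing the flags listed above (the encoder/decoder calls are hypothetical placeholders for your own codec):

```python
import argparse

def build_parser():
    # Flag names mirror the benchmark's required interface described above.
    parser = argparse.ArgumentParser(description="Codec launch script (sketch)")
    parser.add_argument("--type", choices=["encode", "decode"], required=True,
                        help="encode / decode")
    parser.add_argument("--ref", required=True,
                        help="path to reference image / path to compressed file")
    parser.add_argument("--out", required=True,
                        help="path to store compressed file / decompressed image")
    parser.add_argument("--qp", type=float, default=None,
                        help="quality parameter or target bpp (encoding mode only)")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    if args.type == "encode":
        # Placeholder: invoke your encoder with args.ref, args.out, args.qp
        pass
    else:
        # Placeholder: invoke your decoder with args.ref, args.out
        pass
    return args
```

A typical invocation would then look like `script --type encode --ref image.png --out image.bin --qp 0.5`, followed by `script --type decode --ref image.bin --out decoded.png`.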

You can also test your codec locally on the open part of the Learning-Based Image Compression Benchmark dataset.

If you have any suggestions or questions, please contact us.

Cite us

@misc{lic-benchmark-2024,
  title={Learning-based Image Compression Benchmark},
  author={Rylov, Vitaly and Kazantsev, Roman and Vatolin, Dmitriy},
  year={2024}
}


We would highly appreciate any suggestions and ideas on how to improve our benchmark.
Please contact us via e-mail:
Also, you can subscribe to updates on our benchmark:

MSU Video Quality Measurement Tool


    A tool for performing video and image quality analysis using reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & classical metrics: SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
  • No-reference analysis & video characteristics: Blurring, Blocking, Noise, Scene change detection, NIQE, and more

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measure
  • Unlimited file size

  • Main MSU VQMT page

Crowd-sourced subjective quality evaluation platform

  • Conduct comparison of video codecs and/or encoding parameters

What is it? A web platform for conducting fast crowd-sourced subjective comparisons.

The service is designed for the comparison of images, video, and sound processing methods.

Main features

  • Pairwise comparison
  • Detailed report
  • Providing all of the raw data
  • Filtering out answers from cheating respondents

25 Jun 2024
See Also
Video Colorization Benchmark
Explore the best video colorization algorithms
Defenses for Image Quality Metrics Benchmark
Explore defenses against adversarial attacks on image quality metrics
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Metrics Robustness Benchmark
Check your image or video quality metric for robustness to adversarial attacks