MSU Learning-Based Image Compression Benchmark 2024

Explore the Best Learned Image Compression Methods

Powered by
G&M Lab head: Dr. Dmitriy Vatolin
Project adviser: Dr. Dmitriy Kulikov
Measurements, analysis: Vitaly Rylov, Roman Kazantsev



Diverse dataset

  • Over 750 test images
  • HD, Full HD and 4K resolutions
  • Various content types
  • Processed over 1M images to create the dataset

Large comparison

  • 19 codecs tested
  • 13 IQA metrics
  • Subjective comparison (soon)
  • JPEG AI (soon)

Large leaderboard

  • BSQ-rate for codec ranking (illustrated in the sketch below)
  • Speed-Quality plot
  • Rate-Distortion curves
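
For intuition, rate-based rankings of this kind compare codecs' rate-distortion curves at equal quality. Below is a minimal sketch of a Bjøntegaard-style average bitrate difference between two RD curves (Python with NumPy); it illustrates the general idea only and is not the exact BSQ-rate definition used by the benchmark, and the RD points in the example are hypothetical.

    # Sketch: Bjontegaard-style average bitrate difference between two RD curves.
    # Illustrative only; NOT the exact BSQ-rate definition used by the benchmark.
    import numpy as np

    def bd_rate(bpp_ref, q_ref, bpp_test, q_test):
        # Work in log-bitrate so the integral corresponds to relative rate savings.
        lr_ref, lr_test = np.log(bpp_ref), np.log(bpp_test)
        # Cubic fit of log-bitrate as a function of quality (e.g. PSNR).
        p_ref = np.polyfit(q_ref, lr_ref, 3)
        p_test = np.polyfit(q_test, lr_test, 3)
        # Integrate both fits over the overlapping quality range only.
        lo = max(min(q_ref), min(q_test))
        hi = min(max(q_ref), max(q_test))
        int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
        int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
        # Average log-rate difference -> percent bitrate change at equal quality.
        avg_diff = (int_test - int_ref) / (hi - lo)
        return (np.exp(avg_diff) - 1) * 100  # negative = test codec saves bitrate

    # Hypothetical RD points: (bits per pixel, PSNR in dB) for two codecs.
    print(bd_rate([0.10, 0.25, 0.50, 1.00], [30.1, 33.4, 36.0, 39.2],
                  [0.09, 0.22, 0.45, 0.90], [30.5, 33.9, 36.4, 39.5]))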

What’s new

  • 14.04.2024 Benchmark Release!

Leaderboard

[Interactive table: results filterable by metric and resolution]

Speed-Quality trade-off

[Interactive plot: results filterable by metric, resolution, and time measurement]

RD-curve examples

[Interactive plots: results filterable by metric, resolution, and test sequence]

How to participate

Compare your codec with the best traditional and learning-based codecs. We kindly invite you to participate in our benchmark. To do this, follow the steps below:

Send an email to image-compression-benchmark@videoprocessing.ai with the following information:
    A. The name of your codec as it should appear in our benchmark
    B. Your codec and a script to run it
      The launch script must support the following (or similar) options (see the example sketch after this list):
        --type — encode / decode
        --ref — path to reference image / path to compressed file
        --out — path to store compressed file / path to store decompressed image
        --qp — quality parameter / target bpp (for encoding mode only)
    C. (Optional) Any additional information about the codec:
      1. The parameter set that you want us to use
      2. A link to the paper about your model or GitHub page
      3. Any characteristics of your model's architecture
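
To illustrate the expected interface, here is a minimal sketch of such a launch script in Python using argparse. The encode_image/decode_image stubs are hypothetical placeholders for your codec's own API, and the script and file names in the usage line below are likewise hypothetical.

    #!/usr/bin/env python3
    # Minimal sketch of a launch script exposing the options listed above.
    # encode_image/decode_image are hypothetical stubs for the codec's own API.
    import argparse

    def encode_image(ref_path, out_path, qp):
        raise NotImplementedError("replace with a call to your encoder")

    def decode_image(ref_path, out_path):
        raise NotImplementedError("replace with a call to your decoder")

    def main():
        parser = argparse.ArgumentParser(description="Codec launch script")
        parser.add_argument("--type", choices=["encode", "decode"], required=True,
                            help="operation mode")
        parser.add_argument("--ref", required=True,
                            help="reference image (encode) / compressed file (decode)")
        parser.add_argument("--out", required=True,
                            help="output compressed file (encode) / decompressed image (decode)")
        parser.add_argument("--qp", type=float,
                            help="quality parameter or target bpp (encoding mode only)")
        args = parser.parse_args()
        if args.type == "encode":
            encode_image(args.ref, args.out, args.qp)
        else:
            decode_image(args.ref, args.out)

    if __name__ == "__main__":
        main()

A typical invocation would then look like: python run_codec.py --type encode --ref kodim01.png --out kodim01.bin --qp 0.5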

If you have any suggestions or questions, please contact us.


Contacts

We would highly appreciate any suggestions and ideas on how to improve our benchmark.
Please contact us via e-mail: image-compression-benchmark@videoprocessing.ai.
You can also subscribe to updates on our benchmark.


MSU Video Quality Measurement Tool


    A tool for video- and image-quality analysis using reference or no-reference metrics

Widest Range of Metrics & Formats

  • Modern & classical metrics
    SSIM, MS-SSIM, PSNR, VMAF, and 10+ more
  • No-reference analysis & video characteristics
    Blurring, blocking, noise, scene-change detection, NIQE, and more
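
For context, a reference (full-reference) metric scores a distorted image against its original, while a no-reference metric needs only the distorted image. The following is a minimal PSNR sketch in Python with NumPy, shown purely to illustrate what a full-reference metric computes; it is not VQMT code.

    # Sketch: PSNR, a basic full-reference metric. Illustrative only, not VQMT code.
    import numpy as np

    def psnr(reference, distorted, peak=255.0):
        # Mean squared error over all pixels and channels.
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)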

Fastest Video Quality Measurement

  • GPU support
    Up to 11.7x faster calculation of metrics with GPU
  • Real-time measure
  • Unlimited file size

  • Main MSU VQMT page on compression.ru

Crowd-sourced subjective quality evaluation platform

  • Conduct comparisons of video codecs and/or encoding parameters

What is it?

Subjectify.us is a web platform for conducting fast crowd-sourced subjective comparisons.

The service is designed for comparing image-, video-, and sound-processing methods.

Main features

  • Pairwise comparison
  • Detailed report
  • Access to all of the raw data
  • Filtering out answers from cheating respondents
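
For intuition, pairwise votes are usually aggregated into per-method scores; a common approach is a Bradley-Terry-style fit, sketched below in Python with NumPy. This is an illustration of the general technique, not Subjectify.us's actual aggregation pipeline, and the vote counts in the example are hypothetical.

    # Sketch: turning pairwise votes into scores with a simple Bradley-Terry fit.
    # Illustrative only; not Subjectify.us's actual aggregation pipeline.
    import numpy as np

    def bradley_terry(wins, iters=200):
        # wins[i, j] = number of times method i was preferred over method j.
        n = wins.shape[0]
        scores = np.ones(n)
        for _ in range(iters):
            for i in range(n):
                total_wins = wins[i].sum()
                den = sum((wins[i, j] + wins[j, i]) / (scores[i] + scores[j])
                          for j in range(n) if j != i)
                if den > 0:
                    scores[i] = total_wins / den
            scores /= scores.sum()  # normalize for identifiability
        return scores

    # Hypothetical vote counts for three methods from a pairwise study.
    wins = np.array([[0, 14, 18],
                     [6,  0, 12],
                     [2,  8,  0]])
    print(bradley_terry(wins))  # higher score = preferred more often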

  • Subjectify.us
See Also
MSU Image- and video-quality metrics analysis
Description of a project in the MSU Graphics and Media Laboratory
Super-Resolution Quality Metrics Benchmark
Discover 66 Super-Resolution Quality Metrics and choose the most appropriate for your videos
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Metrics Robustness Benchmark
Check your image or video quality metric for robustness to adversarial attacks
Video Upscalers Benchmark
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality