Participants of the MSU Metrics Robustness Benchmark
| Metric | Year | Image or Video | Implementation |
| --- | --- | --- | --- |
| CLIP-IQA [1] | 2022 | Image | Link |
| META-IQA [2] | 2020 | Image | Link |
| RANK-IQA [3] | 2017 | Image | Link |
| HYPER-IQA [4] | 2020 | Image | Link |
| KONCEPT [5] | 2020 | Image | Link |
| FPR [6] | 2022 | Image | Link |
| NIMA [7] | 2018 | Image | Link |
| WSP [8] | 2020 | Image | Link |
| MDTVSFA [9] | 2021 | Video | Link |
| LINEARITY [10] | 2020 | Image | Link |
| VSFA [11] | 2019 | Video | Link |
| PAQ2PIQ [12] | 2020 | Image | Link |
| SPAQ [13] | 2020 | Image | Link |
| TRES [14] | 2022 | Image | Link |
| MANIQA [15] | 2022 | Image | Link |
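To illustrate what "robustness to adversarial attacks" means for these metrics, the sketch below scores an image with one participant and applies a single FGSM step against it. This is a minimal sketch, not the benchmark's own code: it assumes the third-party IQA-PyTorch package (`pip install pyiqa`), which reimplements several of the listed metrics (e.g. CLIP-IQA, HYPER-IQA, MANIQA) rather than the implementations linked above.

```python
# Minimal sketch (assumes the third-party pyiqa package, not the
# benchmark's code): probe a no-reference metric with one FGSM step
# and compare the clean score against the adversarial score.
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# as_loss=True keeps the metric differentiable so we can backpropagate
metric = pyiqa.create_metric("clipiqa", device=device, as_loss=True)

# Stand-in input: any RGB tensor in [0, 1] of shape (N, 3, H, W)
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)

score = metric(image)    # CLIP-IQA: higher means better perceived quality
score.sum().backward()   # gradient of the score w.r.t. the pixels

# One FGSM ascent step: nudge pixels to inflate the score imperceptibly
epsilon = 2.0 / 255.0
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print(f"clean score:       {metric(image).item():.4f}")
    print(f"adversarial score: {metric(adversarial).item():.4f}")
```

A large score gap after such a small perturbation is the gradient signal that stronger, visually lossless attacks exploit; the benchmark itself evaluates metrics against a range of such attacks rather than this single step.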
References
1. Wang, Jianyi, Kelvin C. K. Chan, and Chen Change Loy. ‘Exploring CLIP for Assessing the Look and Feel of Images’. In Proceedings of the AAAI Conference on Artificial Intelligence, 2023.
2. Zhu, Hancheng, Leida Li, Jinjian Wu, Weisheng Dong, and Guangming Shi. ‘MetaIQA: Deep Meta-Learning for No-Reference Image Quality Assessment’. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14143–52, 2020.
3. Liu, Xialei, Joost Van De Weijer, and Andrew D. Bagdanov. ‘RankIQA: Learning from Rankings for No-Reference Image Quality Assessment’. In Proceedings of the IEEE International Conference on Computer Vision, 1040–49, 2017.
4. Su, Shaolin, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. ‘Blindly Assess Image Quality in the Wild Guided by a Self-Adaptive Hyper Network’. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3667–76, 2020.
5. Hosu, Vlad, Hanhe Lin, Tamas Sziranyi, and Dietmar Saupe. ‘KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment’. IEEE Transactions on Image Processing 29 (2020): 4041–56.
6. Chen, Baoliang, Lingyu Zhu, Chenqi Kong, Hanwei Zhu, Shiqi Wang, and Zhu Li. ‘No-Reference Image Quality Assessment by Hallucinating Pristine Features’. IEEE Transactions on Image Processing 31 (2022): 6139–51.
7. Talebi, Hossein, and Peyman Milanfar. ‘NIMA: Neural Image Assessment’. IEEE Transactions on Image Processing 27, no. 8 (2018): 3998–4011.
8. Su, Yicheng, and Jari Korhonen. ‘Blind Natural Image Quality Prediction Using Convolutional Neural Networks and Weighted Spatial Pooling’. In 2020 IEEE International Conference on Image Processing (ICIP), 191–95. IEEE, 2020.
9. Li, Dingquan, Tingting Jiang, and Ming Jiang. ‘Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training’. International Journal of Computer Vision 129 (2021): 1238–57.
10. Li, Dingquan, Tingting Jiang, and Ming Jiang. ‘Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment’. In Proceedings of the 28th ACM International Conference on Multimedia, 789–97, 2020.
11. Li, Dingquan, Tingting Jiang, and Ming Jiang. ‘Quality Assessment of In-the-Wild Videos’. In Proceedings of the 27th ACM International Conference on Multimedia, 2351–59, 2019.
12. Ying, Zhenqiang, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, and Alan Bovik. ‘From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality’. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3575–85, 2020.
13. Fang, Yuming, Hanwei Zhu, Yan Zeng, Kede Ma, and Zhou Wang. ‘Perceptual Quality Assessment of Smartphone Photography’. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–86, 2020.
14. Golestaneh, S. Alireza, Saba Dadsetan, and Kris M. Kitani. ‘No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency’. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1220–30, 2022.
15. Yang, Sidi, Tianhe Wu, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang, and Yujiu Yang. ‘MANIQA: Multi-Dimension Attention Network for No-Reference Image Quality Assessment’. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1191–1200, 2022.
See Also
MSU CVQAD – Compressed VQA Dataset
During our work, we created a database for video quality assessment with subjective scores
Video Saliency Prediction Benchmark
Explore the best video saliency prediction (VSP) algorithms
Super-Resolution for Video Compression Benchmark
Learn about the best SR methods for compressed videos and choose the best model to use with your codec
Metrics Robustness Benchmark
Check your image or video quality metric for robustness to adversarial attacks
Video Upscalers Benchmark
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality
Video Deblurring Benchmark
Learn about the best video deblurring methods and choose the most suitable model