MSU Video Codecs Comparison
Frequently Asked Questions
-
Q: Why do you use the PSNR metric in particular for the comparison?
A: There are several important reasons:
- First, PSNR is currently THE ONLY GENERALLY ACCEPTED metric used for codec comparison. To verify this, look through any scientific articles or drafts of new standards (H.264, for example). Using PSNR therefore guarantees a clear and unambiguous interpretation by any professional in the codec field. We do use special non-standard metrics in our research, and we even develop metrics of our own. But presenting all the measurements in some “MSU metric-3” would hardly improve the clarity and value of the comparison.
- Second, despite its several drawbacks, PSNR can be computed rather quickly (a short illustration of the computation follows this list). This is essential, because the other metrics that better reflect visual perception take much longer to compute (up to dozens of times longer). We spent 11 days performing all the PSNR calculations for the first test, and we are not yet ready to spend something like 110 days on it :) Of course, we could reduce the number of test video sequences, but that would make the estimates less reliable.
But since computers are becoming more and more powerful, we plan to add new non-standard metrics to the comparison. So have patience!
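For reference, PSNR is computed as 10·log10(MAX² / MSE), where MSE is the mean squared error between the original and the compressed frame and MAX is the maximum pixel value (255 for 8-bit video). Below is a minimal Python sketch of this per-frame computation; it uses NumPy and is only an illustration of the formula, not the measurement code we actually ran in the tests:

    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
        """PSNR in dB between two equally sized frames (e.g. 8-bit luma planes)."""
        diff = reference.astype(np.float64) - distorted.astype(np.float64)
        mse = np.mean(diff ** 2)      # mean squared error over all pixels
        if mse == 0.0:
            return float("inf")       # identical frames: PSNR is unbounded
        return 10.0 * np.log10(peak ** 2 / mse)

The whole computation is a few arithmetic operations per pixel, which is why PSNR runs so much faster than perceptual metrics.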
-
Q: Why don’t you consider certain types of video in the comparison?
A: We haven’t had time for it yet. So far our tests have covered only various types of DVD-ripped movies, leaving animation and video captured from a TV tuner aside. We will try to increase the variety of tested videos once we find the time and free computers to perform the calculations.
-
Q: There are mistakes in your comparison!
A: Yes, that may well happen. Please be so kind as to report them to videocodec-testing@graphics.cs.msu.ru! We will appreciate it a lot.
-
Q: You should have performed the comparison in another way!
A: Your approach might also make sense, but we don’t have the possibility to implement it now. Try it yourself! We may even publish your results on our site if they prove to be good.
-
Q: The comparison contains certain graphs for some sequences, but not the same graphs for other sequences. Can I see them somehow?
A: Alas, if we had included graphs for ALL the measurements made during the comparison, the result would have been a hard-to-read 1000-page document. That is why the comparison contains only a subset of all the graphs. We do keep all of them in our laboratory, and we can share them, along with other additional material not included in the comparison, with people who help us in return: for example, the companies that provided their codecs for testing, or people who have sent many remarks. We won’t send anything for nothing; we have no time for that! We are busy working on new projects and performing new tests.
-
Q: The first test is dated September 2004; why did it appear much later?
A: The first test contained too much information, and it took us a long time to study the results and choose what to include in the final document. We are going to release future results with some delay as well, but for a different reason: we want to give the companies that provided their codecs a chance to fix the bugs we find during testing. (We find bugs rather frequently during testing and report them to the codec’s developer.)
-
Q: Everything is clear to me, you’ve described all the codecs in detail, thanks a lot! But which codec is the best after all?
A: Thanks for the kind words :) But the question “which codec is the best” is not well posed, and the comparison clearly illustrates this fact. Different codecs with different settings give the best results on different types of video; no universal codec outperforms all the others on any given video sequence. We offer just a detailed testing report of the kind that other companies usually provide only for money. You should draw the conclusions yourself.