MSU Deinterlacer Benchmark — selecting the best deinterlacing filter
The most comprehensive comparison of deinterlacing methods
- 26.11.2020 Beta-version Release
- 22.12.2020 Added VS EEDI3, VS TDeintMod, MC Deinterlacer. Tuned Kernel Deinterlacer
- 07.07.2021 New 2021 Dataset
- 01.09.2021 New Leader! DfRes 122000 G2e 3 Deinterlacer
- 17.09.2021 Added MFDIN, Adobe Premiere Pro Built-In
- 22.09.2021 Added Sony Vegas Built-In
- 06.10.2021 Added subjective comparison results (MOS)
- 13.10.2021 Added EDVR, EDVR_toWSA, TDAN, DUF, new versions of DfRes and MFDIN. New Leader! MFDIN L Deinterlacer
- 04.11.2021 Added ST-Deint
- 24.11.2021 Added subjective scores for EDVR, EDVR_toWSA, TDAN, DUF, ST-Deint, new versions of DfRes and MFDIN
Key features of the Benchmark
- For users of deinterlacing methods
- Choose the deinterlacing method that best fits your speed and quality requirements
- Discover the latest achievements of deinterlacing methods
- For Researchers and Developers
- Quickly get comprehensive comparison results for your paper with our tables, visual comparison tools and performance plots
- Check the performance of your deinterlacing method on complex cases
To submit a deinterlacing method, please follow the 3 simple steps in the Deinterlacer Submission section
We appreciate new ideas. Please write us an e-mail at email@example.com
Half FrameRate Leaderboard
The table below compares deinterlacers by MOS, PSNR, SSIM, and VMAF, as well as by speed (FPS).
| Rank | Name | MOS | PSNR | SSIM | VMAF | FPS on CPU |
|------|------|-----|------|------|------|------------|
| 3.333 | DfRes 122000 G2e 3 | 0.898 | 41.891 | 0.969 | 94.848 | No data |
| 4.333 | DfRes 61000 | 0.874 | 41.163 | 0.967 | 94.865 | No data |
| 6.333 | DfRes V | No data | 39.446 | 0.968 | 93.585 | 0.640 |
| 16.333 | Real-Time Deep Deinterlacer | 0.581 | 37.031 | 0.953 | 90.964 | 0.270 |
| 20.000 | SonyVegas Interpolate Field | 0.178 | 36.649 | 0.951 | 90.151 | 3.310 |
| 20.333 | Kernel Deinterlacer (optimal parameters) | 0.095 | 36.449 | 0.947 | 90.757 | 37.910 |
| 21.333 | Weston 3-Field Deinterlacer | 0.240 | 36.788 | 0.947 | 89.625 | 36.750 |
| 24.333 | Motion and Area Pixel Deinterlacer | No data | 35.278 | 0.932 | 88.882 | 2.150 |
| 26.000 | ASVZZZ Deinterlacer | No data | 34.486 | 0.928 | 86.873 | 1.900 |
| 28.000 | PAL Interpolation | No data | 32.901 | 0.901 | 82.662 | 2.850 |
| 29.667 | Motion Compensation Deinterlacer | No data | 29.259 | 0.830 | 64.436 | 1.450 |
| 30.000 | Adobe Premiere Pro Built-In | No data | 30.772 | 0.813 | 57.538 | 6.530 |
| 30.333 | SonyVegas Blend Field | No data | 28.344 | 0.856 | 49.308 | 3.510 |
Full FrameRate Leaderboard
| Rank | Name | MOS | PSNR | SSIM | VMAF | FPS on CPU |
|------|------|-----|------|------|------|------------|
| 2.333 | DfRes 122000 G2e 3 | 0.898 | 42.157 | 0.969 | 95.242 | No data |
| 3.333 | DfRes 61000 | 0.874 | 41.392 | 0.967 | 95.289 | No data |
| 4.000 | DfRes V | No data | 39.697 | 0.968 | 94.096 | 0.640 |
| 12.333 | Real-Time Deep Deinterlacer | 0.581 | 37.266 | 0.954 | 91.860 | 0.270 |
| 15.667 | Weston 3-Field Deinterlacer | 0.240 | 36.987 | 0.947 | 90.698 | 36.750 |
This section shows a frame, a crop from that frame, and the MSU VQMT PSNR visualization of the crop.
FPS was measured on an Intel Core i7-10700K CPU.
This section shows the PSNR between the output of the chosen deinterlacer and the outputs of the other deinterlacers.
Our dataset is constantly updated and currently contains 28 video sequences of 60 frames each. The resolution of all sequences is 1920x1080, and the frame rate varies from 24 to 60 FPS. TFF (top-field-first) interlacing was used to generate the interlaced data from the ground truth (GT).
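For illustration, here is a minimal sketch of one common TFF weaving scheme, in which each interlaced frame combines the top field of one progressive GT frame with the bottom field of the next; this is an assumption for illustration, and the benchmark's exact pipeline may differ:

```python
import numpy as np

def interlace_tff(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Weave pairs of progressive frames into TFF interlaced frames."""
    out = []
    for top_src, bottom_src in zip(frames[0::2], frames[1::2]):
        woven = top_src.copy()          # even rows: top field from frame 2k
        woven[1::2] = bottom_src[1::2]  # odd rows: bottom field from frame 2k+1
        out.append(woven)
    return out
```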
How our current dataset was composed:
Initially, we had about 10,000 videos downloaded from Vimeo. These videos included sports, animation, panoramas, news, landscapes, parts of movies, TV shows, ads, and many other types of content. We decided to restrict the dataset to a maximum of 30 videos, each containing 60 frames. Our goal was to create the most diverse dataset possible within these restrictions.
- The first step was to cluster the 10,000 videos. For this purpose, we mapped the videos into a 2D space by calculating their Google SI/TI values, as defined, for example, in this paper. For the SI/TI calculation we used FFmpeg's x264 codec with the options [-qp 28 -b_qfactor 1 -i_qfactor 1].
- Then, we performed simple K-Means clustering in this 2D space to divide the videos into 30 clusters.
- Next, we measured the distance between each cluster center and all the videos in that cluster and chose the 30 videos closest to the centers of the 30 clusters (a minimal sketch of these clustering and selection steps appears after this list). These 30 videos included some hard cases for deinterlacers, such as running letters, scene changes, and motion.
- Another big part of composing the dataset was choosing 60-frame cuts from these 30 long videos. To do that, we performed the same process as in steps 1-3, but now the goal was to choose 30 cuts from about 15,000 cuts. Again, we calculated Google SI/TI for each 60-frame cut and clustered the cuts into 30 clusters. Two clusters contained only black and only white cuts, respectively, so we marked them as "trash" clusters.
- Then, we selected the five cuts closest to the center of each non-"trash" cluster.
- The final step was to manually choose the best of these five cuts for each non-"trash" cluster.
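A minimal sketch of the clustering and selection steps, assuming the SI/TI values have already been computed into an (n_videos, 2) array; the file name and array layout are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

si_ti = np.load("si_ti.npy")  # hypothetical input: one [SI, TI] pair per video
kmeans = KMeans(n_clusters=30, random_state=0).fit(si_ti)

# For each cluster, pick the video closest to its center.
selected = []
for c, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == c)[0]
    dists = np.linalg.norm(si_ti[members] - center, axis=1)
    selected.append(members[np.argmin(dists)])
```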
- We don't compute PSNR on black frames. The black frames were inserted for deinterlacers that use motion estimation (ME): because of them, an ME-based deinterlacer detects zero motion between two sequences and therefore doesn't carry motion over from the previous sequence while processing the current one.
We compare RGB frames using three metrics (PSNR, SSIM, and VMAF), all measured over the Y component.
For each video sequence, we take the average PSNR, SSIM, and VMAF over all frames. We chose these metrics because they have proven to be among the best at reflecting quality loss.
We measure these metrics over the Y component because YUV is the most popular family of colorspaces today, but there are still many YUV variants (e.g. yuv444p, yuv420p, yuv420p12le). The U and V components differ between these variants, which is why we measure only the Y component. There are also many other colorspaces that use a Y component (e.g. YCbCr, YPbPr, UYVY, ...). Finally, the Y component is easy to compute from other colorspaces, such as RGB or grayscale.
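For illustration, here is a minimal sketch of extracting the Y component from an 8-bit RGB frame and computing PSNR over it; BT.601 luma weights are assumed here, and the benchmark's exact conversion may differ:

```python
import numpy as np

def rgb_to_y(frame: np.ndarray) -> np.ndarray:
    """H x W x 3 uint8 RGB frame -> H x W float luma plane (BT.601 weights)."""
    return (0.299 * frame[..., 0]
            + 0.587 * frame[..., 1]
            + 0.114 * frame[..., 2])

def psnr_y(a: np.ndarray, b: np.ndarray) -> float:
    """PSNR between the Y planes of two 8-bit RGB frames."""
    mse = np.mean((rgb_to_y(a) - rgb_to_y(b)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```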
Here is a plot of the difference between PSNR-Y and PSNR-RGB. As it shows, the difference is negligible.
We also provide MOS (Mean Opinion Score) values for the 16 best deinterlacers (excluding different versions of the same deinterlacer).
To obtain MOS, we hosted a subjective comparison on www.subjectify.us. We take the same crops (of the areas with the worst PSNR) from each test video, and assessors compare the output of each deinterlacer against all the others. We collect 10 comparison results for each crop pair and then fit the Bradley-Terry model to obtain MOS scores.
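As an illustration, here is a minimal sketch of fitting Bradley-Terry strength scores from pairwise results using the standard minorization-maximization update; the wins matrix is a hypothetical input (wins[i, j] counts how often method i was preferred over method j):

```python
import numpy as np

def bradley_terry(wins: np.ndarray, n_iter: int = 100) -> np.ndarray:
    """Return a strength score per method from a pairwise win-count matrix."""
    n = wins.shape[0]
    games = wins + wins.T          # total comparisons for each pair
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            # MM update: p_i = W_i / sum_j( n_ij / (p_i + p_j) )
            p[i] = wins[i].sum() / (games[i] / (p[i] + p)).sum()
        p /= p.sum()               # normalize to fix the overall scale
    return p
```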
Validation of deinterlacers' outputs
Another important direction of our work is validating the outputs of deinterlacers. A deinterlacer can sometimes convert the colorspace, work in BFF mode instead of TFF, or simply have bugs.
How we validate deinterlacers' outputs:
The main criterion is that the PSNR between the GT fields and the same fields in the deinterlaced video must be infinite. To check this, we plot the Top-Field and Bottom-Field PSNR on every second frame of the GT and deinterlaced videos.
Here is a sample plot for the Bob-Weave Deinterlacer, which passed validation.
The PSNR between the bottom field of every deinterlaced frame and the corresponding GT frame is infinite (for plotting, we substitute infinity with zero). This means that the bottom field exactly matches the corresponding field in the GT sequence.
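A minimal sketch of this check, assuming a TFF output whose top field must be copied unchanged from the GT; the function and variable names are illustrative:

```python
import numpy as np

def field_matches_gt(out_frame: np.ndarray, gt_frame: np.ndarray,
                     top: bool = True) -> bool:
    """True iff the chosen field is bit-exact, i.e. its PSNR is infinite."""
    rows = slice(0, None, 2) if top else slice(1, None, 2)
    diff = out_frame[rows].astype(np.int32) - gt_frame[rows].astype(np.int32)
    return not np.any(diff)   # zero MSE <=> infinite PSNR
```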
Here is another sample plot, for the MSU Deinterlacer, which had colorspace problems.
As we can see, the Bottom-Field PSNR is about 100, not infinity. In such cases, we take the following steps:
- Convert GT data to the colorspace of the deinterlacer.
- If we don't know (and can't guess) which colorspace the deinterlacer uses, we build a lookup table (LUT) from the fields of the deinterlacer output and the GT data, which must be equal to each other. Then, we use this LUT to map the GT to the deinterlacer's colorspace (see the sketch after this list).
- In some cases, the LUT cannot be determined precisely because the mapping is neither injective nor surjective. In such hard cases, we simply choose the GT sample with the highest PSNR from the previous two steps.
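A hypothetical sketch of the LUT construction for 8-bit data, pairing GT field pixels with the deinterlacer output pixels that should be identical and taking the most frequent mapping for each value:

```python
import numpy as np

def build_lut(gt_field: np.ndarray, out_field: np.ndarray) -> np.ndarray:
    """Map each 8-bit GT value to the deinterlacer's most frequent value."""
    lut = np.arange(256, dtype=np.uint8)   # identity mapping as a fallback
    for v in range(256):
        matches = out_field[gt_field == v]
        if matches.size:
            lut[v] = np.bincount(matches, minlength=256).argmax()
    return lut

# Usage: mapped_gt = build_lut(gt_field, out_field)[gt_frame]
```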
We also provide an MSU Video Quality Measurement Tool (VQMT) PSNR visualization of the deinterlacer output. As the last step of the validation process, we check that the VQMT PSNR visualization is striped. It should be, because even/odd rows must exactly match the corresponding GT rows.
Here is an example of a correct VQMT PSNR visualization output:
And, finally, let us take a closer look at it:
As we can see, the output is striped, which means the deinterlacer passed the last validation step.
To submit, you can either send us an executable file or the code of your deinterlacer, or follow these 3 simple steps:
- Download the interlaced video here.
More formats are available if YUV is not suitable:
There are 3 available options:
- a. Download frames of the video sequence in .png format here
- b. Download the .yuv video file here, generated from the frames via:
ffmpeg -i %04d.png -c:v rawvideo -pix_fmt yuv444p sequences.yuv
- c. Download the lossless encoded .mkv video here, generated from the frames via:
ffmpeg -i %04d.png -c:v libx264 -preset ultrafast -crf 0 -pix_fmt yuv444p lossless.mkv
- Deinterlace the downloaded video
Details that may help you:
TFF interlacing was used to generate the interlaced sequence from the GT
The video consists of 28 sequences, each separated by 5 black frames; the black frames are ignored
- Send us an e-mail at firstname.lastname@example.org with the following information:
- A. The name of the deinterlacing method, as it should appear in our benchmark
- B. A link to a cloud drive (Google Drive, OneDrive, Dropbox, etc.) containing the deinterlaced video
- C. (Optional) Any additional information about the method
What may be included in the additional information:
- Technical information about the deinterlaced video (e.g. colorspace, file type, codec)
- The name of the theoretical method used
- Full name of the deinterlacing method or product
- The version that was used
- The parameter set that was used
- Any other additional information
- A link to the code of your deinterlacing method, if it is open-source
- A link to the paper about your deinterlacing method
- A link to the documentation of your deinterlacing method. For example, this is suitable for deinterlacing methods implemented as part of a video processing framework.
- A link to the page where users can purchase or download your product
- D. (Optional) If you would like us to tune the parameters of your deinterlacing method, you should give us a way to launch it. You can do so by sending us the code or an executable file, providing us with a free test version of your product, or in any other way that is convenient for you
For questions and suggestions, please contact us: email@example.com
MSU Benchmark Collection
- MSU Video Upscalers Benchmark 2021
- MSU Video Alignment and Retrieval Benchmark
- MSU Super-Resolution for Video Compression Benchmark 2021
- MSU Mobile Video Codecs Benchmark 2021
- MSU Video Super-Resolution Benchmark
- MSU Shot Boundary Detection Benchmark 2020
- MSU Deinterlacer Benchmark
- The VideoMatting Project
- Video Completion
- Codecs Comparisons & Optimization
- Video Quality Measurement Tool 3D
MSU Datasets Collection
- Video Filters