MSU Deinterlacer Benchmark — selecting the best deinterlacing filter

The most comprehensive comparison of deinterlacing methods

What’s new

Key features of the Benchmark

To submit a deinterlacing method, please follow the 3 simple steps in the Deinterlacer Submission section

We appreciate new ideas. Please write us an e-mail to


The table below compares deinterlacers by the PSNR and SSIM metrics and by speed (FPS).

Click on the labels to sort the table

Rank Deinterlacer PSNR SSIM FPS
1.0 MSU Deinterlacer 40.708 0.983 1.3
2.5 VapourSynth TDeintMod 39.916 0.977 50.29
3.0 NNEDI 39.625 0.978 1.91
4.0 Bob-Weave Deinterlacer 39.679 0.976 46.45
4.5 VapourSynth EEDI3 39.373 0.977 51.9
6.0 Real-Time Deep Deinterlacer 39.203 0.976 0.27
7.5 Bob 38.499 0.975 52.83
8.5 Weston 3-Field Deinterlacer 38.680 0.969 36.75
9.0 Kernel Deinterlacer (optimal parameters) 38.103 0.970 37.91
9.0 Elemental Live Low Latency Interpolation 38.056 0.972 Hardware Real-Time
11.0 YADIF 37.742 0.965 48.96
12.0 Elemental Live Motion Adaptive Interpolation 37.063 0.964 Hardware Real-Time
13.5 Kernel Deinterlacer 36.731 0.960 37.85
14.5 Studio Coast Pty vMix 36.990 0.950 Hardware Real-Time
14.5 Adobe Premiere Pro Built-In 36.092 0.958 43.82
16.0 Motion and Area Pixel Deinterlacer 35.415 0.950 2.15
16.5 Muksun Deinterlacer 35.444 0.949 1.95
18.0 PAL Interpolation 33.111 0.913 2.85
19.5 Elemental Live Motion Adaptive Blend 29.744 0.868 Hardware Real-Time
20.0 ASVZZZ Deinterlacer 27.499 0.902 1.9
20.5 Motion Compensation Deinterlacer 27.899 0.804 1.45

Full FrameRate Leaderboard

Rank Deinterlacer PSNR SSIM FPS
1.0 MSU Deinterlacer 40.917 0.983 1.3
2.0 VapourSynth TDeintMod 40.071 0.978 50.29
3.5 VapourSynth EEDI3 39.547 0.978 51.9
4.0 Bob-Weave Deinterlacer 39.775 0.976 46.45
4.5 Real-Time Deep Deinterlacer 39.450 0.977 0.27
6.5 Bob 38.645 0.975 52.83
7.0 Weston 3-Field Deinterlacer 38.726 0.969 36.75
7.5 Elemental Live Low Latency Interpolation 37.908 0.972 Hardware Real-Time
9.0 YADIF 37.860 0.966 48.96
10.0 Elemental Live Motion Adaptive Interpolation 36.953 0.964 Hardware Real-Time
11.0 Studio Coast Pty vMix 32.942 0.931 Hardware Real-Time
12.0 Elemental Live Motion Adaptive Blend 29.747 0.868 Hardware Real-Time


In this section you can see a frame, a crop from this frame, and the MSU VQMT PSNR visualization of this crop.

Drag the red rectangle over the area you want to crop; by default it is placed over the area with the worst PSNR.

You can choose the area to compare and the two deinterlacers to show (for example, MSU Deinterlacer vs. VapourSynth TDeintMod); the bottom row shows the VQMT PSNR visualization of each output.

Highlight the plot region where you want to zoom in


FPS is measured on an Intel Core i7-10700K CPU


The following plot shows the difference between every method and Bob, since Bob is considered the simplest deinterlacing method




In this section you can see the PSNR between the output of the chosen deinterlacer and the outputs of the others


Deinterlacer PSNR (dB)
ASVZZZ 20.545
Bob inf
Bob-Weave 42.078
Deep 41.164
EL LLI 43.878
EL MAI 38.018
Kernel 42.082
MAP 36.811
MSU 39.869
Muksun 37.937
NNEDI 43.222
PAL Interpolation 34.300
VMix 30.936
Weston 3-Field 42.526
YADIF 39.612

Evaluation methodology


Our dataset is constantly updated and currently contains 40 video sequences, each 1 second long, at 1920x1080 resolution; frame rates vary from 24 to 60 FPS. Top-field-first (TFF) interlacing was used to generate the interlaced data from the ground truth (GT).
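The TFF interlacing step can be sketched as follows. This is a minimal NumPy illustration, not the benchmark's actual tooling: each interlaced frame takes its top field (even rows) from progressive frame 2k and its bottom field (odd rows) from frame 2k+1.

```python
import numpy as np

def interlace_tff(progressive):
    """Weave pairs of progressive frames into interlaced frames (TFF).

    progressive: array of shape (n_frames, height, width) (a channel
    axis may follow). Each interlaced frame takes its top field (even
    rows, counted from 0) from frame 2k and its bottom field (odd rows)
    from frame 2k + 1, so the top field is temporally first.
    """
    n = progressive.shape[0] // 2
    interlaced = progressive[0:2 * n:2].copy()        # even rows from frame 2k
    interlaced[:, 1::2] = progressive[1:2 * n:2, 1::2]  # odd rows from frame 2k+1
    return interlaced

frames = np.arange(4 * 4 * 4).reshape(4, 4, 4)  # four tiny 4x4 "frames"
mixed = interlace_tff(frames)
```

Here `mixed[0]` contains rows 0 and 2 of `frames[0]` woven with rows 1 and 3 of `frames[1]`, halving the frame count while keeping the field rate.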

How exactly our dataset was composed:
  1. Initially we had about 30 videos from the Vimeo-90k dataset, with a total length of about 1 hour. These videos included sports, panoramas, news, landscapes, movie excerpts, ads, and other types of content.
  2. We interlaced all of these videos and then measured PSNR between the interlaced video and the odd frames of the GT.
  3. From each video we wanted to get 1 or 2 sequences, each 1 second long.
  4. The sequences with the lowest mean PSNR were considered the most “damaged”. Our hypothesis was that these sequences would be the hardest for deinterlacers.
  5. We also computed the mean PSNR over all videos; the sequences whose mean PSNR was closest to the overall mean were considered the most “average”.
  6. The sequences with the highest mean PSNR were considered the most “undamaged”. These sequences were often static shots with no moving objects.
  7. We took 15 “damaged”, 20 “average”, and 5 “undamaged” sequences and put them together in one video, separated by 10 black frames.
  8. We don’t count PSNR on black frames. They were added for deinterlacers that use motion estimation (ME): because of the black frames, an ME deinterlacer detects zero motion between two sequences and therefore doesn’t carry motion over from the previous sequence while processing the current one. We also ignore the first 4 frames of each sequence when computing the overall mean, since we assume that on these frames ME deinterlacers are still collecting motion information and don’t show their best.
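The exclusions in steps 7 and 8 can be sketched like this. It is a hypothetical Python helper, not the benchmark's tooling; `frame_psnrs` and `is_black` are assumed inputs (per-frame PSNR values and flags marking the black separator frames).

```python
def overall_mean_psnr(frame_psnrs, is_black, skip_first=4):
    """Average per-frame PSNR, ignoring black separator frames and the
    first `skip_first` frames of every sequence (where ME-based
    deinterlacers are still gathering motion statistics)."""
    kept = []
    pos = 0  # position inside the current sequence
    for psnr, black in zip(frame_psnrs, is_black):
        if black:
            pos = 0            # the next non-black frame starts a new sequence
            continue
        if pos >= skip_first:  # skip the sequence's warm-up frames
            kept.append(psnr)
        pos += 1
    return sum(kept) / len(kept)

# Example: two 6-frame sequences separated by 2 black frames.
psnrs = [30.0] * 6 + [0.0] * 2 + [40.0] * 6
blacks = [False] * 6 + [True] * 2 + [False] * 6
mean = overall_mean_psnr(psnrs, blacks)  # (30 + 30 + 40 + 40) / 4 = 35.0
```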


We compare frames using two metrics, PSNR and SSIM; both are measured over the Y component.

For each video sequence, we take the average PSNR and SSIM over all frames. We chose these metrics because they have proved to be among the best at showing quality loss.

We measure these metrics over the Y component because YUV is the most popular family of colorspaces nowadays, but there are many YUV variants (e.g. yuv444p, yuv420p, yuv420p12le) whose U and V components differ; that’s why we measure only the Y component. There are also many other colorspaces that use a Y component (e.g. YCbCr, YPbPr, UYVY, ...). Finally, the Y component is easy to compute from other colorspaces, such as RGB or grayscale.
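A minimal sketch of measuring PSNR over the Y component, assuming 8-bit RGB frames and BT.601 full-range luma coefficients (the page does not specify the benchmark's exact conversion matrix, so the coefficients here are an assumption):

```python
import numpy as np

def rgb_to_y(rgb):
    """Luma (BT.601 coefficients, assumed) from an RGB frame of shape (h, w, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr_y(ref_rgb, test_rgb, peak=255.0):
    """PSNR (dB) between the Y components of two 8-bit RGB frames."""
    diff = rgb_to_y(ref_rgb.astype(np.float64)) - rgb_to_y(test_rgb.astype(np.float64))
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames give infinite PSNR
    return 10.0 * np.log10(peak ** 2 / mse)
```

For identical frames the function returns infinity, which is exactly the property the validation section below relies on.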

Here is the plot of the difference between PSNR-Y and PSNR-RGB. As you can see, the difference is negligible.


Validation of deinterlacers' outputs

Another important part of our work is validating the outputs of deinterlacers. Sometimes a deinterlacer converts the colorspace, works in BFF mode instead of TFF, or simply has bugs.

How exactly we validate deinterlacers' outputs:

The main criterion is that the PSNR between the GT fields and the same fields in the deinterlaced video must be infinite. To check this, we plot top-field and bottom-field PSNR on every second frame of the GT and deinterlaced videos.

Here is the sample plot for Bob-Weave Deinterlacer. This deinterlacer passed the validation.

BWDIF validation

The PSNR between the bottom fields of every deinterlaced frame and the corresponding GT frame is equal to infinity, so we substitute infinity with zero on the plot. This means that the bottom field exactly matches the corresponding field in the GT sequence.
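The per-field bit-exactness check can be sketched as follows. This is a minimal NumPy illustration (assuming frames stored as arrays with rows on the first axis), not the benchmark's actual validation code:

```python
import numpy as np

def fields(frame):
    """Split a frame into (top, bottom) fields: even rows and odd rows."""
    return frame[0::2], frame[1::2]

def field_preserved(deint_frame, gt_frame, which="bottom"):
    """True if the chosen field of the deinterlaced frame is bit-exact
    with the same field of the GT frame, i.e. their PSNR is infinite."""
    idx = 0 if which == "top" else 1
    return np.array_equal(fields(deint_frame)[idx], fields(gt_frame)[idx])
```

A frame whose bottom field passes this check while its top field was synthesized by the deinterlacer is exactly the "infinite PSNR substituted by zero" case shown on the plot above.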

Another sample plot, for MSU Deinterlacer, which had problems with colorspace.

MSU Deinterlacer validation

As we can see here, the bottom-field PSNR is about 100, but not infinity. In such cases, we take the following steps:

  1. Convert the GT data to the colorspace of the deinterlacer.
  2. If we don’t know (and can’t guess) which colorspace the deinterlacer uses, we build a lookup table (LUT) from the pairs of fields in the deinterlacer output and the GT data that must be identical. Then we use this LUT to map the GT into the deinterlacer’s colorspace.
  3. In some cases, it is impossible to determine the LUT precisely because the mapping is neither injective nor surjective. In such hard cases, we simply choose the GT variant with the highest PSNR from the previous 2 steps.
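Step 2 can be sketched as a per-intensity LUT built from paired fields. This is a simplified illustration assuming 8-bit single-channel data; where the mapping is ambiguous it picks the most frequent value, whereas the benchmark resolves hard cases by the highest-PSNR variant, as step 3 describes.

```python
import numpy as np

def build_lut(deint_field, gt_field):
    """Build a per-intensity lookup table mapping GT values into the
    deinterlacer's colorspace, from a pair of fields that should be
    identical. Ambiguous values fall back to the most frequent match."""
    lut = np.arange(256, dtype=np.uint8)  # identity by default
    for v in range(256):
        mask = gt_field == v
        if mask.any():
            vals, counts = np.unique(deint_field[mask], return_counts=True)
            lut[v] = vals[np.argmax(counts)]
    return lut
```

Applying `lut[gt_frame]` then maps the whole GT sequence into the deinterlacer's colorspace before the PSNR comparison is repeated.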

We also provide an MSU Video Quality Measurement Tool (VQMT) PSNR visualization of the deinterlacer output. As the last step of the validation process, we check that the VQMT PSNR visualization is striped: it must be, because even/odd rows must exactly match the corresponding GT rows.

Here is an example of a correct VQMT PSNR visualization output:

MSU Deinterlacer validation

And finally, let us take a closer look at it:

MSU Deinterlacer validation

As we can see, the output is striped, so this means that the deinterlacer passed the last validation step.

Deinterlacer Submission

There are 3 easy steps to submit:

  1. Download the interlaced video here.
    If YUV is not suitable, there are 5 available options:
      a. Download frames of the video sequence in .png format here
      b. Download .yuv video file generated from frames via

      ffmpeg -i %04d.png -c:v rawvideo -pix_fmt yuv444p sequences.yuv

      c. Download lossless encoded .mkv video generated from frames via

      ffmpeg -i %04d.png -c:v libx264 -preset ultrafast -crf 0 -pix_fmt yuv444p lossless.mkv

      d. Download .rgb video file generated from frames via

      ffmpeg -i %04d.png -c:v rawvideo -pix_fmt rgb24 sequences.rgb

      e. Download lossless encoded .avi video generated from frames via

      ffmpeg -i %04d.png -c:v libx264rgb -preset ultrafast -crf 0 lossless.avi


  2. Deinterlace the downloaded video
    Details that may help you:
      TFF interlacing was used to get the interlaced sequence from the GT
      The video consists of 40 sequences, each separated by 5 black frames. Black frames are ignored during measurement

    You can also send us an executable file or the code of your deinterlacer, so you don't have to deinterlace the video yourself

  3. Send us an e-mail with the following information:
      A. Name of the deinterlacing method that will be specified in our benchmark

      B. Link to the cloud drive (Google Drive, OneDrive, Dropbox, etc.), containing deinterlaced video.

      C. (Optional) Any additional information about the method
      What may be included in the additional information:
        Technical information about deinterlaced video (e.g. colorspace, file-type, codec)
        The name of the theoretical method used
        Full name of the deinterlacing method or product
        The version that was used
        The parameter set that was used
        Any other additional information
        A link to the code of your deinterlacing method, if it is open-source
        A link to the paper about your deinterlacing method
        A link to the documentation of your deinterlacing method. For example, this is suitable for deinterlacing methods that are implemented as a part of a video processing framework.
        A link to the page, where users can purchase or download your product (for example, VirtualDub Plugin)

      D. (Optional) If you would like us to tune the parameters of your deinterlacing method, you should give us a way to run it: send us the code or an executable file, provide a free test version of your product, or use any other way that is convenient for you


For questions and suggestions, please contact us:

05 Nov 2020
See Also
MSU Video Quality Measurement Tool: Picture types
VQMT 13.0 Online help: List of all picture types available in VQMT and their aliases
MSU SBD Benchmark 2020
MSU Deinterlacer Benchmark Participants
MSU Video Quality Measurement Tool: Usage of VQMT metrics in CLI
VQMT 13.0 Online help: Description of VQMT metrics, their parameters and using in CLI
MSU Video Quality Measurement Tool: VQMT various lists and tables
VQMT 13.0 Online help: Information about VQMT visualization formats, read modes, etc...