MSU Shot Boundary Detection Benchmark 2020 — selecting the best plugin

Comprehensive analysis of shot boundary detection methods

Video group head: Dr. Dmitriy Vatolin
Measurements, analysis: 
Aleksandr Gushchin,
Anastasia Antsiferova

This benchmark is designed to evaluate algorithms for the shot boundary detection task. It provides an opportunity to test your own algorithm for detecting cut, dissolve, wipe, and fade scene transitions.

What’s new

Key features of the Benchmark

To participate in our benchmark, please follow instructions in the How to submit section.

We appreciate new ideas. Please write us an e-mail to


The table below shows a comparison of SBD algorithms by F1 score and speed.


Performance Charts

Per-video F1 score on different datasets



Speed/F1 score — resolution dependency



Video Dataset

Our dataset is constantly being updated. It currently consists of 50 videos with a total duration of about 1287 minutes and approximately 10883 scene transitions (7501 abrupt and 3382 gradual).

Our collection contains videos from the popular RAI dataset, videos from the MSU codec comparison 2019 and 2020 test sets, and videos collected from other sources. Our analysis has shown that the ground truth in RAI contains imperfections, which we fixed in our collection. The dataset was extended by replacing every cut scene transition in these datasets with fades and dissolves. We manually marked up 19 open-source videos with a total length of 950+ minutes using Yandex.Toloka.

The final dataset contains videos with resolutions from 360×288 to 1920×1080 in MP4 and MKV formats. The videos include RGB and grayscale samples with frame rates from 23 to 60 FPS.

Detailed SI-TI of each scene in the dataset


Dataset examples

Evaluation Steps

1. Launches
We launch your algorithm with the given options (or tune them if you wish) and measure its running time.
Important notes:
  • We expect the output of your algorithm to be in the following format:
    a JSON or CSV file that contains three keys or columns: cuts, dissolves, and fades.
    Each of them must contain a list of frames with scene transitions according to the Definition section.
    Output example
  • If your algorithm depends on randomness (k-means, for example), we will launch it several times
    and average the outcomes.
  • We have a time limit: your algorithm must process more than 5 frames per second.
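For illustration, a minimal submission in the JSON variant of the format above could be produced like this (the key names follow the format description; the frame numbers are made-up placeholders, not real detections):

```python
import json

# Hypothetical detection results: frame indices at which each type
# of scene transition was detected. The numbers are invented for
# illustration only.
detections = {
    "cuts": [120, 457, 1033],    # abrupt transitions
    "dissolves": [2210, 5874],   # gradual cross-dissolves
    "fades": [9120],             # fades to/from black
}

# Write the submission file with the three required keys.
with open("result.json", "w") as f:
    json.dump(detections, f, indent=2)
```

The CSV variant would carry the same three lists as columns instead of keys.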

2. Calculation. Using the detected shot boundaries, we calculate F1 score, precision, and recall.

Here are the formulas for these metrics:
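The formulas did not survive extraction from the page; these metrics follow the standard definitions, sketched below, where TP, FP, and FN are the counts of correctly detected, spurious, and missed transitions (how a detection is matched to the ground truth is defined by the benchmark's own rules):

```python
def f1_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard precision, recall, and F1 score from true-positive,
    false-positive, and false-negative transition counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 90 correctly detected transitions, 10 spurious, 30 missed.
p, r, f = f1_metrics(tp=90, fp=10, fn=30)
# precision = 0.9, recall = 0.75, F1 ≈ 0.818
```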

3. Verification. We send you the final results of your algorithm, including metrics and visualisations on our video dataset.

4. Publication. If you give us permission, we add the results to the main page of our benchmark alongside the other algorithms.

How to submit

Send us an email to with the following:

If you have any suggestions or ideas on how to improve our benchmark, please write us an e-mail to


For questions and propositions, please contact us:

28 Dec 2020
See Also
MSU 3D-video Quality Analysis. Report 12
MSU Video Upscalers Benchmark 2021
The most comprehensive comparison of video super resolution (VSR) algorithms by subjective quality
MSU Video Upscalers Benchmark Participants
The list of the participants of the MSU Video Upscalers Benchmark
MSU Video Upscalers Benchmark Methodology
The methodology of the MSU Video Upscalers Benchmark
MSU Video Alignment and Retrieval Benchmark
Explore the best algorithms in different video alignment tasks
MSU Video Alignment and Retrieval Benchmark Suite Participants
List of participants of MSU Video Alignment and Retrieval Benchmark Suite