MSU Video Alignment and Retrieval Benchmark Suite


Explore the best algorithms in different video alignment tasks

Video group head: Dr. Dmitriy Vatolin
Measurements and analysis:
Mikhail Dremin,
Konstantin Kozhemyakov

What’s new

Key features of the Benchmark Suite

To submit your method, please follow the simple steps in the Algorithm Submission section

We appreciate new ideas. Please email us at aligners-benchmark@videoprocessing.ai

Evaluation

You can choose the preset on which the algorithms were tested and sort the charts by algorithm.

NOTE: The same test-pair number on different charts does not refer to the same test pair, because the results are sorted by algorithm.

Local Time Distortions

Leaderboard

Name | Accuracy (0 frames error) | Accuracy (3 frames error) | Accuracy (10 frames error)

Charts


Global Time Distortions

Leaderboard

Name | F1-score | Precision | Recall

Charts


Mixed Time Distortions

Leaderboard

Name | F1-score | Precision | Recall

Charts


Methodology

Video sequences selection

Content diversity in the test sequences is necessary for running algorithms under near-realistic conditions. To provide this diversity, we took the following steps.

We identified two families of cases: processes during which temporal distortions can occur (e.g., video editing, streaming, capturing from different rigidly connected cameras) and content that is difficult to align (e.g., talking heads, recurring events such as football matches or Formula 1 racing, and footage from a camera on a tripod with little movement). We then collected ~2000 videos from Vimeo for each taxonomic category to ensure unbiased video selection and sufficient coverage of the semantics and contexts presented in the videos.

Finally, SI/TI (spatial and temporal information) features were calculated for each video, and within each class we mapped the videos to the use cases mentioned above.
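
SI and TI are standard perceptual-complexity features. The sketch below follows the ITU-T P.910-style definitions; the exact extraction settings used for this benchmark are not stated on the page, so treat the details as illustrative assumptions.

```python
# Minimal SI/TI sketch (assumption: ITU-T P.910-style definitions).
import cv2
import numpy as np

def si_ti(video_path):
    cap = cv2.VideoCapture(video_path)
    si_values, ti_values = [], []
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # SI: standard deviation of the Sobel-filtered luma plane
        sobel = np.hypot(cv2.Sobel(gray, cv2.CV_32F, 1, 0),
                         cv2.Sobel(gray, cv2.CV_32F, 0, 1))
        si_values.append(sobel.std())
        # TI: standard deviation of the frame-to-frame luma difference
        if prev is not None:
            ti_values.append((gray - prev).std())
        prev = gray
    cap.release()
    # P.910 reports the maximum over time for both features
    return max(si_values), max(ti_values)
```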

We chose videos that sufficiently cover the SI/TI space and all identified use cases, arriving at 56 source videos.

Dataset

Our dataset is constantly updated. It currently contains 56 source video sequences with a total duration of 195,411 frames. All sequences have a resolution of 1920×1080; frame rates range from 16 to 60 FPS.

Benchmarks and presets

We divided the problem of video alignment into three logical parts: local time distortions (which occur during video processing), global time distortions (which occur during video editing), and mixed time distortions (for versatility testing on both distortion types). Hence, our benchmark suite contains three benchmarks.

Benchmark | Time distortions | Metric
Local time distortions | Drop frames, Duplicate frames, Freeze frames, Time shift | Accuracy (%) of correctly localized time shifts of the initial frames, depending on the maximum accepted error in frames
Global time distortions | Fragment insertion, Drop fragment, Cut the video | F1-score
Mixed time distortions | All time distortions | F1-score

Metrics are calculated as follows:
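
The formulas are shown as images on the original page; a reconstruction using the standard definitions is given below. How exactly frames and fragments are matched to the ground truth (i.e., what counts as TP/FP/FN) is described in the downloadable dataset and is only assumed here.

```latex
% Accuracy at frame-error tolerance t (the leaderboards use t = 0, 3, 10):
\mathrm{Accuracy}(t) =
  \frac{\left|\{\, i : |\hat{s}_i - s_i| \le t \,\}\right|}{N} \times 100\%,
% where s_i is the ground-truth time shift of frame i, \hat{s}_i the predicted
% shift, and N the number of evaluated frames.

% Precision, Recall, and F1-score over detected frames/fragments:
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.
```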

Each benchmark consists of four presets for a more precise understanding of the pros and cons of each algorithm.

Preset | Used distortions
Light | Time distortions
Medium geometric | Time distortions, Geometric distortions
Medium color | Time distortions, Color distortions
Hard | All distortions

Distortions and distortion distribution

Distortion type | Distortion | Description
Time distortions | Drop frames; Duplicate/Freeze frames | Applied to the first, the last, or a random frame number < FPS; drop/duplicate 1, 2, or a random number of frames up to 10
Time distortions | Time shift | Crop up to 11 frames from the start
Time distortions | Fragment insertion | Insert a fragment from another video
Time distortions | Drop fragment | Drop a random fragment
Time distortions | Cut the video | Cut up to 3/4 of the entire video
Color distortions | Add noise | Add Gaussian noise, sigma 3
Color distortions | Add blur | Add Gaussian blur, sigma 1.5
Color distortions | Random brightness & contrast | Random brightness decrease from -10% to -25%; contrast increase from 5% to 30%
Geometric distortions | Add logo | Logo or news title; covers at most 300 px from the bottom/top/left/right
Geometric distortions | Add subtitles | English; covers at most 250 px from the bottom or top
Geometric distortions | Crop | Crop 2/4/8 px from the right and bottom
Geometric distortions | Scale | Scale up to 110%
Geometric distortions | Add black bars | Crop to a 4:3 or 21:9 aspect ratio, then return to 16:9 by adding black bars
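
As an illustration of how such a time distortion could be synthesized, the hypothetical sketch below implements the "Drop frames" rule from the table; the benchmark's actual distortion generator is not published, so the parameter choices are assumptions.

```python
# Hypothetical sketch: apply a "Drop frames" time distortion to a list of frames.
import random

def drop_frames(frames, fps):
    """Drop 1, 2, or a random number of up to 10 consecutive frames
    at the start, at the end, or at a random frame number < fps."""
    n_drop = random.choice([1, 2, random.randint(1, 10)])
    start_choices = [0,                                   # first frames
                     len(frames) - n_drop,                # last frames
                     random.randint(0, min(fps, len(frames)) - 1)]  # random position
    start = max(0, min(random.choice(start_choices), len(frames) - n_drop))
    return frames[:start] + frames[start + n_drop:]
```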

Submit your algorithm

1. Download input data. Download the necessary folders for the benchmark you want to take part in here.
Some important notes:
    A. Each benchmark has train and validation sets
    B. There are 3 folders for each benchmark, each containing 4 ZIP archives, one per preset

2. Apply your algorithm. Align the videos with your algorithm.
You can also send us the code of your method or an executable file, and we will run it ourselves.
We expect a specific output format from your algorithm; its description and some examples can be found in
the downloadable dataset folder. Depending on the benchmark, we calculate Accuracy, F1-score,
Precision, and Recall.

3. Send us the result. Send an email to aligners-benchmark@videoprocessing.ai with the following information:
    A. The name of your method, as it will appear in our benchmark
    B. A link to cloud storage (Google Drive, OneDrive, Dropbox, etc.) containing the output maps
    C. (Optional) Any additional information about the method:
      1. Full name of your model
      2. The parameter set that was used
      3. A link to the code of your model, if it is available
      4. A link to the paper about your model
      5. Execution time of your algorithm and information about the GPU and CPU used
      6. Any other additional information

Contacts

For questions and suggestions, please contact us at aligners-benchmark@videoprocessing.ai

22 Oct 2021