List of participants of MSU Video Alignment and Retrieval Benchmark Suite

TMK

    Adapted the kernel descriptor framework of Bo et al. to sequences of frames. Proposed a query expansion (QE) technique that automatically aligns the videos deemed relevant to the query.
    Added to the benchmark by MSU G&M Lab
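The temporal match kernel idea can be illustrated in a toy single-period form: frame features are modulated by the cosine and sine of their timestamps, so two videos can be scored under any hypothesised time shift without revisiting the frames. This is a simplified sketch, not the TMK implementation; the `period` value and the one-hot features in the usage below are arbitrary assumptions.

```python
import numpy as np

def tmk_encode(feats, period=32):
    """Encode a (frames x dim) feature matrix with one temporal period:
    sum frame features weighted by cos/sin of their frame index."""
    t = np.arange(len(feats))
    w = 2 * np.pi * t / period
    return (np.cos(w)[:, None] * feats).sum(axis=0), \
           (np.sin(w)[:, None] * feats).sum(axis=0)

def tmk_score(enc_a, enc_b, shift, period=32):
    """Score the hypothesis that video B lags video A by `shift` frames.
    The cos/sin addition formulas rotate B's encoding back by the shift,
    so realignment needs only the compact encodings."""
    ca, sa = enc_a
    cb, sb = enc_b
    w = 2 * np.pi * shift / period
    cb_r = np.cos(w) * cb + np.sin(w) * sb
    sb_r = -np.sin(w) * cb + np.cos(w) * sb
    return float(ca @ cb_r + sa @ sb_r)

# Usage: B is A delayed by 3 frames; the score peaks at shift = 3.
feats = np.eye(8)                                  # toy one-hot frame features
delayed = np.vstack([np.zeros((3, 8)), feats])
ea, eb = tmk_encode(feats), tmk_encode(delayed)
best = max(range(-8, 9), key=lambda s: tmk_score(ea, eb, s))
```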

VideoIndexer

    Detect scene changes and split the video into scenes. Align the scenes first, then align the frames within the matched scenes.
    Added to the benchmark by MSU G&M Lab
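The first step of that pipeline can be sketched as follows, assuming a scene cut is flagged wherever the mean absolute difference between consecutive frames exceeds a threshold. The threshold value is an illustrative assumption, not VideoIndexer's detector.

```python
import numpy as np

def split_into_scenes(frames, thresh=30.0):
    """Flag a scene cut where the mean absolute difference between
    consecutive frames exceeds `thresh`; return the list of scenes."""
    cuts = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff > thresh:
            cuts.append(i)
    cuts.append(len(frames))
    return [frames[cuts[k]:cuts[k + 1]] for k in range(len(cuts) - 1)]

# Usage: 5 dark frames followed by 5 bright frames -> two scenes.
frames = [np.zeros((4, 4))] * 5 + [np.full((4, 4), 255.0)] * 5
scenes = split_into_scenes(frames)
```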

Time shift metric in VQMT tool

    Use PSNR to find matching frames and measure the time shift.
    Added to the benchmark by MSU G&M Lab
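A sketch of PSNR-based shift detection: for each frame of the distorted video, search a small window of the reference for the frame with the highest PSNR, then take the median of the per-frame offsets. The window size and the median vote are assumptions for illustration, not the VQMT implementation.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two frames."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def estimate_time_shift(ref, dist, max_shift=5):
    """For each distorted frame, pick the reference frame with the
    highest PSNR within +-max_shift, then return the median offset."""
    offsets = []
    for i, frame in enumerate(dist):
        lo, hi = max(0, i - max_shift), min(len(ref), i + max_shift + 1)
        scores = [psnr(ref[j], frame) for j in range(lo, hi)]
        offsets.append(lo + int(np.argmax(scores)) - i)
    return int(np.median(offsets))

# Usage: the distorted clip starts 2 frames into the reference.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(20, 8, 8)).astype(np.uint8)
dist = ref[2:12]
shift = estimate_time_shift(ref, dist)
```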

Time shift metric in VQMT3D tool

    Use motion vectors and RANSAC to measure the time shift between frames.
    Added to the benchmark by MSU G&M Lab
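The RANSAC part can be illustrated on its own. Assuming each motion-vector block yields a noisy per-block shift estimate, a consensus vote keeps the hypothesis supported by the most blocks; the tolerance and iteration count here are arbitrary, and the per-block estimation itself is out of scope.

```python
import random
import statistics

def ransac_shift(samples, tol=0.5, iters=200, seed=0):
    """Robustly estimate one time shift from noisy per-block estimates:
    draw a random sample as a hypothesis, count inliers within `tol`,
    keep the largest consensus set, and return its mean."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        guess = rng.choice(samples)
        inliers = [s for s in samples if abs(s - guess) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return statistics.mean(best_inliers)

# Usage: 20 blocks agree on a 2-frame shift; 3 blocks are outliers.
samples = [2.0] * 20 + [10.0, -7.0, 5.0]
shift = ransac_shift(samples)
```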

ViSiL

    Use RMAC descriptors to estimate frame-to-frame and video-to-video similarity.
    Added to the benchmark by MSU G&M Lab
    This method was modified by MSU to suit the benchmark suite's tasks: we measure only frame-to-frame similarity, then build the synchronization map by taking the maximum values in the resulting similarity matrix.
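The modification above reduces to a row-wise argmax over a frame-to-frame similarity matrix. A minimal sketch, assuming cosine similarity over L2-normalised per-frame features (the feature extractor itself is out of scope):

```python
import numpy as np

def sync_map(feats_a, feats_b):
    """Build the cosine frame-to-frame similarity matrix, then map each
    frame of A to its most similar frame of B (row-wise argmax)."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                    # shape (len_a, len_b)
    return sim, sim.argmax(axis=1)   # matched index in B per frame of A

# Usage: A is a 4-frame excerpt of B starting at frame 3.
rng = np.random.default_rng(0)
feats_b = rng.normal(size=(10, 16))
feats_a = feats_b[[3, 4, 5, 6]]
sim, mapping = sync_map(feats_a, feats_b)
```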

ViSiL_SCD

    Use the ViSiL architecture to compute frame features. Detect scene changes from these features and split the videos into scenes. Match the scenes by video-to-video similarity, then build the synchronization map by taking the maximum values in the frame-to-frame similarity matrix.
    Added to the benchmark by MSU G&M Lab
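The scene-matching step might be sketched as below, assuming each scene is summarised by the mean of its frame features and scenes are matched by cosine similarity of these summaries. This is an assumption for illustration; ViSiL's actual video-to-video similarity is more elaborate.

```python
import numpy as np

def match_scenes(scenes_a, scenes_b):
    """Match each scene of A to the scene of B with the highest
    video-to-video similarity, here approximated as the cosine
    similarity between mean scene feature vectors."""
    def scene_vec(scene):
        v = np.mean(scene, axis=0)
        return v / np.linalg.norm(v)
    va = np.stack([scene_vec(s) for s in scenes_a])
    vb = np.stack([scene_vec(s) for s in scenes_b])
    return (va @ vb.T).argmax(axis=1)  # index in B per scene of A

# Usage: A's scenes are a reordered subset of B's scenes.
rng = np.random.default_rng(1)
scenes_b = [rng.normal(size=(5, 8)) for _ in range(4)]
scenes_a = [scenes_b[2], scenes_b[0], scenes_b[3]]
matches = match_scenes(scenes_a, scenes_b)
```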
22 Oct 2021
See Also
VQA Dataset
During our work we created a database for video quality assessment with subjective scores
MSU Video Upscalers Benchmark 2022
The most extensive comparison of video super-resolution (VSR) algorithms by subjective quality
MSU Video Deblurring Benchmark 2022
Learn about the best video deblurring methods and choose the best model