This is part of the MSU Video Quality Measurement Tool (MSU VQMT) Online Help for MSU VQMT 14.1

List of available metrics

psnr

PSNR

https://videoprocessing.ai/vqmt/metrics/#psnr

  • Color components: Y, U, V, R, G, B, LUV-L, RGB, YUV
  • Type: reference metric
  • Usage: -metr psnr [over <color component>]
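
As a reference for what the tool computes, PSNR over a single color component can be sketched as follows (a minimal sketch, not VQMT's implementation; the peak value of 255 assumes 8-bit video):

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """PSNR between two same-sized frames of one color component.

    peak is the maximum possible pixel value (255 for 8-bit video).
    Higher values mean the frames are more similar.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak * peak / mse)
```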

identity

Identity

https://videoprocessing.ai/vqmt/metrics/#identity

  • Color components: Y, U, V, R, G, B, LUV-L, RGB, YUV
  • Type: reference metric
  • Usage: -metr identity [over <color component>]

This metric can be configured using the following parameter(s):

  • Mode
    Possible types:
    • binary - 1 if the images are identical, 0 otherwise
    • pixels - proportion of identical pixels; 1 means all pixels are the same (0..1)
      Default value: binary
      Usage: -set "mode=<value>", where <value> can be:
    • binary
    • pixels
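
The two modes can be illustrated with a short sketch (illustrative code, not VQMT's implementation):

```python
import numpy as np

def identity(ref, dist, mode="binary"):
    """Identity metric between two same-sized frames.

    binary: 1.0 if every pixel matches, 0.0 otherwise.
    pixels: proportion of matching pixels, in 0..1.
    """
    same = np.asarray(ref) == np.asarray(dist)
    if mode == "binary":
        return 1.0 if same.all() else 0.0
    if mode == "pixels":
        return float(same.mean())
    raise ValueError("mode must be 'binary' or 'pixels'")
```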

ssim

SSIM

https://videoprocessing.ai/vqmt/metrics/#ssim

  • Color components: Y, U, V, R, G, B, LUV-L, RGB, YUV
  • Type: reference metric
  • Usage: -metr ssim [over <color component>]

This metric can be configured using the following parameter(s):

  • Combining mode
    The mode of combining values for components of image:
    • default - for YUV images, a custom weight is used for the Y component and equal weights for U and V; for other color models, all components get equal weights
    • ffmpeg - weight each component by its area; weights depend on the image's subsampling mode
      Default value: default
      Usage: -set "combining_mode=<value>", where <value> can be:
    • default
    • ffmpeg
  • Y weight
    If combining mode is default, this is the weight of the Y component of YUV. The weights of U, V and other components are assumed to be 1.
    Default value: 4.0
    Usage: -set "y_weight=<value>", where <value> can be:
    • value in range 0..999999
  • Usage: -metr ssim_fast [over <color component>]
  • Usage: -metr ssim_precise [over <color component>]
  • Usage: -metr ssim_gpu_id [over <color component>]
  • Color components: Y, U, V
  • Usage: -metr ssim_cuda [over <color component>]
  • Color components: Y, U, V, R, G, B, LUV-L, RGB, YUV
  • Usage: -metr ssim [over <color component>] -dev OpenCL0
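
The 'default' combining mode described above reduces to a weighted mean of the per-component scores. A sketch, assuming the Y weight enters as a simple linear weight (the exact formula is not spelled out in this help):

```python
def combine_default(y, u, v, y_weight=4.0):
    """Combine per-component metric values for a YUV image:
    custom weight for Y, weight 1 for each of U and V."""
    return (y_weight * y + u + v) / (y_weight + 2.0)
```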

msssim

MS-SSIM

https://videoprocessing.ai/vqmt/metrics/#msssim

  • Color components: Y, U, V, R, G, B, LUV-L, RGB, YUV
  • Type: reference metric
  • Usage: -metr msssim [over <color component>]

This metric can be configured using the following parameter(s):

  • Combining mode
    The mode of combining values for components of image:
    • default - for YUV images, a custom weight is used for the Y component and equal weights for U and V; for other color models, all components get equal weights
    • ffmpeg - weight each component by its area; weights depend on the image's subsampling mode
      Default value: default
      Usage: -set "combining_mode=<value>", where <value> can be:
    • default
    • ffmpeg
  • Y weight
    If combining mode is default, this is the weight of the Y component of YUV. The weights of U, V and other components are assumed to be 1.
    Default value: 4.0
    Usage: -set "y_weight=<value>", where <value> can be:
    • value in range 0..999999
  • Usage: -metr msssim_fast [over <color component>]
  • Usage: -metr msssim_precise [over <color component>]
  • Color components: Y, U, V
  • Usage: -metr msssim_cuda [over <color component>]
  • Color components: Y, U, V, R, G, B, LUV-L, RGB, YUV
  • Usage: -metr msssim [over <color component>] -dev OpenCL0

3ssim

3SSIM

https://videoprocessing.ai/vqmt/metrics/#3ssim

  • Color components: Y, U, V
  • Type: reference metric
  • Usage: -metr 3ssim [over <color component>]
  • Usage: -metr 3ssim_cuda [over <color component>]
  • Usage: -metr 3ssim [over <color component>] -dev OpenCL0

vqm

VQM

https://videoprocessing.ai/vqmt/metrics/#vqm

  • Color components: Y
  • Type: reference metric
  • Usage: -metr vqm [over <color component>]

blocking

Blocking

https://videoprocessing.ai/vqmt/metrics/#blockingmeasure

  • Color components: Y
  • Type: no-reference metric
  • Usage: -metr blocking [over <color component>]

blurring

Blurring

https://videoprocessing.ai/vqmt/metrics/#ybluringmeasure

  • Color components: Y, R, G, B
  • Type: no-reference metric
  • Usage: -metr blurring [over <color component>]
  • Color components: Y, U, V, R, G, B
  • Usage: -metr blurring_delta [over <color component>]

delta

Delta

https://videoprocessing.ai/vqmt/metrics/#delta

  • Color components: Y, U, V, R, G, B, LUV-L
  • Type: reference metric
  • Usage: -metr delta [over <color component>]

msad

MSAD

https://videoprocessing.ai/vqmt/metrics/#msad

  • Color components: Y, U, V, R, G, B, LUV-L
  • Type: reference metric
  • Usage: -metr msad [over <color component>]

mse

MSE

https://videoprocessing.ai/vqmt/metrics/#mse

  • Color components: Y, U, V, R, G, B, LUV-L
  • Type: reference metric
  • Usage: -metr mse [over <color component>]

time-shift

Time shift

https://videoprocessing.ai/vqmt/metrics/#shift

  • Color components: Y
  • Type: reference metric
  • Usage: -metr time-shift [over <color component>]

This metric can be configured using the following parameter(s):

  • Max. shift
    Maximum shift that can be detected. Note: large values lead to high memory consumption
    Default value: 5
    Usage: -set "max-shift=<value>", where <value> can be:
    • value in range 0..25
  • Direction
    Detect only positive shifts (frame duplicates), negative shifts (frame drops), or both
    Default value: both
    Usage: -set "direction=<value>", where <value> can be:
    • positive
    • negative
    • both
  • Destination metric
    This metric will be used to measure similarity between frames
    Default value: psnr
    Usage: -set "metric=<value>", where <value> can be:
    • psnr
    • ssim
  • Show metric values
    The metric will output not only the shift, but also the destination metric values
    Default value: false
    Usage: -set "show-metric=<value>", where <value> can be:
    • true
    • false
  • Threshold
    A shift is considered only if the metric for the neighbour frame is better than this threshold multiplied by the metric for the similar frame
    Default value: 0.995
    Usage: -set "threshold=<value>", where <value> can be:
    • any floating point number
  • Smoothing
    Smooths metric values over time. If equal to n, smoothing covers the interval frame-n..frame+n
    Default value: 1
    Usage: -set "smoothing=<value>", where <value> can be:
    • value in range 0..25
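
The threshold test above can be expressed as a one-line comparison (illustrative sketch; a higher metric value is assumed to mean more similar frames):

```python
def shift_candidate(neighbour_score, same_score, threshold=0.995):
    """A shift is considered only when the metric for the neighbour frame
    beats threshold * the metric for the same-numbered frame."""
    return neighbour_score > threshold * same_score
```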

si

SI / TI

https://videoprocessing.ai/vqmt/metrics/#si

  • Color components: Y
  • Type: no-reference metric
  • Usage: -metr si [over <color component>] -dev CPU

ti

SI / TI

https://videoprocessing.ai/vqmt/metrics/#ti

  • Color components: Y
  • Type: no-reference metric
  • Usage: -metr ti [over <color component>] -dev CPU

niqe

NIQE

https://videoprocessing.ai/vqmt/metrics/#niqe

  • Color components: Y
  • Type: no-reference metric
  • Usage: -metr niqe [over <color component>]

This metric can be configured using the following parameter(s):

  • Mean threshold
    Metric values greater than this value are skipped during mean calculation; 0 disables skipping
    Default value: 15
    Usage: -set "mean_thresh=<value>", where <value> can be:
    • any floating point number
  • Threshold smoothing
    Metric values greater than ‘Mean threshold’ + ‘Threshold smoothing’ are skipped; values less than ‘Mean threshold’ - ‘Threshold smoothing’ are taken with weight 1; intermediate values are taken with an intermediate weight
    Default value: 5
    Usage: -set "mean_thresh_smoothing=<value>", where <value> can be:
    • any floating point number
  • Type of normalization
    • fast - the fastest algorithm, low precision
    • native - as in the native NIQE implementation; the slowest one
    • precise - the most precise algorithm
      Default value: native
      Usage: -set "norm_alg=<value>", where <value> can be:
    • fast
    • native
    • precise
  • Usage: -metr niqe [over <color component>] -dev OpenCL0
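
The 'Mean threshold' and 'Threshold smoothing' parameters describe a soft cutoff when averaging per-frame values. A sketch of such a weighting (the linear ramp between the two bounds is an assumption):

```python
def mean_weight(value, mean_thresh=15.0, smoothing=5.0):
    """Weight of one per-frame NIQE value in the mean:
    1 below mean_thresh - smoothing, 0 above mean_thresh + smoothing,
    and a linear ramp in between."""
    lo = mean_thresh - smoothing
    hi = mean_thresh + smoothing
    if value <= lo:
        return 1.0
    if value >= hi:
        return 0.0
    return (hi - value) / (hi - lo)
```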

vmaf

Netflix VMAF

  • Color components: Y
  • Type: reference metric
  • Usage: -metr vmaf [over <color component>]

This metric can be configured using the following parameter(s):

  • Model preset
    Choose a built-in model, or ‘custom’ to load a model from file. Built-in models:
    • default - VMAF default behaviour:
      • VMAF v0.6.1 for running without confidence interval and per-model values
      • VMAF v0.6.1 4k for previous case if applying 4k model
      • VMAF v0.6.3 for running with confidence interval or per-model values
      • VMAF v0.6.2 4k for previous case if applying 4k model (NOTE: no v0.6.3 for 4k)
    • vmaf_v061 - Netflix model VMAF v0.6.1 (2k or 4k)
    • vmaf_v061_neg - Netflix model VMAF v0.6.1 (only 2k), no enhancement gain
    • vmaf_v062 - Netflix model VMAF v0.6.2 (2k or 4k), supports confidence interval
    • vmaf_v063 - Netflix model VMAF v0.6.3 (only 2k), supports confidence interval
    • all_models - vmaf_v061..vmaf_v063 and vmaf_v061_neg computed simultaneously
    • basic_features - output only the basic VMAF features; no model is applied
    • standard_features - the features used in VMAF v0.6.1, plus the VMAF score (2k or 4k)
    • all_features - output all VMAF features; no model is applied
    • all_features_with_neg - output all VMAF features plus the no-enhancement-gain features; no model is applied
    • all - all features and the following models:
      • VMAF v0.6.1 (2k or 4k)
      • VMAF v0.6.1 no enhancement gain (neg, 2k only)
      • VMAF v0.6.2 (2k or 4k)
      • VMAF v0.6.3
        Default value: default
        Usage: -set "model_preset=<value>", where <value> can be:
    • default
    • vmaf_v061
    • vmaf_v061_neg
    • vmaf_v062
    • vmaf_v063
    • vmaf_v060
    • all_models
    • basic_features
    • standard_features
    • standard_features_neg
    • all_features
    • all_features_with_neg
    • all
    • custom
  • Custom model (*.pkl or JSON file)
    You can specify the path to a *.pkl or JSON file here (or multiple ;-separated *.pkl or JSON files). The model file should be placed near the PKL file.
    NOTE: this parameter only takes effect if the preset is set to ‘custom’
    Default value: ``
    Usage: -set "custom_model_files=<value>", where <value> can be:
    • any string
  • 4k
    4k model selection policy:
    • auto - select the 4k model if a suitable one exists and the input video is 4k
    • forced_2k - always use the 2k model
    • forced_4k - use 4k if it exists: VMAF v0.6.1-2
      NOTE: this parameter does not affect custom models
      Default value: auto
      Usage: -set "4k=<value>", where <value> can be:
    • auto
    • forced_2k
    • forced_4k
  • Confidence interval
    turn on additional VMAF features: 95%-confidence interval output and other statistical information
    Default value: false
    Usage: -set "confidence_interval=<value>", where <value> can be:
    • true
    • false
  • Confidence interval size
    the length of the confidence interval if turned on, in percent
    Default value: 95
    Usage: -set "ci_size=<value>", where <value> can be:
    • any floating point number
  • Per-model values
    output values for all bootstrap models if confidence interval is on
    Default value: false
    Usage: -set "permodel_values=<value>", where <value> can be:
    • true
    • false
  • Bootstrap type
    the type of bootstrapping used when the confidence interval is on
    Default value: common
    Usage: -set "bootstrap_type=<value>", where <value> can be:
    • common
    • residue
  • Visualize algorithm (if on)
    if visualization is turned on, you can select the feature to visualize. It’s impossible to calculate the distribution of the real VMAF value, so you can only visualize one of its underlying features
    Default value: default
    Usage: -set "visualize_alg=<value>", where <value> can be:
    • default
    • adm
    • ansnr
    • motion
    • vif
  • Use phone model
    turn on postprocessing of the metric value that produces more accurate results for handheld devices. Select ‘both’ to see results both with and without postprocessing
    Default value: no
    Usage: -set "phone_model=<value>", where <value> can be:
    • no
    • yes
    • both
  • Disable clipping values
    turn off clipping of the value to the range set by the model (e.g. 0..100)
    Default value: false
    Usage: -set "disable_clip=<value>", where <value> can be:
    • true
    • false
  • Model internal datatype (integer or float)
    the internal datatype used for model computations (integer or float)
    Default value: float
    Usage: -set "datatype=<value>", where <value> can be:
    • float
    • integer
  • Usage: -metr vmaf [over <color component>] -dev OpenCL0
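
The interaction of 'Disable clipping values' with the model's range can be sketched as follows (illustrative code; 0..100 is the example range given above, not a property of every model):

```python
def clip_score(score, lo=0.0, hi=100.0, disable_clip=False):
    """Clip a VMAF model score to the range set by the model,
    unless clipping has been disabled (disable_clip=true)."""
    if disable_clip:
        return score
    return max(lo, min(hi, score))
```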

vmaf_legacy

Netflix VMAF legacy

  • Color components: Y
  • Type: reference metric
  • Usage: -metr vmaf_legacy [over <color component>]

This metric can be configured using the following parameter(s):

  • Model preset
    Choose a built-in model, or ‘custom’ to load a model from file. Built-in models:
    • default - VMAF default behaviour:
      • VMAF v0.6.1 for running without confidence interval and per-model values
      • VMAF v0.6.1 4k for previous case if applying 4k model
      • VMAF v0.6.3 for running with confidence interval or per-model values
      • VMAF v0.6.2 4k for previous case if applying 4k model (NOTE: no v0.6.3 for 4k)
    • vmaf_v061 - Netflix model VMAF v0.6.1 (2k or 4k)
    • vmaf_v061_neg - Netflix model VMAF v0.6.1 (only 2k), no enhancement gain
    • vmaf_v062 - Netflix model VMAF v0.6.2 (2k or 4k), supports confidence interval
    • vmaf_v063 - Netflix model VMAF v0.6.3 (only 2k), supports confidence interval
    • all_models - vmaf_v061..vmaf_v063 and vmaf_v061_neg computed simultaneously
    • basic_features - output only the basic VMAF features; no model is applied
    • standard_features - the features used in VMAF v0.6.1, plus the VMAF score (2k or 4k)
    • all_features - output all VMAF features; no model is applied
    • all_features_with_neg - output all VMAF features plus the no-enhancement-gain features; no model is applied
    • all - all features and the following models:
      • VMAF v0.6.1 (2k or 4k)
      • VMAF v0.6.1 no enhancement gain (neg, 2k only)
      • VMAF v0.6.2 (2k or 4k)
      • VMAF v0.6.3
        Default value: default
        Usage: -set "model_preset=<value>", where <value> can be:
    • default
    • vmaf_v061
    • vmaf_v062
    • vmaf_v063
    • vmaf_v060
    • all_models
    • basic_features
    • standard_features
    • all_features
    • all
    • custom
  • Custom model (*.pkl or JSON file)
    You can specify the path to a *.pkl or JSON file here (or multiple ;-separated *.pkl or JSON files). The model file should be placed near the PKL file.
    NOTE: this parameter only takes effect if the preset is set to ‘custom’
    Default value: ``
    Usage: -set "custom_model_files=<value>", where <value> can be:
    • any string
  • 4k
    4k model selection policy:
    • auto - select the 4k model if a suitable one exists and the input video is 4k
    • forced_2k - always use the 2k model
    • forced_4k - use 4k if it exists: VMAF v0.6.1-2
      NOTE: this parameter does not affect custom models
      Default value: auto
      Usage: -set "4k=<value>", where <value> can be:
    • auto
    • forced_2k
    • forced_4k
  • Confidence interval
    turn on additional VMAF features: 95%-confidence interval output and other statistical information
    Default value: false
    Usage: -set "confidence_interval=<value>", where <value> can be:
    • true
    • false
  • Confidence interval size
    the length of the confidence interval if turned on, in percent
    Default value: 95
    Usage: -set "ci_size=<value>", where <value> can be:
    • any floating point number
  • Per-model values
    output values for all bootstrap models if confidence interval is on
    Default value: false
    Usage: -set "permodel_values=<value>", where <value> can be:
    • true
    • false
  • Bootstrap type
    the type of bootstrapping used when the confidence interval is on
    Default value: common
    Usage: -set "bootstrap_type=<value>", where <value> can be:
    • common
    • residue
  • Visualize algorithm (if on)
    if visualization is turned on, you can select the feature to visualize. It’s impossible to calculate the distribution of the real VMAF value, so you can only visualize one of its underlying features
    Default value: default
    Usage: -set "visualize_alg=<value>", where <value> can be:
    • default
    • adm
    • ansnr
    • motion
    • vif
  • Use phone model
    turn on postprocessing of the metric value that produces more accurate results for handheld devices. Select ‘both’ to see results both with and without postprocessing
    Default value: no
    Usage: -set "phone_model=<value>", where <value> can be:
    • no
    • yes
    • both
  • Disable clipping values
    turn off clipping of the value to the range set by the model (e.g. 0..100)
    Default value: false
    Usage: -set "disable_clip=<value>", where <value> can be:
    • true
    • false
  • Model internal datatype (integer or float)
    the internal datatype used for model computations (integer or float)
    Default value: float
    Usage: -set "datatype=<value>", where <value> can be:
    • float
    • integer
  • Usage: -metr vmaf_legacy [over <color component>] -dev OpenCL0

hdr-psnr

PSNR

https://videoprocessing.ai/vqmt/metrics/#psnr

  • Color components: PU-L, PU encoding BT.709 L, PU encoding BT.2020 L, ICtCp BT.2100 PQ ICtCp
  • Type: reference metric
  • Usage: -metr hdr-psnr [over <color component>]

This metric can be configured using the following parameter(s):

  • Peak luminance (nits)
    PSNR peak luminance value; 0 means use the display luminance
    Default value: 0
    Usage: -set "peak_lum=<value>", where <value> can be:
    • value in range 0..10000
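
How peak_lum enters the PSNR formula can be sketched as follows (a sketch under the assumption that peak_lum=0 substitutes the display's peak luminance; the function and variable names are illustrative, not part of VQMT):

```python
import math

def hdr_psnr_from_mse(mse, peak_lum=0.0, display_peak_lum=100.0):
    """PSNR with an explicit peak luminance in nits.

    peak_lum = 0 means: fall back to the display's peak luminance.
    """
    peak = peak_lum if peak_lum > 0.0 else display_peak_lum
    return 10.0 * math.log10(peak * peak / mse)
```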

hdr-ssim

HDR SSIM

https://videoprocessing.ai/vqmt/metrics/#ssim

  • Color components: PU-L, PU encoding BT.709 L, PU encoding BT.2020 L
  • Type: reference metric
  • Usage: -metr hdr-ssim [over <color component>] -dev CPU

This metric can be configured using the following parameter(s):

  • Combining mode
    The mode of combining values for components of image:
    • default - for YUV images, a custom weight is used for the Y component and equal weights for U and V; for other color models, all components get equal weights
    • ffmpeg - weight each component by its area; weights depend on the image's subsampling mode
      Default value: default
      Usage: -set "combining_mode=<value>", where <value> can be:
    • default
    • ffmpeg
  • Y weight
    If combining mode is default, this is the weight of the Y component of YUV. The weights of U, V and other components are assumed to be 1.
    Default value: 4.0
    Usage: -set "y_weight=<value>", where <value> can be:
    • value in range 0..999999
  • Usage: -metr hdr-ssim [over <color component>] -dev OpenCL0

hdr-msssim

HDR MS-SSIM

https://videoprocessing.ai/vqmt/metrics/#msssim

  • Color components: PU-L, PU encoding BT.709 L, PU encoding BT.2020 L
  • Type: reference metric
  • Usage: -metr hdr-msssim [over <color component>] -dev CPU

This metric can be configured using the following parameter(s):

  • Combining mode
    The mode of combining values for components of image:
    • default - for YUV images, a custom weight is used for the Y component and equal weights for U and V; for other color models, all components get equal weights
    • ffmpeg - weight each component by its area; weights depend on the image's subsampling mode
      Default value: default
      Usage: -set "combining_mode=<value>", where <value> can be:
    • default
    • ffmpeg
  • Y weight
    If combining mode is default, this is the weight of the Y component of YUV. The weights of U, V and other components are assumed to be 1.
    Default value: 4.0
    Usage: -set "y_weight=<value>", where <value> can be:
    • value in range 0..999999
  • Usage: -metr hdr-msssim [over <color component>] -dev OpenCL0

hdr-vqm

HDRVQM

https://sites.google.com/site/narwariam/home/research/hdr-vqm

  • Color components: PU-L, PU encoding BT.709 L, PU encoding BT.2020 L
  • Type: reference metric
  • Usage: -metr hdr-vqm [over <color component>]

This metric can be configured using the following parameter(s):

  • Fixation frames
    Number of frames for calculation of spatio-temporal tubes; 0 - auto (0.6 seconds, based on the fps of the first video)
    Default value: 0
    Usage: -set "fixation_frames=<value>", where <value> can be:
    • value in range 0..50
  • FFT cols
    The video will be scaled to this number of columns (bicubic); 0 - auto (closest to the video size)
    Default value: 1024
    Usage: -set "fft_cols=<value>", where <value> can be:
    • value in range 64..32768
  • FFT rows
    The video will be scaled to this number of rows (bicubic); 0 - auto (closest to the video size)
    Default value: 512
    Usage: -set "fft_rows=<value>", where <value> can be:
    • value in range 64..32768
  • Pooling Percentage
    Pooling percentage for long-term pooling
    Default value: 0.3
    Usage: -set "pooling_precent=<value>", where <value> can be:
    • value in range 0.1..1.0
  • Display: cols
    Columns of the target display
    Default value: 1920
    Usage: -set "display_cols=<value>", where <value> can be:
    • value in range 500..10000
  • Display: rows
    Rows of the target display
    Default value: 1080
    Usage: -set "display_rows=<value>", where <value> can be:
    • value in range 500..10000
  • Display: area
    Area of the target display
    Default value: 6100
    Usage: -set "display_area=<value>", where <value> can be:
    • value in range 500..100000
  • Display: distance
    Viewer’s display distance, cm
    Default value: 178
    Usage: -set "display_distance=<value>", where <value> can be:
    • value in range 10..100
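
The auto setting for fixation frames can be sketched as follows (the rounding is an assumption; the help only states 0.6 seconds based on the first video's fps):

```python
def effective_fixation_frames(fixation_frames, fps):
    """fixation_frames = 0 means auto: roughly 0.6 seconds of frames
    at the first video's frame rate."""
    if fixation_frames == 0:
        return max(1, round(0.6 * fps))
    return fixation_frames
```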

delta-ictcp

Delta

https://videoprocessing.ai/vqmt/metrics/#delta

  • Color components: ICtCp BT.2100 PQ ICtCp
  • Type: reference metric
  • Usage: -metr delta-ictcp [over <color component>]

This page was automatically generated by MSU VQMT 14.1 r12839 on 2022-06-24. For any questions or suggestions, please mail us: video-measure@compression.ru
