Disparity and depth map generation.
Monoscopic video to 3D
- Project: Dr. Dmitriy Vatolin
- Implementation, algorithm: Sergey Grishin, Karen Simonyan, Alexander Parshin, Konstantin Strelnikov
This technology provides high-quality depth map estimation for video. The algorithm does not require stereo content: only information from a single-view video is used. It estimates motion in the video sequence and generates a disparity map and a depth map. A similar technology can be used to convert stereo video to multiview 3D video.
Description
The algorithm uses spatial and temporal information from consecutive frames of monoscopic video to estimate a disparity map or a depth map; an illustrative sketch of this idea follows below.

[Figure: previous frame and current frame]

[Figure: depth map vs. current frame]
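The page does not publish the algorithm itself, so the following is only a minimal sketch of the general idea: dense motion between the previous and current frames is estimated, and its magnitude is used as a rough depth proxy. The OpenCV Farneback flow call and all parameter values below are assumptions for illustration, not the lab's implementation.

```python
# Minimal sketch (assumed approach): motion magnitude between two consecutive
# frames used as a crude depth proxy. Not the MSU algorithm.
import cv2
import numpy as np

def motion_based_depth(prev_frame: np.ndarray, cur_frame: np.ndarray) -> np.ndarray:
    """Return an 8-bit depth-like map where stronger motion maps to brighter values."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow, one (dx, dy) vector per pixel.
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Motion magnitude as a disparity proxy, normalized to [0, 255].
    magnitude = np.linalg.norm(flow, axis=2)
    depth = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX)
    return depth.astype(np.uint8)
```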
A stereo pair or multiview 3D video can also be generated from the depth map (see the rendering sketch after the figure):

[Figure: generated stereo pair]
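As a rough illustration of how a second view can be synthesized from a depth map, the sketch below shifts pixels horizontally in proportion to depth, a simple form of depth-image-based rendering. The disparity range and the absence of hole filling are simplifying assumptions, not the converter's actual method.

```python
# Minimal sketch (assumed approach): warp the left view into an approximate
# right view using a per-pixel horizontal shift derived from depth.
import numpy as np

def render_right_view(left: np.ndarray, depth: np.ndarray, max_disparity: int = 16) -> np.ndarray:
    """left: HxWx3 frame; depth: HxW map in [0, 255], closer = brighter."""
    h, w = depth.shape
    right = np.zeros_like(left)
    # Nearer (brighter) pixels receive a larger horizontal shift.
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(np.int32)
    for y in range(h):
        for x in range(w):
            new_x = x - disparity[y, x]
            if 0 <= new_x < w:
                right[y, new_x] = left[y, x]
    return right  # zero-valued holes would normally be inpainted

# Example: pack both views side by side to form a stereo pair.
# stereo_pair = np.hstack([left, render_right_view(left, depth)])
```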
Higher-quality multiview 3D video and depth/disparity maps can be obtained when stereoscopic material is used as the source for conversion.
More examples
[Figure: previous frame and current frame]

[Figure: generated depth map vs. current frame; generated stereo pair]
Contacts
For questions and proposals, please contact us.
E-mail: 3dvideo@compression.ru