Detection of 3D movie scenes shot on converged axes
- Author: Kirill Malyshev
- Supervisor: dr. Dmitriy Vatolin
When shooting a close-up with a stereo rig whose interaxial distance (the distance between the camera axes) exceeds the human interocular distance, operators often have to toe the cameras in so that their optical axes converge at a point — in other words, to shoot on converged axes instead of parallel ones. This method is still usable, but it demands considerable skill. A detailed discussion is beyond the scope of this article; in short, shooting close-ups on converged axes introduces vertical parallax and rather unpleasant distortions of object shapes in the image.
A picture that would be perceived as normal in real life (the brain is able to compensate for such distortions) looks uncomfortable on the movie screen. This effect can be significantly reduced, but only with an understanding of the particular scene geometry and of the problems that will have to be fixed after shooting. Content shot on parallel axes also has to be checked at the post-production stage, but that is simpler than correcting footage shot on converged axes.
Keystone distortion arises in stereoscopic video when the optical axes of the cameras are not parallel during shooting. Comparing the left and right views of a stereoscopic image captured this way reveals the distinctive signature of keystone distortion: when moving from the left view to the right, the left side of the image is compressed toward the center while the right side expands away from it.
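The displacement field behind this effect follows directly from the pinhole camera model: a pure rotation of one camera about the vertical axis induces a homography that is independent of scene depth. The sketch below (an illustration, not part of the article's method) rotates one camera by the convergence angle and reports how a point in normalized image coordinates shifts; the vertical shift has opposite signs on the left and right halves of the frame, which is exactly the vertical parallax described above.

```python
import math

def toed_in_shift(u, v, theta):
    """Image shift of a scene point when a camera is rotated (toed in)
    by `theta` radians about the vertical axis. A pure rotation about the
    camera center warps the image by a homography, regardless of depth.
    (u, v) are normalized image coordinates, origin at the principal point."""
    denom = math.cos(theta) - u * math.sin(theta)
    x_new = (u * math.cos(theta) + math.sin(theta)) / denom
    y_new = v / denom
    return x_new - u, y_new - v

# Top-left vs. top-right corner under a ~2.9 degree toe-in:
dx_l, dy_l = toed_in_shift(-0.5, -0.5, 0.05)
dx_r, dy_r = toed_in_shift(0.5, -0.5, 0.05)
# The vertical shift changes sign across the frame (vertical parallax),
# and the horizontal shift grows from left to right (trapezoidal warp).
```

This is why a disparity map of such a scene shows a characteristic trapezoidal pattern that the detector can learn.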
A neural network approach is used to detect converged axes.
A synthetic dataset for training the model was created using the GTA V computer game. A custom mod automates the movement of a playable character and captures stereoscopic frames at a given frequency. To diversify the data, the camera convergence angle, the camera direction, the weather, and the time of day were chosen randomly for each frame.
To make the views more realistic, most of the frames were additionally modified with noise and/or blur.
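A minimal sketch of such a data-generation step is shown below. The parameter ranges, weather list, and augmentation settings are hypothetical — the article does not state the exact values used:

```python
import numpy as np

def sample_scene_params(rng):
    """Randomize per-frame capture parameters (hypothetical ranges)."""
    return {
        "convergence_angle_deg": rng.uniform(0.0, 3.0),
        "camera_heading_deg": rng.uniform(0.0, 360.0),
        "weather": rng.choice(["clear", "rain", "fog", "overcast"]),
        "time_of_day": int(rng.integers(0, 24)),
    }

def augment(frame, rng, noise_sigma=0.02, blur_prob=0.5):
    """Add Gaussian noise and, with some probability, a 3x3 box blur.
    `frame` is a 2-D array with values in [0, 1]."""
    out = frame + rng.normal(0.0, noise_sigma, frame.shape)
    if rng.random() < blur_prob:
        k = np.ones((3, 3)) / 9.0
        pad = np.pad(out, 1, mode="edge")
        h, w = frame.shape
        out = sum(pad[i:i + h, j:j + w] * k[i, j]
                  for i in range(3) for j in range(3))
    return np.clip(out, 0.0, 1.0)
```

Here the sampled convergence angle doubles as the regression target for each synthetic frame.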
The model is a convolutional neural network. It takes as input a disparity map computed for the left view together with its confidence map, and returns a single number: the angle between the converged optical axes, in degrees. MSE was used as the loss function.
A total of 12 full-length stereoscopic movies were tested:
- Avatar 3D (2009)
- Cirque du Soleil: Worlds Away (2012)
- The Darkest Hour (2011)
- Dolphin Tale (2011)
- Flying Swords of Dragon Gate (2011)
- Drive Angry (2011)
- The Final Destination (2009)
- The Three Musketeers (2011)
- Step Up 3D (2010)
- A Very Harold & Kumar 3D Christmas (2011)
- 3D Sex & Zen: Extreme Ecstasy (2011)
- Pirates of the Caribbean: On Stranger Tides (2011)
In the examples below, you can see visualizations of disparity maps with keystone distortions. Red indicates the disparity vectors in the left view, blue — in the right one.
Step Up 3D
- 54 distorted scenes were found in Drive Angry and 23 in Pirates of the Caribbean: On Stranger Tides. For all tested movies, the AUC reaches 0.937.
- The average speed of the model on the GeForce GTX 1050 Ti is 20 frames per second.
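For reference, the AUC figure above is the area under the ROC curve, which can be computed over per-scene detection scores with the rank-sum (Mann-Whitney U) formulation. A minimal sketch on hypothetical data, not the article's:

```python
def auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive
    (distorted) scene scores higher than a randomly chosen negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    assert pos and neg, "need at least one example of each class"
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A value of 0.937 therefore means the model ranks a distorted scene above an undistorted one in about 94% of such pairs.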