Depth Estimation of Monocular VR Scenes based on Improved Attention Combined with Deep Neural Network Models
Abstract
To address the boundary-blurring problem of existing unsupervised monocular depth estimation methods, a network architecture based on a dual attention module is proposed. By making effective use of long-range contextual information in image features, the architecture overcomes boundary blurring in depth estimation. The framework comprises a depth estimation network and a pose estimation network that jointly estimate scene depth and camera pose transformations, and the whole framework is trained in an unsupervised manner based on view synthesis. The depth estimation network incorporates a dual attention module, consisting of a position attention module and a channel attention module, which models dependencies between distant spatial locations and between different feature maps, allowing depth to be estimated more precisely. Experimental results on the KITTI and Make3D datasets demonstrate that the method effectively alleviates boundary ambiguity in depth estimation and improves the accuracy of monocular depth estimation.
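The abstract does not give the module's internals, but dual attention designs of this kind typically pair a position branch (every spatial location attends to every other) with a channel branch (every feature map attends to every other), fused by summation. The sketch below is a minimal NumPy illustration of that pattern under these assumptions; the learned query/key/value projections and learnable scaling factors a full implementation would use are omitted, and the function names are illustrative, not the paper's.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(x):
    """Position attention: each spatial location aggregates features
    from all other locations, weighted by pairwise affinity.
    x: feature map of shape (C, H, W)."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)        # (C, N) with N = H*W
    energy = flat.T @ flat            # (N, N) pairwise position affinities
    attn = softmax(energy, axis=-1)   # each row sums to 1
    out = flat @ attn.T               # gather context from all positions
    return out.reshape(c, h, w) + x   # residual connection

def channel_attention(x):
    """Channel attention: models dependencies between feature maps.
    x: feature map of shape (C, H, W)."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)        # (C, N)
    energy = flat @ flat.T            # (C, C) channel affinities
    attn = softmax(energy, axis=-1)
    out = attn @ flat                 # reweight channels by affinity
    return out.reshape(c, h, w) + x   # residual connection

def dual_attention(x):
    """Fuse the two branches by element-wise summation."""
    return position_attention(x) + channel_attention(x)

feat = np.random.rand(8, 4, 4).astype(np.float32)  # toy 8-channel feature map
fused = dual_attention(feat)
print(fused.shape)  # (8, 4, 4) -- same shape as the input
```

Because both branches end with a residual connection, the module can be dropped into an encoder-decoder depth network without changing feature-map shapes.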
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.