Monocular-based 3D depth estimation and SLAM integration
In many practical scenarios, autonomous vehicles must navigate through unfamiliar areas to reach their destinations. This navigation is supported by two-dimensional (2D) and three-dimensional (3D) maps. Simultaneous localization and mapping (SLAM) systems enable autonomous vehicles to map their surroundings while in motion. Traditionally, SLAM systems rely on physical range sensors such as LiDAR to measure distances. However, these sensors are costly and consume significant power, particularly on drones.
Consequently, using monocular cameras to estimate the depth of surrounding objects has attracted considerable interest from both academia and industry. In this study, we integrate a recently developed deep learning monocular depth estimation model into the ORB-SLAM2 system. The integrated system was tested by estimating trajectories and constructing 3D point cloud maps of unknown areas. In addition, preliminary experiments were conducted with a live drone. These experiments showed that the proposed system produces more accurate point cloud maps, reducing trajectory errors by 34-54% compared to contemporary approaches.
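To make the integration concrete, the sketch below shows one way a monocular depth network's output can be packaged for ORB-SLAM2's RGB-D mode. It is a minimal illustration, not the exact system described above: it assumes a MiDaS-style relative-depth model loaded via torch.hub, TUM-format 16-bit depth PNGs with a depth factor of 5000 (the convention ORB-SLAM2's RGB-D examples use), and a hypothetical per-sequence scale/shift calibration to turn relative inverse depth into metric depth.

```python
# Sketch: feed monocular depth predictions into ORB-SLAM2's RGB-D mode.
# Assumptions (not taken from the article): a MiDaS-style relative-depth
# network, TUM-format 16-bit depth PNGs with DepthMapFactor = 5000, and a
# known scale/shift that maps relative inverse depth to metric depth.
import cv2
import numpy as np
import torch

# Load a lightweight MiDaS model and its matching input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

def predict_metric_depth(bgr, scale=1.0, shift=0.0):
    """Predict per-pixel depth in metres for one camera frame.

    MiDaS outputs *relative inverse* depth, so `scale` and `shift` must be
    calibrated per sequence (e.g. against a few known distances) before the
    result is metrically meaningful -- a hypothetical calibration step here.
    """
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    batch = transform(rgb)
    with torch.no_grad():
        inv_depth = midas(batch)
        # Resize the prediction back to the original image resolution.
        inv_depth = torch.nn.functional.interpolate(
            inv_depth.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().cpu().numpy()
    return 1.0 / np.maximum(scale * inv_depth + shift, 1e-6)

def save_tum_depth(depth_m, path, factor=5000):
    """Write depth as the 16-bit PNG that ORB-SLAM2's RGB-D mode reads."""
    depth_png = np.clip(depth_m * factor, 0, 65535).astype(np.uint16)
    cv2.imwrite(path, depth_png)

frame = cv2.imread("frame.png")
save_tum_depth(predict_metric_depth(frame), "frame_depth.png")
# ORB-SLAM2 can then run in RGB-D mode on (frame.png, frame_depth.png) pairs.
```

The design choice this illustrates is that no SLAM internals need to change: by synthesizing depth images from a single camera, the monocular pipeline can reuse an unmodified RGB-D SLAM back end, with the scale ambiguity of monocular depth remaining the main practical obstacle.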