Towards accurate earth-referenced LiDAR mapping with an autonomous mobile robot.
There is enormous potential in the geospatial industry for mobile robotics to automate terrain mapping. Such robots can reduce operator-induced errors and perform tasks autonomously without supervision. This thesis seeks to quantify the accuracy and speed of an autonomous mapping robot by comparing it to conventional survey methods.
To do this, a proof-of-concept robot was developed from a “Jackal” Unmanned Ground Vehicle (UGV), controlled by the Robot Operating System (ROS). High-accuracy GNSS, IMU and wheel-encoder data were fused in an open-source Unscented Kalman Filter (UKF) to localize the robot in UTM coordinates. The ROS navigation stack was used to achieve autonomous navigation.
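A fusion setup of this kind is typically configured rather than hand-coded. A minimal sketch of such a configuration is shown below, assuming the ROS robot_localization package's UKF node; the topic names and parameter values here are illustrative, not taken from the thesis.

```yaml
# Illustrative robot_localization UKF configuration (parameter names from the
# robot_localization package; topics and values are assumptions for this sketch).
ukf_localization_node:
  frequency: 30
  two_d_mode: true
  # Each *_config is a 15-element vector selecting which state variables to fuse:
  # [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
  # Wheel-encoder odometry: fuse forward/lateral velocity and yaw rate
  odom0: /jackal_velocity_controller/odom
  odom0_config: [false, false, false,
                 false, false, false,
                 true,  true,  false,
                 false, false, true,
                 false, false, false]
  # IMU: fuse orientation and angular velocity
  imu0: /imu/data
  imu0_config: [false, false, false,
                true,  true,  true,
                false, false, false,
                true,  true,  true,
                false, false, false]
  # GNSS fixes, converted to odometry in the world frame (e.g. by
  # navsat_transform_node): fuse absolute x/y position
  odom1: /odometry/gps
  odom1_config: [true,  true,  false,
                 false, false, false,
                 false, false, false,
                 false, false, false,
                 false, false, false]
```

The per-sensor boolean vectors let the filter take only what each sensor measures well: absolute position from GNSS, orientation and rates from the IMU, and body-frame velocities from the wheel encoders.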
This robot was used to map two sites by measuring its own position as it autonomously navigated between predefined waypoints. It achieved a mean vertical accuracy of 8-17 mm and mapped each site in approximately 13-14 minutes. By comparison, a selection of conventional GNSS RTK, total station and aerial photogrammetry methods achieved accuracies in the range 0-70 mm, and took between a few minutes and approximately an hour to complete.
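The survey procedure above reduces to a simple loop: drive to each predefined waypoint and record the robot's fused UTM pose as a surveyed point. The sketch below illustrates this loop with stubbed functions; all names are hypothetical, and the real system would invoke the ROS navigation stack and read the UKF output instead.

```python
# Sketch of the waypoint-survey loop. navigate_to() and fused_pose() are
# stand-ins (hypothetical) for the ROS navigation stack and the UKF estimate.

def navigate_to(waypoint):
    """Stub for the navigation stack: pretend the robot reaches the goal."""
    return True  # navigation succeeded

def fused_pose(waypoint):
    """Stub for the UKF output: the robot's UTM pose at its current position."""
    easting, northing = waypoint
    height = 12.345  # placeholder ellipsoidal height, metres
    return (easting, northing, height)

def survey(waypoints):
    """Visit each waypoint in turn and record the fused pose there."""
    points = []
    for wp in waypoints:
        if navigate_to(wp):
            points.append(fused_pose(wp))
    return points

# Example: a small 2 x 2 grid of UTM waypoints (metres)
grid = [(500000.0, 6400000.0), (500005.0, 6400000.0),
        (500000.0, 6400005.0), (500005.0, 6400005.0)]
surveyed = survey(grid)
print(len(surveyed))  # one surveyed point per waypoint
```

Because the terrain sample is simply the robot's own fused pose, the map's accuracy is bounded by the localization accuracy, which is what the comparison against conventional survey methods measures.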
The robot was then upgraded with a LiDAR unit, and a novel method was developed for accurately aligning and registering the scans to produce a point cloud that could be compared to one collected by a scanning total station. This method, termed Anchor Cloud Mapping (ACM), was inspired by the calibration procedures used by survey-grade GNSS receivers and total stations. Its core principles are that the robot's trajectory is divided into independent sections, and the clouds in each section are registered using a keyscan, mesh-based Generalized Iterative Closest Point (GICP) method. Each cloud is then pivoted around a calibrated stationary robot pose and aligned to best fit the UKF trajectory. When compared to a Trimble SX10 scanning total station (accurate to 2.5 mm), the robot/ACM cloud has a median point-to-point accuracy of 51-59 mm, and contains several times more points, distributed more evenly throughout the environment. The robot can autonomously survey an area in minutes, while the SX10 requires several hours.
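The registration step in ACM is a GICP variant; as a simpler stand-in, the core iterative-closest-point loop (nearest-neighbour correspondences followed by an SVD-based rigid alignment) can be sketched as follows. Note this is plain point-to-point ICP, not the keyscan, mesh-based GICP used in the thesis.

```python
# Minimal point-to-point ICP sketch (a simplification of GICP).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=20):
    """Iteratively align source to target; returns the aligned source cloud."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)                       # nearest neighbours
        R, t = best_rigid_transform(src, target[idx])  # fit to correspondences
        src = src @ R.T + t                            # apply the update
    return src

# Demo: perturb a cloud by a small rotation and translation, then recover it
rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, size=(200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
source = target @ Rz.T + np.array([0.05, -0.03, 0.02])
aligned = icp(source, target)
print(np.abs(aligned - target).max())  # residual after alignment
```

GICP extends this scheme by modelling local surface covariances at each point (a plane-to-plane distance) rather than the raw point-to-point distance used here, which is why ACM pairs it with a mesh of the keyscan.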