Precision of pose estimation using corner detection.
The aim of this research was to develop a method for recording ground truth with performance comparable to motion capture, in order to produce high-quality outdoor visual odometry datasets. A novel fiducial marker system was developed, featuring a smooth pattern that is used in an optimisation process to produce refined pose estimates. On average, precision was increased by 27 % compared to traditional fiducial markers. To investigate the upper limit of the pose estimation precision achievable with this method, the marker was modelled as a dense grid of checkerboard corners and the Cramér-Rao lower bound of the corresponding estimator was derived symbolically, giving a lower bound on the variance of a pose estimated from a given image. The model was validated both in simulation and on real images.
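The Cramér-Rao bound computation described above can be sketched symbolically. The following is a purely illustrative one-dimensional toy example, not the derivation from this work: a marker is modelled as a row of corners observed with i.i.d. Gaussian noise of standard deviation sigma, and the unknown "pose" is a single translation t. For Gaussian noise the Fisher information is the sum of squared measurement-model derivatives divided by the noise variance, and the CRLB is its inverse.

```python
import sympy as sp

# Toy 1-D analogue of the approach: N corners at known positions,
# observed under an unknown translation t with i.i.d. Gaussian noise.
t, sigma = sp.symbols('t sigma', positive=True)
N = 4  # number of corners in this illustrative example
corners = [sp.Integer(i) for i in range(N)]

# Measurement model: x_i = corner_i + t + noise.
g = [c + t for c in corners]

# Fisher information for i.i.d. Gaussian noise:
# I(t) = sum_i (dg_i/dt)^2 / sigma^2
fisher = sum(sp.diff(gi, t)**2 for gi in g) / sigma**2

# Cramér-Rao lower bound on the variance of any unbiased estimate of t.
crlb = sp.simplify(1 / fisher)
print(crlb)  # sigma**2/4
```

As expected, the bound falls as sigma²/N: adding corners (a denser grid) tightens the achievable precision, which is the intuition behind modelling the marker as a dense corner grid.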
The error distribution of a common checkerboard corner detector was evaluated to determine whether modelling it with independent and identically distributed Gaussian random variables is valid. In a series of experiments in which images were collected from a tripod, a robot arm, and a slider-type electric actuator, it was found that the error is usually normally distributed, but that its variance depends on the amount of lens blur in the image, and that any amount of motion blur can produce correlated errors. Furthermore, in images with little blur (less than approximately one pixel) the estimates are biased, and both the bias and the variance depend on the location of the corner within a pixel. In real images, the standard deviation of the noise was around 80 % larger at the pixel edges than at the centre. The intensity noise from the image sensor was also found not to be identically distributed: in one camera, the standard deviation of the intensity noise varied by a factor of approximately four within the region around a checkerboard corner.
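The kind of check described above can be sketched as follows. This is a minimal synthetic illustration, not the thesis's detector or data: detection residuals are drawn as Gaussian noise whose standard deviation is assumed to grow with lens blur, and a Shapiro-Wilk test is applied to each sample, mirroring the i.i.d.-Gaussian validity check.

```python
import numpy as np
from scipy import stats

def residual_stats(sigma, n=2000, seed=0):
    """Draw synthetic corner-detection residuals (px) and test normality."""
    rng = np.random.default_rng(seed)
    residuals = rng.normal(0.0, sigma, size=n)
    _, p = stats.shapiro(residuals[:500])  # Shapiro-Wilk normality test
    return residuals.std(), p

# Assumed mapping from lens blur (px) to residual noise (px);
# the numbers are illustrative, not measurements from this work.
for blur, sigma in [(0.5, 0.02), (2.0, 0.05)]:
    sd, p = residual_stats(sigma)
    print(f"blur={blur}px  sd={sd:.3f}px  shapiro p={p:.2f}")
```

On real data the interesting cases are exactly where this idealised model breaks down: motion blur correlates the residuals, and at low blur the bias and variance become functions of the corner's sub-pixel position.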
This research suggests that it is possible to significantly increase fiducial marker pose estimation precision, presents a novel approach for predicting and evaluating pose estimation precision, and highlights sources of error not considered in prior work.