A moment-preserving approach for depth from defocus

Authors:

Abstract:

For range sensing using depth-from-defocus methods, the distance D of a point object from the lens can be evaluated with the concise depth formula D = P(Q − db), where P and Q are constants for a given camera setting and db is the diameter of the blur circle formed by the point object on the image detector plane. The amount of defocus db is traditionally estimated from the spatial parameter of a Gaussian point-spread function using a complex iterative solution. In this paper, we present a straightforward and computationally fast method for estimating the amount of defocus from a single camera. The observed gray-level image is first converted into a gradient image using the Sobel edge operator. For the edge point of interest, the proportion pe of the blurred edge region in a small neighborhood window is then calculated using the moment-preserving technique. The value of pe increases as the amount of defocus increases and is therefore used as a measure of the degradation of the point-spread function. In addition to the geometric depth formula, artificial neural networks are proposed in this study to compensate for the estimation errors of the depth formula. Experiments show promising results: the RMS depth errors are within 5% for the depth formula and within 2% for the neural networks.
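The core idea of the abstract can be sketched in code: compute a Sobel gradient image, then apply Tsai's moment-preserving bilevel split to the gradient values in a small window around an edge point and take the fraction assigned to the high-gradient class as pe. This is only an illustrative sketch under assumptions not stated in the abstract: the 7x7 window size, the use of gradient magnitude (rather than a single Sobel direction), and the exact mapping from pe to db are all choices made here for demonstration, not the paper's specification.

```python
import numpy as np


def sobel_gradient(img):
    """Gradient-magnitude image from a gray-level image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    h, w = img.shape
    grad = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            grad[i, j] = np.hypot((win * kx).sum(), (win * ky).sum())
    return grad


def moment_preserving_pe(window):
    """Split the gradient values in a window into two classes so that the
    first three sample moments are preserved (moment-preserving scheme)
    and return pe, the fraction in the high-gradient (edge) class."""
    z = np.asarray(window, dtype=float).ravel()
    m1, m2, m3 = z.mean(), (z ** 2).mean(), (z ** 3).mean()
    cd = m2 - m1 * m1                   # sample variance; 0 for a flat window
    if cd < 1e-12:
        return 0.0                      # degenerate: no edge content at all
    c0 = (m1 * m3 - m2 * m2) / cd
    c1 = (m1 * m2 - m3) / cd
    root = np.sqrt(max(c1 * c1 - 4.0 * c0, 0.0))
    z0 = 0.5 * (-c1 - root)             # low representative gradient level
    z1 = 0.5 * (-c1 + root)             # high representative gradient level
    p0 = (z1 - m1) / (z1 - z0)          # fraction of the low class
    return float(np.clip(1.0 - p0, 0.0, 1.0))


if __name__ == "__main__":
    # Synthetic step edge vs. the same edge blurred into a linear ramp:
    # pe over a 7x7 window on the edge is larger for the blurred image.
    sharp = np.zeros((16, 16))
    sharp[:, 8:] = 255.0
    blurred = np.tile(np.clip((np.arange(16) - 4) * 51.0, 0.0, 255.0), (16, 1))
    for name, img in (("sharp", sharp), ("blurred", blurred)):
        pe = moment_preserving_pe(sobel_gradient(img)[5:12, 5:12])
        print(name, round(pe, 3))
```

On the synthetic images above, the blurred ramp spreads nonzero gradient over more pixels of the window, so its pe comes out clearly larger than the sharp step's, matching the abstract's claim that pe grows with the amount of defocus.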

Keywords: Depth from defocus, Range sensing, Moment-preserving, Neural networks

Article history: Received 10 September 1996; Accepted 18 June 1997; Available online 7 June 2001.

DOI: https://doi.org/10.1016/S0031-3203(97)00068-X