
Calibration method for multi-line structured light vision sensor based on Plücker line

2020-04-28

QIN Guan-yu, WANG Xiang-jun, YIN Lei

(1. State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; 2. MOEMS Education Ministry Key Laboratory, Tianjin University, Tianjin 300072, China)

Abstract: For the rapid calibration of multi-line structured light systems, a method based on the Plücker line was proposed. Most conventional line-structured light calibration methods extract feature points and transform the coordinates of those points to obtain the plane equation. However, the large number of points leads to complicated operations, which is not suitable for the application scenarios of multi-line structured light. To solve this issue, a new calibration method was proposed that applies the Plücker matrix form throughout the whole calibration process instead of using point features directly. The advantage of this method is that the light plane equation can be obtained quickly and accurately in the camera coordinate frame. Correspondingly, a planar target particularly suited to calibrating multi-line structured light was also designed. Regular lines were transformed into Plücker lines by extending the two-dimensional image plane and defining a new image space. To transform the coordinate frame of the Plücker lines, the perspective projection mathematical model was re-expressed based on the Plücker matrix. According to the properties of lines and planes in the Plücker space, a linear matrix equation was efficiently constructed by combining the Plücker matrices of several coplanar lines, from which the line-structured light plane equation could be further solved. The experiments performed validate the proposed method and demonstrate a significant improvement in calibration accuracy: at a test distance of 1.8 m, the root mean square (RMS) error of the three-dimensional points is within 0.08 mm.

Key words: multi-line structured light; Plücker line; calibration; perspective projection; plane fitting

0 Introduction

Vision measurement is extensively used in various fields[1-5]. Among the various vision measurement methods, vision-based structured light measurement has the significant advantages of being non-contact and highly precise. Therefore, structured light is widely used in areas such as surface reconstruction, vision navigation and workpiece inspection[6-8]. As a special structured light vision sensor, the multi-line structured light vision sensor is mainly composed of a laser projector and a camera. The laser beam is split into a plurality of light planes by a diffractive optical element (DOE), and a deformed line is obtained at the intersection of each light plane and the surface of the object in space. After being captured by the camera, this deformed line can be processed to obtain the image coordinates of the feature line with sub-pixel accuracy, from which the three-dimensional (3D) information of the feature line is finally recovered. In addition to the structured light advantages of non-contact measurement, high precision and interference resistance, multi-line structured light provides multiple light planes simultaneously, which expands the measurement area and suits large-scene measurements.

Before the measurement, the line-structured light sensor should be calibrated to obtain the calibration parameters. The calibration of the line-structured light sensor is mainly divided into two parts: the calibration of the camera parameters and the calibration of the structural parameters, which include the position and orientation of the camera relative to the light source. The calibration of the camera parameters is based on the correspondence between the feature points on the calibration target and their coordinates in the image, and the intrinsic parameters are estimated by nonlinear methods.

Zhang proposed an intrinsic camera parameter calibration method based on a coplanar calibration target, which solved the calibration problem of camera parameters in structured light vision sensors[9]. For the calibration of the structural parameters, scholars have proposed many methods. Dewar proposed a wire-drawing method that obtains the image coordinates of the bright spots generated by the intersection of a filament and the light plane through the camera, and uses other instruments to measure the spatial coordinates of the bright spots[10]. Although this method obtains the coordinates directly, it must rely on additional instruments. Huynh et al.[11] and Xu et al.[12] proposed a calibration method based on cross-ratio invariance. The coordinates of each point on the stripe can be obtained from three known collinear feature points on a 3D target, thereby calibrating the light plane. Zhou et al.[13] proposed a method for on-site calibration with a planar calibration target based on cross-ratio invariance. A large number of feature points on the light plane can be obtained by moving the target multiple times. This method is inexpensive and suitable for on-site calibration. Duan proposed a more sophisticated zigzag-like surface target method, which used external equipment to adjust the structured light plane perpendicular to the target, making the operation quite complicated[14]. Xu et al. proposed a calibration method with a multi-sphere calibration board, which gets rid of the cumbersome translation stage accessories of the traditional plane-translation method but is limited by the poor accuracy of ball contour determination with a low-resolution camera[15]. Wei et al.[16] proposed a calibration method using a one-dimensional (1D) calibration target. In contrast to planar targets, the 1D target is easily machined with high accuracy and can be used in limited space. Liu et al. used a single ball target illuminated by the light stripe from the laser projector[17]. Because of the particularity of the sphere, the position of the ball target has no influence on the calibration result, but the ball target must be machined with high accuracy. Various targets have thus been devised for structured light calibration. However, for multi-line structured light, these conventional methods are not quite suitable. Multi-line structured light has more light planes and a wider projection range, so it is difficult for methods based on conventional planar targets or a 1D target to extract all the light stripes, and machining a high-precision 3D target that covers all the light stripes is costly and unrealistic.

Therefore, a new calibration method based on the Plücker line was proposed especially for the multi-line structured light vision sensor. Discarding the direct use of point features, this method extended the image dimension, introduced the Plücker matrix into the whole calculation process and re-expressed the perspective projection model based on the Plücker line. To facilitate the extraction of multiple stripes, a new type of planar calibration target with a blank central portion was used. The target can be moved freely within the measurement space of the sensor, so that the Plücker matrices of several lines on the same light plane can be obtained. Ultimately, the plane equation was solved by least squares fitting. This method is low in cost, simple to operate and, most importantly, very suitable for the calibration of multi-line structured light vision sensors.

1 Model of multi-line structured light vision sensor

Fig.1 shows the geometry model of multi-line structured light vision sensor.

Fig.1 Model of multi-line structured light vision sensor

Based on the perspective projection model of the camera, there is

$z_c\,[u,\ v,\ 1]^{T} = K\,[x_c,\ y_c,\ z_c]^{T}$,    (1)

where K is the matrix of the camera intrinsic parameters, $[u,\ v,\ 1]^{T}$ is the homogeneous image coordinate of p, and $[x_c,\ y_c,\ z_c]^{T}$ is the coordinate of P in the camera coordinate frame (CCF).

P is on the plane Π, one of the light planes. Suppose the light plane equation in CCF is expressed as

$Ax_c + By_c + Cz_c + D = 0$,    (2)

where A, B, C and D are the four coefficients of the light plane equation.

By combining Eqs.(1) and (2), there are

$\begin{cases} z_c\,[u,\ v,\ 1]^{T} = K\,[x_c,\ y_c,\ z_c]^{T} \\ Ax_c + By_c + Cz_c + D = 0 \end{cases}$    (3)

Therefore, the point of intersection between the ray $O_c p$ and the light plane Π determines the 3D coordinates of P uniquely. By analogy, the 3D coordinates of all points on each light plane can be obtained.
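Eq.(3) amounts to intersecting the viewing ray $O_c p$ with the calibrated light plane. The following is a minimal sketch of that intersection (not the authors' implementation), with hypothetical intrinsic and plane parameters:

```python
import numpy as np

def intersect_ray_with_plane(u, v, K, plane):
    """Intersect the viewing ray through pixel (u, v) with a light plane
    A*xc + B*yc + C*zc + D = 0 and return the 3D point P in CCF (Eq.(3))."""
    A, B, C, D = plane
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])     # direction of O_c p (Eq.(1), up to scale)
    z_c = -D / (A * ray[0] + B * ray[1] + C * ray[2])  # depth that places the point on the plane
    return z_c * ray                                   # [x_c, y_c, z_c]

# Hypothetical intrinsics and light-plane coefficients, for illustration only.
K = np.array([[2400.0,    0.0, 640.0],
              [   0.0, 2400.0, 512.0],
              [   0.0,    0.0,   1.0]])
plane = (0.02, -0.85, -0.53, 950.0)                    # A, B, C, D (D in mm)
print(intersect_ray_with_plane(700.0, 480.0, K, plane))
```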

2 Multi-line structured light vision sensor calibration method

The intrinsic parameters of the camera in the multi-line structured light vision sensor are calibrated by Zhang's calibration method[9].
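For reference, this intrinsic calibration step is available in standard tooling; the sketch below uses OpenCV's implementation of Zhang's method and assumes a plain checkerboard with placeholder board geometry and file paths, rather than the circle-feature target used later in this paper.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)          # inner corners per row and column (hypothetical board)
square = 20.0             # square size in mm (hypothetical)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for fname in glob.glob("calib_images/*.png"):          # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Returns the reprojection RMS, the intrinsic matrix K and the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print(rms, K, dist)
```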

The calibration procedure of structural parameters consists of the following steps:

Step 1: According to the feature points on the calibration target and their corresponding image coordinates, the Perspective-n-Point (PnP)[18] method is used to obtain the relative pose (rotation matrix R and translation vector T) of the calibration target plane with respect to the camera (a minimal sketch of this step is given after the list);

Step 2: Due to the simple design of the calibration target, Steger's method[19-20] can simultaneously extract the center of every line of the multi-line structured light in the image. Then the image plane dimension is extended, thereby introducing the Plücker line and converting the extracted center lines into Plücker matrix form;

Step 3: Substitute the relative pose obtained in step 1 into the perspective projection model based on the Plücker line, and solve the Plücker matrix of each corresponding structured light line in CCF from the Plücker matrix of every line in the image obtained in step 2;

Step 4: Move the calibration target several times within the measurement space of the sensor and repeat steps 1 to 3, thereby obtaining several Plücker matrices of structured light lines on the same light plane in CCF. Based on the properties of lines and planes in the Plücker form, the equation of each light plane is fitted by the least squares method.
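Step 1 is a standard pose-from-points problem. A minimal sketch using OpenCV's PnP solver is given below; the circle-center coordinates, intrinsic matrix and distortion vector are placeholders rather than calibrated values.

```python
import cv2
import numpy as np

# Placeholder intrinsics from the camera calibration step.
K = np.array([[2400.0, 0.0, 640.0], [0.0, 2400.0, 512.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Known circle-center positions on the planar target (target frame, mm) and
# their detected image coordinates -- hypothetical values for illustration.
object_pts = np.array([[0, 0, 0], [120, 0, 0], [0, 120, 0], [120, 120, 0]], np.float64)
image_pts = np.array([[312.4, 255.1], [705.8, 260.3], [308.9, 642.7], [702.2, 648.5]], np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix R and translation vector T of the target plane in CCF
print(R, tvec)
```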

2.1 Calibration target

Due to the large number of light stripes in the multi-line structured light vision sensor, a conventional checkerboard calibration target, which is covered with black and white squares, greatly affects the extraction of the structured light stripes. Therefore, a new type of calibration target is used as shown in Fig.2, which has a simple pattern and a blank central portion. Such a design not only provides circular features for matching, but also facilitates the center extraction of the light stripes.

Fig.2 Theoretical and practical schematic of proposed calibration target

2.2 Construction of Plücker line

2.2.1 Plücker matrix

The Plücker matrix is a representation of a spatial line, which expresses the four degrees of freedom of the spatial line in a 4×4 matrix and converts calculations on lines into linear operations between matrices. Therefore, the Plücker matrix is ideal for linear operations and is well suited to the calibration of the multi-line structured light vision sensor.

$L = X_1 X_2^{T} - X_2 X_1^{T}$,    (4)

where $X_1$ and $X_2$ are the homogeneous coordinates (4-vectors) of two distinct points on the spatial line L.

Suppose the homogeneous coordinate vector of the plane Π: $Ax + By + Cz + D = 0$ is $\pi = [A, B, C, D]^{T}$; then the necessary and sufficient condition for the line L to lie on the plane Π is $L\pi = 0$. This property is very beneficial for fitting a plane from several lines.
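A minimal numeric sketch of this line-on-plane property, using the point-based Plücker matrix of Eq.(4) and purely illustrative values:

```python
import numpy as np

def plucker_from_points(X1, X2):
    """Plücker matrix L = X1*X2^T - X2*X1^T of the line through the points with
    homogeneous coordinates X1 and X2 (Eq.(4))."""
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    return np.outer(X1, X2) - np.outer(X2, X1)

# Plane x + 2y - z + 3 = 0 and two points lying on it (illustrative values).
pi = np.array([1.0, 2.0, -1.0, 3.0])
X1 = np.array([0.0, 0.0, 3.0, 1.0])
X2 = np.array([1.0, 1.0, 6.0, 1.0])
L = plucker_from_points(X1, X2)

print(np.allclose(L @ pi, 0))    # True: the line lies on the plane, so L*pi = 0
print(np.allclose(L.T, -L))      # the Plücker matrix is skew-symmetric
```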

2.2.2 Extension of image dimension

Fig.3 Calibration process for multi-line structured light vision sensor

(5)

2.3 Perspective projection model based on Plücker line

2.3.1 ISCF and WCF

In the conventional perspective projection model of the camera, the homogeneous coordinate $p_w$ of P (in Fig.3) in the world coordinate frame (WCF) and the homogeneous coordinate m of p in the image coordinate frame (ICF) satisfy

$m = K[R\ |\ T]\,p_w$.    (6)

Since the homogeneous coordinates in the image space coordinate frame (ISCF) are used in Eq.(5), and in order to maintain the equivalence of Eq.(6), the matrix $[h_1\ h_2\ h_3]^{T}$ is extended to the following 4×4 matrix,

(7)

From the equivalence in Eq.(7), it is obvious that $i_1$, $i_2$ and $i_4$ are 0 while $i_3$ can take any value. Since the matrix must be invertible in the subsequent conversion, $i_3$ is set to 1.

(8)

According to Eqs.(5) and (8), there is

$l = H\,L_w\,H^{T}$,    (9)

where $L_w$ is the Plücker matrix of the line L in WCF and l is the Plücker matrix of its projection in ISCF. Eq.(9) represents the projection relation of the Plücker line from WCF to ISCF.

2.3.2 WCF and CCF

From the perspective projection model of the camera, there is

$p_c = E\,p_w,\quad E = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}$,    (10)

which expresses the transformation of a point between WCF and CCF. To introduce this transformation into the Plücker line representation, substituting Eq.(10) into the definition of the Plücker matrix in Eq.(4) gives

$L_c = E\,L_w\,E^{T}$,    (11)

which represents the transformation relation of a Plücker line between WCF and CCF.
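Because the Plücker matrix is built from point coordinates, the point transform of Eq.(10) induces the congruence of Eq.(11). The sketch below checks this numerically for an arbitrary, purely illustrative pose:

```python
import numpy as np

def plucker(X1, X2):
    """Plücker matrix of the line through homogeneous points X1 and X2 (Eq.(4))."""
    return np.outer(X1, X2) - np.outer(X2, X1)

# Illustrative pose of WCF with respect to CCF: 30 deg rotation about z plus a translation.
a = np.deg2rad(30.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
T = np.array([100.0, -50.0, 800.0])
E = np.eye(4)
E[:3, :3], E[:3, 3] = R, T                 # Eq.(10): p_c = E p_w

# A line through two points expressed in WCF ...
X1w = np.array([0.0, 0.0, 0.0, 1.0])
X2w = np.array([50.0, 20.0, 10.0, 1.0])
Lw = plucker(X1w, X2w)

# ... transforms into CCF by the congruence of Eq.(11).
Lc = E @ Lw @ E.T
print(np.allclose(Lc, plucker(E @ X1w, E @ X2w)))    # True
```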

2.3.3 Perspective projection model

Table 1 compares the perspective projection mathematical models of the conventional point and the Plücker line, where H is the transform matrix derived from Eq.(8) and E is the projection matrix of the camera according to Eq.(10). It can be seen that the proposed perspective projection model of the Plücker line involves more matrix calculations than the conventional single-point model, but linearity is always maintained and the form remains simple and clear.

Table 1 Perspective projection models of conventional point and Plücker line

Perspective projection model | I(S)CF↔WCF | WCF↔CCF
Point | $m = K[R\,|\,T]\,p_w$ | $p_c = E\,p_w$
Plücker line | $l = H\,L_w\,H^{T}$ | $L_c = E\,L_w\,E^{T}$

2.4 Fitting of light planes

Fig.4 Schematic diagram of fitting light plane

Based on the property of the Plücker line mentioned in section 2.2.1, the Plücker matrices of several lines $L_1, L_2, \ldots, L_n$ on the light plane Π can be stacked to form the matrix equation

$\begin{bmatrix} L_1 \\ L_2 \\ \vdots \\ L_n \end{bmatrix} \pi = 0.$    (12)

The four coefficients of the light plane Π can be solved by the singular value decomposition (SVD) method[22]. The same solution procedure applies to each of the multiple light planes.
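Since the stacked system in Eq.(12) is homogeneous, its least-squares solution is the right singular vector of the stacked matrix associated with the smallest singular value. A minimal sketch, assuming the Plücker matrices of the stripe lines are already expressed in CCF:

```python
import numpy as np

def fit_light_plane(plucker_mats):
    """Least-squares plane pi = [A, B, C, D]^T minimising ||L_i @ pi|| over the
    stacked Plücker matrices L_i of coplanar lines (Eq.(12)), solved by SVD."""
    M = np.vstack(plucker_mats)              # (4n x 4) stacked coefficient matrix
    _, _, Vt = np.linalg.svd(M)
    pi = Vt[-1]                              # right singular vector of the smallest singular value
    return pi / np.linalg.norm(pi[:3])       # scale so that (A, B, C) is a unit normal

# Synthetic check with two lines drawn in the plane 2x - z + 1 = 0.
def plucker(X1, X2):
    return np.outer(X1, X2) - np.outer(X2, X1)

on_plane = lambda x, y: np.array([x, y, 2.0 * x + 1.0, 1.0])
lines = [plucker(on_plane(0, 0), on_plane(1, 3)),
         plucker(on_plane(2, -1), on_plane(-1, 4))]
print(fit_light_plane(lines))                # proportional to [2, 0, -1, 1] (up to sign)
```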

3 Experiments

In order to verify the accuracy of the proposed method, an accuracy verification system for multi-line structured light was designed based on a standard machined workpiece. First, the multi-line structured light is calibrated using the theory in Section 2. The multi-line structured light vision sensor is shown in Fig.5; it is composed of a camera and a light projector (wavelength of 520 nm) with a DOE. The camera, manufactured by Dahua Technology, is equipped with a 12 mm lens. The image resolution is 1 280 pixels×1 024 pixels. The DOE, manufactured by HOLOEYE, splits a single laser beam into nine structured light planes. Therefore, there are nine light plane equations in the result.

Fig.5 Schematic diagram of experimental device

3.1 Calibration results

As mentioned above, the intrinsic parameters of the camera are calibrated based on Zhang's method[9]. The calibration results for the focal length and distortion are shown in Table 2.

Table 2 Camera intrinsic parameters

For the structural parameters of the proposed multi-line structured light vision sensor, the experimental scene is shown in Fig.6. The camera and light projector were placed at a distance from the calibration target, and the light stripes were all distributed over the target. The circle features were extracted by the EDCircles method[23-24]. Eighteen images captured in the range of 1-3 m were used for calibration. The calibration results are shown in Table 3, where the nine plane equations correspond to the nine light planes of the sensor. The relative positions of the nine calibrated light planes with respect to the camera are visualized in Fig.7.

Fig.6 Experimental scene

Table 3 Calibration result of nine light planes

3.2 Calibration verification

To evaluate the accuracy of the proposed calibration method, an accuracy verification system for multi-line structured light based on a standard machined workpiece was designed, as shown in Fig.8(a). The machining accuracy is 0.1 mm. The projected area on the experimental workpiece is shown in Fig.8(b). The area is located at the intersection of two planes, with five stripes over one plane and four stripes over the other. Based on the measurement model described in Section 1, we further reconstructed the light stripes in this area in 3D.

Fig.8 Schematic diagram of experimental workpiece and test area

As the structured light was projected onto the intersection area of the two planes, the 3D coordinates of every point on the stripes can be obtained and the equations of the two planes fitted from the reconstruction result of the nine light stripes on the workpiece, which is shown in Fig.9(b).

Fig.9(a) shows the surface topography of the test area scanned by a laser scanner. The scanning accuracy is 0.03 mm. According to the scanning result, the flatness of the two planes in the test area is within 0.05 mm and the angle between the planes is 141.00°. We took the laser scanner measurement as the ground truth and compared it with the result measured by the multi-line structured light vision sensor calibrated by the proposed method. To verify the robustness of the proposed method, we projected the multi-line structured light onto the test area, reconstructed the nine light stripes ten times and analyzed the errors of the results. Table 4 lists the maximum and average errors of the distances from the 3D points on the nine light stripes to the fitted planes over all ten experiments. It is observed that the root mean square (RMS) error of all the light stripes is within 0.08 mm.
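Both accuracy metrics used here, the point-to-plane distance and the angle between the fitted planes, reduce to a few vector operations; a minimal sketch with hypothetical values:

```python
import numpy as np

def point_to_plane_rms(points, plane):
    """RMS of orthogonal distances from 3D points (n x 3) to the plane [A, B, C, D]."""
    n, d = np.asarray(plane[:3], float), float(plane[3])
    dist = (points @ n + d) / np.linalg.norm(n)
    return np.sqrt(np.mean(dist ** 2))

def angle_between_planes(plane1, plane2):
    """Angle between the plane normals in degrees; depending on how the normals are
    oriented, this is the dihedral angle between the surfaces or its supplement."""
    n1, n2 = np.asarray(plane1[:3], float), np.asarray(plane2[:3], float)
    c = (n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical reconstructed stripe points and fitted workpiece planes.
pts = np.array([[0.0, 0.10, 500.0], [10.0, 0.05, 500.2], [20.0, -0.08, 499.9]])
plane_a = (0.0, 0.0, 1.0, -500.0)
plane_b = (0.0, 0.63, -0.78, 150.0)
print(point_to_plane_rms(pts, plane_a), angle_between_planes(plane_a, plane_b))
```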

Fig.9 Surface topography of experimental workpiece measured by laser scanner and reconstruction result of light stripe and fitted planes

Table 4 Error from 3D points to fitted plane

Table 5 compares the angle between the planes reconstructed in ten experiments with the true value. It can be seen that the maximum error of the angle is 0.12°, while the minimum error is just 0.09°. Although the errors in two experiments are higher than 0.1°, the other experiments have quite high precision. The RMS error of all ten experiments is 0.368 9°.

Table 5 Error of angle between fitted planes

To further verify the accuracy of our method, Zhou's method[13] was used to calibrate the multiple light planes one by one, and the calibration result was applied to the same experiment. Fig.10 shows the RMS errors of the distances between the 3D points on the nine light stripes and the fitted planes using our method and Zhou's method. For each light stripe, the RMS error of Zhou's method is distributed at a higher level than that of the proposed method, which means the proposed method achieves higher precision in the calibration of the light planes. In addition, the errors obtained by the two methods show the same trend with respect to the light stripe index; this behavior is related to the structure of the DOE used.

Fig.10 RMS errors of distances between 3D points on light-stripes and fitted plane after calibration using proposed method and Zhou’s method

The RMS errors of the angle between the two planes calculated by the proposed method and Zhou's method are shown in Fig.11. The RMS error of the proposed method is generally lower than that of Zhou's method. The experimental results show that the proposed calibration method for multi-line structured light is accurate and significantly improves the measurement precision.

Fig.11 Angle error in ten experiments after calibration using proposed method and Zhou’s method

4 Conclusion

The new calibration method for the multi-line structured light vision sensor, which involves a new planar target with a blank central portion, applies the Plücker matrix throughout the whole calibration process instead of point features, thereby avoiding cumbersome coordinate frame conversions of large numbers of points. The target is easy to use and low-cost. Furthermore, the calibration method proposed in this paper provides good accuracy. According to the calibration result and the reconstruction of the 3D points on each light stripe, we calculated the distance between the points and the fitted plane and the angle between the planes separately. Under these conditions, the proposed method achieves a calibration accuracy of 0.08 mm.