Calibration
To measure the accuracy, the results of the calibration (the intrinsic and extrinsic parameters) are used to reproject the world coordinate points to image coordinates. In an ideal model, the detected image points and the reprojected points would match perfectly. In practice, they do not.
Figure 4.1 shows the reprojection using Maya. The translation vector and the inverse rotation matrix were used to move and rotate the camera. The intrinsic parameters of the real camera and of the camera used for reprojection in Maya match. Lens distortion cannot be modeled in Maya; nevertheless, the calibration pattern fits almost perfectly. The result serves as a visualization that demonstrates the potential of the calibration, but it cannot be used to measure its accuracy. For that, the world coordinates have to be reprojected using a mathematical formulation in which lens distortion is modeled, i.e. the same model as used for calibration. The transformation that projects the world coordinates to image coordinates passes through the following stages:
\[
\begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix}
\xrightarrow{\;R,\,t\;}
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
\xrightarrow{\;\text{projection}\;}
\begin{pmatrix} x \\ y \end{pmatrix}
\xrightarrow{\;\text{distortion}\;}
\begin{pmatrix} x_d \\ y_d \end{pmatrix}
\tag{4.1}
\]
In other words, the 3D world coordinates are transformed (translated and rotated) into 3D camera coordinates. The intrinsic parameters of the camera are used to project the camera coordinates onto the 2D image plane. A 2D-to-2D transformation models the distortion effect.
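The stages of Equation 4.1 can be sketched as follows. This is a minimal illustration, not the exact model used for calibration: the function name and the one-parameter radial distortion model are assumptions made for clarity.

```python
import numpy as np

def project_point(X_w, R, t, f, c, k1=0.0):
    """Project a 3D world point to distorted image coordinates.

    Hypothetical sketch of the pipeline in Equation 4.1; only a single
    radial distortion coefficient k1 is modeled here.
    """
    # 1. Rigid transform: world coordinates -> camera coordinates
    X_c = R @ X_w + t
    # 2. Perspective projection onto the normalized image plane
    x = X_c[0] / X_c[2]
    y = X_c[1] / X_c[2]
    # 3. 2D -> 2D radial distortion (simple one-parameter model)
    r2 = x * x + y * y
    xd = x * (1.0 + k1 * r2)
    yd = y * (1.0 + k1 * r2)
    # 4. Apply the intrinsics: effective focal length and principal point
    u = f * xd + c[0]
    v = f * yd + c[1]
    return np.array([u, v])
```

With k1 = 0 the function reduces to the ideal pinhole projection; the distortion step only bends the normalized coordinates before the intrinsics are applied.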
To calibrate the cameras, at least two images of the calibration pattern are needed. Figure 4.2 shows ten images recorded with the left camera.
The patterns have different orientations and positions in every image. This is necessary; otherwise, the calibration method would be unable to solve the calibration problem correctly. Once the calibration is done, the pose of each calibration pattern, i.e. its translation and rotation relative to the camera, is known. Figure 4.3 shows the reconstructed calibration patterns.
We can use this information to reproject the calibration patterns onto the images. Figure 4.4 shows the result of the reprojection of the upper left corner for each image. The view is zoomed in so that the deviation becomes visible; the depicted cutout has a size of 5×5 px. Calculated edge points are marked as circles and reprojected points are marked as crosses.
The estimator of the standard deviation of the differences between the original points and the reprojected ones is called the pixel error. It can be used as an estimate of the accuracy of the reprojection. Table 4.1 shows the reprojection error for every calibration image. It can be seen that the reprojection is very accurate: the calibration works with subpixel accuracy, and the mean pixel error is below 0.17 pixels. The mean and median of the x and y deviations are very small and can be further decreased by adding more calibration images.
Figure 4.5 shows the reprojection errors of the 10 calibration images for the left camera. The calibration pattern has 48 internal corners, so 48 points are plotted in a separate color for every image. All deviations from the center are below 0.6 px.

Equation 4.2 shows the pixel error in closed form:
\[
\hat{\sigma} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(e_i - \bar{e}\right)^2}
\tag{4.2}
\]
where \(e_i\) is the difference between the \(i\)-th detected point and its reprojection, and \(\bar{e}\) is the sample mean of these differences. In order to ensure unbiasedness, note that the divisor in Equation 4.2 is \(n-1\) and not \(n\), as it would be with knowledge of the true mean \(\mu\). The calculation of the estimator \(\bar{e}\) is similar to the calculation of the mean \(\mu\).
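The estimator of Equation 4.2 can be sketched directly. The function name is hypothetical; the divisor n − 1 (Bessel's correction) is used because the true mean of the residuals is unknown.

```python
import numpy as np

def pixel_error(detected, reprojected):
    """Unbiased estimate of the standard deviation of the reprojection
    residuals (the pixel error of Equation 4.2).

    detected, reprojected: (n, 2) arrays of image points.
    """
    # Euclidean distance between each detected point and its reprojection
    residuals = np.linalg.norm(detected - reprojected, axis=1)
    n = residuals.size
    mean = residuals.mean()
    # Divisor n - 1 because the true mean is estimated from the sample
    return np.sqrt(np.sum((residuals - mean) ** 2) / (n - 1))
```

This matches `np.std(residuals, ddof=1)`; the `ddof=1` argument selects the same n − 1 divisor.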
The quality of the calibration has no influence on the quality of the line detection, which is investigated in the next section, but it has a strong influence on the correspondence analysis and the 3D reconstruction. This is because the line detection works on rectified images, which are computed using the fundamental matrix and the results of the prior calibration. If the rectification does not work properly, the orientations of the lines are falsified and thus correspondences may not be established. In addition, the disparity between two corresponding lines is also falsified. This has a direct influence on the 3D reconstruction, just as the intrinsic camera parameters influence the final result (e.g. the reconstructed depth depends on the effective focal length and on the principal point). If the camera coordinates are transformed into world coordinates, the error in the extrinsic camera parameters is reflected in the world coordinates.
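To illustrate how a disparity error propagates into depth, the standard relation for a rectified stereo pair, Z = f·B/d, can be used. This relation and the symbols f (effective focal length in pixels), B (baseline), and d (disparity) are assumed here for illustration; they are not taken from the text above.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    Illustrative sketch: a falsified disparity directly falsifies the
    reconstructed depth, and the error grows for distant points.
    """
    return f_px * baseline_m / disparity_px
```

For example, with f = 800 px and B = 0.1 m, a disparity of 50 px gives Z = 1.6 m, while an error of one pixel (d = 49 px) already shifts the depth by several centimeters.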