10 – Formal Methods and Definitions

This page is for those wondering what's going on under the hood of Argus and OpenCV. It will also include notes on error analysis.

Camera matrix

The camera matrix, which we call K, is a mapping from 3D coordinates to the image plane in the special case where the center of projection is at the origin. If we assume the principal point of the image (the center of the image coordinate system) is perfectly aligned with the center of projection, with both falling on the positive z-axis, we can write the mapping from homogeneous 3D coordinates to image plane coordinates as:

\displaystyle \left(\begin{array}{c} X\\ Y\\ Z\\ 1 \end{array}\right) \rightarrow \left(\begin{array}{c} f X\\ f Y\\ Z \end{array}\right) = \left(\begin{array}{cccc} f & & & 0\\ & f & & 0\\ & & 1 & 0 \end{array}\right) \left(\begin{array}{c} X\\ Y\\ Z\\ 1 \end{array}\right)

where f is the focal length of the camera, i.e. the distance between the center of projection and the image plane (the image plane is Z = f). This formula assumes that the principal point is aligned with the origin of coordinates in the image plane; more generally the mapping becomes:

\displaystyle \left(\begin{array}{c} X\\ Y\\ Z\\ 1 \end{array}\right) \rightarrow \left(\begin{array}{c} fX + Z p_x\\ fY + Zp_y\\ Z \end{array}\right) = \left(\begin{array}{cccc} f & & p_x & 0\\ & f & p_y & 0\\ & & 1 & 0 \end{array}\right) \left(\begin{array}{c} X\\ Y\\ Z\\ 1 \end{array}\right)

Now we define the camera matrix as:

\displaystyle K = \left(\begin{array}{ccc} f & & p_x\\ & f & p_y\\ & & 1 \end{array}\right)

and write the mapping more concisely as:

\displaystyle x = K [I | 0] X

where x is the image plane coordinate, X the 3D homogeneous coordinate, and [I | 0] is the 3×4 matrix made by concatenating the 3×3 identity matrix with a zero column vector.
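
As a concrete illustration, here is a minimal numpy sketch of this projection; the focal length and principal point values are invented for the example:

```python
import numpy as np

# Hypothetical intrinsics: focal length f and principal point (p_x, p_y)
f, px, py = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, px],
              [0.0, f, py],
              [0.0, 0.0, 1.0]])

# [I | 0]: the 3x3 identity concatenated with a zero column vector
I0 = np.hstack([np.eye(3), np.zeros((3, 1))])

# A 3D point in homogeneous coordinates
X = np.array([0.5, -0.2, 2.0, 1.0])

x = K @ I0 @ X   # homogeneous image coordinates
x = x / x[2]     # dehomogenize: divide through by Z
print(x[:2])     # pixel coordinates, here [520. 160.]
```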

Estimating the Camera Matrix via OpenCV

Argus uses OpenCV algorithms to estimate the intrinsic camera matrix described above. We describe here in some detail the methods which OpenCV uses.

In order to calibrate a camera using Argus or OpenCV, one must film a grid. OpenCV uses corner-detection methods, not detailed here, to mark the pixel coordinates of the grid. The software then takes advantage of planar homography, the projective mapping from one plane (the grid) to another (the image plane), to estimate the camera matrix.
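
For reference, here is a minimal sketch of locating grid points with OpenCV; the filename, grid dimensions, and refinement parameters are illustrative assumptions, and the chessboard-style pattern here is only indicative of the general approach:

```python
import cv2

# Load one frame of the filmed grid and convert to grayscale
img = cv2.imread('grid_frame.png')            # hypothetical frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect the inner corners of a 9 x 6 chessboard-style grid
found, corners = cv2.findChessboardCorners(gray, (9, 6))

if found:
    # Refine the corner locations to sub-pixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```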

Let {Q = [X, Y, Z, 1]^T} be the homogeneous coordinates of a point on the grid in world coordinates. Let {q = [x, y, 1]^T} be the homogeneous coordinates of a point in the image plane. We can then express the action of the homography between the planes as:

\displaystyle q = s HQ

where H is a 3×3 homography matrix and s is an arbitrary scale factor conventionally factored out of H. {H} is essentially the projection matrix which we define further on: it contains the camera matrix and a rotation-translation matrix. In other words, H rotates and translates the world coordinates into the camera's coordinate frame and then projects into the image plane with the camera matrix K. Thus, it can be written:

\displaystyle H = K [R | t]

However, one does not need any information about the camera matrix to estimate H. H is simply a transform between points on the grid and points in the image plane:

\displaystyle p_{dst} = H p_{src}

where {p_{dst} = \left(\begin{array}{c} x_{dst}\\ y_{dst}\\ 1 \end{array}\right)} and {p_{src} = \left(\begin{array}{c} x_{src}\\ y_{src}\\ 1 \end{array}\right)}.

As such, for each image of a grid, OpenCV uses the constraint above for all points found in the grid to form a linear system and solve for the homography matrix. Then, OpenCV uses this information to solve for the camera intrinsic matrix, K.
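
A sketch of the homography estimation for a single grid image; the point arrays below are invented stand-ins for the grid coordinates and their detected pixel locations:

```python
import numpy as np
import cv2

# Planar grid coordinates (on the Z = 0 plane, so only X and Y are kept)
grid_pts = np.array([[0, 0], [1, 0], [2, 0],
                     [0, 1], [1, 1], [2, 1]], dtype=np.float64)
# Corresponding detected pixel coordinates (hypothetical values)
img_pts = np.array([[102, 98], [201, 99], [300, 101],
                    [101, 199], [202, 200], [301, 202]], dtype=np.float64)

# Solve the linear system p_dst = H p_src for the 3x3 homography
H, mask = cv2.findHomography(grid_pts, img_pts)
```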

If we set Z = 0 for the coordinates of the grid plane, the homography relationship simplifies slightly in that we no longer need the third column of the rotation matrix:

\displaystyle H = [h_1, h_2, h_3] = sK[r_1, r_2, t]

This is equivalent to the system:

\displaystyle h_1 = sKr_1

\displaystyle h_2 = sKr_2

\displaystyle h_3 = sKt

By definition, the columns of the rotation matrix are orthogonal:

\displaystyle r_1^T r_2 = 0 \Rightarrow h_1^T K^{- T} K^{- 1} h_2 = 0

and, because the scale factor has been extracted, they are also orthonormal:

\displaystyle h_1^T K^{- T} K^{- 1} h_1 = h_2^T K^{- T} K^{- 1} h_2

Let {B = K^{- T} K^{- 1} = \left(\begin{array}{ccc} B_{11} & B_{12} & B_{13}\\ B_{21} & B_{22} & B_{23}\\ B_{31} & B_{32} & B_{33} \end{array}\right)}. B has the following closed form solution:

\displaystyle B = \left(\begin{array}{ccc} \frac{1}{f_x^2} & 0 & \frac{- c_x}{f_x^2}\\ 0 & \frac{1}{f_y^2} & \frac{- c_y}{f_y^2}\\ \frac{- c_x}{f_x^2} & \frac{- c_y}{f_y^2} & \frac{c_x^2}{f_x^2} + \frac{c_y^2}{f_y^2} + 1 \end{array}\right)

Thus, using B, both constraints have the form {h_i^T B h_j}. Because B is symmetric, the constraint can be written as a six-dimensional vector dot product:

\displaystyle h_i^T B h_j = v_{ij}^T b = \left(\begin{array}{c} h_{i 1} h_{j 1}\\ h_{i 1} h_{j 2} + h_{i 2} h_{j 1}\\ h_{i 2} h_{j 2}\\ h_{i 3} h_{j 1} + h_{i 1} h_{j 3}\\ h_{i 3} h_{j 2} + h_{i 2} h_{j 3}\\ h_{i 3} h_{j 3} \end{array}\right)^T \left(\begin{array}{c} B_{11}\\ B_{12}\\ B_{22}\\ B_{13}\\ B_{23}\\ B_{33} \end{array}\right)

Our two constraints can now be written as:

\displaystyle \left(\begin{array}{c} v_{12}^T\\ (v_{11} - v_{22})^T \end{array}\right) b = 0

For N images, this gives us the 2N × 6 linear system:

\displaystyle V b = 0

Finally, after solving this system, the parameters of the camera matrix, K, can be pulled out from the closed-form solution for B:

\displaystyle f_x = \sqrt{\lambda / B_{11}}

\displaystyle f_y = \sqrt{\lambda B_{11} / (B_{11} B_{22} - B_{12}^2)}

\displaystyle c_x = - B_{13} f_x^2 / \lambda

\displaystyle c_y = (B_{12} B_{13} - B_{11} B_{23}) / (B_{11} B_{22} - B_{12}^2)

where:

\displaystyle \lambda = B_{33} - (B_{13}^2 + c_y (B_{12} B_{13} - B_{11} B_{23})) / B_{11}
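
The derivation above can be condensed into a short numpy sketch. This is a bare-bones illustration of the closed-form step (Zhang's method), not OpenCV's actual implementation; `Hs` is assumed to be a list of homographies estimated as above, and at least three views are needed for the system to have full rank:

```python
import numpy as np

def v_ij(H, i, j):
    # The six-vector v_ij built from columns h_i and h_j of a homography
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def intrinsics_from_homographies(Hs):
    # Stack the two constraints per image into the 2N x 6 system V b = 0
    V = np.vstack([row for H in Hs
                   for row in (v_ij(H, 0, 1), v_ij(H, 0, 0) - v_ij(H, 1, 1))])
    # b is the right singular vector for the smallest singular value
    b = np.linalg.svd(V)[2][-1]
    B11, B12, B22, B13, B23, B33 = b
    # Closed-form extraction of the intrinsic parameters
    cy = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + cy * (B12 * B13 - B11 * B23)) / B11
    fx = np.sqrt(lam / B11)
    fy = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    cx = -B13 * fx ** 2 / lam
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```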

Distortion according to the Pinhole model

The pinhole model defines two types of lens distortion: radial and tangential. Radial distortion is described by three terms of a truncated Taylor series expansion ({k_1, k_2, k_3}). The relation between distorted and undistorted pixel coordinates with radial distortion is:

\displaystyle x_{corrected} = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

\displaystyle y_{corrected} = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

where r is the distance of the point (x, y) from the optical center ({c_x, c_y}). Tangential distortion is described by two parameters ({t_1, t_2}). The correction relationship for tangential distortion is:

\displaystyle x_{corrected} = x + [2 t_1 y + t_2 (r^2 + 2 x^2)]

\displaystyle y_{corrected} = y + [t_1 (r^2 + 2 y^2) + 2 t_2 x]

After solving for the camera intrinsics, OpenCV sets up a non-linear system based on these constraints to solve for the distortion coefficients. OpenCV employs the Levenberg-Marquardt algorithm to minimize the sum of squares of the pixel reprojection errors, computing ideal pixel locations from the estimated camera intrinsics under the assumption of a perfect pinhole camera.
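
In practice, OpenCV's `calibrateCamera` wraps this entire pipeline: homography estimation, the closed-form intrinsics, and the Levenberg-Marquardt refinement that also yields the distortion coefficients. A sketch, reusing the hypothetical detection variables (`all_corners`, `gray`, `img`) from the grid example above:

```python
import numpy as np
import cv2

# World coordinates of the grid corners (Z = 0 plane), one copy per image;
# all_corners is assumed to hold the detected corners from each frame
grid = np.zeros((9 * 6, 3), np.float32)
grid[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
obj_pts = [grid] * len(all_corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, all_corners, gray.shape[::-1], None, None)

# dist holds (k1, k2, t1, t2, k3); OpenCV names the tangential terms p1, p2.
# rms is the root-mean-square reprojection error in pixels.
undistorted = cv2.undistort(img, K, dist)
```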

Fundamental Matrix

The fundamental matrix is the algebraic representation of epipolar geometry in a two-camera system. It represents the map between a point in one image and the corresponding epipolar line in the other image. Thus, as it is a map from a two-dimensional projective plane to a one-dimensional projective space (the pencil of epipolar lines through the epipole), it has rank 2.

For a pair of corresponding points in two images, {x} and {x^\prime}, it satisfies the following relationship:

\displaystyle {x^\prime}^TFx = 0

Essential Matrix

The essential matrix satisfies a similar relationship to the fundamental matrix, the only difference being that it operates on normalized image coordinates. A normalized image coordinate can be expressed as {x_{normalized} = K^{- 1} x}, which is equivalent to subtracting out the optical center and dividing by the focal length.
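
Normalizing a pixel coordinate is just a multiplication by {K^{- 1}}; a quick numpy illustration, reusing the invented intrinsics from the first example:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

x = np.array([520.0, 160.0, 1.0])   # homogeneous pixel coordinate
x_norm = np.linalg.inv(K) @ x       # subtract the center, divide by f
print(x_norm)                       # [ 0.25 -0.1   1.  ]
```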

Argus estimates the essential matrix using an OpenCV implementation of the 8-point algorithm. We describe this algorithm here:

8-Point algorithm for estimating the essential matrix

Let {E = (E_{ij}), 1 \leqslant i, j \leqslant 3,} be the essential matrix between two views.

Let {e} be the column vector {(E_{11}, E_{12}, E_{13}, E_{21}, E_{22}, E_{23}, E_{31}, E_{32}, E_{33})^T}.

\displaystyle x = (x, y, 1)^T denotes a normalized image coordinate in homogeneous coordinates.

\displaystyle (x_j, x_j^\prime) is the jth correspondence between the two views.

The epipolar constraint {{x_j^\prime}^T E x_j = 0} over all n correspondences can then be written as the homogeneous linear system {Me = 0}, where:

{M = \left(\begin{array}{ccccccccc} {x_1^\prime}x_1 & {x_1^\prime}y_1 & {x_1^\prime} & {y_1^\prime}x_1 & {y_1^\prime}y_1 & {y_1^\prime} & x_1 & y_1 & 1\\ {x_2^\prime}x_2 & {x_2^\prime}y_2 & {x_2^\prime} & {y_2^\prime}x_2 & {y_2^\prime}y_2 & {y_2^\prime} & x_2 & y_2 & 1\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ {x_n^\prime}x_n & {x_n^\prime}y_n & {x_n^\prime} & {y_n^\prime}x_n & {y_n^\prime}y_n & {y_n^\prime} & x_n & y_n & 1\end{array}\right)}

We assume OpenCV's implementation simplifies the problem somewhat by setting {E_{33} = 1}. The unknowns and coefficient matrix then become:

{e^\prime = (E_{11}, E_{12}, E_{13}, E_{21}, E_{22}, E_{23}, E_{31}, E_{32})^T}

{M^\prime = \left(\begin{array}{cccccccc} {x_1^\prime}x_1 & {x_1^\prime}y_1 & {x_1^\prime} & {y_1^\prime}x_1 & {y_1^\prime}y_1 & {y_1^\prime} & x_1 & y_1\\ {x_2^\prime}x_2 & {x_2^\prime}y_2 & {x_2^\prime} & {y_2^\prime}x_2 & {y_2^\prime}y_2 & {y_2^\prime} & x_2 & y_2\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ {x_n^\prime}x_n & {x_n^\prime}y_n & {x_n^\prime} & {y_n^\prime}x_n & {y_n^\prime}y_n & {y_n^\prime} & x_n & y_n \end{array}\right)}

Let c be a column vector of ones of length n.

In the case of {n = 8}, {M^\prime} is generically non-singular, and with {E_{33} = 1} the constraint reads {M^\prime e^\prime = - c}. E is then obtained by solving the system (for {n > 8} it is solved in the least-squares sense):

\displaystyle e^\prime = - {M^\prime}^{- 1} c \ \ \ \ \ (1)

and then imposing the rank 2 constraint.

The rank 2 constraint can be imposed by taking the solution from (1) and finding the rank-2 matrix {E^\prime} that minimizes the Frobenius norm {\| E - E^\prime \|}. This can be done through Singular Value Decomposition (SVD), though it is unknown exactly which implementation OpenCV employs at this step.
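
One standard way to perform this projection, sketched in numpy (this is the textbook SVD recipe, not necessarily OpenCV's exact code):

```python
import numpy as np

def enforce_rank2(E):
    # Zeroing the smallest singular value yields the rank-2 matrix
    # closest to E in the Frobenius norm
    U, S, Vt = np.linalg.svd(E)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```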

Decomposition of the Essential matrix to find Projection matrices

Projection matrices define the transform between 3D points and image points. For a stereo camera system, it is convenient to define the pair of projection matrices, {P} and {P^\prime} as follows:

\displaystyle P = K [I | 0] \quad \textrm{and} \quad P^\prime = K^\prime [R | t]

where R and t are the rotation matrix and translation vector taking the coordinate frame of one camera to the other. In other words, we define the coordinate system with respect to one camera's center of projection: the center of projection is the origin and the positive z-axis points out into the field of view. This is why the first projection matrix is simply the camera matrix times the identity concatenated with a zero vector.

Argus estimates the essential matrix, E, and then retrieves the second projection matrix {P^\prime} from an SVD of E. According to Hartley and Zisserman, if the SVD of E is {E = U\left(\begin{array}{ccc} 1 & & \\ & 1 & \\ & & 0 \end{array}\right)V^T}, then there are four possibilities for {P^\prime}:

{P^\prime = [UWV^T | t]} or {P^\prime = [UWV^T | - t]} or {P^\prime = [UW^TV^T | t]} or {P^\prime = [UW^TV^T | - t]}

where t, the translation vector, is simply the third column of the matrix U, and {W = \left(\begin{array}{ccc} 0 & - 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{array}\right)}.

Argus chooses the correct projection matrix by reconstructing points with all four candidates and selecting the one which gives the most points with positive z, i.e. points that are in front of the first camera.
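
A numpy sketch of generating the four candidates from the SVD of E; the sign fix-ups for proper rotations are a common convention not spelled out above:

```python
import numpy as np

def candidate_projections(E):
    U, _, Vt = np.linalg.svd(E)
    # Ensure U and V^T correspond to proper rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = U[:, 2].reshape(3, 1)            # translation: third column of U
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt    # the two candidate rotations
    # The four [R | t] possibilities; the cheirality test picks one
    return [np.hstack([R, s * t]) for R in (R1, R2) for s in (1.0, -1.0)]
```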

Reconstruction through triangulation

Argus again uses OpenCV to reconstruct 3D points after estimating the projection matrices. For n cameras, Argus reconstructs the points for n − 1 camera pairs, where every pair contains the last camera. One camera must be contained in all the pairs to keep the coordinate system consistent. Argus takes the reconstructed points from all n − 1 pairs, normalizes them by dividing by the average distance from the origin, and averages them for the final estimate of a reconstructed point.

The linear system which OpenCV solves through the least-squares method is described by Hartley and Zisserman:

\displaystyle A X = 0

where:

\displaystyle A = \left(\begin{array}{c} xp^{3T} - p^{1T}\\ yp^{3T} - p^{2T}\\ {x^\prime}{p^\prime}^{3T} - {p^\prime}^{1T}\\ {y^\prime}{p^\prime}^{3T} - {p^\prime}^{2T} \end{array}\right)

where {p^{i T}} is the ith row of the projection matrix P and {{p^\prime}^{i T}} the ith row of {P^\prime}.
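
A direct numpy sketch of this linear triangulation (the same construction is available in OpenCV as `cv2.triangulatePoints`); `x1` and `x2` are corresponding points from the two views:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Build A from the rows of the two 3x4 projection matrices
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    # Least-squares solution of A X = 0: the right singular vector
    # associated with the smallest singular value
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]   # dehomogenize to a 3D point
```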

Finding offsets through cross-correlation of audio signals

Argus uses the cross-correlation of the audio samples from two videos to obtain their offset, i.e. the lag in time between when one video started and the other. To accomplish this in a computationally efficient way, Argus employs convolution of the two signals using the Fast Fourier Transform (FFT). The signals are assumed to be mono, and the audio sample rate must be known. An overview of the method follows.

Given two signals {S_1 (t)} and {S_2 (t)}, which are time series of audio samples, one can find the cross-correlation by employing the fact that it is equivalent to the convolution of the two series with time reversed for one:

\displaystyle S_1 \star S_2 = S_1 (- t) \ast S_2 (t)

One then makes significant computational gains by applying the convolution theorem:

\displaystyle \mathcal{F} \{ S_1 \star S_2 \} =\mathcal{F} (S_1 (- t)) \mathcal{F} (S_2 (t))

where {\mathcal{F}} denotes the Fourier transform. Finally, the cross-correlation function becomes:

\displaystyle S_1 \star S_2 =\mathcal{F}^{- 1} \{ \mathcal{F} (S_1 (- t)) \mathcal{F} (S_2 (t)) \}

Finding the offset is then simply a matter of choosing the lag associated with the maximum of {S_1 \star S_2}. For normalized signals, the ideal value of that maximum is 1, meaning the signals are perfectly correlated at that lag, but due to environmental noise and phase shifts this is never the case in practice.
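
A sketch of the offset computation under these definitions; `scipy.signal.fftconvolve` performs the FFT-based convolution here, and the details (windowing, normalization) of Argus's actual implementation may differ:

```python
import numpy as np
from scipy.signal import fftconvolve

def audio_offset(s1, s2, rate):
    # Cross-correlation as convolution with one signal time-reversed,
    # computed via the FFT
    xcorr = fftconvolve(s2, s1[::-1], mode='full')
    lag = np.argmax(xcorr) - (len(s1) - 1)   # lag in samples
    return lag / float(rate)                 # offset in seconds
```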

Sources used:

Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge: Cambridge UP, 2003. Print.

Bradski, Gary R., and Adrian Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. Sebastopol, CA: O’Reilly, 2008. Print.

Sur, Frédéric, Nicolas Noury, and Marie-Odile Berger. "Computing the Uncertainty of the 8 Point Algorithm for Fundamental Matrix Estimation." Proceedings of the 19th British Machine Vision Conference (BMVC 2008). Ed. Mark Everingham, Chris Needham, and Roberto Fraile. Leeds, September 2008. <http://www.comp.leeds.ac.uk/bmvc2008/proceedings/papers/269.pdf>
