
Multi-grid approach and optimization

The minimization of the cost function $ J$ is performed in nested subspaces:

$\displaystyle \mathcal{C}_{16}\subset\mathcal{C}_{8}\subset\mathcal{C}_{4}\subset\mathcal{C}_2\subset\mathcal{C}_1,$ (4.11)

where $ \mathcal{C}_q$ is the set of admissible displacement fields at the scale $ q$ : vector fields that are piecewise affine with respect to each space variable on squares of size $ q\times q$ pixels.
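
For concreteness, a field of $ \mathcal{C}_q$ is determined by its values at the control points of the $ q\times q$ grid, and evaluating it at every pixel amounts to a bilinear interpolation of these values. The following Python sketch is illustrative only (it relies on SciPy's RegularGridInterpolator and assumes image dimensions divisible by $ q$ ):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def evaluate_component(U, q, shape):
        # U: values of one displacement component at the control points of
        #    the q x q grid, i.e. an array of shape (h/q + 1, w/q + 1).
        # Bilinear interpolation is exactly "piecewise affine with respect
        # to each space variable" on every q x q square.
        ny, nx = U.shape
        interp = RegularGridInterpolator((np.arange(ny) * q, np.arange(nx) * q), U)
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        pts = np.stack([ys.ravel(), xs.ravel()], axis=-1).astype(float)
        return interp(pts).reshape(shape)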

The difference with hierarchical techniques from the optical flow literature (see e.g. [84,92]) is that we do not linearize the cost function. This should help to recover large displacements, for which the linearization of the luminance function is no longer valid.

The space $ \mathcal{C}_{16}$ is typically of small dimension, hence the minimization of $ J$ on $ \mathcal{C}_{16}$ is fast and robust when the zero vector field is used as initial guess. The optimal vector field obtained at a given scale, in the space $ \mathcal{C}_q$ , is then used as initial guess for the minimization at the finer scale, in the space $ \mathcal{C}_{q/2}$ . This process is repeated until an optimal solution is identified on the finest grid.
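
A minimal sketch of this coarse-to-fine loop could read as follows; the minimizer itself (the Gauss-Newton loop described next) is passed as a function, and prolongate realizes the injection $ \mathcal{C}_q\subset\mathcal{C}_{q/2}$ by inserting midpoint values of the bilinear interpolant:

    import numpy as np

    def prolongate(u, v):
        # Exact injection of a field of C_q into C_{q/2}: keep the control
        # values and insert midpoint values of the bilinear interpolant.
        def refine(a):
            b = np.zeros((2 * a.shape[0] - 1, 2 * a.shape[1] - 1))
            b[::2, ::2] = a
            b[1::2, ::2] = 0.5 * (a[:-1, :] + a[1:, :])
            b[::2, 1::2] = 0.5 * (a[:, :-1] + a[:, 1:])
            b[1::2, 1::2] = 0.25 * (a[:-1, :-1] + a[1:, :-1]
                                    + a[:-1, 1:] + a[1:, 1:])
            return b
        return refine(u), refine(v)

    def coarse_to_fine(I0, I1, minimize_J, scales=(16, 8, 4, 2, 1)):
        # minimize_J(I0, I1, u, v, q) is assumed to run the Gauss-Newton
        # loop described below and to return the optimal field in C_q.
        n = (I0.shape[0] // scales[0] + 1, I0.shape[1] // scales[0] + 1)
        u, v = np.zeros(n), np.zeros(n)          # zero field as initial guess
        for q in scales:
            u, v = minimize_J(I0, I1, u, v, q)   # optimum in C_q
            if q > 1:
                u, v = prolongate(u, v)          # initial guess for C_{q/2}
        return u, v                              # optimal field, finest grid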

All the optimizations of the nonlinear cost function are performed with a Gauss-Newton method. Given an initial guess $ (u^0,v^0)$ (a constant null field, in our experiments), the $ k$ -th iteration reads

$\displaystyle (u^{k},v^{k}):=(u^{k-1},v^{k-1})+(du^{k},dv^{k}),$ (4.12)

where $ (du^k,dv^k)$ solves

$\displaystyle (DF^TDF+\alpha S^TS)(du,dv)=-DF^TF-\alpha S^TS(u,v),$ (4.13)

where $ F=F(I_0,I_1;u^{k-1},v^{k-1})$ is the error, $ DF=DF(I_0,I_1;u^{k-1},v^{k-1})$ is the Jacobian matrix of the error, and $ S$ is the linear operator associated with the regularization term.
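
In matrix form, assembling the normal equations (4.13) from sparse representations of $ DF$ and $ S$ could be sketched as follows (assuming SciPy sparse matrices; the solver itself is discussed further below):

    import scipy.sparse as sp

    def gauss_newton_system(F, DF, S, uv, alpha):
        # F  : residual vector F(I_0, I_1; u^{k-1}, v^{k-1})
        # DF : sparse Jacobian of the residual at the current iterate
        # S  : sparse operator of the regularization term
        # uv : current displacement field (u^{k-1}, v^{k-1}), flattened
        StS = S.T @ S
        A = DF.T @ DF + alpha * StS            # left-hand side of (4.13)
        b = -DF.T @ F - alpha * (StS @ uv)     # right-hand side of (4.13)
        return A, b

The update (4.12) is then $ (u^k,v^k) = (u^{k-1},v^{k-1}) + (du^k,dv^k)$ with $ A\,(du,dv) = b$ .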

Another innovation of the present work is the efficient computation of the product $ DF^TDF$ of the Jacobian of the first term of the cost function (4.2) by its transpose. This efficiency comes from the observation that the matrix is sparse and can be assembled like a finite-element matrix, in a single loop over the data.

Let $ V\in\mathcal{C}_q$ be a vector field, and let $ ({\bf e}_i)$ be the canonical orthonormal basis of $ \mathcal{C}_q$ , consisting of vector fields that vanish at all but one control point, where the field is directed along the horizontal or vertical axis. The $ (k,l)^{\textrm{th}}$ coefficient of the matrix $ DF^TDF$ is

$\displaystyle (DF^TDF)_{k,l} = (DF^TDF {\bf e}_k\vert{\bf e}_l) = (DF {\bf e}_k\vert DF {\bf e}_l)_{L^2(\Omega)}.$ (4.14)

Since each elementary displacement $ {\bf e}_k$ is nonzero at only one control point, the matrix $ DF^TDF$ has a sparse structure. If we consider the following formulation of the Jacobian matrix:

$\displaystyle DF(u,v).d(x,y) = \nabla I_1(x+u(x,y),y+v(x,y)).d(x,y),$ (4.15)

then the matrix $ DF^TDF$ can be assembled like a finite-element matrix:
$\displaystyle DF^TDF$ $\displaystyle =$ $\displaystyle \sum_{k,l}(DF^TDF)_{k,l}\,{\bf e}_k\otimes {\bf e}_l$  
  $\displaystyle =$ $\displaystyle \sum_{k,l}\int_\Omega(\nabla I_1(x')\,\vert\,{\bf e}_k(x))\,(\nabla I_1(x')\,\vert\,{\bf e}_l(x))\,{\bf e}_k\otimes {\bf e}_l\ dx$  
  $\displaystyle =$ $\displaystyle \sum_{k,l}\sum_{R\in\mathcal{R}_q}\int_R(\nabla I_1(x')\,\vert\,{\bf e}_k(x))\,(\nabla I_1(x')\,\vert\,{\bf e}_l(x))\,{\bf e}_k\otimes {\bf e}_l\ dx$  
  $\displaystyle =$ $\displaystyle \sum_{R\in\mathcal{R}_q}\sum_{k,l}\int_R(\nabla I_1(x')\,\vert\,{\bf e}_k(x))\,(\nabla I_1(x')\,\vert\,{\bf e}_l(x))\,{\bf e}_k\otimes {\bf e}_l\ dx,$ (4.16)

where we write $ x'=x+V(x)$ , and where $ \mathcal{R}_q$ denotes the set of all squares of the $ q\times q$ grid. For each element of $ \mathcal{R}_q$ , only $ 8$ quantities of the form $ (\nabla I_1(x+V(x))\vert{\bf e}_k(x))$ have to be computed, so the matrix $ DF^TDF$ can be assembled in a single pass over the data. The vector field $ DF^TF$ can be assembled rapidly in a similar way, and the term $ S^TS$ is easy to compute.
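
In the spirit of a finite-element stiffness matrix, the element-wise assembly can be sketched as below. The helper local_inner_products and the index table loc2glob are hypothetical names introduced for illustration: the former returns, for each square, the per-pixel values of the $ 8$ quantities $ (\nabla I_1(x+V(x))\vert{\bf e}_k(x))$ (4 corners times 2 components), and the latter maps them to global indices in $ \mathcal{C}_q$ :

    import numpy as np
    import scipy.sparse as sp

    def assemble_DFtDF(local_inner_products, loc2glob, n_dof):
        # local_inner_products(r): (n_pixels, 8) array g for the r-th square
        #   of R_q, with g[x, k] = (grad I_1(x + V(x)) | e_k(x)).
        # loc2glob[r]: the 8 global indices of the local basis fields of r.
        rows, cols, vals = [], [], []
        for r in range(len(loc2glob)):           # a single pass over the data
            g = local_inner_products(r)          # (n_pixels, 8)
            local = g.T @ g                      # 8x8 element matrix (4.16)
            idx = loc2glob[r]
            rows.append(np.repeat(idx, 8))       # scatter to global indices
            cols.append(np.tile(idx, 8))
            vals.append(local.ravel())
        A = sp.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n_dof, n_dof))
        return A.tocsr()                         # duplicate entries are summed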

Finally, equation (4.13) is solved using a conjugate gradient method without preconditioning.
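
Continuing the sketch above, with $ A$ and $ b$ assembled as in the normal equations (4.13), this unpreconditioned solve can for instance be written with SciPy's conjugate gradient routine:

    import scipy.sparse.linalg as spla

    # A, b: normal equations (4.13), as assembled above.
    # Leaving the preconditioner at its default (M=None) matches the text.
    duv, info = spla.cg(A, b)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")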

