
Conclusion

The BFN algorithm appears to be a very promising data assimilation method. It is extremely easy to implement: no linearization of the model equations, no computation of the adjoint state, no optimization algorithm. The only work required is to add a relaxation term to the model equations. The key point of the backward integration is that the nudging term (whose sign is opposite to that of the forward integration) makes it numerically stable. The nudging (or relaxation) term therefore plays a double role: it forces the model towards the observations and it stabilizes the numerical integration. It acts simultaneously as a penalization term and as a regularization term.
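The forward and backward relaxation described above can be sketched on a toy problem. This is a minimal illustration under stated assumptions (a scalar linear model dx/dt = -x, explicit Euler stepping, synthetic noise-free observations), not the configurations studied in this work: the forward pass adds a nudging term K(y_obs - x), and the backward pass flips its sign so that the relaxation remains attractive in reversed time, which stabilizes the backward integration.

```python
import math

# Toy BFN sketch (illustrative assumptions: scalar model dx/dt = -x,
# explicit Euler, exact observations on the whole assimilation window).
# Forward pass:  dx/dt = f(x) + K*(y_obs - x)
# Backward pass: dx/dt = f(x) - K*(y_obs - x), integrated from t=T back to 0.
# The sign flip keeps the nudging term attractive in reversed time, which is
# what makes the backward integration numerically stable.

def f(x):
    return -x  # toy model dynamics

def bfn(y_obs, dt, K, x0_guess, n_iters):
    N = len(y_obs) - 1
    x0 = x0_guess
    for _ in range(n_iters):
        # Forward nudged integration from the current initial guess.
        x = x0
        for n in range(N):
            x = x + dt * (f(x) + K * (y_obs[n] - x))
        # Backward nudged integration, starting from the forward final state.
        for n in range(N, 0, -1):
            x = x - dt * (f(x) - K * (y_obs[n] - x))
        x0 = x  # updated estimate of the initial condition
    return x0

# Synthetic observations from the true trajectory x(t) = exp(-t), x0_true = 1.
dt, N, K = 0.01, 100, 10.0
y_obs = [math.exp(-n * dt) for n in range(N + 1)]
x0_est = bfn(y_obs, dt, K, x0_guess=0.0, n_iters=2)
print(x0_est)  # close to the true initial state 1.0
```

Note that each forward/backward sweep contracts the initial-condition error, so only a few iterations are needed here; no adjoint model or optimization routine appears anywhere in the sketch.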

The BFN algorithm has been compared with the variational method on several types of non-linear and turbulent systems. The conclusion of these experiments is that the BFN algorithm outperforms the variational method for the same number of iterations (and hence for the same computing time), and it converges in a small number of iterations. The initial condition is admittedly identified less accurately by the BFN scheme; on the other hand, the final state of the assimilation period is identified much better by the BFN algorithm than by the variational algorithm, which is a key point for the prediction phase that starts at the end of the assimilation period. Hence predictions are usually better when the assimilation period has been treated by the BFN algorithm rather than by a variational assimilation method.

The two algorithms can also be combined: performing several BFN iterations before switching to the variational method considerably accelerates the convergence of the latter. Moreover, the BFN algorithm makes it possible to handle imperfect models at no additional cost: the model equations are not strong constraints in this nudging method (whereas they usually are in a variational method), and the relaxation term can be seen as a model error term.

Finally, several theoretical results explain and justify the efficiency of this algorithm in simple situations.

The main perspective is the improvement of the determination of the nudging coefficients (or matrices), in particular through a numerical stability study of the backward integration. Such a study would provide the optimal nudging coefficients that keep the backward integration stable, while preserving the extreme simplicity of the algorithm.
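To give a flavour of what such a stability study involves, here is a toy calculation under assumptions of my own (scalar linear model dx/dt = -a x, explicit Euler with step dt), not the analysis envisaged in the text. For this model, the backward nudged Euler step multiplies the error by 1 + dt(a - K) at each step, so the backward integration is contracting whenever a < K < a + 2/dt:

```python
# Toy stability check for the backward nudged integration (illustrative
# assumptions: scalar model dx/dt = -a*x, explicit Euler, time step dt).
# Backward nudged step: x_prev = x - dt*(f(x) - K*(y_obs - x)),
# whose error is multiplied by 1 + dt*(a - K) at each step.

def backward_contraction(a, K, dt):
    """Per-step error multiplier of the backward nudged Euler scheme."""
    return abs(1.0 + dt * (a - K))

a, dt = 1.0, 0.01
# Without nudging (K = 0) the backward integration amplifies errors...
assert backward_contraction(a, 0.0, dt) > 1.0
# ...while any K in the window a < K < a + 2/dt makes it contracting.
stable_Ks = [K for K in range(0, 300, 10)
             if backward_contraction(a, K, dt) < 1.0]
print(stable_Ks[0], stable_Ks[-1])  # endpoints of the stable sample of K
```

The continuous-time condition is simply K > a; the upper bound K < a + 2/dt is an artefact of the explicit discretization. A genuine stability study for the systems considered in this work would of course involve the full (matrix-valued) dynamics rather than this scalar caricature.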

