
Introduction

In this work, several problems have been studied with the common goal of providing robust, fast, and powerful algorithms that are particularly easy to implement. The efficiency of the algorithms is required by the operational context of the methods, and by the need to process ever more data in ever less time. The other constraint that we particularly took into account is the ease of use and implementation of the methods we have developed.

In chapter 2, we will tackle various image processing problems with an approach that is original in this field: topological asymptotic analysis, or more simply the topological gradient.

Interest in image processing has recently been renewed by new applications in telecommunications and medicine: on one hand, new technologies in telecommunications and information broadcasting now involve sending and receiving massive flows of digital data (e.g. images); on the other hand, huge progress has been made in the medical world, in particular in the early detection of tumors, thanks to increasingly powerful imaging techniques.

Our study is motivated by several observations. First, the topological gradient is generally used for structural mechanics, design, and shape optimization problems. It has also been successfully applied in electromagnetism to the detection of cracks or hidden objects. However, many image processing problems rely on the accurate identification of a subset of the image, for instance edges or characteristic objects. This common feature seemed interesting to us, and allowed us to adapt the topological gradient method, initially used for crack detection, to several image processing problems (restoration, classification, segmentation, inpainting).
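To fix ideas, recall the generic form of a topological asymptotic expansion (a sketch with standard notation; the exact cost function $j$ and the power of $\varepsilon$ depend on the problem and on the type of perturbation, e.g. a small hole or crack of size $\varepsilon$ inserted around a point $x_0$ of the domain $\Omega$):
\[
j\big(\Omega \setminus \overline{B(x_0,\varepsilon)}\big) = j(\Omega) + \varepsilon^2\, g(x_0) + o(\varepsilon^2).
\]
The function $g$ is the topological gradient: perturbing the domain where $g$ is most negative yields the largest decrease of the cost function. In image processing, the points where $g$ is most negative typically correspond to the subset of interest, for instance the edges of the image.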

The second interesting aspect is the speed of the method. In various fields, topological asymptotic analysis has made it possible to obtain good results very quickly. Medical imaging and audiovisual broadcasting (e.g. satellite television or internet broadcasting) both require the processing time to be negligible: if it is too long, it delays the medical diagnosis or the flow of data. It is thus important to build extremely fast schemes for solving these various problems, running in real time for videos and in negligible time (e.g. under one second) for still images.

As we will see hereafter, the topological gradient method adapts remarkably well to image processing problems, allowing us to obtain very interesting results at a particularly low computational cost.

In chapter 3, we will study data assimilation for environmental and geophysical problems, more particularly within the framework of atmospheric and oceanic observations. For several years, one of the major concerns has been to appreciably improve our knowledge of these turbulent systems, with the ultimate goal of predicting their evolution with high reliability.

Several different challenges appear in data assimilation: short-range (e.g. a few days) weather forecasting, the study of global warming and climate change, the detection of extreme climatic phenomena several weeks in advance, and so on. For all these problems, the goals are broadly similar: to estimate, quickly and with a very high degree of accuracy, the state of a turbulent system from the combined knowledge of models and data: on one hand, mathematical equations modeling the coupled atmosphere-ocean system, and on the other hand, observations of various kinds (e.g. in situ or satellite measurements), corresponding to various physical quantities.
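For reference, the classical variational approach (4D-Var) mentioned below combines model and data by minimizing, with respect to the initial state $x_0$, a cost function of the following generic form (a sketch with assumed standard notation, not tied to a particular system):
\[
J(x_0) = \frac{1}{2}\,(x_0 - x_b)^T B^{-1} (x_0 - x_b)
 + \frac{1}{2} \sum_{i=0}^{N} \big(y_i - H_i(x(t_i))\big)^T R_i^{-1} \big(y_i - H_i(x(t_i))\big),
\]
where $x_b$ is a background estimate of the state, $y_i$ are the observations at times $t_i$, $H_i$ are the observation operators, $x(t_i)$ is the model trajectory initialized at $x_0$, and $B$ and $R_i$ are the background and observation error covariance matrices.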

Beyond the extreme size of the problem to be solved (several billion values to be identified from hundreds of millions of observations) and the computational time needed to solve it, another factor comes into play: the cost of developing and using a data assimilation method. At present, it is extremely difficult to implement such a method, even on a relatively simple problem. This motivated us to study the possibility of improving one of the simplest data assimilation methods, nudging (also known as Newtonian relaxation), in order to obtain much better results without complicating the method.
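Standard nudging simply adds to the model equations a feedback term proportional to the misfit between the model state and the observations (a sketch with assumed notation: $X$ the state, $F$ the model, $X_{\mathrm{obs}}$ the observations, $H$ the observation operator, $K$ a gain matrix):
\[
\frac{dX}{dt} = F(X) + K\big(X_{\mathrm{obs}} - H(X)\big), \qquad 0 \le t \le T.
\]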

By applying the nudging method to the backward (in time) problem, we noted that it is possible to stabilize the backward system, which is unstable because of the irreversibility of the physical problem. Thus, as detailed in chapter 3, we can go back in time and obtain a more reliable estimate of the system state at a previous time, from which forecasts may be deduced. By alternately and repeatedly applying the standard nudging method to the forward and backward models, we obtain an iterative algorithm that is very easy to implement and provides markedly better results than standard nudging. Indeed, the results are of a quality similar to those of the standard variational data assimilation method, and are often obtained much more quickly.
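Schematically, one iteration of this back-and-forth scheme reads as follows (same assumed notation, with $K'$ the backward gain; the second equation is integrated backwards, from $t=T$ down to $t=0$):
\[
\frac{dX}{dt} = F(X) + K\big(X_{\mathrm{obs}} - H(X)\big), \qquad X(0) = \tilde{X}(0),
\]
\[
\frac{d\tilde{X}}{dt} = F(\tilde{X}) - K'\big(X_{\mathrm{obs}} - H(\tilde{X})\big), \qquad \tilde{X}(T) = X(T),
\]
where $\tilde{X}(0)$ comes from the previous backward sweep (a first guess at the first iteration). The sign of the backward feedback term is reversed so that, when integrating from $T$ to $0$, it stabilizes the otherwise unstable backward model; the iterations are typically stopped when the trajectories no longer change significantly.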

Chapter 4 presents a study at the interface of these two fields: the assimilation of images. At present, a huge quantity of observations contained in satellite images is essentially unused for improving our knowledge of the system state. Yet sequences of satellite images clearly show various characteristic structures (hurricanes, swirls, currents of hot water, pollution, and so on) moving and evolving in time.

Several approaches can be considered to solve this kind of problem, and we chose to identify and extract velocity fields from the image sequences. This appeared to us to be the most suitable choice for rapidly extracting conventional data (i.e. data directly related to the model variables) that can then be used in a standard assimilation system.

The idea that we develop in chapter 4 is based on the constant brightness assumption, which consists of looking for a displacement field that transports one image onto another. The originality of our approach lies in the nonlinearization of the cost function to be minimized, combined with a fast method for assembling the Jacobian matrix. Finally, a multi-grid approach makes it possible to guarantee the quality of the minimum. Thanks to all these techniques, we are able to extract complete velocity fields in a very short time, and we can also provide a quality estimate of the identified fields, which can be viewed as error statistics for these pseudo-observations within the framework of data assimilation.
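In its simplest form (a sketch with assumed notation: $I_0$ and $I_1$ two successive images, $v$ the unknown displacement field), the constant brightness assumption and the corresponding nonlinear least-squares cost function read:
\[
I_1\big(x + v(x)\big) = I_0(x), \qquad
J(v) = \frac{1}{2} \int_\Omega \big| I_1\big(x + v(x)\big) - I_0(x) \big|^2 \, dx,
\]
possibly complemented by a regularization term on $v$. The classical optical flow constraint, $\nabla I \cdot v + \partial_t I = 0$, is precisely the linearization of this assumption for small displacements; keeping the nonlinear form avoids the corresponding small-displacement restriction.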

Finally, some general conclusions and research perspectives are given in chapter 5.

