Two way parallelization of data assimilation algorithms in OpenDA

From Master Projects


About Two way parallelization of data assimilation algorithms in OpenDA

  • This project has been fulfilled.
  • This project fits in the following master areas: High Performance Distributed Computing, Computational Intelligence and Selforganisation, Parallel and Distributed Computer Systems


Description

OpenDA is a generic software framework for data assimilation and model calibration techniques (www.openda.org). Data assimilation methods incorporate observations into a dynamical computer model of a real system in order to improve the quality of its predictions. The dynamical models used in operational forecasting can be very computationally demanding, and the amount of computation in a data assimilation application built around such a model can be orders of magnitude higher. Parallel computing is therefore necessary to reduce the computational time to a usable level and to distribute the large amounts of data over multiple computers.

This project focuses on the parallelization of ensemble-based algorithms, in which an ensemble of model realizations is used to represent the model error statistics. The challenging aspect of the project is the form of parallelism: a scalable solution requires a two-way parallelization. The model runs in each cycle of an ensemble-based method are best parallelized by running the various model realizations at the same time, while the update step, in which the observations are combined with the model results, needs to be parallelized using domain decomposition. This two-way parallelization should be realized by adding generic parallel building blocks to OpenDA. The idea is that these building blocks allow OpenDA algorithms to be developed by programmers with little or no knowledge of parallel computing. The newly implemented building blocks can then be applied to a large ensemble-based data assimilation case study with the SWAN model (http://www.swan.tudelft.nl).
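
To make the intended two-level structure concrete, the following is a minimal sketch (not OpenDA code) of how such a layout could be organised with MPI, written here with mpi4py: the global communicator is split once per ensemble member for the forecast step, and once per subdomain to gather the ensemble statistics needed in the update step. The names run_model_step and local_update, the ensemble size, and the update formula are hypothetical placeholders.

 # Hypothetical sketch of the two-level parallel layout described above,
 # using mpi4py. run_model_step and local_update are placeholders, not
 # OpenDA API calls.
 from mpi4py import MPI
 import numpy as np
 
 world = MPI.COMM_WORLD
 rank = world.Get_rank()
 size = world.Get_size()
 
 N_ENSEMBLE = 4                       # number of ensemble members (assumed)
 assert size % N_ENSEMBLE == 0
 procs_per_member = size // N_ENSEMBLE
 
 # Level 1: ensemble parallelism -- each colour propagates one model realization.
 member = rank // procs_per_member
 member_comm = world.Split(color=member, key=rank)
 
 # Level 2: domain decomposition -- each rank inside member_comm owns a subdomain.
 subdomain = member_comm.Get_rank()
 
 def run_model_step(member, subdomain):
     """Placeholder: propagate one subdomain of one ensemble member."""
     return np.random.rand(100)       # local part of the state vector
 
 def local_update(local_state, local_ensemble_mean):
     """Placeholder: domain-local analysis (update) step."""
     return local_state + 0.1 * (local_ensemble_mean - local_state)
 
 # Forecast step: all members advance their (distributed) model at the same time.
 local_state = run_model_step(member, subdomain)
 
 # Update step: ranks owning the same subdomain in different members form a
 # third communicator and exchange ensemble statistics per subdomain.
 stats_comm = world.Split(color=subdomain, key=member)
 local_mean = np.zeros_like(local_state)
 stats_comm.Allreduce(local_state, local_mean, op=MPI.SUM)
 local_mean /= N_ENSEMBLE
 
 local_state = local_update(local_state, local_mean)

A generic parallel building block in OpenDA would essentially hide this communicator bookkeeping from the algorithm developer.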


The main research questions are:

- Can parallelization be done transparently for the user?

- Can we achieve the required scalability?

- What is the best communication paradigm for implementing the parallelization?


This MSc project is an initiative of the eScience Center (www.esciencecenter.nl) and VORtech (www.vortech.nl). The student will work in Amsterdam at the eScience Center (preferred) or in Delft at VORtech.