Équipe ICPS - Informatique et Calcul Parallèle Scientifique

Jobs

Post-doc offers

No offers at the moment.

Internship offers

No offers at the moment.

Doctoral contract offers

Adaptive Program Optimization (see Media:SujetTheseIGGDeformation.pdf). Supervisor: C. Bastoul

Abstract: Compute-intensive applications now range from image processing on smartphones to large simulations on supercomputers. A significant part of their development effort is devoted to ensuring the computation fits within a finite budget of time, space, or energy. Given the complexity of modern architectures, writing such applications typically requires developers to design a sequential program for algorithmic tuning and debugging purposes, and then to create an optimized and parallelized version to scale to the actual problem size and to exploit the target architecture. To minimize development time, automated approaches exist and give good results at vectorizing and extracting thread-level parallelism for some classes of very large loops. However, these techniques were historically designed for precise, long-running computations on supercomputers with well-known characteristics, and they are not well suited to the new range of applications on mainstream parallel architectures, including short and approximate computations, which may run on very different devices while being compiled only once. The goal of this PhD proposal is to address this issue by researching, designing, and evaluating new compiler techniques for automatic optimization and parallelization with adaptive capabilities, building on domain-specific knowledge about the application and on state-of-the-art program optimization techniques.
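
For illustration only, and not part of the original offer: the sketch below shows the kind of gap the abstract describes, a sequential loop written for tuning and debugging next to a hand-optimized counterpart that adds thread-level parallelism and vectorization through OpenMP pragmas. The saxpy kernel and all names are hypothetical; the proposed research targets compiler techniques that would produce such optimized versions automatically and adaptively.

    /* Illustrative sketch only; hypothetical kernel, not taken from the offer.
       Build with, e.g.: gcc -O3 -fopenmp -c saxpy.c */
    #include <stddef.h>

    /* Sequential reference version, convenient for algorithmic tuning and debugging. */
    void saxpy_seq(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* Hand-optimized counterpart: thread-level parallelism plus SIMD vectorization,
       the kind of version automated approaches aim to generate from the loop above. */
    void saxpy_par(size_t n, float a, const float *x, float *y)
    {
        #pragma omp parallel for simd schedule(static)
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

The pragma keeps the sequential code readable while delegating the parallel schedule and vector lanes to the compiler and runtime, which is precisely the kind of transformation the envisioned adaptive techniques would apply without manual annotation.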


Other offers

No offers at the moment.