Équipe ICPS - Informatique et Calcul Parallèle Scientifique

Jobs

'''PhD offer & Post-doc positions'''
Positions may be available; please drop us an e-mail.
'''Master internship'''
If you would like to do a master internship in the group, please drop us an e-mail.
 
== Doctoral Contract Offers 2015 ==
 
* [[Media:Bastoul_sujet_contrat_doctoral.pdf|Adaptive Program Optimization]] <br />'''Supervisor:''' C. Bastoul <br />'''Abstract:'''<br />Compute-intensive applications now range from image processing on smartphones to large simulations on supercomputers. A significant part of their development effort is devoted to completing the computation within a finite budget of time, space, or energy. Given the complexity of modern architectures, writing such applications typically requires developers to design a sequential program for algorithmic tuning and debugging purposes, and then to create an optimized, parallelized version that scales to the actual problem size and exploits the target architecture. To minimize development time, automated approaches exist and give good results at vectorizing and extracting thread-level parallelism from some classes of very large loops. However, these techniques were historically designed for precise, long-running computations on supercomputers with well-known characteristics, and they are poorly suited to the new range of applications on mainstream parallel architectures, including short and approximate computations that may run on very different devices while being compiled only once. The goal of this PhD proposal is to address this issue by researching, designing, and evaluating new compiler techniques for automatic optimization and parallelization with adaptive capabilities, building on domain-specific knowledge about the application and on state-of-the-art program optimization techniques.
 

Latest revision as of 15:40, 3 January 2017
