DESCRIPTION:
Modern numerical simulation codes, especially those combining classical scientific computing and machine learning components, rely on a large number of hyperparameters that strongly impact accuracy, stability, and computational efficiency. These parameters are often tuned manually using empirical rules and expert intuition, an approach that becomes increasingly inadequate as code complexity grows. This internship aims to explore the use of large language models (LLMs) as optimization agents to assist and partially automate hyperparameter selection in hybrid simulation codes. The LLM interacts with a numerical solver for a convection-diffusion equation discretized with finite differences and coupled with a neural-network-based preconditioner. At each iteration, the LLM proposes a new set of parameters, simulations are executed, and performance metrics (convergence, stability, computational cost, accuracy) are summarized into a reward signal. Guided by this feedback, the LLM iteratively refines its proposals.
The work will first focus on the hyperparameters of the numerical solver and of the learning components, and may later be extended toward constrained AutoML for the neural network architecture. Throughout the process, certification criteria are enforced to ensure that the numerical behavior of the solver is not degraded. The convection-diffusion problem serves as a simple testbed for establishing a robust methodology before targeting more complex simulation codes.
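A minimal sketch of this feedback loop, in Python, is given below. It assumes a hypothetical `llm_propose` callable wrapping the language model API and a hypothetical `run_simulation` wrapper around the solver; the search-space bounds, reward weights, and certification penalty are illustrative placeholders, not project specifications.

```python
import json

# Hypothetical search space for solver and learning hyperparameters (illustrative bounds).
SEARCH_SPACE = {
    "dt": (1e-4, 1e-1),           # time step
    "krylov_tol": (1e-10, 1e-4),  # linear-solver tolerance
    "lr": (1e-5, 1e-2),           # preconditioner learning rate
    "hidden_width": (8, 128),     # preconditioner network width
}

def reward(metrics):
    """Collapse solver metrics into one scalar; the weights are illustrative."""
    return (
        -1.0 * metrics["error"]        # accuracy
        - 0.1 * metrics["iterations"]  # convergence effort
        - 0.01 * metrics["wall_time"]  # computational cost
        - 100.0 * metrics["diverged"]  # certification guard: penalize degraded runs
    )

def optimize(llm_propose, run_simulation, n_rounds=20):
    """Feedback-guided prompting: the LLM sees the history of (params, reward) pairs."""
    history = []
    for _ in range(n_rounds):
        prompt = (
            "Propose solver hyperparameters as a JSON object within these bounds:\n"
            f"{json.dumps(SEARCH_SPACE)}\n"
            f"Previous trials as (params, reward) pairs:\n{json.dumps(history)}"
        )
        params = json.loads(llm_propose(prompt))  # LLM suggests a new configuration
        metrics = run_simulation(params)          # run the convection-diffusion solver
        history.append((params, reward(metrics)))
    return max(history, key=lambda t: t[1])       # best (params, reward) found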
Main activities
* State-of-the-art review of optimization with LLMs
* Construction of a simple 1D finite-difference code with a neural-network-based preconditioner (see the solver sketch after this list)
* Design of metrics for solver evaluation (see the evaluation sketch after this list)
* Optimization of the hyperparameters by the LLM through feedback-guided prompting
* Extension to constrained AutoML for the neural network architecture, driven by the LLM
* Validation of the methodology on the convection-diffusion testbed
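As a sketch of the finite-difference construction activity, the snippet below (under assumed discretization choices) assembles a steady 1D convection-diffusion operator with central differences and solves it with SciPy's GMRES. The `apply_preconditioner` hook stands in for the neural-network preconditioner; a plain Jacobi preconditioner is used here as a placeholder.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres

def assemble_convection_diffusion(n=200, nu=0.01, c=1.0):
    """Steady 1D convection-diffusion  c*u_x - nu*u_xx = f  on (0, 1),
    homogeneous Dirichlet boundary conditions, central differences, uniform grid."""
    h = 1.0 / (n + 1)
    diffusion = (nu / h**2) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    convection = (c / (2.0 * h)) * sp.diags([-1.0, 0.0, 1.0], [-1, 0, 1], shape=(n, n))
    A = (diffusion + convection).tocsr()
    f = np.ones(n)                          # constant source term
    return A, f

def solve(A, f, apply_preconditioner=None):
    """GMRES solve; `apply_preconditioner` maps a residual to a correction,
    which is where a learned (neural-network) preconditioner would plug in."""
    M = None
    if apply_preconditioner is not None:
        M = LinearOperator(A.shape, matvec=apply_preconditioner)
    return gmres(A, f, M=M)                 # returns (solution, info)

if __name__ == "__main__":
    A, f = assemble_convection_diffusion()
    jacobi = lambda r: r / A.diagonal()     # placeholder for the NN preconditioner
    u, info = solve(A, f, apply_preconditioner=jacobi)
    print("converged" if info == 0 else f"gmres info = {info}")
```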
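For the metric-design activity, one possible evaluation routine is sketched below. It collects accuracy, iteration count, wall time, and a divergence flag, matching the metric keys assumed in the reward sketch above; the reference solution `u_reference` is assumed to come from a trusted fine-grid or direct solve.

```python
import time
import numpy as np
from scipy.sparse.linalg import gmres

def evaluate(A, f, u_reference, M=None):
    """Collect the quantities fed into the reward: accuracy, convergence effort,
    computational cost, and a stability/certification flag."""
    iterations = 0
    def count(_):                           # counts Krylov iterations
        nonlocal iterations
        iterations += 1

    start = time.perf_counter()
    u, info = gmres(A, f, M=M, callback=count, callback_type="pr_norm")
    wall_time = time.perf_counter() - start

    diverged = (info != 0) or (not np.all(np.isfinite(u)))
    rel_error = np.linalg.norm(u - u_reference) / np.linalg.norm(u_reference)
    return {
        "error": float(rel_error),
        "iterations": iterations,
        "wall_time": wall_time,
        "diverged": float(diverged),        # certification guard: a diverged run is never accepted
    }
```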
Part-time / Full-time: Full-time
Contract type: Internship / Recent graduate
Skills: Artificial Neural Networks, Computer Simulation, Python (Programming Language), Machine Learning, NumPy, Scientific Computing, SciPy, PyTorch, Large Language Models, Architecture, Automated Systems, Performance Management, Numerical Analysis, Partial Differential Equations, Simulations, Testbed, Metrics
Email: emmanuel.franck@inria.fr
Phone: 0139635511
Advertiser type: Direct employer