A reinforcement learning strategy for p-adaptation in high order solvers

Huergo Perea, David ORCID: https://orcid.org/0009-0008-9091-5824, Rubio Calzado, Gonzalo ORCID: https://orcid.org/0000-0002-6231-4801 and Ferrer Vaccarezza, Esteban ORCID: https://orcid.org/0000-0003-1519-0444 (2024). A reinforcement learning strategy for p-adaptation in high order solvers. "Results in Engineering", v. 21; p. 101693. ISSN 2590-1230. https://doi.org/10.1016/j.rineng.2023.101693.

Description

Title: A reinforcement learning strategy for p-adaptation in high order solvers
Author(s):
Document Type: Article
Journal/Publication Title: Results in Engineering
Date: 1 March 2024
ISSN: 2590-1230
Volume: 21
Subjects:
Informal Keywords: High-order discontinuous Galerkin; Mesh adaptation; P-adaptation; Proximal Policy Optimization; Reinforcement Learning
School: E.T.S. de Ingeniería Aeronáutica y del Espacio (UPM)
Department: Matemática Aplicada a la Ingeniería Aeroespacial
Creative Commons License: Attribution - NoDerivatives - NonCommercial

Full Text

PDF (10249070.pdf) - Download (1MB)

Abstract

Reinforcement learning (RL) has emerged as a promising approach to automating decision processes. This paper explores the application of RL techniques to optimise the polynomial order in the computational mesh when using high-order solvers. Mesh adaptation plays a crucial role in improving the efficiency of numerical simulations by increasing accuracy while reducing the cost. Here, actor-critic RL models based on Proximal Policy Optimization offer an approach for agents to learn optimal mesh modifications based on evolving conditions. The paper provides a strategy for p-adaptation in high-order solvers and includes insights into the main aspects of RL-based mesh adaptation, including the formulation of appropriate reward structures and the interaction between the RL agent and the simulation environment. The proposed strategy does not require a high-fidelity solution during the training process and the formulation is general for any computational mesh and partial differential equation (PDE), solved in a discontinuous Galerkin solver. We discuss the impact of RL-based mesh p-adaptation on computational efficiency and accuracy. We apply the RL p-adaptation strategy to a one-dimensional inviscid Burgers' equation, focusing our analysis on smooth solutions of the equation to showcase its effectiveness. The RL strategy reduces the computational cost and improves accuracy over uniform adaptation, while minimising human intervention.
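The abstract describes an agent that selects the polynomial order per mesh element so as to balance accuracy against computational cost. The paper's actual reward formulation and PPO agent are not reproduced here; the sketch below is a hypothetical illustration of that trade-off, assuming a spectral-type error decay with order p and a cubic cost scaling for the DG operators (both models, and the names `reward` and `greedy_order`, are illustrative assumptions, not the authors' formulation).

```python
# Illustrative sketch (NOT the paper's formulation) of a reward that
# trades off estimated local error against computational cost when an
# agent picks the polynomial order p for a mesh element.

def reward(p, smoothness, h=0.1, lam=1e-4):
    """Reward for choosing order p on an element of size h.

    smoothness: assumed local error-indicator magnitude (large = rough).
    Error model (assumption): err ~ smoothness * h**(p+1), i.e. faster
    decay with p where the solution is smooth.
    Cost model (assumption): cost ~ (p+1)**3, a typical DG operator scaling.
    """
    err = smoothness * h ** (p + 1)
    cost = (p + 1) ** 3
    return -(err + lam * cost)

def greedy_order(smoothness, orders=range(1, 6)):
    """Order maximising the reward for this element state."""
    return max(orders, key=lambda p: reward(p, smoothness))

# Smooth regions should be assigned low orders, rough regions high ones.
smooth_p = greedy_order(0.01)
rough_p = greedy_order(100.0)
```

In a full PPO setup, `reward` would drive the policy update while the element states (e.g. local error indicators) form the observations; the greedy rule above stands in for a trained policy purely to show the intended adaptation behaviour.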

Associated Projects

Type                  Code                               Acronym        PI             Title
Gobierno de España    PID2022-137899OB-I00               Not specified  Not specified  Not specified
Comunidad de Madrid   APOYO-JOVENES-21-53NYUB-19-RRX1A0  Not specified  Not specified  Not specified

More Information

Record ID: 86484
DC Identifier: https://oa.upm.es/86484/
OAI Identifier: oai:oa.upm.es:86484
Scientific Portal URL: https://portalcientifico.upm.es/es/ipublic/item/10249070
DOI: 10.1016/j.rineng.2023.101693
Official URL: https://www.sciencedirect.com/science/article/pii/...
Deposited by: iMarina Portal Científico
Deposited on: 22 Jan 2025 09:47
Last Modified: 22 Jan 2025 09:47