Enhancing Physics-Informed Neural Networks with Domain-aware Fourier Features: Towards Improved Performance and Interpretable Results

Domain-aware Fourier Features (DaFFs) are a novel positional encoding technique that embeds a physical domain's geometry and boundary conditions directly into Physics-Informed Neural Networks (PINNs). The method removes the need for complex loss balancing, reduces computational cost, and yields more interpretable models through an adapted Layer-wise Relevance Propagation (LRP) framework. DaFFs produce physically consistent attribution maps, making PINNs both faster and more trustworthy than vanilla or Random Fourier Feature (RFF) approaches.


Domain-Aware Fourier Features: A Breakthrough for Training and Interpreting Physics-Informed Neural Networks

A novel method for training Physics-Informed Neural Networks (PINNs) promises to solve two of the field's most persistent challenges: difficult optimization and poor interpretability. Researchers have introduced Domain-aware Fourier Features (DaFFs), a new positional encoding technique that embeds the physical domain's geometry and boundary conditions directly into the network's input layer. This innovation eliminates the need for complex loss balancing and explicit boundary condition penalties, leading to faster training, lower computational costs, and, crucially, models that are easier to explain using advanced attribution methods.

Overcoming PINN Training Bottlenecks with Intelligent Encoding

PINNs integrate scientific knowledge by adding the residuals of governing partial differential equations (PDEs) to the model's loss function. However, this fusion often yields a hard-to-optimize objective that requires careful tuning of loss-term weights and explicit enforcement of boundary conditions. The newly proposed DaFFs tackle this by moving domain-specific information from the loss function into the input encoding. Unlike standard Random Fourier Features (RFFs), whose frequencies are sampled at random, DaFFs are deterministically constructed from the problem's domain, encapsulating its specific characteristics upfront.
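To make the contrast concrete, here is a minimal PyTorch sketch. The RFF encoder samples its projection matrix at random, while the domain-aware encoder below fixes its frequencies from the domain itself: on a 1D interval [0, L] with homogeneous Dirichlet conditions, every sin(kπx/L) feature vanishes at both endpoints by construction. This interval example, and all function names in it, are illustrative assumptions; the paper's actual DaFF construction is not reproduced here.

```python
import torch

def random_fourier_features(x: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # Standard RFFs: B is sampled once from N(0, sigma^2 I), independent
    # of the domain's geometry or boundary conditions.
    proj = 2 * torch.pi * x @ B                # (N, d) @ (d, m) -> (N, m)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

def domain_aware_features(x: torch.Tensor, L: float = 1.0, num_modes: int = 8):
    # Hypothetical domain-aware encoding for the interval [0, L]: the
    # frequencies k*pi/L are fixed deterministically by the domain, and
    # every sin(k*pi*x/L) feature vanishes at x = 0 and x = L, so the
    # boundary structure lives in the inputs rather than in a loss penalty.
    k = torch.arange(1, num_modes + 1, dtype=x.dtype)
    return torch.sin(x * k * torch.pi / L)     # (N, 1) * (m,) -> (N, m)

x = torch.rand(16, 1)                          # collocation points in [0, 1]
rff = random_fourier_features(x, torch.randn(1, 8))
daff = domain_aware_features(x)                # each feature is zero at x=0 and x=L
```

The key distinction is where the domain knowledge enters: the RFF matrix `B` knows nothing about the problem, whereas the deterministic frequencies are derived entirely from the domain description.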

This architectural shift has a profound impact on the training process. By pre-encoding the domain's physical constraints, the neural network no longer needs to learn them from penalized loss terms. This simplifies the optimization landscape, reduces the number of hyperparameters to tune, and cuts the computational overhead associated with calculating complex boundary losses. The result is a more streamlined and efficient path to an accurate solution.
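The practical effect on the objective can be sketched as follows, continuing the toy 1D Poisson problem -u''(x) = f(x) from above (an illustrative assumption, as are names like `pde_residual_loss` and `lambda_bc`). A vanilla PINN minimizes a weighted sum of a PDE residual term and a boundary penalty; once the encoding carries the boundary information, the objective reduces to the residual term alone and the weight drops out of the hyperparameter search.

```python
import torch
from torch import nn

class PINN(nn.Module):
    def __init__(self, encoder, width: int = 64):
        super().__init__()
        self.encoder = encoder
        m = encoder(torch.zeros(1, 1)).shape[-1]   # infer feature dimension
        self.net = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(self.encoder(x))

def pde_residual_loss(model, x, f):
    # Residual of the toy problem -u''(x) = f(x) at interior points.
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return ((-d2u - f(x)) ** 2).mean()

# Vanilla PINN objective: two terms whose balance must be hand-tuned.
#   loss = pde_residual_loss(model, x_int, f) + lambda_bc * boundary_loss(model)
# With a boundary-aware encoding, the penalty and its weight drop out:
#   loss = pde_residual_loss(model, x_int, f)
```

With only one loss term left, there is nothing to balance, which is exactly the simplification of the optimization landscape the article describes.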

Enhancing Interpretability with an LRP Framework for PINNs

Beyond performance, a significant contribution of this work is a tailored framework for explaining PINN decisions. The researchers adapted Layer-wise Relevance Propagation (LRP), a popular explainable AI (XAI) technique, to work with the physics-informed architecture. This allows scientists to extract relevance attribution scores, showing which parts of the input space (e.g., specific spatial or temporal coordinates) the model deems most important for its predictions.
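The article does not spell out the adapted propagation rules, so the sketch below shows a generic LRP-ε backward pass over a small fully connected PINN body as a stand-in: relevance passes through activations unchanged and is redistributed at each linear layer in proportion to each input's contribution to the pre-activation. The resulting per-coordinate scores are the kind of attribution map the authors compare across models.

```python
import torch
from torch import nn

def lrp_epsilon(model: nn.Sequential, x: torch.Tensor, eps: float = 1e-6):
    # Generic LRP-epsilon sketch for a Sequential of Linear/Tanh layers;
    # the paper's PINN-specific adaptation is not reproduced here.
    acts = [x]
    for layer in model:
        acts.append(layer(acts[-1]))          # forward pass, cache layer inputs
    relevance = acts[-1]                      # seed relevance with the output
    for layer, a in zip(reversed(list(model)), acts[-2::-1]):
        if not isinstance(layer, nn.Linear):
            continue                          # activations pass relevance through
        a = a.detach().requires_grad_(True)
        z = layer(a)
        z = z + eps * z.sign()                # epsilon rule: stabilize small z
        s = (relevance / z).detach()
        (z * s).sum().backward()              # a.grad = W^T s
        relevance = a * a.grad                # redistribute to layer inputs
    return relevance                          # per-coordinate attribution scores
```

Applied to the coordinate inputs (e.g., x and t), high relevance concentrated near boundaries or sharp solution features is the physically consistent pattern described next.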

When applied, this explainability framework revealed a stark contrast between models. PINNs using DaFFs produced attribution maps that were physically consistent and aligned with domain knowledge, whereas attributions from vanilla PINNs and PINN-RFFs were scattered and less relevant to the underlying physics. This shows that DaFFs yield not just faster models but also more trustworthy and interpretable ones.

Quantifiable Gains in Accuracy and Speed

The empirical results underscore the method's superiority. In benchmark tests, PINN-DaFFs achieved errors orders of magnitude lower than both baseline vanilla PINNs and those using random Fourier features. Furthermore, convergence to a high-accuracy solution was significantly faster, showcasing the efficiency gains from the simplified optimization problem. These performance metrics confirm that embedding domain knowledge directly into the input features is a more effective inductive bias than enforcing it solely through the loss function.

Why This Matters for Scientific Machine Learning

  • Solves Key Training Pain Points: DaFFs eliminate the need for manual loss balancing and explicit boundary loss terms, which are major hurdles in deploying PINNs for complex problems.
  • Makes AI for Science More Interpretable: The integrated LRP framework provides a critical tool for debugging models and building trust, ensuring predictions are based on physically sound reasoning.
  • Improves Computational Efficiency: By simplifying the loss function and accelerating convergence, the method reduces the substantial computational cost typically associated with training PINNs.
  • Lays a Foundation for Robust Models: The combination of higher accuracy, faster training, and improved explainability paves the way for more reliable and adoptable physics-informed learning systems across engineering and scientific disciplines.
