
Scientific Machine Learning

Scientific machine learning (SciML) is a field of AI for Science that uses machine learning to enhance and accelerate computer simulation and modeling of physical phenomena. In particular, deep geometric mechanics integrates insights from analytical mechanics, differential geometry, and scientific computing into deep learning. This makes it possible to automatically model physical phenomena (such as wave propagation and crystal growth) whose detailed mechanisms and governing equations are not yet understood, enabling fast and accurate computer simulations and the discovery of physical laws from data.


Deep Geometric Mechanics

Poisson-Dirac Neural Networks for Modeling Coupled Dynamical Systems across Domains

Most deep learning-based physics models focus only on mechanical systems and treat each system as monolithic. These limitations reduce their applicability to electrical and hydraulic systems, and to coupled systems. To address them, we propose Poisson-Dirac Neural Networks (PoDiNNs), which are based on the Dirac structure from geometric mechanics that unifies the port-Hamiltonian and Poisson formulations. This framework enables a unified representation of dynamical systems across multiple physical domains, as well as of their interactions and the degeneracies arising from couplings.
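
As a rough illustration of this kind of structured modeling, the sketch below learns an energy function with a small network and parameterizes a skew-symmetric interconnection and a positive-semidefinite dissipation matrix, in the spirit of port-Hamiltonian systems. It is a minimal PyTorch sketch under these simplifying assumptions, not the Dirac-structure formulation of PoDiNNs, and all names are illustrative.

  # Minimal port-Hamiltonian-style sketch (illustrative, not the PoDiNNs API):
  # dx/dt = (J - R) dH/dx with J skew-symmetric and R positive semidefinite.
  import torch
  import torch.nn as nn

  class StructuredDynamics(nn.Module):
      def __init__(self, dim, hidden=64):
          super().__init__()
          # learned energy H: R^dim -> R
          self.energy = nn.Sequential(
              nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
          self.W = nn.Parameter(torch.randn(dim, dim) * 0.1)  # -> skew-symmetric J
          self.L = nn.Parameter(torch.randn(dim, dim) * 0.1)  # -> PSD dissipation R

      def forward(self, x):
          x = x.detach().requires_grad_(True)
          H = self.energy(x).sum()
          dH = torch.autograd.grad(H, x, create_graph=True)[0]  # dH/dx via autograd
          J = self.W - self.W.T    # skew-symmetric: lossless interconnection
          R = self.L @ self.L.T    # positive semidefinite: dissipation
          return dH @ (J - R).T    # dx/dt for each sample in the batch

  model = StructuredDynamics(dim=4)
  print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 4])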

  • Razmik Arman Khosrovian, Takaharu Yaguchi, Hiroaki Yoshimura, and Takashi Matsubara, "Poisson-Dirac Neural Networks for Modeling Coupled Dynamical Systems across Domains", The Thirteenth International Conference on Learning Representations (ICLR2025), Singapore, Apr. 2025. (accepted)
    OpenReview arXiv
  • Razmik Khosrovian, Takaharu Yaguchi, Hiroaki Yoshimura, and Takashi Matsubara, "Modeling Coupled Systems by Neural Networks with Poisson Structures and Ports", International Conference on Scientific Computing and Machine Learning 2025 (SCML2025), Kyoto, 7 Mar. 2025. (oral)

Neural Differential Equations for Finding and Preserving Invariant Quantities

When modeling physical phenomena from data with deep learning, many studies improve modeling performance by incorporating known physical laws, as in Hamiltonian neural networks. However, when the dynamical system to be learned is unknown, its conservation laws are also unknown, so it is often unclear which of these methods to use. We therefore proposed FINDE, a method that applies projection methods to discover various types of invariant quantities from data and to preserve them, enabling highly accurate time-series prediction.
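
As a rough illustration of the projection idea (a simplified sketch, not FINDE itself), the PyTorch snippet below takes an explicit Euler step of some dynamics and then projects the state back onto the level set of an invariant V by a few Newton-type corrections along the gradient of V. The harmonic-oscillator example and all names are illustrative assumptions.

  import torch

  def projected_step(f, V, x, dt, iters=3):
      x_new = x + dt * f(x)                        # explicit Euler predictor
      target = V(x).detach()                       # invariant value to preserve
      for _ in range(iters):                       # project onto {V(x) = target}
          x_new = x_new.detach().requires_grad_(True)
          r = V(x_new) - target
          g = torch.autograd.grad(r.sum(), x_new)[0]
          x_new = x_new - r / (g * g).sum(dim=-1, keepdim=True) * g
      return x_new.detach()

  # Harmonic oscillator: without projection, explicit Euler lets the energy drift.
  V = lambda x: 0.5 * (x ** 2).sum(dim=-1, keepdim=True)      # invariant (energy)
  f = lambda x: torch.stack([x[..., 1], -x[..., 0]], dim=-1)  # dynamics
  x = torch.tensor([[1.0, 0.0]])
  for _ in range(100):
      x = projected_step(f, V, x, dt=0.1)
  print(V(x))  # stays close to the initial value 0.5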

  • Takashi Matsubara and Takaharu Yaguchi, "FINDE: Neural Differential Equations for Finding and Preserving Invariant Quantities," The Eleventh International Conference on Learning Representations (ICLR2023), Kigali, May 2023.
    OpenReview arXiv Poster Code

Fast and Memory-Efficient Gradient Computation of Neural ODEs Using Symplectic Adjoint Method

Neural ODEs, which learn ordinary differential equations (ODEs) with neural networks, can model continuous-time dynamical systems and continuous probability density functions with high accuracy. However, because the same network must be applied repeatedly, training with backpropagation requires a large memory footprint. The adjoint method is memory-efficient, but it requires considerable computation to suppress numerical errors in the gradient. In this study, we combined a symplectic numerical integrator for the adjoint system with an appropriate checkpointing scheme, achieving both memory efficiency and fast, accurate gradient computation.
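
The sketch below illustrates only the checkpoint-and-recompute trade-off: the forward solve stores per-step states instead of the full computation graph, and the backward pass replays one step at a time to accumulate gradients. This is a simplified illustration under these assumptions; the actual method additionally integrates the adjoint system with a symplectic Runge-Kutta method to keep the gradients exact at low cost, and all names here are illustrative.

  import torch

  def rk4_step(f, x, dt):
      k1 = f(x); k2 = f(x + 0.5 * dt * k1)
      k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
      return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

  def forward_with_checkpoints(f, x0, dt, steps):
      xs = [x0.detach()]
      with torch.no_grad():                   # store states only, no graph
          x = x0
          for _ in range(steps):
              x = rk4_step(f, x, dt)
              xs.append(x.detach())
      return xs

  def backward_through_checkpoints(f, params, xs, dt, grad_out):
      lam = grad_out                          # adjoint variable dL/dx at final time
      grads = [torch.zeros_like(p) for p in params]
      for x_prev in reversed(xs[:-1]):        # replay one step, backprop through it
          x_prev = x_prev.detach().requires_grad_(True)
          x_next = rk4_step(f, x_prev, dt)
          vjp = torch.autograd.grad(x_next, [x_prev, *params],
                                    grad_outputs=lam, allow_unused=True)
          lam = vjp[0]
          grads = [g + (v if v is not None else 0) for g, v in zip(grads, vjp[1:])]
      return lam, grads                       # dL/dx0 and dL/dtheta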

  • Takashi Matsubara, Yuto Miyatake, and Takaharu Yaguchi, "The Symplectic Adjoint Method: Memory-Efficient Backpropagation of Neural-Network-Based Differential Equations," IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 8, pp. 10526-10538, Feb. 2023.
    IEEE
  • Takashi Matsubara, Yuto Miyatake, and Takaharu Yaguchi, "Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory," Advances in Neural Information Processing Systems (NeurIPS2021), Online, Dec. 2021.
    OpenReview arXiv Poster Movie Code

Deep Physical Simulation Satisfying Energy Conservation and Dissipation Laws Using Automatic Discrete Differentiation

The governing equations of physical phenomena are usually defined in continuous time, but computer simulations must run in discrete time; the resulting discretization errors break physical laws such as energy conservation and dissipation and make the outcomes unreliable. The discrete gradient method avoids this, but it requires deriving the discrete gradient by hand through equation transformations, which makes it difficult to apply to machine learning. This study proposed the automatic discrete differentiation algorithm, which enables discrete gradient methods to be applied to deep learning. As a result, the target dynamical system can be learned from data while the simulation strictly satisfies the energy conservation or dissipation law.
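
The key property can be checked numerically. In the NumPy illustration below (the quadratic energy is only an example; the paper's algorithm computes discrete gradients of neural networks automatically), a discrete gradient dH(x, y) satisfies the exact discrete chain rule H(y) - H(x) = dH(x, y) . (y - x), so a step (x1 - x0)/dt = S dH(x0, x1) with skew-symmetric S conserves H exactly in discrete time.

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((4, 4)); A = A + A.T   # symmetric matrix
  H = lambda x: 0.5 * x @ A @ x                  # quadratic energy (example only)
  dH = lambda x, y: 0.5 * A @ (x + y)            # its discrete gradient

  x, y = rng.standard_normal(4), rng.standard_normal(4)
  print(np.isclose(H(y) - H(x), dH(x, y) @ (y - x)))   # True: exact chain rule

  # Energy-preserving step: (x1 - x0)/dt = S dH(x0, x1), S skew-symmetric.
  S = np.array([[0., 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
  dt, x0 = 0.1, rng.standard_normal(4)
  x1 = np.linalg.solve(np.eye(4) - 0.5 * dt * S @ A,
                       (np.eye(4) + 0.5 * dt * S @ A) @ x0)
  print(np.isclose(H(x1), H(x0)))                # True: energy exactly conserved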

  • Takashi Matsubara, Takehiro Aoshima, Ai Ishikawa, and Takaharu Yaguchi, "Deep Energy-Based Discrete-Time Physical Model for Reproducing Energetic Behavior," IEEE Transactions on Neural Networks and Learning Systems, 28 Jan. 2025. (accepted)
    IEEE
  • Takashi Matsubara, Ai Ishikawa, and Takaharu Yaguchi, "Deep Energy-Based Modeling of Discrete-Time Physics," Advances in Neural Information Processing Systems (NeurIPS), Online, Dec. 2020. (oral)
    Proceeding arXiv Poster Movie Code

Operator Learning

Fast Learning of PINNs Using Number Theoretic Evaluation Points

Physics-informed neural networks (PINNs) use neural networks as basis functions to represent solutions of partial differential equations (PDEs), allowing for flexible solution representations and easy extension to inverse problems. Because the network is trained to satisfy the PDE at a finite number of evaluation points, the choice of these points directly affects learning efficiency and accuracy. This study applied the good lattice point method, a number-theoretic construction of evaluation points, to PINNs and proposed several techniques to satisfy its prerequisites. As a result, training was accelerated by a factor of 2 to 7 for low-dimensional PDEs.
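
As a minimal illustration of the point sets involved (a sketch only; the generating vector below is a textbook Fibonacci example, and the paper's techniques for choosing and adapting the points are not shown), good lattice points in the unit cube are obtained as frac(i * z / N) for i = 0, ..., N-1 and a generating vector z:

  import numpy as np

  def good_lattice_points(N, z):
      i = np.arange(N)[:, None]                    # 0, 1, ..., N-1
      return (i * np.asarray(z)[None, :] % N) / N  # points in [0, 1)^d

  pts = good_lattice_points(144, [1, 89])          # Fibonacci lattice: N=F_12, z=(1, F_11)
  f = lambda x: np.sin(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])
  print(pts.shape, f(pts).mean())                  # quasi-Monte Carlo mean, close to (2/pi)^2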

  • Takashi Matsubara and Takaharu Yaguchi, "Number Theoretic Accelerated Learning of Physics-Informed Neural Networks," The Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI2025), Philadelphia, 28 Feb. 2025. (oral)
    arXiv Slide Poster