National Tianyuan Mathematics Central Center High-Performance Computing Lecture Series | Dr. Marta D'Elia (Sandia National Labs.)
Published: 2021-10-18 16:57:19

Title: Data-Driven Learning of Nonlocal Models: Bridging Scales with Nonlocality

Time: 2021-10-28, 09:00–10:00

Speaker: Dr. Marta D'Elia (Sandia National Labs.)

Zoom ID: 890 9977 6164   Password: 1028

Abstract: Nonlocal models are characterized by integral operators that embed length scales in their definition. As such, they are preferable to classical partial differential equation models in situations where the dynamics of a system is affected by small-scale behavior, yet treating the small scales explicitly would be computationally prohibitive. In this sense, nonlocal models can be regarded as coarse-grained, homogenized models that, without resolving the small scales, are still able to accurately capture the system's global behavior. However, nonlocal models depend on "kernel functions" that are often hand-tuned. We propose to learn optimal kernel functions from high-fidelity data by combining machine learning algorithms, known physics, and nonlocal theory. This combination guarantees that the resulting model is mathematically well-posed and physically consistent. Furthermore, by learning the operator rather than a surrogate for the solution, these models generalize well to settings that are different from the ones used during training, hence enabling transfer learning. We apply this learning technique to find homogenized nonlocal models for molecular dynamics displacements. Here, the machine-learned nonlocal operator embeds material properties in the kernel function and allows for accurate predictions at much coarser scales than the molecular or micro scale. We also apply the same kernel-learning technique to design new stable and resolution-independent deep neural networks, referred to as Nonlocal Kernel Networks (NKNs). Stability of NKNs is obtained by imposing constraints derived from the nonlocal vector calculus, whereas deep training is performed by means of a shallow-to-deep initialization technique. We demonstrate the accuracy and stability of NKNs on PDE-learning and image-classification problems.
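To make the abstract's central objects concrete, the following is a minimal illustrative sketch, not the speaker's actual method: a 1D nonlocal operator of the common form L u(x) = ∫_{|y−x|<δ} γ(|y−x|) (u(y) − u(x)) dy, discretized on a uniform grid, with the kernel γ fitted to data by least squares. The exponential kernel family, the parameter values, and the fitting procedure are all assumptions made for illustration.

```python
import numpy as np

def nonlocal_op(u, x, gamma, delta):
    """Apply the discretized nonlocal operator
    L u(x_i) ≈ h * sum_{0 < |x_j - x_i| < delta} gamma(|x_j - x_i|) * (u_j - u_i)."""
    h = x[1] - x[0]
    Lu = np.zeros_like(u)
    for i in range(len(x)):
        d = np.abs(x - x[i])
        mask = (d < delta) & (d > 0)  # interaction neighborhood of radius delta
        Lu[i] = np.sum(gamma(d[mask]) * (u[mask] - u[i])) * h
    return Lu

# Hypothetical parametric kernel family: gamma(r) = c * exp(-r / ell)
def make_kernel(c, ell):
    return lambda r: c * np.exp(-r / ell)

# Synthetic "high-fidelity" data: pretend the true kernel has c = 2, ell = 0.1
x = np.linspace(0.0, 1.0, 101)
u = np.sin(2 * np.pi * x)
delta = 0.3
target = nonlocal_op(u, x, make_kernel(2.0, 0.1), delta)

# Learn the amplitude c by least squares against the data (ell held fixed);
# a real pipeline would also constrain the kernel so the model stays well-posed.
basis = nonlocal_op(u, x, make_kernel(1.0, 0.1), delta)
c_fit = np.dot(basis, target) / np.dot(basis, basis)
print(c_fit)  # recovers c = 2, since the operator is linear in c
```

The point of the sketch is the structure of the problem: the unknown is the kernel inside the integral operator, not a surrogate for the solution, which is why such models can transfer to new settings once the kernel is learned.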

Copyright © 2019 National Tianyuan Mathematics Central Center
