General

The MLPotential engine in the Amsterdam Modeling Suite can calculate the potential energy surface using several different types of machine learning (ML) potentials. To use the ML potentials, you must first install them separately.

The supported models can be found on the Models & Backends page.

Quickstart guide

To set up a simple MLPotential job using the graphical user interface, see the GUI tutorial.
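
Alternatively, an MLPotential job can be set up from a Python script with PLAMS. The following is a minimal sketch, assuming a working AMS installation and a separately installed ANI backend; the molecule and the model choice are only examples:

    from scm.plams import Atom, AMSJob, Molecule, Settings, init, finish

    init()  # start a PLAMS working directory

    # Water molecule (coordinates in angstrom)
    mol = Molecule()
    mol.add_atom(Atom(symbol='O', coords=(0.000, 0.000, 0.000)))
    mol.add_atom(Atom(symbol='H', coords=(0.757, 0.586, 0.000)))
    mol.add_atom(Atom(symbol='H', coords=(-0.757, 0.586, 0.000)))

    s = Settings()
    s.input.ams.Task = 'GeometryOptimization'
    s.input.MLPotential.Model = 'ANI-2x'  # any supported model name

    job = AMSJob(name='water_opt', molecule=mol, settings=s)
    result = job.run()
    print('Energy:', result.get_energy())  # total energy in atomic units

    finish()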

What’s new in AMS2024.1?

  • New models: AIMNet2-B973c and AIMNet2-wB97MD3 [5], suitable for molecular systems containing H, B, C, N, O, F, Si, P, S, Cl, As, Se, Br, and I. These are currently the only ML potential models that support charged systems (ions), predict atomic charges and dipole moments, and give IR intensities when calculating normal modes (see the sketch after this list).

  • Train custom M3GNet models with ParAMS and Simple Active Learning (and use them in the MLPotential engine).

  • Auto detection of GPU.

  • When using the ANI (or AIMNet2) models, the mlpotential.txt file is no longer produced; instead, the engine uncertainty (the standard deviation of the committee prediction) is written to the standard output and stored on the binary .rkf results files.
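
For example, a charged species is handled by setting the total charge of the system, and IR intensities follow from a normal modes calculation. A minimal PLAMS sketch, assuming the AIMNet2 backend is installed (setting mol.properties.charge is the standard PLAMS way to put a Charge key in the System block):

    from scm.plams import Atom, AMSJob, Molecule, Settings

    # Hydroxide anion: total charge -1
    mol = Molecule()
    mol.add_atom(Atom(symbol='O', coords=(0.0, 0.0, 0.0)))
    mol.add_atom(Atom(symbol='H', coords=(0.0, 0.0, 0.97)))
    mol.properties.charge = -1

    s = Settings()
    s.input.ams.Task = 'SinglePoint'
    s.input.ams.Properties.NormalModes = 'Yes'  # normal modes with IR intensities
    s.input.MLPotential.Model = 'AIMNet2-B973c'

    job = AMSJob(name='hydroxide', molecule=mol, settings=s)
    result = job.run()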

What’s new in AMS2023.1?

  • New model: M3GNet-UP-2022, based on M3GNet [4]. This is a universal potential (UP) that can be used for the entire periodic table of elements up to, but excluding, Curium (Cm, 96).

  • New backend: M3GNet

  • PiNN is no longer a backend in MLPotential, but you can use it through Engine ASE.

Theory of ML potentials

With machine learning potentials, it is possible to quickly evaluate the energies and forces in a system with close to first-principles accuracy. Machine learning potentials are fitted (trained, parameterized) to reproduce reference data, typically calculated using an ab initio or DFT method. Machine learning potentials are sometimes referred to as machine learning force fields, or as interatomic potentials based on machine learning.

Several types of machine learning potentials exist, for example neural-network-based methods and kernel-based methods.

Several types of neural network potentials exist. It is common for such potentials to calculate the total energy as a sum of atomic contributions. In a high-dimensional neural network potential (HDNNP), as proposed by Behler and Parrinello [1], each atomic contribution is calculated by a feed-forward neural network that takes a representation of the chemical environment around the atom as input. This representation, also called an atomic environment descriptor or fingerprint, consists of a vector of rotationally, translationally, and permutationally invariant functions known as atom-centered symmetry functions (ACSFs).
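
As a concrete illustration, the following numpy sketch evaluates a single radial ACSF, G = sum_j exp(-eta (r_ij - r_s)^2) f_c(r_ij), with the cosine cutoff function f_c of the Behler-Parrinello scheme [1]; the parameter values are arbitrary:

    import numpy as np

    def cutoff(r, r_c):
        """Cosine cutoff: decays smoothly to zero at the cutoff radius r_c."""
        return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

    def radial_acsf(r_ij, eta, r_s, r_c):
        """One radial symmetry function for a central atom.

        r_ij: distances from the central atom to its neighbors (1D array)
        eta, r_s: width and center of the Gaussian
        """
        return np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * cutoff(r_ij, r_c))

    # Distances (angstrom) from one atom to its three neighbors
    distances = np.array([0.96, 1.52, 3.10])
    print(radial_acsf(distances, eta=4.0, r_s=1.0, r_c=6.0))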

Graph convolutional neural network potentials (GCNNPs), also called message-passing neural network potentials, similarly construct the total energy by summing up atomic contributions, but the representations of the local atomic chemical environments are learned from the reference data rather than fixed in advance.
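
The following toy numpy sketch of a single message-passing step (not any particular published architecture) shows the idea: each atom's feature vector is updated with aggregated messages from its neighbors within a cutoff, and per-atom energies are summed into the total energy:

    import numpy as np

    rng = np.random.default_rng(0)
    n_atoms, n_feat = 4, 8
    positions = rng.uniform(0.0, 3.0, size=(n_atoms, 3))
    features = rng.normal(size=(n_atoms, n_feat))             # atom embeddings
    W = rng.normal(size=(n_feat, n_feat)) / np.sqrt(n_feat)   # "learned" weights

    # Neighbor list from a distance cutoff
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    adjacency = ((dist < 2.5) & (dist > 0.0)).astype(float)

    # One message-passing step: aggregate neighbor messages, then update
    messages = adjacency @ (features @ W)   # sum of transformed neighbor features
    features = np.tanh(features + messages)

    # Linear readout of per-atom energies; the total energy is their sum
    w_out = rng.normal(size=n_feat)
    total_energy = (features @ w_out).sum()
    print(total_energy)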

Kernel-based methods instead make predictions based on how similar a system is to the systems in the training set.
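
A minimal kernel ridge regression sketch in numpy makes this concrete: energies of new systems are predicted as a weighted sum of Gaussian-kernel similarities to the training descriptors (the descriptors and data here are toy stand-ins):

    import numpy as np

    def gaussian_kernel(X1, X2, sigma=1.0):
        """Pairwise Gaussian similarities between the rows of X1 and X2."""
        d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(20, 3))      # toy descriptors
    y_train = np.sin(X_train).sum(axis=1)   # stand-in reference energies

    # Fit: alpha = (K + lambda I)^-1 y, with a small ridge term lambda
    K = gaussian_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

    # Predict: weighted sum of similarities to the training systems
    X_new = rng.normal(size=(5, 3))
    print(gaussian_kernel(X_new, X_train) @ alpha)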

There are also other types of machine learning potentials. For more detailed information, see for example references [2] and [3].

Support

SCM provides technical (non-scientific) support for installing the backends and for running simulations via the AMS driver.

See also: Frequently Asked Questions

Technical information

Each of the supported backends can be used as an ASE (Atomic Simulation Environment) calculator. The MLPotential engine is an interface to those ASE calculators: the communication between the AMS driver and the backends is implemented with a named-pipe interface. The MLPotential engine launches a Python script, ase_calculators.py, which initializes the ASE calculator. The exact command that is executed is written as WorkerCommand in the output.
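
To illustrate what the engine wraps, a backend's ASE calculator can also be used directly from Python. A minimal sketch, assuming the TorchANI backend is installed (torchani.models.ANI2x().ase() constructs TorchANI's ASE calculator):

    from ase import Atoms
    import torchani

    # Water molecule (coordinates in angstrom)
    atoms = Atoms('H2O', positions=[(0.757, 0.586, 0.0),
                                    (-0.757, 0.586, 0.0),
                                    (0.0, 0.0, 0.0)])

    # The same kind of ASE calculator that the MLPotential engine drives
    # through its named-pipe interface
    atoms.calc = torchani.models.ANI2x().ase()

    print('Energy (eV):', atoms.get_potential_energy())
    print('Forces (eV/angstrom):', atoms.get_forces())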

References

[1] J. Behler, M. Parrinello, Phys. Rev. Lett. 98 (2007) 146401. https://doi.org/10.1103/PhysRevLett.98.146401
[2] J. Behler, J. Chem. Phys. 145 (2016) 170901. https://doi.org/10.1063/1.4966192
[3] T. Mueller, A. Hernandez, C. Wang, J. Chem. Phys. 152 (2020) 050902. https://doi.org/10.1063/1.5126336
[4] C. Chen, S. P. Ong, Nat. Comput. Sci. 2 (2022) 718-728. arXiv:2202.02450
[5] D. M. Anstine, R. Zubatyuk, O. Isayev, ChemRxiv (2023). https://doi.org/10.26434/chemrxiv-2023-296ch
[6] C. Devereux et al., J. Chem. Theory Comput. 16 (2020) 4192-4202. https://doi.org/10.1021/acs.jctc.0c00121
[7] J. S. Smith et al., Nat. Commun. 10 (2019) 2903. https://doi.org/10.1038/s41467-019-10827-4
[8] J. S. Smith et al., J. Chem. Phys. 148 (2018) 241733. https://doi.org/10.1063/1.5023802
[9] The Atomic Simulation Environment (ASE). https://wiki.fysik.dtu.dk/ase/index.html
[10] Y. Shao et al., J. Chem. Inf. Model. 60 (2020) 1184-1193. https://doi.org/10.1021/acs.jcim.9b00994
[11] K. T. Schütt et al., J. Chem. Theory Comput. 15 (2019) 448-455. https://doi.org/10.1021/acs.jctc.8b00908
[12] S. Chmiela et al., Comput. Phys. Commun. 240 (2019) 38-45. https://doi.org/10.1016/j.cpc.2019.02.007
[13] X. Gao et al., J. Chem. Inf. Model. (2020). https://doi.org/10.1021/acs.jcim.0c00451