Installation & Uninstallation

Tip

If AMS does not support your preferred ML potential, you may be able to install it into the AMS Python environment yourself and use it through engine ASE.

The Amsterdam Modeling Suite requires the installation of additional Python packages to run the machine learning potential backends.

If you set up an MLPotential job via the graphical user interface, you will be asked to install any missing packages when you save your input. Alternatively, you can use the package manager, or the command-line installation tool; for instance, to install the torchani backend:

"$AMSBIN"/amspackages install torchani

You can also use the command-line installer to install these packages on a remote system, so that you can seamlessly run MLPotential jobs on remote machines as well.
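As an illustration, a small helper could assemble the SSH command line for the remote case. This is a hypothetical sketch, not part of AMS: the host name is made up, and it assumes AMSBIN is defined by the remote login shell.

```python
import shlex

def remote_install_cmd(host, package):
    """argv that installs an AMS package on a remote machine over SSH.

    Hypothetical helper: assumes AMSBIN is set by the remote login shell.
    """
    # The remote shell expands "$AMSBIN"; only the package name is quoted here.
    return ["ssh", host, '"$AMSBIN"/amspackages install ' + shlex.quote(package)]

print(remote_install_cmd("cluster.example.com", "torchani"))
```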

The packages are installed into the AMS Python environment and do not affect any other Python installation on the system. An internet connection is required for the installation, unless you have configured the AMS package manager for offline use.
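Because the packages live only in the AMS Python environment, a quick way to confirm that a backend is visible there is to check for it with amspython. A minimal stdlib-only sketch (torchani is just an example name):

```python
# Run with: "$AMSBIN"/amspython check_backend.py
import importlib.util

def backend_available(module_name):
    """True if the module can be found by the interpreter running this script."""
    return importlib.util.find_spec(module_name) is not None

print(backend_available("torchani"))
```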

To uninstall a package, e.g. torchani, run:

"$AMSBIN"/amspackages remove torchani

Installing GPU-enabled backends using AMSpackages

New in version AMS2023.101.

Various versions of the ML potential packages are available through AMSpackages, with different system dependencies such as GPU drivers. The desired variant can be selected under the “ML options” menu in the graphical package manager (SCM -> Packages). You can choose from the following options:

  • CPU: installs CPU-only backends, including PyTorch and TensorFlow-CPU.

  • GPU (CUDA 11.6): installs GPU-enabled backends, including TensorFlow and a CUDA 11.6 build of PyTorch.

  • GPU (CUDA 11.7): installs GPU-enabled backends, including TensorFlow, but with a CUDA 11.7 build of PyTorch instead.

The default is CPU. Note that this is the only option available on macOS.

When using the package manager on the command line or in shell scripts, you can pass the --alt flag together with one of these options, denoted mlcpu, mlcu116, and mlcu117, respectively. For instance, to install the GPU-enabled versions of the ML potential backends with the CUDA 11.7 build of PyTorch:

$ "$AMSBIN"/amspackages --alt mlcu117 install mlpotentials
Going to install packages:
nvidia-cuda-runtime-cu11 v[11.7.99] - build:0
tensorflow v[2.9.1] - build:0
All ML Potential backends v[2.0.0] - build:0
torch v[1.13.1+cu117] - build:0
nvidia-cudnn-cu11 v[8.5.0.96] - build:0
M3GNet ML Backend v[0.2.4] - build:0
sGDML Calculator patch v[0.4.4] - build:0
TorchANI Calculator patch v[2.2] - build:0
SchNetPack ML Backend v[1.0.0] - build:0
nvidia-cuda-nvrtc-cu11 v[11.7.99] - build:0
nvidia-cublas-cu11 v[11.10.3.66] - build:0
ANI Models for TorchANI backend v[2.2] - build:0
TorchANI NN module patch v[2.2] - build:0
TorchANI ML backend v[2.2] - build:0
sGDML ML backend v[0.4.4] - build:0

Alternatively, to install a single backend, for instance torchani:

"$AMSBIN"/amspackages --alt mlcu117 install torchani

To change the default, set the environment variable SCM_AMSPKGS_ALTERNATIVES to one of these values. For advanced configuration options of the package installation, see also the package manager instructions.
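For example, in a POSIX shell, export SCM_AMSPKGS_ALTERNATIVES=mlcu117 makes the CUDA 11.7 variants the default for that session. Driving the package manager from a Python script can be sketched as follows; the actual subprocess call is commented out because it requires an AMS installation:

```python
import os

# Sketch: run amspackages with the CUDA 11.7 alternatives as the default.
# Assumes AMSBIN points at the AMS binaries directory.
env = dict(os.environ, SCM_AMSPKGS_ALTERNATIVES="mlcu117")
amspackages = os.path.join(os.environ.get("AMSBIN", ""), "amspackages")
# import subprocess
# subprocess.run([amspackages, "install", "mlpotentials"], env=env, check=True)
print(env["SCM_AMSPKGS_ALTERNATIVES"])
```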

Installing packages using pip

The package manager installs trusted and tested versions of packages from our website, but if you require a different version you can use pip to install packages from https://pypi.org:

"$AMSBIN"/amspython -m pip install -U torch

Note

Packages installed by the user through pip will not show up as installed in the package manager, but they will be detected and used if possible.

If you install a package into the AMS Python environment using amspython -m pip install, the package manager will not display it in its overview, but it will still be used for calculations with the MLPotential engine where possible. To verify that the version you installed will be detected, you can run:

$ "$AMSBIN"/amspackages check --pip torch
05-11 10:47:57 torch is not installed!
05-11 10:47:57 User installed version located through pip: torch==1.8.1

Not all versions of the packages on PyPI work with our ML potential backends.
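To check programmatically which version of a pip-installed package the AMS Python environment would pick up, the standard library suffices; run a sketch like the following with amspython (torch is just an example distribution name):

```python
from importlib import metadata

def pip_version(package):
    """Installed version of a distribution, or None if it is not installed."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(pip_version("torch"))
```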

Installing NequIP using pip

NequIP is a popular equivariant machine learning potential, and is technically supported by the MLPotential engine. However, it cannot be installed through AMSpackages.

To install NequIP into the AMS Python environment, you may take the instructions below as a starting point. However, there is no guarantee that they will work on your system. SCM does not provide support for installing NequIP.

Tested with: AMS2024.101, Ubuntu Linux 22.04, February 13 2024

  • To install NequIP, first install TorchANI through the package manager:

amspackages install torchani
  • Next, install the NequIP package and related packages. Note that these versions are only a recommendation and might not work on every system.

amspython -m pip install nequip==0.5.5 --no-deps
amspython -m pip install e3nn==0.5.1 --no-deps
amspython -m pip install opt-einsum==3.3.0 --no-deps
amspython -m pip install opt-einsum-fx==0.1.4 --no-deps
amspython -m pip install sympy==1.11.1 --no-deps
amspython -m pip install mpmath==1.2.1 --no-deps
amspython -m pip install torch-runstats==0.2.0 --no-deps
amspython -m pip install scikit-learn==1.2.0 --no-deps
amspython -m pip install joblib==1.3.2 --no-deps
amspython -m pip install threadpoolctl==3.2.0 --no-deps
amspython -m pip install torch-ema==0.3 --no-deps
  • To use the Allegro plugin for NequIP (see $AMSHOME/scripting/scm/params/examples/Allegro for an example), install the Allegro package from source (Python files only):

cd <some place where it is convenient to install programs>
git clone --depth 1 https://github.com/mir-group/allegro.git
cd allegro
amspython -m pip install . --no-deps

Debugging installation and available resources

A tool is provided to inspect the current installation of ML backends and frameworks. It also reports the resources that AMS would find if a calculation were performed with default settings.

The tool is used as follows:

$AMSBIN/amspython $AMSHOME/Utils/check_ml_backends.py

The output contains the following sections:
  • Installed machine learning frameworks: which frameworks are installed

  • Installed machine learning backends: which backends are installed

  • Machine learning framework details: what each framework reports about CPU and GPU usage

Example output:

Installed machine learning frameworks:
PyTorch    : installed!
TensorFlow : installed!

Installed machine learning backends:
AIMNet2    : installed!
ANI2       : installed!
NequIP     : installed!
M3GNet     : installed!

Machine learning framework details (simulating an AMS calculation):

#####################PyTorch setup###################
PyTorch 1.13.1+cpu found the following devices:
Number of threads were not limited, using all available CPU cores.
Using CPU only.
#####################################################

###################TensorFlow setup##################
TensorFlow 2.9.1-cpu found the following devices:
PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
NumThreads was not specified in the MLPotential engine block, so TensorFlow will use all available cores.
#####################################################

If there are any issues, before contacting support, please run the following command:

$AMSBIN/amspython $AMSHOME/Utils/check_ml_backends.py --debug

and then report the output with your question.

This gives us additional details on why a framework or backend was not considered installed, and reports potential issues in the environment.