2.2. Getting Started: Lennard-Jones

This example illustrates how to fit a Lennard-Jones potential. The systems are snapshots from a liquid Ar MD simulation. The forces and energies (the reference data) were calculated with dispersion-corrected DFTB.

Important: This tutorial requires AMS2023 or later. Using AMS2022? See the AMS2022 ParAMS tutorials.

Note

In this tutorial the training data has already been prepared. See how it was generated further down.

../../_images/LJ_Ar_snapshot_and_correlation_plot.png

Fig. 2.1 Left: One of the systems in the job collection. Right: predicted (with parametrized Lennard-Jones) forces compared to reference (dispersion-corrected DFTB) forces.

Tip

Each step of the tutorial covers

  • How to use the ParAMS graphical user interface

  • How to run or view results from the command-line

2.2.1. Lennard-Jones Parameters, Engine, and Interface

The Lennard-Jones potential has the form

\[V(r) = 4\epsilon \left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^6 \right] = \epsilon \left[\left(\frac{r_{\text{min}}}{r}\right)^{12} - 2\left(\frac{r_{\text{min}}}{r}\right)^6 \right]\]

where \(\epsilon\) and \(\sigma\) are parameters. The Lennard-Jones engine in AMS has the two parameters Eps (\(\epsilon\)) and RMin (the distance at which the potential reaches its minimum), where \(\text{RMin} = 2^{1/6}\sigma\).
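The equivalence of the two forms, and the conversion \(\text{RMin} = 2^{1/6}\sigma\), can be checked numerically. A short sketch with hypothetical eps and sigma values (not the fitted parameters from this tutorial):

```python
import math

def lj_eps_sigma(r, eps, sigma):
    """Lennard-Jones potential in the (epsilon, sigma) form."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def lj_eps_rmin(r, eps, rmin):
    """Equivalent form in terms of the minimum position rmin = 2**(1/6) * sigma."""
    return eps * ((rmin / r) ** 12 - 2 * (rmin / r) ** 6)

eps, sigma = 0.0003, 3.4        # hypothetical values
rmin = 2 ** (1 / 6) * sigma     # the conversion used by the AMS engine

# Both forms give the same potential at any distance:
for r in (3.0, 4.0, 5.0):
    assert math.isclose(lj_eps_sigma(r, eps, sigma), lj_eps_rmin(r, eps, rmin))

# The potential reaches its minimum value -eps exactly at r = rmin:
assert math.isclose(lj_eps_rmin(rmin, eps, rmin), -eps)
```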

In ParAMS, those two parameters can be optimized with the Lennard-Jones parameter interface, which is used in this example. The parameters then have the names eps and rmin (lowercase).

2.2.2. Files

Download LJ_Ar_example.zip and unzip the file, or make a copy of the directory $AMSHOME/scripting/scm/params/examples/LJ_Ar.

The directory contains six files (see the listing below). Four of them (parameter_interface.yaml, job_collection.yaml, training_set.yaml, and params.in) can be viewed and changed in the ParAMS graphical user interface (GUI). If you prefer, you can also open them in a text editor.

Start the ParAMS GUI: First open AMSjobs or AMSinput, and then select SCM → ParAMS
Select File → Open and browse to any of the example files. Select one; this will also import the other files in the same directory.
../../_images/LJ_Ar_GUI_initial_screen.png
LJ_Ar
├── job_collection.yaml
├── parameter_interface.yaml
├── params.in
├── lennardjones.py
├── README.txt
└── training_set.yaml

2.2.3. Workflows

ParAMS 2023 provides three workflows:

  • GUI: The recommended main interface, allowing you to easily set up tasks, visualize results, and submit local or remote jobs.

  • Command-line: Console interface for systems without GUI support (e.g. submitting jobs on a cluster).

  • Scripting: Python/PLAMS interface to ParAMS, allowing you to integrate it into data workflows and easily set up multiple configurations. Use the $AMSBIN/amspython program to execute Python scripts.

All of them are based on the new params.in input file which follows the AMS-style input language (see Input file: params.in for more details). In this tutorial we will demonstrate how to complete the optimization in all three ways.

The Python script (lennardjones.py) has already been prepared for you, and we will be referring to it in the Scripting tabs throughout the tutorial.

2.2.4. ParAMS input

2.2.4.1. Parameter interface (parameter_interface.yaml)

The parameters are shown on the Parameters tab.

../../_images/LJ_Ar_GUI_parameters.png

All the parameters in a parameter interface have names. For Lennard-Jones, there are only two parameters: eps and rmin.

Every parameter has a value and an allowed range of values. Above, the value for eps is shown to be 0.0003, and the allowed range to be between 1e-5 and 0.01. This means that the eps parameter will only be varied between \(10^{-5}\) and \(10^{-2}\) during the parametrization.

Similarly, the initial value for rmin is set to 4.0, and the allowed range is between 1.0 and 8.0.

Note

You can edit the parameter values and the allowed ranges directly in the table.

The Active checkbox of the parameters is ticked, meaning that they will be optimized. To exclude a parameter from the optimization, untick its Active checkbox.

The Val % column indicates how close the current value is to the Min or Max. It is calculated as 100*(value-min)/(max-min); for example, for rmin it is 100*(4.0-1.0)/(8.0-1.0) = 42.9. It has no effect on the parametrization; it only lets you quickly see whether the value is close to the Min or Max.
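The Val % calculation can be sketched in a couple of lines:

```python
def val_percent(value, vmin, vmax):
    """How far the current value sits between Min and Max, in percent."""
    return 100 * (value - vmin) / (vmax - vmin)

print(round(val_percent(4.0, 1.0, 8.0), 1))  # rmin example from the table -> 42.9
```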

2.2.4.2. Job Collection (job_collection.yaml)

The Jobs panel contains three entries. They have the JobIDs Ar32_frame001, Ar32_frame002, and Ar32_frame003.

../../_images/LJ_Ar_GUI_jobs.png
Select one of the jobs in the table.
This updates the molecule area to the left to show the structure.
You can customize how the molecule is shown just like in AMSinput, AMSmovie, etc. For example, select View → Periodic → Periodic View Type → Repeat Unit Cells to see some periodic images.
Select View → Periodic → Periodic View Type → Repeat Unit Cells again to view only one periodic image.

The Jobs panel has four columns:

  • JobID: The name of the job. You can rename a job directly in the table by first selecting the row, and then clicking on the job id.

  • Detail: Some information about the job. SinglePoint + gradients means that a single point calculation on the structure is performed, also calculating the gradients (forces). Double-click in the Detail column for a job to see and edit the details. You can also toggle the Info panel in the bottom half.

  • Reference Engine: This column can contain details about the reference calculation, and is described more in the tutorials Import training data (GUI) and Generate reference values.

  • ParAMS Engine: This column can contain details about job-specific engine settings. It is described in the tutorial GFN1-xTB: Lithium fluoride

2.2.4.3. Training Set (training_set.yaml)

The Training Set panel contains five entries: two of type Energy and three of type Forces.

../../_images/LJ_Ar_GUI_training_set.png

The first Energy entry has the Detail Ar32_frame001-Ar32_frame002. This means that for every iteration during the parametrization, the current Lennard-Jones parameters will be used to calculate the energy of the job Ar32_frame001 minus the energy of the job Ar32_frame002. The number should ideally be as close as possible to the reference value, which is given as 0.204 eV in the Value column. The greater the deviation from the reference value, the more this entry will contribute to the loss function.

Double-click in the Detail column for the first entry to see some more details.

../../_images/LJ_Ar_GUI_training_set_details.png

This brings up a dialog where you can change

  • What is calculated (the Energy text box). For example, energy("Ar32_frame001") extracts the energy of the Ar32_frame001 job. You can combine an arbitrary number of such energies with normal arithmetic operations (+, -, /, *). For details, see the Import training data (GUI) tutorial.

  • Sigma: A number signifying an “acceptable prediction error”. Here, it is given as 0.0544 eV, which is the default value for energies. A smaller sigma will make the training set entry more important (contribute more to the loss function). For beginning ParAMS users, we recommend modifying the Weight rather than Sigma.

  • Weight: A number signifying how important the training set entry is. A larger weight will make the training set entry more important (contribute more to the loss function). The default weight is 1.0.

Note

The Sigma for a training set entry is not the σ that appears in the Lennard-Jones equation. For more information about Sigma and Weight, see Sigma vs. weight: What is the difference?.

  • Unit: The unit that Sigma and the reference value are expressed in. Here, it is set to eV. See how to set preferred units.

  • Value: The reference value (expressed in the unit above).
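Sigma and Weight enter the loss function together. A minimal sketch of a weighted sum-of-squares loss in which each entry contributes weight * ((prediction - reference) / sigma)**2 — the exact normalization ParAMS uses internally may differ, and the numbers below are hypothetical:

```python
import math

def sse_loss(entries):
    """Weighted sum of squared, sigma-scaled residuals."""
    return sum(weight * ((pred - ref) / sigma) ** 2
               for pred, ref, sigma, weight in entries)

# (prediction, reference, sigma, weight) -- hypothetical energy entries in eV
entries = [(0.210, 0.204, 0.0544, 1.0),
           (0.050, 0.048, 0.0544, 1.0)]

loss = sse_loss(entries)

# Halving sigma (or quadrupling the weight) makes an entry 4x more important:
assert math.isclose(sse_loss([(0.210, 0.204, 0.0272, 1.0)]),
                    4 * sse_loss([(0.210, 0.204, 0.0544, 1.0)]))
```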

Click OK. This brings you back to the Training Set panel.

The W column contains the Weight of the training set entry. You can edit it directly in the table.

The Value column contains the reference value.

The Prediction column contains the predicted value (for the “best” parameter set) for running or finished parametrizations. It is now empty.

The Loss % column shows how much an entry contributes to the loss function (in percent) for running or finished parametrizations. It is now empty.

Many different quantities can be extracted from a job. The third entry is of type Forces for the job Ar32_frame001. The reference data are the atomic forces (32 × 3 force components) from the job Ar32_frame001. The Value column gives a summary: [-0.3844, 0.3482] (32×3), meaning that the most negative force component is -0.3844 eV/Å, and the most positive force component is +0.3482 eV/Å.

To see all force components, either double-click in the Details column or switch to the Info panel at the bottom.
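The [min, max] (N×3) summary shown in the Value column can be reproduced from the raw force components. A sketch with a hypothetical 2-atom system (the jobs in this tutorial have 32 atoms):

```python
# Hypothetical force components in eV/angstrom; each row is one atom (Fx, Fy, Fz)
forces = [[-0.3844, 0.1200, 0.0100],
          [ 0.3482, -0.0500, 0.0020]]

components = [f for atom in forces for f in atom]
print(f"[{min(components)}, {max(components)}] ({len(forces)}x3)")
# -> [-0.3844, 0.3482] (2x3)
```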

2.2.4.4. ParAMS settings (params.in)

Important: This description applies to AMS2023. Using AMS2022? See the AMS2022 ParAMS tutorials.

Click the Optimization panel to open the input panel.

../../_images/LJ_Ar_GUI_settings.png

The Main panel on the flyout contains all of the most important options. You can see that the two options in the params.in input file appear here:

  • Optimizer: Scipy

  • Time limit: 120 s

Optimizer: This option selects the optimizer which should be used on the problem. For simple optimization problems like Lennard-Jones, the Nelder-Mead method from SciPy can be used. For more complicated problems, like ReaxFF optimization, a more advanced optimizer like the CMA-ES is recommended (see for example the ReaxFF (basic): H₂O bond scan tutorial).

Time limit: This option specifies the time limit of the optimization. If the parametrization takes longer than two minutes (120 seconds), it will stop. If you remove the option (by clearing the contents of the field) then there is no time limit.

Max Optimizers Converged: This is an alternative termination option. It specifies that the optimization should stop as soon as the optimizer has converged. Since we have two Exit Conditions, the optimization will stop when either condition is met.

Note

The Max Optimizers Converged condition is required in AMS2023. In AMS2022 you could only run a single optimizer at a time. Thus, the optimization would always end when the optimizer ended. In AMS2023 you can now run multiple optimizers in parallel or sequentially. Therefore, you need to specify exactly when you would like to exit. Without this condition, if the optimizer stopped before the two minute time limit, ParAMS would start a new optimizer.

There are also many other options. For this tutorial, we will stick to the basics.

2.2.5. Run the example

Save the project with a new name. If the name does not end with .params, then .params will be added to the name.

File → Save As with the name lennardjones.params
File → Run
This brings up an AMSjobs window.
../../_images/LJ_Ar_GUI_running_amsjobs.png
Go back to the ParAMS window.
While the parametrization is running, go to the Graphs panel at the bottom.
There, you can see some graphs automatically updating. These show you the current “best” results in a variety of different ways. They will be explained in more detail later.
../../_images/LJ_Ar_GUI_graphs.png

Tip

Using AMSjobs you can also submit ParAMS jobs to remote queues (compute clusters).

2.2.6. Parametrization results

2.2.6.1. The best parameter values

The best (optimized) parameters are shown on the Parameters tab in the bottom half of the window. They get automatically updated as the optimization progresses.

../../_images/LJ_Ar_GUI_optimized_parameters.png

You can also find them in the file lennardjones.results/optimization/training_set_results/best/lj_parameters.txt (or lennardjones.results/optimization/training_set_results/best/parameter_interface.yaml):

Engine LennardJones
    Eps 0.00019604583935927278
    RMin 3.653807860077536
EndEngine

Tip

To use the optimized parameters in a new simulation, open AMSinput and switch to the Lennard-Jones panel. Enter the values of the parameters and any other simulation details.

2.2.6.2. Correlation plots

Go to the Graphs panel. There are three curves shown:

../../_images/LJ_Ar_GUI_graphs.png
  • The loss function vs. evaluation number

  • The RMSE (root mean squared error) of energy predictions vs. evaluation number

  • A scatter plot of predicted vs. reference energies

There are many different types of graphs that you can plot. To plot a correlation plot between the predicted and reference forces:

In the drop-down that currently says Energy, choose Forces
Click on one of the plotted points
This selects the corresponding Job in the table above
In the 3D area the atom for the force component you clicked is selected
../../_images/LJ_Ar_GUI_forces.png

The black diagonal line is the line y = x. In this case, the predicted forces are very close to the reference forces! Let’s compare to the initially predicted forces, i.e. let’s compare the optimized parameters eps = 0.000196 and rmin = 3.6538 (from evaluation 144) to the initial parameters eps = 0.0003 and rmin = 4.0 (from evaluation 0):

In the Data From dropdown above the correlation plot, tick Training(initial): forces.
This adds a set of green datapoints with the initial predictions.
../../_images/LJ_Ar_GUI_forces_initial.png

There is a third drop-down where you can choose whether to plot the Best or Latest training data. In this example, both correspond to evaluation 144. In general, the latest evaluation doesn’t have to be the best. This is especially true for other optimizers like the CMA optimizer (recommended for ReaxFF).

2.2.6.3. Error plots

Go to the Graphs panel.

../../_images/LJ_Ar_GUI_running_loss.png

The left-most plot shows the evolution of the loss function with evaluation number. The goal of the parametrization is to minimize the loss function.

By default, the loss function value is saved every 10 evaluations or whenever it decreases.

You can also choose between the RMSE and MAE (mean absolute error) for energies and forces (and other predicted properties):

Go to the graph where the drop-down says Stats → Forces
This plots the MAE of the forces vs. evaluation number
In the drop-down that says MAE, choose RMSE
This plots the RMSE of the forces vs. evaluation number
../../_images/LJ_Ar_GUI_rmse_forces.png

By default, the RMSE and MAE are saved every 10 evaluations or whenever the loss function decreases.
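The two error measures can be sketched as follows; RMSE penalizes large outliers more strongly than MAE (the error values below are hypothetical):

```python
import math

def mae(errors):
    """Mean absolute error."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared error."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

errors = [0.01, -0.02, 0.005, -0.001]  # hypothetical (prediction - reference) values
assert rmse(errors) >= mae(errors)     # RMSE always weights outliers at least as heavily
print(round(mae(errors), 4), round(rmse(errors), 4))
```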

2.2.6.4. Parameter plots

In the bottom table, switch to the Graphs panel.
In one of the drop-downs, choose Active Parameters: eps.
This plots the value of the eps parameter vs. evaluation number
../../_images/LJ_Ar_GUI_eps.png

The parameters are by default saved every 500 evaluations, or whenever the loss function decreases.

You can similarly plot the rmin parameter.

Mouse-over the graph to see the value of the parameter at that iteration.

The best parameter values are shown on the Parameters tab.

2.2.6.5. Editing and Saving Plots

All of the above GUI plots can be customized to your liking by double clicking anywhere on one of the plot axes. This brings up a window allowing you to configure various graph options like scales, labels, limits, titles, etc.

../../_images/edit_plots.png

To save the plot:

File → Save Graph As Image
Select one of the three plots to save
Choose file name and Save

If you would like to extract the raw data from the plot in .xy format:

File → Export Graph As XY
Select one of the three plots to save
Choose file name and Save

2.2.6.6. Predicted values

Switch to the Training Set panel in the upper table.

../../_images/LJ_Ar_GUI_training_set_results.png

The Prediction column contains the predicted values (for the best evaluation, i.e., the one with the lowest loss function value). For the forces, only a summary of the minimum and maximum value is given. To see all predicted values, select one of the Forces entries, go to the Info panel at the bottom, and scroll down.

You can also view the predicted values in a different way:

Switch to the Results panel in the bottom table
In the drop-down, select Training best: energy
This shows the file lennardjones.results/optimization/training_set_results/best/scatter_plots/energy.txt
../../_images/LJ_Ar_GUI_table_energy.png

This shows a good agreement between the predicted and reference values for the relative energies in the training set.

In the drop-down, select Training best: forces
This shows the file lennardjones.results/optimization/training_set_results/best/scatter_plots/forces.txt
../../_images/LJ_Ar_GUI_table_forces.png

energy.txt and forces.txt are the files that are plotted when making Correlation plots.

There is one file for every extractor in the training set.

2.2.6.7. Loss contributions

In the top table, switch to the Training Set panel.

../../_images/LJ_Ar_GUI_training_set_results.png

The last column, Loss %, contains the loss contribution: for each training set entry, it gives the fraction that the entry contributes to the loss function value.

Here, for example, the two Energy entries only contribute 0.96% to the loss function, and the three Forces entries contribute 99.04%. If you notice that some entries have a large contribution to the loss function, and that this prevents the optimization from progressing, you may consider decreasing the weight of those entries.
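The Loss % column is simply each entry's loss divided by the total. A sketch with hypothetical per-entry loss values (chosen so that the two Energy entries together contribute 0.96 %, as in this tutorial):

```python
def loss_percent(losses):
    """Percentage contribution of each entry to the total loss."""
    total = sum(losses)
    return [100 * l / total for l in losses]

# hypothetical per-entry losses: two Energy entries followed by three Forces entries
losses = [0.004, 0.006, 0.30, 0.35, 0.38]
percents = loss_percent(losses)
print([round(p, 2) for p in percents])  # -> [0.38, 0.58, 28.85, 33.65, 36.54]
```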

2.2.6.8. Summary statistics

In the bottom table, switch to the Results panel.
In the drop-down, select Training best: stats
This shows the file lennardjones.results/optimization/training_set_results/best/stats.txt
../../_images/LJ_Ar_GUI_table_stats.png

This file gives the mean absolute error (MAE) and root-mean-squared error (RMSE) per entry in the training set. The column N gives how many numbers were averaged to calculate the MAE or RMSE.

For example, in the row forces the N is 288 (the total number of force components in the training set), and the MAE taken over all force components is 0.00320 eV/Å. In the row Ar32_frame003, the N is 96 (the number of force components for job Ar32_frame003), and the MAE is 0.00367 eV/Å.

Further down in the file are the energies. In the row energy the N is 2 (the total number of energy entries in the training set). The entry Ar32_frame003-Ar32_frame002 has N = 1, since the energy is just a single number. In that case, the MAE and RMSE would be identical, so the file gives the absolute error in the MAE column and the signed error (reference - prediction) in the RMSE column.
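For a single number (N = 1) the MAE and RMSE always coincide, which is why stats.txt reuses those columns for the absolute and signed error. A sketch with a hypothetical energy entry:

```python
ref, pred = 0.204, 0.210        # hypothetical single-energy entry, in eV
abs_error = abs(ref - pred)     # what the MAE column then holds
signed_error = ref - pred       # what the RMSE column then holds (reference - prediction)
assert abs_error == abs(signed_error)
print(round(abs_error, 3), round(signed_error, 3))  # -> 0.006 -0.006
```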

The file also gives the weights, loss function values, and loss contributions of the training set entries. The total loss function value is printed below the table.

2.2.6.9. All output files

The GUI stores all results in the directory jobname.results (here lennardjones.results).

The results from the optimization are stored in the optimization directory. All the inputs to the optimization are stored in the settings_and_initial_data directory:

jobname.results
├── settings_and_initial_data
│   └── data_sets
└── optimization
    ├── summary.txt
    ├── glompo_optimizer_printstreams
    ├── optimizer_001
    │   ├── end_condition.txt
    │   └── training_set_results
    │       ├── best
    │       │   ├── pes_predictions
    │       │   └── scatter_plots
    │       ├── history
    │       │   ├── 000000
    │       │   │   ├── pes_predictions
    │       │   │   └── scatter_plots
    │       │   └── 000144
    │       │       ├── pes_predictions
    │       │       └── scatter_plots
    │       └── latest
    │           ├── pes_predictions
    │           └── scatter_plots
    └── training_set_results
        ├── best
        │   ├── pes_predictions
        │   └── scatter_plots
        ├── initial
        │   ├── pes_predictions
        │   └── scatter_plots
        └── latest
            ├── pes_predictions
            └── scatter_plots
  • The settings_and_initial_data directory contains compressed versions of the job collection, training set, and parameter interface. It also contains a detailed params.in file representing the chosen optimization settings. This directory is a totally self-contained copy of all the optimization inputs and can be shared, archived or used to rerun the job on other machines.

  • From AMS2023, ParAMS can start multiple optimizers during a single optimization. The results from each optimizer will be placed in a directory labelled optimizer_xxx, where xxx is the unique optimizer identification number. Such folders contain an end_condition.txt file detailing the reason they stopped, and directories for each of the data sets being evaluated. In this case, only the training_set was used.

  • The optimization/training_set_results directory contains global detailed results for the training set, combined across all optimizers.

The training_set_results directories contain the following:

  • The running_loss.txt file records the loss function value, evaluation number, time, and (in the global results) the optimizer which did the evaluation.

  • The running_active_parameters.txt file records the parameter values per evaluation number.

  • The running_stats.txt file records the MAE and RMSE per extractor vs. evaluation number.

  • The best subdirectory contains detailed results for the iteration with the lowest loss function value (globally or for a specific optimizer).

  • The history subdirectory contains detailed results that are stored regularly during the optimization (by default every 500 iterations). Only optimizer-specific results have history directories.

  • The initial subdirectory contains detailed results for the first iteration (with the initial parameters). Only the global level results have the initial directory.

  • The latest subdirectory contains detailed results for the latest iteration.

In this tutorial, only one optimizer was started, so the contents of the global level results and optimizer_001 will be the same. Further, both the best and latest evaluations here were evaluation 144; your results will likely show a different number. In general, the latest evaluation doesn’t have to be the best. This is especially true for other optimizers like the CMA optimizer (recommended for ReaxFF).

Each detailed result subdirectory contains the following:

  • active_parameters.txt : List of the active parameter values

  • data_set_predictions.yaml : File storing the training set with both the reference values and predicted values.

  • engine.txt: an AMS Engine settings input block for the parameterized engine.

  • evaluation.txt: the evaluation number

  • optimizer_id.txt: the number of the optimizer which produced the result (global level only)

  • lj_parameters.txt: The parameters in a format that can be read by AMS. Here, it is identical to engine.txt. For ReaxFF parameters, you instead get a file ffield.ff. For GFN1-xTB parameters, you get a folder called xtb_files.

  • loss.txt: The loss function value

  • parameter_interface.yaml: The parameters in a format that can be read by ParAMS

  • pes_predictions: A directory containing results for PES scans: bond scans, angle scans, and volume scans. It is empty in this tutorial. For an example, see ReaxFF (basic): H₂O bond scan.

  • scatter_plots: Directory containing energy.txt, forces.txt, etc. Each file contains a table of reference and predicted values for creating scatter/correlation plots.

  • stats.txt: Contains MAE/RMSE/Loss contribution for each training set entry sorted in order of decreasing loss contribution.

2.2.6.10. summary.txt

The results/optimization/summary.txt file contains a summary of the job collection, training set, and settings:

Optimization() Instance Settings:
=================================
Workdir:                           LJ_Ar/optimization/optimization
JobCollection size:                3
Interface:                         LennardJonesParameters
Active parameters:                 2
Optimizer:                         Scipy
Parallelism:                       ParallelLevels(optimizations=1, parametervectors=1, jobs=1, processes=1, threads=1)
Verbose:                           True
Callbacks:                         Logger
                                   Timeout
                                   Stopfile
PLAMS workdir path:                /tmp

Evaluators:
-----------
Name:                              training_set (_LossEvaluator)
Loss:                              SSE
Evaluation interval:              1

Data Set entries:                  5
Data Set jobs:                     3
Batch size:                        None

Use PIPE:                          True
---
===
Start time: 2021-12-06 10:07:21.681185
End time:   2021-12-06 10:07:32.125530

2.2.7. Close the ParAMS GUI

When you close the ParAMS GUI (File → Close), you will be asked whether to save your changes.

This question might appear strange since you didn’t make any changes after running the job.

The reason is that ParAMS auto-updates the table of parameters while the parametrization is running, and also automatically reads the optimized parameters when you open the project again. To revert to the initial parameters, choose File → Revert Parameters.

If you save the changes, this will save the optimized parameters as the “new” starting parameters, which could be used as a starting point for further parametrization.

We do not recommend overwriting the same folder several times with different starting parameters. Instead, if you want to save after having made changes to the parameters, use File → Save As and save in a new directory.

2.2.8. Appendix: Creation of the input files

How to run the reference calculations and import the results into ParAMS is explained in the Import training data (GUI) tutorial. The data for this Lennard-Jones tutorial was generated following the section MD with fast method followed by Replay with reference method.

2.2.9. Next steps