6. Frequently Asked Questions

Can ParAMS run on multiple compute nodes?

No. ParAMS can only be run on a single compute node. However, it can run in parallel on that node. See Parallelization.

How do I delete a parameter (block) in the GUI?

It is currently not possible to delete parameters.

Why are MaxIterations and PretendConverged set for geometry optimization jobs?

If you use the GUI or a Results Importer to import a job with the Task set to GeometryOptimization, you’ll find that the GeometryOptimization settings default to

    MaxIterations 30
    PretendConverged Yes

This means that during the parametrization, at most 30 geometry optimization iterations are allowed per job. The reason for limiting the number of iterations is that unrealistic sets of parameters may arise for which a geometry optimization would “never” converge. By limiting the number of iterations, the parametrization will not get stuck.

PretendConverged Yes means that if the maximum of 30 iterations is reached, ParAMS will simply use the last geometry (and its energy). Without PretendConverged, a geometry optimization that does not converge within MaxIterations would be treated as an error, giving an infinite loss function value.
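Schematically, the effect of these two settings on a single job’s loss contribution can be illustrated in plain Python. This is only a sketch of the idea; the function `geo_opt_loss` and its arguments are made up for illustration and are not part of the ParAMS API:

```python
import math

def geo_opt_loss(converged, pretend_converged, last_energy, ref_energy):
    """Illustrative loss contribution of one geometry optimization job."""
    if not converged and not pretend_converged:
        # Without PretendConverged, a non-converged optimization counts
        # as an error, giving an infinite loss function value.
        return math.inf
    # With PretendConverged Yes, the last geometry (and its energy) is used.
    return (last_energy - ref_energy) ** 2

geo_opt_loss(False, False, -1.0, -1.1)  # not converged, no pretending: inf
geo_opt_loss(False, True, -1.0, -1.1)   # not converged, but the last energy is used
```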

You can easily change the MaxIterations for many jobs at once. In the GUI, select all the geometry optimization jobs you want to edit, and double-click the Details of one of them. Change the MaxIterations in the window, and click OK. That will change it for all jobs you originally selected.

If you use the ResultsImporter class, you can set MaxIterations in the settings.

What does “UserWarning: At iteration ___ (training_set), received warning: I/O operation on closed file” mean?

This warning can appear when you run an optimization in parallel and log to disk frequently, especially if you have a slow disk.

It might affect the files in the training_set_results/latest directory. However, the files are likely to be overwritten at the next logging time, in which case there is no problem.

To avoid this warning, you can try to

  • Decrease the number of parameter vectors (parametervectors in ParallelLevels) that you parallelize over.
  • Increase logger_every, i.e., log less frequently.
  • Make sure to run the optimization in a directory on a fast local disk.
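For the first point, the block below is a sketch of what this could look like in a ParAMS text input. The ParameterVectors key name is an assumption based on the option mentioned above; check the ParAMS input reference for your version:

```
ParallelLevels
   ParameterVectors 1
End
```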

Why am I getting an ‘ill-defined region’ warning?

This warning usually means that the CMA optimizer is stuck in a parameter region that repeatedly causes one or more of your training set jobs to fail. Most often this is due to unphysical parameters, but too many or too tight Constraints can also be the cause. The warning can resolve itself after some time, in which case it can be ignored. However, if the issue persists and CMA is not able to leave the problematic region, your optimization might stop early without producing any improved results. When this happens, consider the following:

  • Increase the CMA-ES sigma value
  • Start the optimization with different initial parameters, as defined by the parameter interface’s active parameters (e.g. a different force field)
  • Check that none of your training set jobs are prone to crashes
  • When in use, check your Constraints

Note that if you start your Optimization with the skip_x0=True argument, such warnings are expected, as there is no guarantee that the initial set of parameters makes physical sense.

How do I delete a reference value in the ParAMS GUI?

When you delete a reference value for a training set entry in the ParAMS GUI, the value will automatically be fetched from the reference jobs.

If you want to delete the reference value in order to recalculate it with a new reference engine (see Calculate reference values with ParAMS), note that the reference value is deleted automatically when you change the reference engine for a job.

If the reference jobs have not been run or do not exist, you can delete the reference value.

How do I manually evaluate a set of parameters?

Below is a minimal self-contained example of how manual evaluation of parameters can be implemented. Replace the random sampling of X with your parameters.

import numpy as np
from scm.params import *
from scm.plams import from_smiles, Settings

# Prepare the training data:
jc = JobCollection()
jc.add_entry('water', JCEntry(molecule=from_smiles('O'), settings='go'))
ds = DataSet()
# Run the reference calculation:
s = Settings()
results = jc.run(s)

# Manually evaluate 10 random points
ljp = LennardJonesParameters() # your favourite parameter interface
X   = np.array([np.random.uniform(p.range[0], p.range[1], size=10) for p in ljp.active]).T # replace with something more meaningful
fX  = [] # stores all loss function values
for x in X:
    ljp.active.x = x # manually set the parameters
    results = jc.run(ljp) # run all jobs
    fx = ds.evaluate(results) # evaluate the data set, calculating the loss
    fX.append(fx) # store the loss for this set of parameters

Alternatively, you can also use a Data Set Evaluator.

The reference dihedral angle is given as 0° in the output (scatter_plots/dihedral.txt)

The output gives all reference dihedral angles as 0°, and the prediction as the difference to the reference value. The dihedral extractor uses a comparator that measures the difference between the prediction and the reference value. This ensures that if the reference value is 1° and the prediction is 359°, the difference is actually only 2° and not 358°.

You can access the actual reference value in the input (training_set.yaml), and get the actual prediction by adding the difference from scatter_plots/dihedral.txt.
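The wrap-around comparison described above can be illustrated in plain Python. This is only a sketch of the idea, not the actual ParAMS extractor; the function name is made up for illustration:

```python
def angle_difference(prediction, reference):
    """Smallest signed difference (in degrees) between two angles."""
    return (prediction - reference + 180.0) % 360.0 - 180.0

# A reference of 1° and a prediction of 359° differ by only -2°, not 358°:
angle_difference(359.0, 1.0)  # -> -2.0

# The actual prediction is recovered from the reference plus the difference:
(1.0 + angle_difference(359.0, 1.0)) % 360.0  # -> 359.0
```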

My question is not listed here

For further support, contact us at support@scm.com.