10. Frequently Asked Questions

10.1. General questions

Can ParAMS run on multiple compute nodes?

No. ParAMS can only be run on a single compute node. However, it can run in parallel on that node. See Parallelization.

How do I start the ParAMS GUI from the command-line?

$AMSBIN/params -gui or, to open an existing project: $AMSBIN/params -gui jobname.params.

How do I delete a parameter (block) in the GUI?

It is currently not possible to delete parameters.

How do I delete a reference value in the ParAMS GUI?

When you delete a reference value for a training set entry in the ParAMS GUI, the value will automatically be fetched from the reference jobs (if those have been run).

If you want to delete the reference value in order to Generate reference values with a new reference engine, the reference value will be deleted when you change the reference engine for a job.

If the reference jobs have not been run or do not exist, you can simply delete the reference value.

How do I manually evaluate a set of parameters?

This can be done with a Task: SinglePoint job.
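In the text input, this is just a matter of setting the task. A minimal sketch (assuming the job collection, data sets, and parameter interface are read from their usual files in the working directory):

```
Task SinglePoint
```

The loss function is then evaluated once for the given parameters, without optimizing them.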

10.2. Job settings

Why are MaxIterations and PretendConverged set for geometry optimization jobs?

If you use the GUI or a Results Importer to import a job with the Task set to GeometryOptimization, you’ll find that the settings for the GeometryOptimization block default to

GeometryOptimization
    MaxIterations 30
    PretendConverged Yes
End

This means that during the parametrization, at most 30 iterations are allowed per geometry optimization. The reason to limit the number of iterations is that during the parametrization, there may be unrealistic sets of parameters for which a geometry optimization would “never” converge. By limiting the number of iterations, the parametrization will not get stuck.

PretendConverged Yes means that if the maximum of 30 iterations is reached, ParAMS will simply use the last geometry (and its energy). If PretendConverged were not set, the geometry optimization would be considered an error (because it did not converge within MaxIterations), giving an infinite loss function value.

You can easily change the MaxIterations for many jobs at once. In the GUI, select all the geometry optimization jobs you want to edit, and double-click the Details of one of them. Change the MaxIterations in the window, and click OK. That will change it for all jobs you originally selected.

If you use the ResultsImporter class, you can set MaxIterations in the settings.

How do I update the geometry optimization settings for all jobs?

This is easiest to do with a Python script:

#!/usr/bin/env amspython

from scm.plams import *
from scm.params import *

def modify_settings(job_collection: JobCollection, task: str, new_settings: Settings):
    """Merge new_settings into all jobs in job_collection with the given AMS Task."""
    for jid in job_collection:
        jce = job_collection[jid]
        if jce.settings.input.ams.task.lower() == task.lower():
            jce.settings.update(new_settings)

def main():
    jc = JobCollection('job_collection.yaml')

    new_settings = Settings()
    new_settings.input.ams.GeometryOptimization.MaxIterations = 55
    new_settings.input.ams.GeometryOptimization.Method = 'FIRE'

    modify_settings(jc, 'geometryoptimization', new_settings)
    modify_settings(jc, 'pesscan', new_settings)

    jc.store('modified_job_collection.yaml')


if __name__ == '__main__':
    main()

10.3. Errors and warnings

“UserWarning: At iteration ___ (training_set), received warning: I/O operation on closed file”

This warning can appear when you are running an optimization in parallel and if you log frequently to disk, especially if you have a slow disk.

It might affect the files in the training_set_results/latest directory. However, the files are likely to be overwritten at the next logging time, in which case there is no problem.

To avoid this warning, you can try to

  • Decrease the number of parameter vectors (in ParallelLevels) that you parallelize over

  • Increase the logger_every, i.e., log less frequently.

  • Make sure to run the optimization in a directory on a fast local disk

‘Ill-defined region’

This warning usually means that the CMA optimizer is stuck in a parameter region that repeatedly causes one or more of your training set jobs to fail. Most often this is due to unphysical parameters, but too many or too tight Constraints can also be the cause. The warning can resolve itself after some time, in which case it can be ignored. However, if the issue persists and CMA is not able to leave the problematic region, your optimization might stop early without producing any improved results. When this happens, consider the following:

  • Increase the CMA-ES sigma value

  • Start the optimization with different initial parameters

  • Check that none of your training set jobs are prone to crashes

  • When in use, check your Constraints

Note that if you start your Optimization with the skip_x0=True argument, such warnings are expected as there is no guarantee that the initial set of parameters makes any physical sense.
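For the first suggestion, the sigma value can be set in the input file. A sketch (the block and keyword names below are assumptions based on the CMAES optimizer settings; check the Optimizers documentation for the exact spelling):

```
Optimizer
   Type CMAES
   CMAES
      Sigma0 0.2
   End
End
```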

AttributeError: ‘AMSWorkerResults’ object has no attribute ‘readrkf’, scm.params.core.dataset.DataSetEvaluationError: Error evaluating ‘….’

Potential solution #1: This error means that you are running a job through the pipe (the pipe provides efficient communication between ParAMS and AMS) but use an extractor (for example bandgap, bandstructure) that is not compatible with the pipe.

To solve this issue, you need to disable the pipe:

  • Input file: Set DataSet%UsePipe No

  • GUI: On the Details → Technical panel, uncheck the Use pipe option. Do this for both the training set and validation set.
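In the text input, disabling the pipe might look like the following sketch (assuming data sets named training_set and validation_set; adapt the names to your project):

```
DataSet
   Name training_set
   UsePipe No
End

DataSet
   Name validation_set
   UsePipe No
End
```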

If you use one of these extractors, it is likely that you are parametrizing a relatively expensive compute engine (e.g., DFTB). In that case, the performance overhead of not using the pipe is negligible.

Potential solution #2: If you’re using the vibfreq (vibrations, frequencies) extractor you may have forgotten to set

Properties
    NormalModes Yes
End

in the job. Without this setting, the job will still run over the pipe, but the frequencies cannot be extracted.

To solve this issue, make sure that you have enabled the normal modes (frequencies) in the job settings.

10.4. Optimization questions

I want a total of exactly 12 optimization results, but only want to run at most 4 optimizers at a time to fit on my available cores

# Have at most 4 optimizers running at the same time
ParallelLevels
    Optimizations 4
End

# Even if less than 4 optimizers are running, after starting a total of 12 optimizers,
# never start any more
ControlOptimizerSpawning
    MaxOptimizers 12
End

Why are no ASE parameters shown in the GUI?

The GUI cannot be used to create ASE parameters. You need to first create the parameter_interface.yaml file and then load that.

It is easiest to create the file with a Python script. See the ASE Calculator parametrization tutorial.

10.5. Results questions

How can I compare similarity of parameter values between two ReaxFF force fields?

The easiest way is to create a file with one line per parameter, printing the parameter name and value, and to compare the resulting output files with an external tool like xxdiff or WinMerge (neither provided nor supported by SCM).

For example, to print a sorted version of ‘CHO.ff’:

#!/usr/bin/env amspython
from scm.params import ReaxFFParameters
interf = ReaxFFParameters('CHO.ff')
lines = [f'{p.block} {p.name} {p.value}' for p in interf]
for line in sorted(lines):
    print(line)
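If you prefer to stay in Python, the two sorted line lists can also be compared with the standard-library difflib module. A sketch; the parameter lines below are made-up examples of the ‘block name value’ format produced by the script above:

```python
import difflib

# Hypothetical sorted parameter lines for two force fields, in the
# 'block name value' format produced by the script above.
lines_a = [
    "ATM C p_val3 2.5000",
    "ATM H p_val3 2.8793",
    "GEN kc2 0.1000",
]
lines_b = [
    "ATM C p_val3 2.5000",
    "ATM H p_val3 2.9000",
    "GEN kc2 0.1000",
]

# unified_diff yields header lines plus '-'/'+' lines for the differences
diff = list(difflib.unified_diff(lines_a, lines_b, lineterm=""))
for line in diff:
    print(line)
```

Only the lines that actually differ between the two force fields are printed with a ‘-’ or ‘+’ prefix.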

The reference dihedral angle is given as 0° in the output (scatter_plots/dihedral.txt)

The output reports all reference dihedral angles as 0°, and the prediction as the difference to the reference value. This is because the dihedral extractor uses a comparator to compare the prediction to the reference value, which ensures that if the reference value is 1° and the prediction is 359°, the difference is counted as only 2° and not 358°.

You can find the actual reference value in the input (training_set.yaml), and obtain the actual prediction by adding the difference reported in scatter_plots/dihedral.txt to that reference value.
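The periodic comparison itself can be illustrated with a few lines of plain Python (a sketch of the idea only, not the actual extractor code):

```python
def dihedral_difference(prediction: float, reference: float) -> float:
    """Smallest signed difference prediction - reference, in degrees,
    wrapped into the interval (-180, 180]."""
    d = (prediction - reference) % 360.0
    return d - 360.0 if d > 180.0 else d

# 359 deg and a reference of 1 deg differ by only 2 deg, not 358 deg:
print(dihedral_difference(359.0, 1.0))  # -2.0
print(dihedral_difference(1.0, 359.0))  # 2.0
```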