# 4.6.3. ResultsImporters API

class ResultsImporter(job_collection=None, data_set=None, settings=None)
__init__(job_collection=None, data_set=None, settings=None)

Class for constructing job_collection.yaml, data_set.yaml, and job_collection_engines.yaml files

job_collection : JobCollection or str or None

If None, a new JobCollection will be created

If a str, will be read from that file.

data_set : DataSet, str, dict of DataSets, or None

If None, a new DataSet will be created

If a str, will be read from that file.

If a dict and it has no key ‘training_set’, that key will be created with a new DataSet.

settings : str, Settings, or dict

Some settings affecting the way the results importer works.

If a string is given, it should be the path to a results_importer_settings.yaml file. The settings are then loaded from that file. Otherwise, the settings are read from the Settings or dict.

Any setting that is not explicitly given takes its default value.

• ‘remove_bonds’: bool (default True). Whether to delete the bonds from the jobs in the job collection. The bonds are not needed for ReaxFF or DFTB parametrization.
• ‘trim_settings’ : bool (default True). If the reference job is a GeometryOptimization but the newly added job is a SinglePoint, then remove the GeometryOptimization settings block from the settings. Similarly remove unused MolecularDynamics/PESScan blocks.
• ‘default_go_settings’ : Settings() containing default settings in the GeometryOptimization block of the AMS input. E.g. {‘MaxIterations’: 30, ‘PretendConverged’: True}
• ‘units’: Settings() containing the preferred units for different extractors. Units can be specified either with a known string or with a 2-tuple (string, float). For example {'energy': 'eV', 'forces': ('custom_unit', 12.34)}. The unit for ‘energy’ is also used for relative_energies. The known units are given in the PLAMS documentation.
```python
results_importer_settings = Settings()
results_importer_settings.trim_settings = True
results_importer_settings.default_go_settings.MaxIterations = 20
results_importer_settings.units.energy = 'kcal/mol'
results_importer_settings.remove_bonds = False

sc = ResultsImporter(settings=results_importer_settings)
```

add_singlejob(amsjob, properties, name=None, task='SinglePoint', data_set=None, subgroup=None, settings=None, extra_engine=None)

This method adds an entry to the job collection and an entry to the reference engine collection, using amsjob as a template. The items of properties are extractors.

Returns a list of expressions added to the data_set.

amsjob : AMSJob or path to ams.results folder or path to VASP results folder or path to Quantum ESPRESSO .out file
Job with a finished reference calculation
data_set : str or None
for example ‘training_set’ or ‘validation_set’. A value of None means ‘training_set’.
subgroup : str or None
Set the SubGroup metadata for the DataSetEntries
task : str
The Task of the added job. Defaults to SinglePoint, no matter what the Task of the reference amsjob is. If you want GeometryOptimizations to be run during the parametrization, you must set task='GeometryOptimization' here.
settings : Settings

Custom settings for the added job. By default, all settings from the reference job are inherited.

Any settings provided with this argument will override the inherited settings from the reference job.

```python
sett = Settings()
sett.input.ams.GeometryOptimization.MaxIterations = 50

ri = ResultsImporter()
ri.add_singlejob(..., task='GeometryOptimization', settings=sett)
```

properties : list or dict of extractors

If given as a list, the default settings (weights, sigma, unit) are used for each data_set entry.

The arguments to the extractor should be specified without the jobid.

Example: properties = ['energy', 'angle((0,1,2))'] will add two data_set entries, one for the energy and one for the angle between the first three atoms

Properties can also be a dict, where the keys are the extractors as above and the values are dicts containing the settings for the data set entry.

Example:

```python
properties = {
    'energy': {
        'weight': 1.0,
        'sigma': 1.0,
        'unit': 'eV',
    },
    'forces': {
        'weight': ...,
        'sigma': ...,
        'unit': 'Ha/bohr',
        'weights_scheme': WeightsSchemeGaussian(normalization='numentries'),
    },
    'pes': {  # this will translate the "min" to a fixed index (recommended)
        'relative_to': 'min',
    },
    'pes(relative_to="min")': {  # the pes returned from this extractor may be relative to different datapoints
    },
    'angle((0,1,2))': {
        'weight': ...,
    },
}
```
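For the simpler list form, a minimal sketch of a call might look as follows (the job path and extractor list below are placeholders for illustration, not taken from a real calculation):

```python
# Illustrative only: the extractors shown here are placeholders.
# List form: each data_set entry gets the default weight, sigma, and unit.
properties = ['energy', 'forces', 'angle((0,1,2))']

# With a ResultsImporter instance ri and a finished reference job, the call
# would be along these lines:
# new_entries = ri.add_singlejob('/path/to/ams.results', properties, name='water')
```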

add_pesscan_singlepoints(amsjob, properties, name=None, task='SinglePoint', start=0, end=None, step=1, indices=None, data_set=None, subgroup=None, settings=None, extra_engine=None)

The reference job must be a PESScan. This method extracts the converged points and adds them to the job collection as single points. Returns a list of expressions added to the data_set.

To add a job with Task ‘PESScan’ (for use with the pes* extractors), do NOT use this method but instead the add_singlejob() method.

amsjob : AMSJob or path to ams.results folder
The job must have had Task PESScan
properties : list or dictionary

Allowed keys are ‘energy’ (not recommended) and ‘relative_energies’ (recommended).

The ‘forces’ are not supported since AMS only writes the constrained gradients to the ams.rkf files.

name : str
Jobs in the job collection will get ID “name_frame003” etc.
task : str
Task, only ‘SinglePoint’ makes sense
start : int
start step (0-based)
end : int or None
end step (0-based). If None, the entire trajectory is used
step : int
Use only every ‘step’-th frame
indices : list of int
Manually specified list of indices. Overrides start/end/step if not None.
data_set : str
Dataset (‘training_set’, etc.)
subgroup : str
Set a custom SubGroup metadata key for the data_set entries.

For the property ‘relative_energies’ you can set the ‘relative_to’ option to specify the reference point. Allowed values:

‘min’ : smallest energy from indices subset

‘max’ : largest energy from indices subset

‘first’ : first energy from indices subset

‘last’ : last energy from indices subset

‘min_global’ : smallest energy in the trajectory

‘max_global’ : largest energy in the trajectory

‘first_global’ : first energy in the trajectory

‘last_global’ : last energy in the trajectory

If specifying e.g. ‘min_global’, then the smallest energy in the trajectory is always included, even if it is not covered by the indices subset.

Example:

```python
add_pesscan_singlepoints('/path/to/ams.rkf', properties={
    'energy': {
        'weight': 1.5,
    },
    'relative_energies': {
        'weight': 2.0,
        'sigma': 0.1,
        'unit': 'eV',
        'relative_to': 'min_global',
    },
})
```

add_neb_singlepoints(amsjob, properties, name=None, task='SinglePoint', data_set=None, subgroup=None, images='highest', extra_engine=None)

Method for extracting frames from an NEB calculation, and adding singlepoint jobs for each of the frames. Returns a list of expressions added to the data_set.

amsjob : AMSJob or path to ams.results folder
The job must contain History and NEB sections on ams.rkf
properties : list or dictionary
Allowed keys are [‘energy’, ‘relative_energies’]
name : str
Jobs in the job collection will get ID “name_frame003” etc.
task : str
Task, only ‘SinglePoint’ makes sense.
data_set : str
Dataset (‘training_set’, etc.)
subgroup : str
Set custom SubGroup metadata for the data_set entries.
images : str, ‘highest’ or ‘all’
Whether to include only the highest energy image or all images
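A rough usage sketch (the file path, entry name, and weights below are placeholders, not taken from a real NEB run):

```python
# Hypothetical values: weights and units here are placeholders.
properties = {
    'energy': {'weight': 1.0},
    'relative_energies': {'weight': 2.0, 'unit': 'eV'},
}

# With a ResultsImporter instance ri and a finished NEB job:
# ri.add_neb_singlepoints('/path/to/ams.rkf', properties,
#                         name='neb_reaction', images='all')
```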

add_trajectory_singlepoints(amsjob, properties, name=None, task='SinglePoint', start=0, end=None, step=1, N=None, indices=None, data_set=None, subgroup=None, settings=None, extra_engine=None)

Method for extracting frames from a trajectory file and adding a singlepoint job for each frame. Returns a list of expressions added to the data_set.

To add a job with task ‘GeometryOptimization’ or ‘MolecularDynamics’, do NOT use this method but instead use the add_singlejob() method.

amsjob : AMSJob or path to ams.results folder
The job must contain a History section on ams.rkf, e.g. from a geometry optimization or MD simulation
properties : list or dictionary
Allowed keys are [‘energy’, ‘relative_energies’, ‘forces’, ‘stresstensor’]. Not all extractors are supported, since each individual frame does not constitute an AMSResults.
name : str
Jobs in the job collection will get ID “name_frame003” etc.
task : str
Task, only ‘SinglePoint’ makes sense
start : int
start step (0-based)
end : int or None
end step (0-based). If None, the entire trajectory is used
step : int
Use only every ‘step’-th frame
N : int
Get N equally spaced frames in the interval [start, end). Overrides step if set.
indices : list of int
Manually specified list of indices. Overrides start/end/step if not None.
data_set : str
Dataset (‘training_set’, etc.)
subgroup : str
Set custom SubGroup metadata for the data_set entries.

For the property ‘relative_energies’ you can set the ‘relative_to’ option to specify the reference point. Allowed values:

‘min’ : smallest energy from indices subset

‘max’ : largest energy from indices subset

‘first’ : first energy from indices subset

‘last’ : last energy from indices subset

‘min_global’ : smallest energy in the trajectory

‘max_global’ : largest energy in the trajectory

‘first_global’ : first energy in the trajectory

‘last_global’ : last energy in the trajectory

If specifying e.g. ‘min_global’, then the smallest energy in the trajectory is always included, even if it is not covered by the indices subset.

Example:

```python
add_trajectory_singlepoints('/path/to/ams.rkf', properties={
    'energy': {
        'weight': 1.5,
    },
    'relative_energies': {
        'weight': 2.0,
        'sigma': 0.1,
        'unit': 'eV',
        'relative_to': 'min_global',
    },
    'forces': {},
})
```

add_pesexploration_singlepoints(amsjob, properties, name=None, task='SinglePoint', indices=None, data_set=None, subgroup=None, settings=None, extra_engine=None)

Method for importing PES exploration reference jobs. Returns a list of expressions added to the data_set.

To add jobs with task ‘PESExploration’ to the job collection (although you most likely do not want to do that because of the computational expense), do NOT use this method but instead the add_singlejob() method.

The method will add

• Forward and reverse reaction barriers from any transition state
• Relative energies between two minima connected by a transition state
• Relative energies between the lowest-energy minimum and all other minima

The “lowest-energy” and “all other minima” refer to minima either explicitly specified in indices, or connected to one of the transition states in indices.

Tip

Most PES explorations contain a large number of states. Select the subset you are interested in with the indices argument, which corresponds to state numbers.

amsjob : AMSJob or path to ams.results folder
The finished PES exploration reference job
properties: list or dict
‘energy’ and/or ‘relative_energies’
name: str
prefix for the jobids of the individual datapoints
task: str
must be ‘SinglePoint’
indices: None or list of int
The indices in this method are 1-based! They match the 1-based state numbering of a PES Exploration, which is also used in everyday work, so the same indexing scheme is kept here.
data_set : str
Dataset (‘training_set’, etc.)
subgroup : str
Set custom SubGroup metadata for the data_set entries.
settings : Settings
Additional job settings
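A rough usage sketch (the state indices below are placeholders; remember that indices is 1-based):

```python
# Hypothetical values: the state indices are placeholders.
# 'indices' is 1-based, following the PES Exploration state numbering.
properties = ['relative_energies']
indices = [1, 3, 7]

# With a ResultsImporter instance ri and a finished PES exploration:
# ri.add_pesexploration_singlepoints('/path/to/ams.results', properties,
#                                    name='pesexp', indices=indices)
```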
add_reaction_energy(reactants, products, normalization='r0', normalization_value=1.0, task='SinglePoint', weight=1.0, sigma=None, reactants_names=None, products_names=None, reference=None, unit=None, dupe_check=True, data_set=None, subgroup=None, settings=None, extra_engine=None, metadata=None)

ResultsImporter for adding reaction energies to the data_set.

reactants : list
a list of jobids, or a list of paths to ams.results folders or ams.rkf files, or a list of AMSJobs
products : list
a list of jobids, or a list of paths to ams.results folders or ams.rkf files, or a list of AMSJobs
normalization : str
‘r0’ for the first reactant, ‘r1’ for the second reactant, etc.; ‘p0’ for the first product, ‘p1’ for the second product, etc. This normalizes the chemical equation such that the coefficient in front of the specified species equals normalization_value
normalization_value : float
Normalize the chemical equation such that the coefficient in front of the species selected by normalization is this number
task : str, default ‘SinglePoint’
Set the task for the job collection entries (only if new entries are created from AMSJobs)
weight : float, optional
Weight for the data set entry
sigma : float, optional
Sigma for the data set entry
reactants_names : list
set the job_collection IDs (only if new entries are created from AMSJobs). By default the job name is used.
products_names : list
set the job_collection IDs (only if new entries are created from AMSJobs). By default the job name is used
reference : float or None
The reaction energy. If None, an attempt will be made to calculate this, if all the constituent jobs (reactants and products) were loaded as AMSJobs
unit : str or 2-tuple
Energy unit. If 2-tuple should be of the form (“string”, “conversion_factor_from_au”)
dupe_check : bool
Check for duplicate data set entries
metadata:
a dictionary containing metadata for the data set entry. Note: new key-value pairs may be added to the dictionary by this method.
settings : Settings
Additional job settings

This method is primarily to be used by providing a list of paths to ams.results folders. The method will

• Create reference engines in the EngineCollection based on the engine settings for the provided jobs
• Extract the final structures from the AMSJobs, and add them to the JobCollection with the provided Task and pertinent ReferenceEngineID
• Balance the chemical equation, calculate the reference value, and add an entry to the DataSet
• The metadata for the DataSet entry is augmented by INFO_ReferenceEngineIDs, which gives all the reference engines used to calculate the reference data
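To illustrate what the normalization arguments mean, here is a purely illustrative helper (not part of the API) that rescales the balanced coefficients of 2 H2 + O2 → 2 H2O so that the species labelled ‘r0’ gets coefficient 1:

```python
# Purely illustrative helper, not part of the ResultsImporter API.
def normalize(coefficients, species, value=1.0):
    """Rescale balanced coefficients so that `species` gets coefficient `value`."""
    scale = value / coefficients[species]
    return {s: c * scale for s, c in coefficients.items()}

# Balanced equation 2 H2 + O2 -> 2 H2O, keyed by the 'r0'/'r1'/'p0' labels:
coefficients = {'r0': 2.0, 'r1': 1.0, 'p0': 2.0}
print(normalize(coefficients, 'r0'))  # -> {'r0': 1.0, 'r1': 0.5, 'p0': 1.0}
```

The corresponding importer call, with reactants and products given as paths to ams.results folders, might then use normalization='r0' and normalization_value=1.0.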

store(folder=None, prefix='', binary=False, text=True, backup=True, store_settings=True)
folder : str
If folder is not given, the current working directory is used. If the folder does not exist, it will be created.
prefix : str
Prefix the output file names, e.g. giving “prefixjob_collection.yaml”
binary : bool
Whether to output binary job_collection.pkl and training_set.pkl files
text : bool
Whether to output text files job_collection.yaml and training_set.yaml. The job_collection_engines.yaml and results_importer_settings.yaml files are always created in text format.
backup : bool
Whether to backup any existing files by appending a .002 suffix or similar to the existing files.
store_settings : bool
Whether to save results_importer_settings.yaml

Saves job_collection.yaml, job_collection_engines.yaml, and training_set.yaml. For each additional data_set (e.g. “validation_set”), a yaml file is also created.
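As an illustration of the backup naming, the helper below sketches how a .002-style suffix can be chosen; it is a sketch of the general idea only, and the actual ParAMS implementation may differ:

```python
import os

# Sketch of .002-style backup naming; the actual ParAMS logic may differ.
def backup_name(path):
    n = 2
    while os.path.exists(f"{path}.{n:03d}"):
        n += 1
    return f"{path}.{n:03d}"

print(backup_name('training_set.yaml'))  # e.g. 'training_set.yaml.002'

# A typical store call might look like:
# ri.store(folder='import_results', backup=True)
```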

add_engine_from_amsjob(amsjob, name=None)

Reads the engine definition from amsjob and adds it to the engine collection. Returns the ID of the engine in the engine collection.

amsjob: an AMSJob, or path to ams.results

name : str or None
If None, the engine name is created from the engine settings