Job Analysis¶
Note
The JobAnalysis class is available in AMS2025+.
The JobAnalysis class is a tool which aims to simplify the process of analyzing the status, inputs and outputs of multiple jobs. It helps to create analysis tables, with jobs as rows and “analysis fields” as columns. An analysis field is simply a definition of how to extract a value from each job. These tables can then be easily visualized in a Jupyter notebook or script, or exported for use with other analysis packages like pandas.
For a worked example demonstrating the capabilities and uses of the JobAnalysis class, see Job Analysis.
Adding Jobs¶
Jobs can be added to a JobAnalysis in two ways: either passed directly on initialization or added using add_job(), or alternatively loaded from paths using load_job().
When loading jobs from a path, an attempt is first made to load from a .dill file. If this fails, the tool will attempt to use a series of loaders to create the job. By default this will be load_external(), but a custom set of loaders can be provided for other job types.
For example, the following snippet will add two jobs to the JobAnalysis and then load a further job using the ParAMSJob.load_external method:

ja = (JobAnalysis(jobs=[job1, job2])
      .load_job("path/to/job3", loaders=[ParAMSJob.load_external]))
This also illustrates one of the design features of JobAnalysis: each operation returns the in-place modified instance, allowing the use of fluent syntax to chain methods.
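As a brief sketch of this fluent style (assuming job1, job2 and job3 are existing PLAMS jobs, and with a placeholder folder path), the different ways of adding jobs can be combined in a single chain:

# Hypothetical illustration: construct with two jobs, add a third directly,
# then load a fourth from disk ("path/to/job4" is a placeholder).
ja = (JobAnalysis(jobs=[job1, job2])
      .add_job(job3)
      .load_job("path/to/job4"))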
Adding Analysis Fields¶
Analysis fields can be added to a JobAnalysis in a number of ways.
Firstly, there are a small number of predefined “standard” analysis fields which are common across jobs, e.g. Name, Path etc. These can be added to the analysis via methods such as add_standard_field() / add_standard_fields(). To add these standard fields, the key(s) of the relevant fields must be supplied. A full list of these is available in the tool-tip.
Secondly, custom analysis fields can be added using the methods add_field() or set_field(). When doing this, a unique identifier for the field (the key) must be provided, along with a function defining how to extract a value for the field from a job. In addition, optional arguments can be provided to set the displayed name for the field, and its formatting when returned in a table.
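For instance, a custom energy field might be added as follows. This is a sketch assuming AMS-style jobs whose results object provides a get_energy() method; the field key and display name are arbitrary choices:

# "Energy" is the unique field key; the lambda extracts a value from each job.
# display_name and fmt only affect how the field is rendered in tables.
ja.add_field("Energy",
             lambda j: j.results.get_energy(),  # assumes AMS-style results
             display_name="Energy [Ha]",
             fmt=".4f")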
Finally, additional methods are present to facilitate adding fields for values from the job settings. These are add_settings_field() and add_settings_input_fields(). The former is used to create an analysis field given a specific nested settings key; the latter automatically creates fields for all settings keys under settings.input, which includes AMS input settings. The key for these settings fields will be the concatenated settings key in Pascal case, e.g. ("input", "ams", "task") will have field key InputAmsTask.
For example, the following snippet will add the Formula field to the JobAnalysis, followed by a custom field displaying the number of atoms, and finally the settings input fields:

(ja
 .add_standard_field("Formula")
 .add_field("NAtoms", lambda j: len(j.molecule))
 .add_settings_input_fields())
Modifying Analysis¶
JobAnalysis supports some modification and data manipulation methods. These allow jobs and results to be interrogated and more easily visualized.
Fields can be filtered using filter_fields(). This accepts a predicate as an argument, which is passed all the values for a field; only fields for which the predicate evaluates to True are retained. There are also some pre-configured filter methods, such as remove_empty_fields() and remove_uniform_fields(). These can be useful for removing noise from the analysis table, by dropping fields whose values are all empty or all identical.
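For example, the two convenience methods can be chained to tidy a table in one pass, and the first can equally be written as an explicit predicate (a sketch using the methods described above):

# Drop fields where every value is empty, then fields where every job
# has the same value (ignoring empty values in the comparison).
(ja
 .remove_empty_fields()
 .remove_uniform_fields(ignore_empty=True))

# Roughly equivalent to remove_empty_fields(), written as a predicate:
ja.filter_fields(lambda vals: any(v is not None for v in vals))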
Jobs can also be filtered using filter_jobs(). This accepts a predicate as an argument, which is passed all the field keys and values for a job. Again, only jobs for which the predicate evaluates to True are retained. This can be useful for removing jobs which are not of interest, for example those with a given status.
For example, the following snippet will retain only fields with no missing values, and only jobs which did not succeed:

(ja
 .filter_fields(lambda vals: all([v is not None for v in vals]))
 .filter_jobs(lambda data: not data["OK"]))
Fields can also be sorted, renamed and formatted to aid with presentation. These changes will only be reflected in the generated tables, not when calling get_analysis(). For example:
(ja
 .rename_field("InputAmsTask", "Task")
 .format_field("Energy", ".4f")
 .reorder_fields(["Name", "Formula", "Energy"])
 .sort_jobs(field_keys=["Energy"]))
Extracting Analysis¶
There are various ways to extract data from a JobAnalysis.
The simplest way to get a visual representation is to call to_table() (or display_table() if running in a notebook). This generates a table in markdown, html or rst format.
Alternatively, data can be retrieved in code by calling get_analysis(). This returns a pure Python dictionary of the data, where the analysis field keys are the dictionary keys. The dictionary values are lists, with one element per job giving that job's value for the field.
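Because this is plain Python data, it can be inspected directly. A minimal sketch (the field keys and values shown are illustrative):

# Iterate over the analysis: each key is a field key, each value is a
# list with one entry per job.
analysis = ja.get_analysis()
for key, values in analysis.items():
    print(f"{key}: {values}")
# e.g. prints "Name: ['job_1', 'job_2']" and "OK: [True, True]"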
For more complex or involved analysis, the best approach is to export the data to a pandas dataframe. Pandas is a fast and powerful data analysis tool, which can perform complex manipulations. It can be installed via amspackages.
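A short sketch of the pandas route, assuming pandas is installed and that Formula and Energy fields like those in the earlier examples have been added:

# Export the analysis and use ordinary pandas operations on it,
# e.g. the mean energy per molecular formula.
df = ja.to_dataframe()
print(df.groupby("Formula")["Energy"].mean())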
As a final option, data can also be saved to a csv file using to_csv_file().
API¶
- class JobAnalysis(paths=None, jobs=None, loaders=None, standard_fields=('Path', 'Name', 'OK', 'Check', 'ErrorMsg'), await_results=True)[source]¶
Analysis tool for Jobs, which generates tables of data consisting of fields and their respective value for each job.
The jobs and fields which are included in the analysis are customizable, to allow for flexible comparison.
- __init__(paths=None, jobs=None, loaders=None, standard_fields=('Path', 'Name', 'OK', 'Check', 'ErrorMsg'), await_results=True)[source]¶
Initialize new instance of JobAnalysis with a set of jobs.

>>> ja = JobAnalysis(jobs=[job1, job2], standard_fields=["Name", "OK"])
>>> ja

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
- Parameters:
paths – one or more paths to folders from which to load jobs to add to the analysis
jobs – one or more jobs to add to the analysis
loaders – custom loading functions to generate jobs from a job folder
standard_fields – keys of standard fields to include in analysis, defaults to ("Path", "Name", "OK", "Check", "ErrorMsg")
await_results – whether to wait for the results of any passed jobs to finish, defaults to True
- copy()[source]¶
Produce a copy of this analysis with the same jobs and fields.
>>> ja.copy()

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
- Returns:
copy of the analysis
- property jobs¶
Jobs currently included in analysis.
>>> ja.jobs
{
    '/path/job1': <scm.plams.interfaces.adfsuite.ams.AMSJob object at 0x1085a13d0>,
    '/path/job2': <scm.plams.interfaces.adfsuite.ams.AMSJob object at 0x15e389970>
}
- Returns:
dictionary of job paths and their corresponding Job instances
- property field_keys¶
Keys of current fields, as they appear in the analysis.
>>> ja.field_keys
['Name', 'OK']
- Returns:
list of field keys
- get_analysis()[source]¶
Gets analysis data. This is effectively a table in the form of a dictionary, where the keys are the field keys and the values are a list of data for each job.
>>> ja.get_analysis()
{
    'Name': ['job1', 'job2'],
    'OK': [True, True]
}
- Returns:
analysis data as a dictionary of field keys/lists of job values
- _expand_analysis(analysis)[source]¶
Expand analysis fields, converting individual job rows into multiple rows
- Parameters:
analysis – analysis data as a dictionary of field keys/lists of job values
- Returns:
analysis data as a dictionary of field keys/lists of (multiple) job values
- to_dataframe()[source]¶
Converts analysis data to a dataframe. The column names are the field keys and the column values are the values for each job. This method requires the pandas package.
>>> print(ja.to_dataframe())
    Name    OK
0   job1  True
1   job2  True
- Returns:
analysis data as a dataframe
- to_table(max_col_width=-1, max_rows=30, fmt='markdown')[source]¶
Converts analysis data to a pretty-printed table.
>>> print(ja.to_table())

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
- Parameters:
max_col_width – a positive integer or -1, defaults to -1 (no maximum width)
max_rows – a positive integer or -1, defaults to 30
fmt – format of the table, either markdown (default), html or rst
- Returns:
string representation of the table
- display_table(max_col_width=-1, max_rows=30, fmt='markdown')[source]¶
Converts analysis data to a pretty-printed table which is then displayed using IPython.
>>> ja.display_table()

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
- Parameters:
max_col_width – a positive integer or -1, defaults to -1 (no maximum width)
max_rows – a positive integer or -1, defaults to 30
fmt – format of the table, either markdown (default), html or rst
- to_csv_file(path)[source]¶
Write the analysis to a csv file with the specified path.
>>> ja.to_csv_file("./a.csv")
>>> with open("./a.csv") as csv:
>>>     print(csv.read())
Name,OK
job1,True
job2,True
- Parameters:
path – path to save the csv file
- get_timeline(max_intervals=5, fmt='markdown')[source]¶
Get depiction of timeline of jobs as they were run. Each job is represented as a horizontal bar of symbols, where each symbol indicates a different job status.
These are as follows:

- created: .
- started: -
- registered: +
- running: =
- finished: *
- crashed: x
- failed: X
- successful: >
- copied: #
- preview: ~
- deleted: !
e.g.

>>> print(ja.get_timeline())

| JobName    | ↓2025-02-03 15:16:52 | ↓2025-02-03 15:17:10 | ↓2025-02-03 15:17:28 | ↓2025-02-03 15:17:46 | ↓2025-02-03 15:18:03 | WaitDuration | RunDuration | TotalDuration |
|------------|----------------------|----------------------|----------------------|----------------------|----------------------|--------------|-------------|---------------|
| generate   | ==================== | ==================== | ==================== | ==========>          |                      | 0s           | 1m2s        | 1m2s          |
| reoptimize |                      |                      |                      | ====>                |                      | 0s           | 3s          | 3s            |
| score      |                      |                      |                      | ===>                 |                      | 0s           | 2s          | 2s            |
| filter     |                      |                      |                      | =*                   | >                    | 0s           | 1s          | 1s            |
If multiple status changes occur within the same resolution period, the latest will be displayed.
- Parameters:
max_intervals – maximum number of datetime intervals to display i.e. the width and resolution of the timeline
fmt – format of the table, either markdown (default) or html
- Returns:
string representation of timeline as a markdown (default), html or rst table
- display_timeline(max_intervals=5, fmt='markdown')[source]¶
Get depiction of timeline of jobs as they were run and display using IPython. Each job is represented as a horizontal bar of symbols, where each symbol indicates a different job status.
These are as follows:

- created: .
- started: -
- registered: +
- running: =
- finished: *
- crashed: x
- failed: X
- successful: >
- copied: #
- preview: ~
- deleted: !
e.g.

>>> ja.display_timeline()

| JobName    | ↓2025-02-03 15:16:52 | ↓2025-02-03 15:17:10 | ↓2025-02-03 15:17:28 | ↓2025-02-03 15:17:46 | ↓2025-02-03 15:18:03 | WaitDuration | RunDuration | TotalDuration |
|------------|----------------------|----------------------|----------------------|----------------------|----------------------|--------------|-------------|---------------|
| generate   | ==================== | ==================== | ==================== | ==========>          |                      | 0s           | 1m2s        | 1m2s          |
| reoptimize |                      |                      |                      | ====>                |                      | 0s           | 3s          | 3s            |
| score      |                      |                      |                      | ===>                 |                      | 0s           | 2s          | 2s            |
| filter     |                      |                      |                      | =*                   | >                    | 0s           | 1s          | 1s            |

If multiple status changes occur within the same resolution period, the latest will be displayed.
- Parameters:
max_intervals – maximum number of datetime intervals to display i.e. the width and resolution of the timeline
fmt – format of the table, either markdown (default), html or rst
- add_job(job)[source]¶
Add a job to the analysis. This adds a row to the analysis data.
>>> ja.add_job(job3)

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
| job_3 | True |
- Parameters:
job – Job to add to the analysis
- Returns:
updated instance of JobAnalysis
- remove_job(job)[source]¶
Remove a job from the analysis. This removes a row from the analysis data.
>>> ja.remove_job(job2)

| Name  | OK   |
|-------|------|
| job_1 | True |
- Parameters:
job – Job or path to a job to remove from the analysis
- Returns:
updated instance of JobAnalysis
- load_job(path, loaders=None)[source]¶
Add job to the analysis by loading from a given path to the job folder. If no dill file is present in that location, or the dill unpickling fails, the loaders will be used to load the given job from the folder.
>>> ja.load_job("path/job3")

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
| job_3 | True |
- Parameters:
path – path to folder from which to load the job
loaders – functions to try to load jobs, defaults to load_external()
- Returns:
updated instance of JobAnalysis
- filter_jobs(predicate)[source]¶
Retain jobs from the analysis where the given predicate for field values evaluates to True. In other words, this removes row(s) from the analysis data where the filter function evaluates to False given a dictionary of the row data.

>>> ja

| Name  | OK    |
|-------|-------|
| job_1 | True  |
| job_2 | True  |
| job_3 | False |

>>> ja.filter_jobs(lambda data: not data["OK"])

| Name  | OK    |
|-------|-------|
| job_3 | False |
- Parameters:
predicate – filter function which takes a dictionary of field keys and their values and evaluates to True/False
- Returns:
updated instance of JobAnalysis
- sort_jobs(field_keys=None, sort_key=None, reverse=False)[source]¶
Sort jobs according to a single or multiple fields. This is the order the rows will appear in the analysis data.
Either one of field_keys or sort_key must be provided. If field_keys is provided, the values from these field(s) will be used to sort, in the order they are specified. If sort_key is provided, the sorting function will be applied to all fields.

>>> ja.sort_jobs(field_keys=["Name"], reverse=True)

| Name  | OK   |
|-------|------|
| job_2 | True |
| job_1 | True |
- Parameters:
field_keys – field keys to sort by
sort_key – sorting function which takes a dictionary of field keys and their values
reverse – reverse sort order, defaults to False
- Returns:
updated instance of JobAnalysis
- add_field(key, value_extractor, display_name=None, fmt=None, expansion_depth=0)[source]¶
Add a new field to the analysis. This adds a column to the analysis data.
>>> ja.add_field("N", lambda j: len(j.molecule), display_name="Num Atoms")

| Name  | OK   | Num Atoms |
|-------|------|-----------|
| job_1 | True | 4         |
| job_2 | True | 6         |
- Parameters:
key – unique identifier for the field
value_extractor – callable to extract the value for the field from a job
display_name – name which will appear for the field when displayed in table
fmt – string format for how field values are displayed in table
expansion_depth – whether to expand field of multiple values into multiple rows, and recursively to what depth
- Returns:
updated instance of JobAnalysis
- set_field(key, value_extractor, display_name=None, fmt=None, expansion_depth=0)[source]¶
Set a field in the analysis. This adds or modifies a column in the analysis data.

>>> ja.set_field("N", lambda j: len(j.molecule), display_name="Num Atoms")

| Name  | OK   | Num Atoms |
|-------|------|-----------|
| job_1 | True | 4         |
| job_2 | True | 6         |
- Parameters:
key – unique identifier for the field
value_extractor – callable to extract the value for the field from a job
display_name – name which will appear for the field when displayed in table
fmt – string format for how field values are displayed in table
expansion_depth – whether to expand field of multiple values into multiple rows, and recursively to what depth
- Returns:
updated instance of JobAnalysis
- format_field(key, fmt=None)[source]¶
Apply a string formatting to a given field. This will apply when to_table is called.

>>> ja.format_field("N", "03.0f")

| Name  | OK   | Num Atoms |
|-------|------|-----------|
| job_1 | True | 004       |
| job_2 | True | 006       |

- Parameters:
key – unique identifier of the field
fmt – string format of the field e.g. .2f
- rename_field(key, display_name)[source]¶
Give a display name to a field in the analysis. This is the header of the column in the analysis data.
>>> ja.rename_field("N", "N Atoms")

| Name  | OK   | N Atoms |
|-------|------|---------|
| job_1 | True | 004     |
| job_2 | True | 006     |
- Parameters:
key – unique identifier for the field
display_name – name of the field
- Returns:
updated instance of JobAnalysis
- expand_field(key, depth=1)[source]¶
Expand field of multiple values into multiple rows for each job. For nested values, the depth can be provided to determine the level of recursive expansion.
>>> (ja
>>>  .add_field("Step", lambda j: get_steps(j))
>>>  .add_field("Energy", lambda j: get_energies(j)))

| Name  | OK   | Step      | Energy             |
|-------|------|-----------|--------------------|
| job_1 | True | [1, 2, 3] | [42.1, 43.2, 42.5] |
| job_2 | True | [1, 2]    | [84.5, 112.2]      |

>>> (ja
>>>  .expand_field("Step")
>>>  .expand_field("Energy"))

| Name  | OK   | Step | Energy |
|-------|------|------|--------|
| job_1 | True | 1    | 42.1   |
| job_1 | True | 2    | 43.2   |
| job_1 | True | 3    | 42.5   |
| job_2 | True | 1    | 84.5   |
| job_2 | True | 2    | 112.2  |
- Parameters:
key – unique identifier of field to expand
depth – depth of recursive expansion, defaults to 1
- Returns:
updated instance of JobAnalysis
- collapse_field(key)[source]¶
Collapse field of multiple rows into single row of multiple values for each job.
>>> ja

| Name  | OK   | Step | Energy |
|-------|------|------|--------|
| job_1 | True | 1    | 42.1   |
| job_1 | True | 2    | 43.2   |
| job_1 | True | 3    | 42.5   |
| job_2 | True | 1    | 84.5   |
| job_2 | True | 2    | 112.2  |

>>> (ja
>>>  .collapse_field("Step")
>>>  .collapse_field("Energy"))

| Name  | OK   | Step      | Energy             |
|-------|------|-----------|--------------------|
| job_1 | True | [1, 2, 3] | [42.1, 43.2, 42.5] |
| job_2 | True | [1, 2]    | [84.5, 112.2]      |
- Parameters:
key – unique identifier of field to collapse
- Returns:
updated instance of JobAnalysis
- reorder_fields(order)[source]¶
Reorder fields based upon the given sequence of field keys. This is the order the columns will appear in the analysis data.
Any specified fields will be placed first, with remaining fields placed after with their order unchanged.
>>> ja.reorder_fields(["Name", "Step"])

| Name  | Step | OK   | Energy |
|-------|------|------|--------|
| job_1 | 1    | True | 42.1   |
| job_1 | 2    | True | 43.2   |
| job_1 | 3    | True | 42.5   |
| job_2 | 1    | True | 84.5   |
| job_2 | 2    | True | 112.2  |
- Parameters:
order – sequence of fields to be placed at the start of the field ordering
- Returns:
updated instance of JobAnalysis
- sort_fields(sort_key, reverse=False)[source]¶
Sort fields according to a sort key. This is the order the columns will appear in the analysis data.
>>> ja.sort_fields(lambda k: len(k))

| OK   | Name  | Step | Energy |
|------|-------|------|--------|
| True | job_1 | 1    | 42.1   |
| True | job_1 | 2    | 43.2   |
| True | job_1 | 3    | 42.5   |
| True | job_2 | 1    | 84.5   |
| True | job_2 | 2    | 112.2  |
- Parameters:
sort_key – sorting function which accepts the field key
reverse – reverse sort order, defaults to False
- Returns:
updated instance of JobAnalysis
- remove_field(key)[source]¶
Remove a field from the analysis. This removes a column from the analysis data.
>>> ja.remove_field("OK")

| Name  |
|-------|
| job_1 |
| job_2 |
- Parameters:
key – unique identifier of the field
- Returns:
updated instance of JobAnalysis
- remove_fields(keys)[source]¶
Remove multiple fields from the analysis. This removes columns from the analysis data.
>>> ja.remove_fields(["OK", "N"])

| Name  |
|-------|
| job_1 |
| job_2 |
- Parameters:
keys – unique identifiers of the fields
- Returns:
updated instance of JobAnalysis
- filter_fields(predicate)[source]¶
Retain fields from the analysis where the given predicate evaluates to True given the field values. In other words, this removes column(s) from the analysis data where the filter function evaluates to False given all the row values.

>>> ja

| OK   | Name  | Step | Energy |
|------|-------|------|--------|
| True | job_1 | 1    | 42.1   |
| True | job_1 | 2    | 43.2   |
| True | job_1 | 3    | 42.5   |
| True | job_2 | 1    | 84.5   |
| True | job_2 | 2    | 112.2  |

>>> ja.filter_fields(lambda vals: all([not isinstance(v, int) or v > 50 for v in vals]))

| Name  | Energy |
|-------|--------|
| job_1 | 42.1   |
| job_1 | 43.2   |
| job_1 | 42.5   |
| job_2 | 84.5   |
| job_2 | 112.2  |
- Parameters:
predicate – filter function which takes all the values of a field and evaluates to True/False
- Returns:
updated instance of JobAnalysis
- remove_empty_fields()[source]¶
Remove field(s) from the analysis which have None for all values. This removes column(s) from the analysis data, where all rows have empty values.

>>> ja.add_standard_field("ParentName")

| Name  | OK   | ParentName |
|-------|------|------------|
| job_1 | True | None       |
| job_2 | True | None       |

>>> ja.remove_empty_fields()

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |

- Returns:
updated instance of JobAnalysis
- remove_uniform_fields(tol=1e-08, ignore_empty=False)[source]¶
Remove field(s) from the analysis which evaluate the same for all values. This removes column(s) from the analysis data, where all rows have the same value.

>>> ja.add_standard_field("ParentName")

| Name  | OK   | ParentName |
|-------|------|------------|
| job_1 | True | None       |
| job_2 | True | None       |
| job_3 | True | p_job_4    |

>>> ja.remove_uniform_fields()

| Name  | ParentName |
|-------|------------|
| job_1 | None       |
| job_2 | None       |
| job_3 | p_job_4    |

>>> ja.remove_uniform_fields(ignore_empty=True)

| Name  |
|-------|
| job_1 |
| job_2 |
| job_3 |
- Parameters:
tol – absolute tolerance for numeric value comparison, all values must fall within this range
ignore_empty – when True, ignore None values and empty containers in comparison, defaults to False
- Returns:
updated instance of JobAnalysis
- add_standard_fields(keys)[source]¶
Adds multiple standard fields to the analysis.
These are:

- Path: for Job attribute path
- Name: for Job attribute name
- ErrorMsg: for Job method get_errormsg()
- ParentPath: for attribute path of Job attribute parent
- ParentName: for attribute name of Job attribute parent
- Formula: for method get_formula() of Job attribute molecule
- Smiles: for function to_smiles() for Job attribute molecule
- GyrationRadius: for function gyration_radius() for Job attribute molecule
- CPUTime: for method readrkf() with General/CPUTime for Job attribute results
- SysTime: for method readrkf() with General/SysTime for Job attribute results
- ElapsedTime: for method readrkf() with General/ElapsedTime for Job attribute results

>>> ja

| Name  |
|-------|
| job_1 |
| job_2 |

>>> ja.add_standard_fields(["Path", "Smiles"])

| Name  | Path        | Smiles |
|-------|-------------|--------|
| job_1 | /path/job_1 | N      |
| job_2 | /path/job_2 | C=C    |
- Parameters:
keys – sequence of keys for the analysis fields
- Returns:
updated instance of JobAnalysis
- add_standard_field(key)[source]¶
Adds a standard field to the analysis.
These are:

- Path: for Job attribute path
- Name: for Job attribute name
- ErrorMsg: for Job method get_errormsg()
- ParentPath: for attribute path of Job attribute parent
- ParentName: for attribute name of Job attribute parent
- Formula: for method get_formula() of Job attribute molecule
- Smiles: for function to_smiles() for Job attribute molecule
- GyrationRadius: for function gyration_radius() for Job attribute molecule
- CPUTime: for method readrkf() with General/CPUTime for Job attribute results
- SysTime: for method readrkf() with General/SysTime for Job attribute results
- ElapsedTime: for method readrkf() with General/ElapsedTime for Job attribute results

>>> ja

| Name  |
|-------|
| job_1 |
| job_2 |

>>> ja.add_standard_field("Path")

| Name  | Path        |
|-------|-------------|
| job_1 | /path/job_1 |
| job_2 | /path/job_2 |
- Parameters:
key – key for the analysis field
- Returns:
updated instance of JobAnalysis
- add_settings_field(key_tuple, display_name=None, fmt=None, expansion_depth=0)[source]¶
Add a field for a nested key from the job settings to the analysis. The key of the field will be a Pascal-case string of the settings nested key path, e.g. ("input", "ams", "task") will appear as field InputAmsTask.

>>> ja.add_settings_field(("input", "ams", "task"), display_name="Task")

| Name  | Task        |
|-------|-------------|
| job_1 | SinglePoint |
| job_2 | SinglePoint |
- Parameters:
key_tuple – nested tuple of keys in the settings object
display_name – name which will appear for the field when displayed in table
fmt – string format for how field values are displayed in table
expansion_depth – whether to expand field of multiple values into multiple rows, and recursively to what depth
- Returns:
updated instance of JobAnalysis
- add_settings_fields(predicate=None, flatten_list=True)[source]¶
Add a field for all nested keys which satisfy the predicate from the job settings to the analysis. The key of the fields will be a Pascal-case string of the settings nested key path, e.g. ("input", "ams", "task") will appear as field InputAmsTask.

>>> ja.add_settings_fields(lambda k: len(k) >= 3 and k[2].lower() == "xc")

| Name  | InputAdfXcDispersion | InputAdfXcGga |
|-------|----------------------|---------------|
| job_1 | Grimme3              | PBE           |
| job_2 | Grimme3              | PBE           |
- Parameters:
predicate – optional predicate which evaluates to True or False given a nested key; by default will be True for every key
flatten_list – whether to flatten lists in settings objects
- Returns:
updated instance of JobAnalysis
- add_settings_input_fields(include_system_block=False, flatten_list=True)[source]¶
Add a field for each input key in the settings object across all currently added jobs.

>>> ja.add_settings_input_fields()

| Name  | InputAdfBasisType | InputAdfXcDispersion | InputAdfXcGga | InputAmsTask |
|-------|-------------------|----------------------|---------------|--------------|
| job_1 | TZP               | Grimme3              | PBE           | SinglePoint  |
| job_2 | TZP               | Grimme3              | PBE           | SinglePoint  |
- Parameters:
include_system_block – whether to include keys for the system block, defaults to False
flatten_list – whether to flatten lists in settings objects
- Returns:
updated instance of JobAnalysis
- remove_settings_fields()[source]¶
Remove all fields which were added as settings fields.
>>> ja.add_settings_input_fields().remove_settings_fields()

| Name  |
|-------|
| job_1 |
| job_2 |

- Returns:
updated instance of JobAnalysis
- __str__()[source]¶
Get string representation of analysis as Markdown table with a maximum of 5 rows and column width of 12.
>>> str(ja)

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
- Returns:
markdown table of analysis
- __repr__()[source]¶
Get string representation of analysis as Markdown table with a maximum of 5 rows and column width of 12.
>>> ja

| Name  | OK   |
|-------|------|
| job_1 | True |
| job_2 | True |
- Returns:
markdown table of analysis
- __getitem__(key)[source]¶
Get analysis data for a given field.
>>> ja["Name"] ['job_1', 'job_2']
- Parameters:
key – unique identifier for the field
- Returns:
list of values for each job
- __setitem__(key, value)[source]¶
Set analysis for given field.
>>> ja["N"] = lambda j: len(j.molecule) >>> ja["N"] [4, 6]
- Parameters:
key – unique identifier for the field
value – callable to extract the value for the field from a job
- __delitem__(key)[source]¶
Delete analysis for given field.
>>> del ja["OK"] >>> ja | Name | |-------| | job_1 | | job_2 |
- Parameters:
key – unique identifier for the field
- __getattr__(key)[source]¶
Fallback to get analysis for given field when an attribute is not present.
>>> ja.Name
['job_1', 'job_2']
- Parameters:
key – unique identifier for the field
- Returns:
list of values for each job
- __setattr__(key, value)[source]¶
Fallback to set analysis for given field.
>>> ja.N = lambda j: len(j.molecule)
>>> ja.N
[4, 6]
- Parameters:
key – unique identifier for the field
value – callable to extract the value for the field from a job