Logging¶
To follow along, either
- Download logging_example.py (run as $AMSBIN/amspython logging_example.py)
- Download logging_example.ipynb (see also: how to install JupyterLab in AMS)
Worked Example¶
Logging in PLAMS¶
PLAMS has built-in logging which aims to simplify tracking the progress and status of jobs. This consists of progress logging to stdout and a logfile, and writing job summaries to CSV files. Each of these is explained below.
Progress Logger¶
PLAMS writes job progress to stdout and to a plain text logfile named logfile, located in the working directory of the default job manager.
Users can also write logs to the same locations using the log function. This takes a level argument. By convention in PLAMS, the level should be between 0 and 7, with 0 the most important and 7 the least important logging.
The level of logging that is written to stdout and the logfile can be changed through the log settings in config (config.log).
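The convention is the inverse of Python's logging module: lower numbers are more important. The thresholding idea can be sketched in plain Python (this is an illustration, not PLAMS internals; the name emit_log is hypothetical):

```python
# Hypothetical sketch of PLAMS-style level filtering: a message with
# level `lvl` is emitted only when lvl is at or below the configured
# threshold (0 = most important, 7 = least important).
def emit_log(message: str, lvl: int, threshold: int) -> bool:
    """Print the message and return True only if it passes the threshold."""
    if 0 <= lvl <= threshold:
        print(message)
        return True
    return False

emitted = emit_log("job started", 1, threshold=3)   # important enough: printed
skipped = emit_log("debug detail", 7, threshold=3)  # below threshold: skipped
```

With a stdout threshold of 3 and a file threshold of 5, as in the example below, a level-5 message reaches only the logfile.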
from scm.plams import Settings, AMSJob, from_smiles, log, config, init
# this line is not required in AMS2025+
init()
counter = 0
def get_test_job():
    global counter
    s = Settings()
    s.input.ams.Task = "SinglePoint"
    s.input.dftb  # empty block selects the DFTB engine
    counter += 1
    return AMSJob(name=f"test{counter}", molecule=from_smiles("C"), settings=s)
PLAMS working folder: /path/plams/examples/Logging/plams_workdir
config.log.stdout = 3
config.log.file = 5
config.jobmanager.hashing = None # Force PLAMS to re-run identical test jobs
job = get_test_job()
job.run()
log("Test job finished", 5)
[22.09|15:41:06] JOB test1 STARTED
[22.09|15:41:06] JOB test1 RUNNING
[22.09|15:41:06] JOB test1 FINISHED
[22.09|15:41:06] JOB test1 SUCCESSFUL
with open(config.default_jobmanager.logfile, "r") as f:
    print(f.read())
[22.09|15:41:06] JOB test1 STARTED
[22.09|15:41:06] Starting test1.prerun()
[22.09|15:41:06] test1.prerun() finished
[22.09|15:41:06] JOB test1 RUNNING
[22.09|15:41:06] Executing test1.run
[22.09|15:41:06] Execution of test1.run finished with returncode 0
[22.09|15:41:06] JOB test1 FINISHED
[22.09|15:41:06] Starting test1.postrun()
[22.09|15:41:06] test1.postrun() finished
[22.09|15:41:06] JOB test1 SUCCESSFUL
[22.09|15:41:06] Test job finished
Note that the logs from an AMS calculation can also be forwarded to the progress logs by passing watch=True to job.run().
job = get_test_job()
job.run(watch=True);
[22.09|15:41:06] JOB test2 STARTED
[22.09|15:41:06] JOB test2 RUNNING
[22.09|15:41:07] test2: AMS 2025.205 RunTime: Sep22-2025 15:41:07 ShM Nodes: 1 Procs: 6
[22.09|15:41:07] test2: DFTB: SCC cycle
[22.09|15:41:07] test2: cyc= 1 err=1.1E+00 method=1 nvec= 1 mix=0.075 e= 0.0000
[22.09|15:41:07] test2: cyc= 2 err=1.1E+00 method=1 nvec= 1 mix=0.154 e= 0.0000
[22.09|15:41:07] test2: cyc= 3 err=8.9E-01 method=1 nvec= 2 mix=0.201 e= 0.0000
[22.09|15:41:07] test2: cyc= 4 err=1.7E-02 method=1 nvec= 3 mix=0.207 e= 0.0000
[22.09|15:41:07] test2: cyc= 5 err=6.8E-03 method=1 nvec= 4 mix=0.213 e= 0.0000
[22.09|15:41:07] test2: cyc= 6 err=2.6E-03 method=1 nvec= 5 mix=0.219 e= 0.0000
[22.09|15:41:07] test2: cyc= 7 err=7.2E-05 method=1 nvec= 6 mix=0.226 e= 0.0000
[22.09|15:41:07] test2: cyc= 8 err=6.8E-05 method=1 nvec= 1 mix=0.233 e= 0.0000
[22.09|15:41:07] test2: cyc= 9 err=4.2E-05 method=1 nvec= 2 mix=0.240 e= 0.0000
[22.09|15:41:07] test2: cyc= 10 err=6.2E-07 method=1 nvec= 3 mix=0.247 e= 0.0000
[22.09|15:41:07] test2: cyc= 11 err=5.8E-08 method=1 nvec= 3 mix=0.254 e= 0.0000
[22.09|15:41:07] test2: cyc= 12 err=3.6E-08 method=1 nvec= 4 mix=0.262 e= 0.0000
[22.09|15:41:07] test2: cyc= 13 err=9.0E-11 method=1 nvec= 4 mix=0.270 e= 0.0000
[22.09|15:41:07] test2: SCC cycle converged!
[22.09|15:41:07] test2: NORMAL TERMINATION
[22.09|15:41:07] JOB test2 FINISHED
[22.09|15:41:07] JOB test2 SUCCESSFUL
Job Summary Logger¶
For AMS2025+, PLAMS also writes summaries of jobs to a CSV file named job_logfile.csv, whose location is by default also determined by the job manager.
jobs = [get_test_job() for _ in range(3)]
jobs[2].settings.input.ams.Task = "Not a task!"
for job in jobs:
    job.run()
[22.09|15:41:07] JOB test3 STARTED
[22.09|15:41:07] JOB test3 RUNNING
[22.09|15:41:08] JOB test3 FINISHED
[22.09|15:41:08] JOB test3 SUCCESSFUL
[22.09|15:41:08] JOB test4 STARTED
[22.09|15:41:08] JOB test4 RUNNING
[22.09|15:41:09] JOB test4 FINISHED
[22.09|15:41:09] JOB test4 SUCCESSFUL
[22.09|15:41:09] JOB test5 STARTED
[22.09|15:41:09] JOB test5 RUNNING
[22.09|15:41:17] WARNING: Job test5 finished with nonzero return code
[22.09|15:41:17] WARNING: Main KF file ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test5
... (PLAMS log lines truncated) ...
[22.09|15:41:17] File ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test5
[22.09|15:41:17] Error message for job test5 was:
Input error: value "Not a task!" found in line 1 for multiple choice key "Task" is not an allowed choice
[22.09|15:41:17] File ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test5
[22.09|15:41:17] File ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test5
These CSVs give overall information on the status of all jobs run by a given job manager.
import csv
try:
    with open(config.default_jobmanager.job_logger.logfile, newline="") as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            print(f"{row['job_name']} {row['job_status']}: {row['job_get_errormsg']}")
except AttributeError:
    pass
test1 successful:
test2 successful:
test3 successful:
test4 successful:
test5 crashed: Input error: value "Not a task!" found in line 1 for multiple choice key "Task" is not an allowed choice
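Since job_logfile.csv is a regular CSV file, it can also be loaded with pandas for filtering or aggregation. The sketch below assumes pandas is available and builds a small stand-in CSV with the column names seen above, so it is self-contained; in a real run you would pass config.default_jobmanager.job_logger.logfile to read_csv instead:

```python
import io
import pandas as pd

# Stand-in for job_logfile.csv, using the column names shown above
# (job_name, job_status, job_get_errormsg).
csv_text = """job_name,job_status,job_get_errormsg
test1,successful,
test5,crashed,Input error
"""
df = pd.read_csv(io.StringIO(csv_text))

# Select only the jobs that did not finish successfully.
failed = df[df["job_status"] != "successful"]
print(failed["job_name"].tolist())  # → ['test5']
```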
Job Status Change Callback¶
For AMS2026+, PLAMS also supports adding a custom callback which fires whenever a job's status changes.
This is very flexible and can be used to set up custom notifications, for instance sending a desktop prompt, email or messaging app notification.
For example, below we show how to raise a desktop notification when a job finishes. Note that this requires the external library plyer, which needs to be installed into your python environment.
First we set up our notification function:
try:
    import plyer
except ImportError:
    print(
        "Install plyer into your python environment to run this example. For example, with 'pip install plyer' or 'pip install plyer[macosx]' for Mac users."
    )
def send_desktop_notification(name, path, status, at, **_):
    if status == "successful":
        plyer.notification.notify(
            title=f"PLAMS job {name}",
            message=f"Completed successfully at {at:%H:%M:%S UTC}",
            timeout=5,
        )
    elif status in ["crashed", "failed"]:
        plyer.notification.notify(
            title=f"PLAMS job {name}",
            message=f"Errored at {at:%H:%M:%S UTC}",
            timeout=5,
        )
Then we apply it to all jobs using the global config:
config.job.on_status_change = send_desktop_notification
When jobs are run, notifications should then be raised to the desktop.
jobs = [get_test_job() for _ in range(3)]
jobs[2].settings.input.ams.Task = "Not a task!"
for job in jobs:
    job.run()
[22.09|15:41:17] JOB test6 STARTED
[22.09|15:41:17] JOB test6 RUNNING
[22.09|15:41:17] JOB test6 FINISHED
[22.09|15:41:17] JOB test6 SUCCESSFUL
[22.09|15:41:17] JOB test7 STARTED
[22.09|15:41:17] JOB test7 RUNNING
[22.09|15:41:18] JOB test7 FINISHED
[22.09|15:41:18] JOB test7 SUCCESSFUL
[22.09|15:41:18] JOB test8 STARTED
[22.09|15:41:18] JOB test8 RUNNING
[22.09|15:41:26] WARNING: Job test8 finished with nonzero return code
[22.09|15:41:26] WARNING: Main KF file ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test8
... (PLAMS log lines truncated) ...
[22.09|15:41:26] File ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test8
[22.09|15:41:26] Error message for job test8 was:
Input error: value "Not a task!" found in line 1 for multiple choice key "Task" is not an allowed choice
[22.09|15:41:26] File ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test8
[22.09|15:41:26] File ams.rkf not present in /path/plams/examples/Logging/plams_workdir/test8
Note that the given callbacks will never block job execution.
However, if many notifications are being sent, or sending a notification has significant overhead, you may need to lengthen the wait at the end of the script so that all notifications can be delivered. To do this, set config.atexit_timeout to the desired number of seconds:
config.atexit_timeout = 120
Complete Python code¶
#!/usr/bin/env amspython
# coding: utf-8
# ## Logging in PLAMS
# PLAMS has built-in logging which aims to simplify tracking the progress and status of jobs. This consists of progress logging to stdout and a logfile, and writing job summaries to CSV files. Each of these is explained below.
# ### Progress Logger
# PLAMS writes job progress to stdout and to a plain text logfile named `logfile`, located in the working directory of the default job manager.
#
# Users can also write logs to the same locations using the `log` function. This takes a `level` argument. By convention in PLAMS, the level should be between 0 and 7, with 0 the most important and 7 the least important logging.
#
# The level of logging that is written to stdout and the logfile can be changed through the log settings in config (`config.log`).
from scm.plams import Settings, AMSJob, from_smiles, log, config, init
# this line is not required in AMS2025+
init()
counter = 0
def get_test_job():
    global counter
    s = Settings()
    s.input.ams.Task = "SinglePoint"
    s.input.dftb  # empty block selects the DFTB engine
    counter += 1
    return AMSJob(name=f"test{counter}", molecule=from_smiles("C"), settings=s)
config.log.stdout = 3
config.log.file = 5
config.jobmanager.hashing = None # Force PLAMS to re-run identical test jobs
job = get_test_job()
job.run()
log("Test job finished", 5)
with open(config.default_jobmanager.logfile, "r") as f:
    print(f.read())
# Note that the logs from an AMS calculation can also be forwarded to the progress logs by passing `watch=True` to `job.run()`.
job = get_test_job()
job.run(watch=True)
# ### Job Summary Logger
# For AMS2025+, PLAMS also writes summaries of jobs to a CSV file named `job_logfile.csv`, whose location is by default also determined by the job manager.
jobs = [get_test_job() for _ in range(3)]
jobs[2].settings.input.ams.Task = "Not a task!"
for job in jobs:
    job.run()
# These CSVs give overall information on the status of all jobs run by a given job manager.
import csv
try:
    with open(config.default_jobmanager.job_logger.logfile, newline="") as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            print(f"{row['job_name']} {row['job_status']}: {row['job_get_errormsg']}")
except AttributeError:
    pass
# ### Job Status Change Callback
# For AMS2026+, PLAMS also supports adding a custom callback which fires whenever a job's status changes.
#
# This is very flexible and can be used to set up custom notifications, for instance sending a desktop prompt, email or messaging app notification.
# For example, below we show how to raise a desktop notification when a job finishes. Note that this requires the external library `plyer`, which needs to be installed into your python environment.
# First we set up our notification function:
try:
    import plyer
except ImportError:
    print(
        "Install plyer into your python environment to run this example. For example, with 'pip install plyer' or 'pip install plyer[macosx]' for Mac users."
    )
def send_desktop_notification(name, path, status, at, **_):
    if status == "successful":
        plyer.notification.notify(
            title=f"PLAMS job {name}",
            message=f"Completed successfully at {at:%H:%M:%S UTC}",
            timeout=5,
        )
    elif status in ["crashed", "failed"]:
        plyer.notification.notify(
            title=f"PLAMS job {name}",
            message=f"Errored at {at:%H:%M:%S UTC}",
            timeout=5,
        )
# Then we apply it to all jobs using the global config:
config.job.on_status_change = send_desktop_notification
# When jobs are run, notifications should then be raised to the desktop.
jobs = [get_test_job() for _ in range(3)]
jobs[2].settings.input.ams.Task = "Not a task!"
for job in jobs:
    job.run()
# Note that the given callbacks will never block job execution.
#
# However, if many notifications are being sent, or sending a notification has significant overhead, you may need to lengthen the wait at the end of the script so that all notifications can be delivered. To do this, set `config.atexit_timeout` to the desired number of seconds:
config.atexit_timeout = 120