Manager

class GloMPOManager[source]

Provides the main interface to GloMPO. The manager runs the optimization and produces all the output.

The manager is not initialised directly with its settings (__init__ accepts no arguments). Either use setup() to build a new optimization or load_checkpoint() to resume an optimization from a previously saved checkpoint file. Alternatively, class methods new_manager() and load_manager() are also provided. Two equivalent ways to set up a new manager are shown below:

manager = GloMPOManager()
manager.setup(...)

manager = GloMPOManager.new_manager(...)
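
A fuller workflow sketch is shown below. The import paths, selector, and optimizer classes are assumptions for illustration; substitute the equivalents provided by your installation. The task and bounds are arbitrary.

from glompo import GloMPOManager                        # assumed import path
from glompo.opt_selectors import CycleSelector          # assumed selector class
from glompo.optimizers.cmawrapper import CMAOptimizer   # assumed optimizer class

def task(x):
    # Toy 2D quadratic; any Callable[[Sequence[float]], float] is acceptable.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

manager = GloMPOManager()
manager.setup(
    task=task,
    bounds=[(-5, 5), (-5, 5)],                    # one (min, max) pair per parameter
    opt_selector=CycleSelector((CMAOptimizer,)),  # check your selector's exact signature
    max_jobs=4,
)
result = manager.start_manager()                  # returns the lowest encountered minimum
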
Attributes

aggressive_stop

If True and proc_backend is True, child processes are forcibly terminated via SIGTERM. Otherwise, a termination message is sent to the optimizer to shut itself down.

allow_forced_terminations : bool

True if the manager is allowed to force terminate optimizers which appear non-responsive (i.e. do not provide feedback within a specified period of time).

apply_stoppers_to_best : bool

If True, Stoppers will also be applied to the best optimizer and may possibly shut it down.

bounds : Sequence[Bound]

(Min, max) tuples for each parameter being optimized beyond which optimizers will not explore.

checkpoint_control : CheckpointingControl

GloMPO object containing all checkpointing settings if this feature is being used.

checkpoint_history : Set[str]

Set of names of checkpoints constructed by the manager.

conv_counter : int

Count of the number of optimizers which converged according to their own configuration (as opposed to being terminated by the manager).

converged : bool

True if the exit_conditions have been met, or no new optimizers can be started.

cpu_history : List[float]

History of CPU percentage usage snapshots (taken every status_interval seconds). This is the CPU percentage used only by the process and its children, not the load on the whole system.

dt_ends : List[datetime.datetime]

Records the end of each optimization session for a problem optimized through several checkpoints.

dt_starts : List[datetime.datetime]

Records the start of each optimization session for a problem optimized through several checkpoints.

end_timeout : float

Amount of time the manager will wait to join child processes before forcibly terminating them (if children are processes) or allowing them to eventually crash out themselves (if children are threads). The latter is not recommended as these threads can essentially become orphaned and continue to use resources in the background.

exit_conditions : BaseExitCondition

GloMPO object which evaluates whether conditions are met for overall manager termination.

f_counter : int

Number of times the optimization task has been evaluated.

is_log_detailed : bool

If True optimizers will attempt to call a task’s detailed_call() method and save the expanded return to the log.

last_iter_checkpoint : int

f_counter of last attempted checkpoint (regardless of success or failure)

last_opt_spawn : Tuple[int, int]

Tuple of f_counter and o_counter at which the last child optimizer was started.

last_status : float

Timestamp when the last logging status message was printed.

last_stopcheck : int

Evaluation number at which the last evaluation of the stoppers was executed.

last_time_checkpoint : float

Timestamp of last attempted checkpoint (regardless of success or failure)

load_history : List[Tuple[float, float, float]]

History of system load snapshots (taken every status_interval seconds). This is a system-wide value, not tied to the specific process.

logger : logging.Logger

GloMPO has built-in logging to allow tracking during an optimization (see Logging Messages). This attribute accesses the manager logger object; a minimal configuration sketch is shown after this attribute list.

max_jobs : int

Maximum number of calculation ‘slots’ used by all the child optimizers. This generally equates to the number of processing cores available, which the child optimizers may fill with threads or processes depending on their configuration. Alternatively, each child optimizer may work serially and take one of these slots.

mem_history : List[float]

History of memory usage snapshots (taken every status_interval seconds). Details memory used by the process and its children.

n_parms : int

Dimensionality of the optimization problem.

o_counter : int

Number of optimizers started.

opt_crashed : bool

True if any child optimizer crashed during its execution.

opt_log : BaseLogger

GloMPO object collecting the entire iteration history and metadata of the manager’s children.

opt_selector : BaseSelector

Object which returns an optimizer class and its configuration when requested by the manager. Can be based on previous results delivered by other optimizers.

optimizer_queue : queue.Queue

Common concurrency tool into which all results are placed by child optimizers.

opts_daemonic : bool

True if manager children are spawned as daemons. Default is True but can be set to False if double process layers are needed (see Parallelism for more details).

overwrite_existing : bool

True if any old files detected in the working directory may be deleted when the optimization run begins.

proc_backend : bool

True if the manager children are spawned as processes, False if they are spawned as threads.

result : Result

Incumbent best solution found by any child optimizer.

share_best_solutions : bool

If True the manager will send iteration information about the best ever seen solution to all its children whenever this is updated.

spawning_opts : bool

True if the manager is allowed to create new children. The manager will shut down if all children terminate and this is False. See Spawn Control for more details.

split_printstreams : bool

True if the printstreams for children are redirected to individual files (see Outputs).

status_interval : float

Interval (in seconds) with which a status message is produced for the logger.

stopcheck_counter : int

Count of the number of times the manager has evaluated stoppers in an attempt to terminate one of its children.

stopcheck_interval : int

Interval (in terms of number of function evaluations) between evaluations of the Stoppers.

stopped_opts : Dict[int, float]

Mapping of manager-stopped optimizer ID numbers and timestamps when they were terminated.

stoppers : BaseStopper

GloMPO object which evaluates whether an optimizer meets its conditions to be terminated early.

summary_files : int

Logging level indicating how much information is saved to disk. See setup().

t_end : float

Timestamp of the ending time of an optimization run.

t_start : float

Timestamp of the starting time of an optimization run.

t_used : float

Total time in seconds used by previous optimization runs. This will be zero unless the manager has been loaded from a checkpoint.

task : Callable[[Sequence[float]], float]

Function being minimized by the optimizers.

working_dir : pathlib.Path

Working directory in which all output files and directories are created. Note, the manager does not change the current working directory during the run.

x0_generator : BaseGenerator

GloMPO object which returns a starting location for a new child optimizer. Can be based on previous results delivered by other optimizers.
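
As a usage sketch for the logger attribute above: a minimal configuration which prints manager messages to the console. The 'glompo' logger name is an assumption; verify it against the Logging Messages documentation.

import logging

glompo_logger = logging.getLogger('glompo')        # assumed logger name
glompo_logger.addHandler(logging.StreamHandler())  # send records to the console
glompo_logger.setLevel(logging.INFO)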

property is_initialised

Returns True if this GloMPOManager instance has been initialised. Multiple initialisations are not allowed.

classmethod new_manager(*args, **kwargs)[source]

Class method wrapper around setup() to directly initialise a new manager instance.

classmethod load_manager(*args, **kwargs)[source]

Class method wrapper around load_checkpoint() to directly initialise a manager from a checkpoint.

setup(task, bounds, opt_selector, working_dir='.', overwrite_existing=False, max_jobs=None, backend='processes', exit_conditions=None, x0_generator=None, stoppers=None, share_best_solutions=False, stopcheck_interval=100, apply_stoppers_to_best=False, status_interval=600, checkpoint_control=None, summary_files=0, is_log_detailed=False, force_terminations_after=-1, aggressive_stop=False, end_timeout=None, split_printstreams=True)[source]

Generates the environment for a new globally managed parallel optimization job.

Parameters

task

Function to be minimized. Accepts a 1D sequence of parameter values and returns a single value.

bounds

Sequence of tuples of the form (min, max) limiting the range of each parameter.

opt_selector

Selection criteria for new optimizers.

working_dir

If provided, GloMPO will redirect its outputs to the given directory.

overwrite_existing

If True, GloMPO will overwrite existing files if any are found in the working_dir; otherwise, a FileExistsError is raised if such results are detected.

max_jobs

The maximum number of threads the manager may create. Defaults to one less than the number of CPUs available to the system.

backend

Indicates the form of parallelism used by the optimizers.

Accepts:

'processes': Optimizers spawned as multiprocessing.Process

'threads': Optimizers spawned as threading.Thread

'processes_forced': Strongly discouraged, optimizers spawned as multiprocessing.Process and are themselves allowed to spawn multiprocessing.Process for function evaluations. See Parallelism for more details on this topic.

exit_conditions

Criteria used to determine when the job should exit.

x0_generator

An instance of a subclass of BaseGenerator which produces starting points for the optimizer. If not provided, RandomGenerator is used.

stoppers

BaseStopper criteria used for stopping optimizers.

share_best_solutions

If True the manager will send the best ever seen solution to all its children whenever this is updated.

stopcheck_interval

The number of function calls between successive attempts to evaluate optimizer performance and determine if they should be terminated.

apply_stoppers_to_best

If True, stoppers are also applied to the best optimizer. Otherwise, it will not be affected by them.

status_interval

Interval (in seconds) with which status messages are logged.

checkpoint_control

If provided, the manager will use checkpointing during the optimization.

summary_files

Indicates what information the user would like saved to disk. Higher values also save all lower level information:

  0. Nothing is saved.

  1. YAML file with summary info about the optimization settings, performance and the result.

  2. PNG file showing the trajectories of the optimizers.

  3. HDF5 file containing iteration history for each optimizer.

is_log_detailed

If True the optimizers will call task.detailed_call and record the expanded return in the logs. Otherwise, optimizers will use task.__call__.

force_terminations_after

If a value larger than zero is provided, GloMPO is allowed to force terminate optimizers which have not provided results within that number of seconds, or which have not shut themselves down within that number of seconds of being sent a stop signal.

aggressive_stop

Ignored if backend is 'threads'. If True, child processes are forcibly terminated via SIGTERM. Otherwise, a termination message is sent to the optimizer to shut itself down. The latter option is preferred and safer, but there may be circumstances where child optimizers cannot handle such messages and have to be forcibly terminated.

end_timeout

The amount of time the manager will wait trying to smoothly join each child optimizer at the end of the run. Defaults to 10 seconds.

split_printstreams

If True, optimizer print messages will be intercepted and saved to separate files. See SplitOptimizerLogs

Notes

  1. To be process-safe, task must be a standalone function which makes no modifications outside itself. If this is not the case, you will likely need to use a threaded backend.

  2. Do not use bounds to fix a parameter value as this will raise an error. Rather fix parameter values within the task.

  3. An optimizer will not be started if the number of slots it requires (i.e. BaseOptimizer workers) will cause the total number of occupied slots to exceed max_jobs, even if the manager is currently managing fewer than the number of jobs available. In other words, if the manager has registered a total of 30 of 32 slots filled, it will not start an optimizer that requires 3 or more slots.

  4. Checkpointing requires the dill package for serialisation. If you attempt to checkpoint or supply checkpoint_control without this package installed, a warning will be raised and no checkpointing will occur.

  5. Caution: use force_terminations_after with care as it runs the risk of corrupting the results queue, but it ensures resources are not wasted on hanging processes.

  6. After end_timeout, if the optimizer is still alive and a process, GloMPO will send a terminate signal to force it to close. However, threads cannot be terminated in this way and the manager can leave dangling threads at the end of its routine. If the script ends after a GloMPO routine then all its children will be automatically garbage collected (provided 'processes_forced' backend has not been used).

    By default, this timeout is 10 s if a process backend is used and infinite if a threaded backend is used. This is the cleanest approach for threads but can cause very long wait times or deadlocks if the optimizer does not respond to close signals and does not converge.
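
The exit_conditions argument accepts a single condition object, which can typically be built by combining simpler conditions. A small sketch, assuming exit-condition classes MaxSeconds and MaxFuncCalls live in glompo.convergence and support combination with | and & (verify the names and composition rules against your installation):

from glompo.convergence import MaxSeconds, MaxFuncCalls  # assumed module and class names

# Stop when either an hour has elapsed or 100 000 task evaluations have been used;
# pass the combined object as exit_conditions to setup().
exit_conditions = MaxSeconds(3600) | MaxFuncCalls(100_000)

# Stop only once both conditions are satisfied.
strict_exit = MaxSeconds(3600) & MaxFuncCalls(100_000)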

load_checkpoint(path, task_loader=None, task=None, **glompo_kwargs)[source]

Initialises GloMPO from the provided checkpoint file and allows an optimization to resume from that point.

Parameters

path

Path to GloMPO checkpoint file.

task_loader

Optional method to reconstruct task from files in the checkpoint. Must accept a path to a directory containing the checkpoint files and return a callable which is the task itself. If not provided a direct unpickling of the task file will be attempted (see Notes).

task

Direct specification of the optimization task. Will take precedence over any other form of task specification, i.e. task_loader or direct unpickling.

**glompo_kwargs

Most arguments supplied to setup() can also be provided here. This will overwrite the values saved in the checkpoint. See Notes for arguments which cannot/should not be changed.

Notes

  1. When making a checkpoint, GloMPO attempts to persist the task directly. If this is not possible it will attempt to call BaseFunction.checkpoint_save() to produce some files into the checkpoint. task_loader is the function or method which can return a task from files within the checkpoint (see BaseFunction.checkpoint_load()).

  2. If both task_loader and task are provided, the manager will use task directly and ignore task_loader.

  3. Caution

    GloMPO produces the requested log files when it closes (i.e. after an exit or crash). The working directory is, however, purged of old results at the start of the optimization (if overwriting is allowed). This behaviour is the same regardless of whether the optimization is a resume or a fresh start. It is therefore the user’s responsibility to save and move important files from the working_dir before a resume.

  4. The following arguments cannot/should not be sent to glompo_kwargs:

    bounds

    Many optimizers save the bounds during checkpointing. If changed here old optimizers will retain the old bounds but new optimizers will start in new bounds.

    max_jobs

    If this is decreased and falls below the number required by the optimizers in the checkpoint, the manager will attempt to adjust the workers for each optimizer to fit the new limit. Slots are apportioned equally (regardless of the distribution in the checkpoint) and there is no guarantee that the optimizers will actually respond to this change.

    working_dir

    This can be changed. However, if a log file exists and you would like to append into it, make sure to copy/move it to the new working_dir and name it 'glompo_log.h5' before loading the checkpoint, otherwise GloMPO will create a new log file (see Outputs and Checkpointing).
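
A minimal resume sketch is shown below. The checkpoint filename and my_task are illustrative placeholders; task only needs to be supplied if the task could not be persisted inside the checkpoint.

manager = GloMPOManager()
manager.load_checkpoint('glompo_checkpoint.tar.gz',  # illustrative path to a checkpoint file
                        task=my_task)                 # optional; overrides any pickled task
result = manager.start_manager()

# Equivalent single-step construction via the class method:
manager = GloMPOManager.load_manager('glompo_checkpoint.tar.gz', task=my_task)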

start_manager()[source]

Begins the optimization routine and returns the lowest encountered minimum.

checkpoint()[source]

Saves the state of the manager and any existing optimizers to disk. GloMPO can be loaded from these files and resume optimization from this state.

Notes

When checkpointing GloMPO will attempt to handle the task in three ways:

  1. Pickle the task with the other manager variables; this is the easiest and most straightforward method.

  2. If the above fails, the manager will attempt to call task.checkpoint_save if it is present. This is expected to create one or more files suitable for reconstruction during load_checkpoint(). When resuming a run, the manager will attempt to reconstruct the task by calling the method passed to task_loader in load_checkpoint().

  3. If the manager cannot perform either of the above, the checkpoint will be constructed without a task. In that case, a fully initialised task must be given to load_checkpoint().
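
For tasks which cannot be pickled, a sketch of the second route is shown below. The checkpoint_save signature and the loader are assumptions modelled on the BaseFunction hooks referenced above; check the exact interface against your installation.

import pathlib
import numpy as np

class MyTask:
    # Toy task whose state is written to plain files instead of being pickled.
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def __call__(self, x):
        return float(np.sum((np.asarray(x, dtype=float) - self.target) ** 2))

    def checkpoint_save(self, path):
        # Called by the manager when the task itself cannot be pickled (assumed signature).
        np.save(pathlib.Path(path) / 'target.npy', self.target)

def my_task_loader(path):
    # Pass as task_loader to load_checkpoint(); rebuilds the task from the
    # files written by checkpoint_save.
    return MyTask(np.load(pathlib.Path(path) / 'target.npy'))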

write_summary_file(dump_dir=None)[source]

Writes a manager summary YAML file detailing the state of the optimization. Useful to extract output from a checkpoint.

Parameters

dump_dir

If provided, this overrides the manager’s working_dir, allowing the output to be redirected to a different folder so as not to interfere with files in the working directory.