6.2.10. Other input
6.2.10.1. Initial parameter evaluation
By default, the loss function will be evaluated for the initial parameters given in the input (parameter_interface.yaml). This can be disabled with the SkipX0 key:
SkipX0
- Type
Bool
- Default value
No
- GUI name
Skip initial parameter evaluation:
- Description
Do not evaluate the initial parameters before starting the optimization. If the initial parameters are evaluated and do not return a finite loss function value, the optimization will abort. A non-finite value typically indicates crashed jobs.
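For example, a minimal sketch of a job input that disables the initial evaluation, assuming the plain-text ParAMS job input format with top-level keys (the Task line is included only for context):

```
# Sketch: do not evaluate the initial parameters before optimizing
Task Optimization
SkipX0 Yes
```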
6.2.10.2. Random validation set
Validation
- Type
Float
- Description
Fraction of the training set to be used as a validation set. Will be ignored if a validation set has been explicitly defined.
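As an illustration (same assumed plain-text input format), the following sketch holds out a random 10% of the training set for validation:

```
# Sketch: use a random 10% of the training set as a validation set
Task Optimization
Validation 0.1
```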
6.2.10.4. Scaler
Many optimization algorithms perform best when all optimized dimensions span a similar range, which can dramatically improve the numerical conditioning of the problem. ParAMS provides scalers which map the optimized parameters onto such reduced ranges.
Scaler
- Type
Multiple Choice
- Default value
Optimizers
- Options
[Linear, Std, None, Optimizers]
- Description
Type of scaling applied to the parameters. A scaled input space is needed by many optimization algorithms. Available options:
• Linear: Scale all parameters between 0 and 1.
• Std: Scale all parameters between -1 and 1.
• None: Applies no scaling.
• Optimizers (Default): Does not specify a scaling at the manager level, but lets the selection be governed by the optimizer(s). If they do not require any particular scaler, Linear is used as the ultimate fallback.
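For instance, to force linear scaling regardless of the optimizer's preference (a sketch in the same assumed input format):

```
# Sketch: scale all parameters to the range [0, 1]
Task Optimization
Scaler Linear
```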
6.2.10.5. Custom extractors path
If you have created your own Extractors and use them in your training set, specify the path to them with MoreExtractorsPath:
MoreExtractorsPath
- Type
String
- Default value
extractors
- Description
Path to a directory containing additional extractors.
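A short sketch in the same assumed input format; the directory name my_extractors is hypothetical:

```
# Sketch: load additional extractors from a custom directory
Task Optimization
MoreExtractorsPath my_extractors
```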
6.2.10.6. Per optimizer wait-time on exit
EndTimeout
- Type
Float
- Default value
10.0
- GUI name
Optimizer wait time on end (s):
- Description
The amount of time (in seconds) the manager will wait while trying to smoothly join each optimizer at the end of the run. If exceeded, the manager will abandon the optimizer and shut down. This can raise errors from the abandoned threads, but may be needed to ensure the manager closes and does not hang. This option is often needed if the Scipy optimizers are being used, and should then be set to a low value.
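For example, a sketch (same assumed input format) that shortens the wait time, as suggested for the Scipy optimizers; the value 3.0 is an arbitrary illustrative choice:

```
# Sketch: wait at most 3 seconds per optimizer before abandoning it on exit
Task Optimization
EndTimeout 3.0
```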