Introduction
A new command named optimize has been added. It implements an optimisation loop in Spice Opus.
Whichever optimisation method is run on a given circuit with the optimize command, the same
general optimisation loop is performed. Its algorithm is as follows:
do
    if number of iterations > max break
    change parameter values
    verify explicit constraints and correct parameter values if necessary
    execute all analysis commands (analysis 0, analysis 1, ..., analysis N)
    number of iterations = number of iterations + 1
    if not implicit constraint 0 or not implicit constraint 1 or ...
        or not implicit constraint M continue
    calculate the cost function
while termination criteria not satisfied
The number of iterations is checked in every iteration. This gives the user some control over the optimisation algorithm and avoids an infinite optimisation loop in case of no convergence. The chosen optimisation method is hidden in the way parameter values are changed. If a parameter violates its explicit constraint, its value is corrected to the explicit boundary. Several analyses can be defined with optimize analysis commands. They are performed in order and produce the results needed for verifying the implicit constraints and evaluating the cost function. If at least one implicit constraint is violated, the algorithm continues with the next iteration. Otherwise the cost function is calculated and the termination criteria decide whether the loop continues or stops.
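The following Python sketch (not part of Spice Opus) illustrates the structure of this loop; next_point, run_analyses, implicit_ok, cost and terminated are placeholder callables standing in for the chosen method, the analysis commands, the implicit constraints, the cost expression and the termination criteria.

def optimisation_loop(next_point, run_analyses, implicit_ok, cost,
                      terminated, params, low, high, max_iterations):
    # Generic loop: next_point() hides the chosen optimisation method.
    iterations = 0
    best_cost, best_params = float("inf"), list(params)
    while True:
        if iterations > max_iterations:          # iteration limit checked first
            break
        params = next_point(params)              # change parameter values
        params = [min(max(p, lo), hi)            # correct explicit violations
                  for p, lo, hi in zip(params, low, high)]
        results = run_analyses(params)           # analysis 0 ... analysis N
        iterations += 1
        if not implicit_ok(results):             # an implicit constraint is
            continue                             # violated: skip the cost
        c = cost(results)
        if c < best_cost:
            best_cost, best_params = c, list(params)
        if terminated(c):                        # termination criteria satisfied
            break
    return best_params, best_cost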
By its function the optimize command sits on top of Nutmeg: it merely prepares commands in the Nutmeg language and executes them in an appropriate order, which results in the optimisation process. On the other hand, the optimize command is implemented as an additional Nutmeg command and can be used as one.
The optimize command also generates a special group of data (plot) named optimizex, where x represents the plot number. It contains a vector called parameter with the final parameter values, a vector called cost with the value of the cost function in each iteration, and vectors with the parameter values in each iteration.
When the optimize command finishes and finds optimal parameter values that are better than the initial values, the initial parameter values are changed to the optimal ones. That way the next optimize command (possibly with another method) can proceed on the same circuit from the best known point as the initial guess.
If the initial_guess option is set (equal to or greater than 1), the above does not hold. In this case the initial parameter values are set to a point of the parameter space that has not been searched yet. This way high-dimensional parameter spaces can be searched by successive optimisation runs. The value of the initial_guess option gives the weight factor of the point with the greatest value of the cost function; the weight factor of the point with the smallest value is always 1.
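As an illustration only, one possible reading of this weighting rule is sketched below: every previously found point gets a weight between 1 (lowest cost) and initial_guess (highest cost). The linear interpolation between the two extremes is an assumption; the actual selection of the new initial point is not described here.

def point_weights(costs, initial_guess):
    # Assumed linear interpolation: weight 1 for the point with the lowest
    # cost, initial_guess for the point with the highest cost.
    c_min, c_max = min(costs), max(costs)
    if c_max == c_min:
        return [1.0] * len(costs)
    return [1.0 + (initial_guess - 1.0) * (c - c_min) / (c_max - c_min)
            for c in costs]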
Optimisation methods
One can choose among several optimisation methods implemented in the optimize command.
Further descriptions of particular methods can be found in the literature. Below are brief descriptions of the optimisation methods and of their parameters, which can be adjusted manually.
Steepest Descent (steepest_descent)
If the expression for the i-th component of the gradient of the cost function is not given, it is calculated numerically. All parameters that are to be optimised have to be given before this method is chosen. The parameters of the method are:
r | constant used to bracket the interval in which the minimum lies along the negative gradient direction. It must be between 1.5 and 2. |
method | method used for searching in the negative gradient direction. Possible values are quadratic, golden and fibonacci. |
epsilon | satisfying relative distance (in percentage) between two points when searching in the negative gradient direction. |
number_of_iterations | maximum number of iterations when searching in the negative gradient direction. |
transformation | type of transformation from explicitly constrained to unconstrained space. Possible values are no, sin and arcctg. |
gradient0 | expression of the first component of the gradient |
gradient1 | expression of the second component of the gradient |
gradient2 | expression of the third component of the gradient etc. |
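The Python sketch below illustrates how the gradient can be estimated numerically by finite differences and used for one steepest-descent move. It is an illustration only; the actual method additionally brackets the minimum along the negative gradient (factor r) and refines it with the selected quadratic, golden-section or Fibonacci line search.

def numerical_gradient(cost, x, h=1e-6):
    # Forward-difference estimate, used when no gradientN expression is given.
    fx = cost(x)
    grad = []
    for i in range(len(x)):
        xh = list(x)
        xh[i] += h
        grad.append((cost(xh) - fx) / h)
    return grad

def steepest_descent_move(cost, x, step=0.1):
    # One illustrative move against the gradient (line search omitted).
    g = numerical_gradient(cost, x)
    return [xi - step * gi for xi, gi in zip(x, g)]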
Newton's Method (newton)
The gradient and the Hessian matrix of the cost function are calculated numerically. The parameters of the method are:
r | constant used to bracket the interval in which the minimum lies along the desired direction. It must be between 1.5 and 2. |
method | method used for searching in the desired direction. Possible values are quadratic, golden and fibonacci. |
epsilon | satisfying relative distance (in percentage) between two points when searching in the desired direction. |
number_of_iterations | maximum number of iterations when searching in the desired direction. |
transformation | type of transformation from explicitly constrained to unconstrained space. Possible values are no, sin and arcctg. |
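The Python sketch below shows one Newton iteration with a numerically estimated gradient and Hessian matrix. It is an illustration only; the line search along the Newton direction and the transformation of explicitly constrained parameters are omitted.

import numpy as np

def newton_move(cost, x, h=1e-4):
    # One Newton iteration: numerical gradient and Hessian, then a full step
    # along the Newton direction.
    x = np.asarray(x, dtype=float)
    n = len(x)
    f0 = cost(x)
    e = np.eye(n) * h
    grad = np.array([(cost(x + e[i]) - cost(x - e[i])) / (2 * h)
                     for i in range(n)])
    hess = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            hess[i, j] = (cost(x + e[i] + e[j]) - cost(x + e[i])
                          - cost(x + e[j]) + f0) / h ** 2
    return x - np.linalg.solve(hess, grad)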
Davidon-Fletcher-Powell's Method (davidon_fletcher_powell)
If the expression for the i-th component of the gradient of the cost function is not given, it is calculated numerically. All parameters that are to be optimised have to be given before this method is chosen. The parameters of the method are:
r | constant used to bracket the interval in which the minimum lies along the desired direction. It must be between 1.5 and 2. |
method | method used for searching in the desired direction. Possible values are quadratic, golden and fibonacci. |
epsilon | satisfying relative distance (in percentage) between two points when searching in the desired direction. |
number_of_iterations | maximum number of iterations when searching in the desired direction. |
modification | modification of the Davidon-Fletcher-Powell's method. Possible values are no, modified, first and second. |
transformation | type of transformation from explicitly constrained to unconstrained space. Possible values are no, sin and arcctg. |
gradient0 | expression of the first component of the gradient |
gradient1 | expression of the second component of the gradient |
gradient2 | expression of the third component of the gradient etc. |
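For illustration, the Python sketch below shows the Davidon-Fletcher-Powell update of the inverse-Hessian approximation, which is the step that distinguishes this method from the Newton iteration; s is the last parameter step and y the corresponding change of the gradient. The next search direction is then -H times the gradient, refined by the selected line search.

import numpy as np

def dfp_update(H, s, y):
    # Davidon-Fletcher-Powell update of the inverse-Hessian approximation H.
    s = np.asarray(s, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    Hy = H @ y
    return H + (s @ s.T) / (s.T @ y).item() - (Hy @ Hy.T) / (y.T @ Hy).item()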
Monte Carlo Method (monte_carlo)
The distribution parameter is regarded only if direction = no. On the other hand, the parameters r, epsilon and number_of_iterations are taken into account when direction != no. The parameters of the method are:
direction | the direction of search is determined randomly. Possible values are no, quadratic, golden and fibonacci. |
distribution | the distribution of the random parameter values. Possible values are linear and normal. |
r | constant used to bracket the interval in which the minimum lies along the randomly determined direction. It must be between 1.5 and 2. |
epsilon | satisfying relative distance (in percentage) between two points when searching in the randomly determined direction. |
number_of_iterations | maximum number of iterations when searching in the randomly determined direction. |
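A minimal Python sketch of the direction = no case is given below: random points are drawn inside the explicit constraints and the best one found is kept. The spread of the normal distribution (one sixth of the parameter range) is an assumption made only for the example.

import random

def monte_carlo_search(cost, low, high, x0, iterations=1000,
                       distribution="linear"):
    # Random search inside the explicit constraints; keep the best point.
    best_x, best_c = list(x0), cost(x0)
    for _ in range(iterations):
        if distribution == "linear":            # uniform over [low, high]
            x = [random.uniform(lo, hi) for lo, hi in zip(low, high)]
        else:                                   # normal around the current best
            x = [min(max(random.gauss(b, (hi - lo) / 6.0), lo), hi)
                 for b, lo, hi in zip(best_x, low, high)]
        c = cost(x)
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c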
Grid Search Method (grid_search)
The parameters of the method are:
level | grid level used. Possible values are 2 and 3. |
alpha | the magnifying factor. |
epsilon | satisfying relative grid size (in percentage). |
Search Along Coordinate Axes (axis_search)
The parameters of the method are:
r | constant used to bracket the interval in which the minimum lies along the desired direction. It must be between 1.5 and 2. |
method | method used for searching in the desired direction. Possible values are quadratic, golden and fibonacci. |
epsilon | satisfying relative distance (in percentage) between two points when searching in the desired direction. |
number_of_iterations | maximum number of iterations when searching in the desired direction. |
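The Python sketch below illustrates the idea of the method: each coordinate axis is minimised in turn, here with a golden-section line search (one of the selectable line-search methods). The fixed number of sweeps and searching over the whole explicit interval are simplifications made for the example.

def golden_section(f, a, b, epsilon=1e-3, max_iter=100):
    # Golden-section line search for the minimum of f on [a, b].
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(max_iter):
        if abs(b - a) < epsilon * max(abs(a), abs(b), 1.0):
            break
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def axis_search(cost, x, low, high, sweeps=5):
    # Minimise along each coordinate axis in turn inside the explicit limits.
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            def f_i(t, i=i):
                xi = list(x)
                xi[i] = t
                return cost(xi)
            x[i] = golden_section(f_i, low[i], high[i])
    return x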
Powell's Method (powell)
The parameters of the method are:
r | constant used to bracket the interval in which the minimum lies along the desired direction. It must be between 1.5 and 2. |
method | method used for searching in the desired direction. Possible values are quadratic, golden and fibonacci. |
epsilon | satisfying relative distance (in percentage) between two points when searching in the desired direction. |
number_of_iterations | maximum number of iterations when searching in the desired direction. |
transformation | type of transformation from explicitly constrained to unconstrained space. Possible values are no, sin and arcctg. |
Hooke-Jeeves's Method (hooke_jeeves)
The parameters of the method are:
alpha | the magnifying factor. |
epsilon | satisfying relative step (in percentage). |
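The Python sketch below illustrates the classical Hooke-Jeeves scheme of exploratory and pattern moves; treating alpha as the factor by which the step is reduced when exploration fails, and epsilon as the final step size, are assumptions made for the example.

def hooke_jeeves(cost, x0, step=0.5, alpha=1.3, epsilon=1e-4):
    def explore(x):
        # Exploratory moves: try +/- step along every axis, keep improvements.
        x = list(x)
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if cost(trial) < cost(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > epsilon:
        new = explore(base)
        if cost(new) >= cost(base):
            step /= alpha                        # exploration failed: shrink step
            continue
        while cost(new) < cost(base):            # pattern moves while improving
            pattern = [2 * n - b for n, b in zip(new, base)]
            base, new = new, explore(pattern)
    return base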
Constrained Simplex Method (complex)
The parameters of the method are:
k | number of points in the constrained simplex. The default value is twice the number of parameters. It should be greater than the number of parameters. |
alpha | the mirroring factor. |
size | satisfying average relative distance (in percentage) of the points to the centre of the simplex. |
oscillation_detection | detect and prevent an oscillation behavior around the lowest point of the constrained simplex. |
nmirror | number of points mirrored in each iteration. |
width | portion of explicitly constrained parameter space, where initial simplex will be chosen around the initial point. |
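The Python sketch below shows the core operation of the constrained simplex (Box) method: the worst point is mirrored through the centre of the remaining points with factor alpha and clipped to the explicit constraints. Implicit constraint handling, oscillation detection, mirroring several points (nmirror) and retries toward the centre are omitted.

def complex_step(cost, points, low, high, alpha=1.3):
    # One iteration: mirror the worst point over the centroid of the others.
    worst = max(range(len(points)), key=lambda i: cost(points[i]))
    others = [p for i, p in enumerate(points) if i != worst]
    centre = [sum(c) / len(others) for c in zip(*others)]
    mirrored = [min(max(ci + alpha * (ci - wi), lo), hi)
                for ci, wi, lo, hi in zip(centre, points[worst], low, high)]
    if cost(mirrored) < cost(points[worst]):   # keep the mirrored point only
        points[worst] = mirrored               # if it improves on the worst one
    return points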
Simple Genetic Algorithm (genetic)
The parameters of the method are:
popsize | population size. Must be an even number. |
lchrom | chromosome length (number of bytes per dimension). Maximum length is 4 bytes per dimension. |
maxgen | number of generations. |
pcross | crossover probability. |
pmutation | mutation probability. |
scaling | cost function scaling factor. Must be greater than 1. |
coding | parameter values coding. Possible values are binary and gray. |
elitism | the best individual in each generation is copied to the next one. Possible values are yes and no. The default is no. |
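For illustration, a minimal Python sketch of a simple genetic algorithm with binary-coded parameters, one-point crossover, bit mutation and optional elitism is given below; the selection scheme is simplified and the scaling parameter and gray coding are not reproduced.

import random

def genetic(cost, low, high, popsize=10, lchrom=1, maxgen=10,
            pcross=0.6, pmutation=0.03, elitism=False):
    nbits = 8 * lchrom                         # bits per dimension
    dims = len(low)

    def decode(bits):
        # Binary coding: map each nbits-long chunk onto [low, high].
        vals = []
        for d in range(dims):
            n = int("".join(map(str, bits[d * nbits:(d + 1) * nbits])), 2)
            vals.append(low[d] + (high[d] - low[d]) * n / (2 ** nbits - 1))
        return vals

    pop = [[random.randint(0, 1) for _ in range(dims * nbits)]
           for _ in range(popsize)]
    for _ in range(maxgen):
        fitness = [-cost(decode(ind)) for ind in pop]     # lower cost = fitter
        weights = [f - min(fitness) + 1e-12 for f in fitness]
        new_pop = [list(pop[fitness.index(max(fitness))])] if elitism else []
        while len(new_pop) < popsize:
            a, b = random.choices(pop, weights=weights, k=2)
            child = list(a)
            if random.random() < pcross:                  # one-point crossover
                cut = random.randrange(1, len(child))
                child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < pmutation else bit
                     for bit in child]                    # bit mutation
            new_pop.append(child)
        pop = new_pop
    return decode(min(pop, key=lambda ind: cost(decode(ind))))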
Simulated Annealing Algorithm (simulated_annealing)
The parameters of the method are:
init_num_of_pts | number of points used for determining initial temperature. |
tmin | absolute final temperature value. |
range_min | maximum reduction of the parameter range. Moves are constrained to a parameter range around the current point; this range is reduced during the optimisation run, but never falls below range_min * (upper_limit - lower_limit). |
moves | number of moves at initial temperature stage. |
alpha | cooling factor. |
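The Python sketch below illustrates the basic annealing loop: random moves inside a range around the current point, accepted with the Metropolis criterion, while both the temperature and the move range shrink with the cooling factor alpha. Estimating the initial temperature from init_num_of_pts random points is omitted, and reducing the range with the same factor alpha is an assumption made for the example.

import math, random

def simulated_annealing(cost, x0, low, high, t0=1.0, tmin=1e-6,
                        moves=10, alpha=0.985, range_min=1e-4):
    x, c, t = list(x0), cost(x0), t0
    rng = 1.0                                  # relative move range
    while t > tmin:
        for _ in range(moves):
            trial = [min(max(xi + random.uniform(-1, 1) * rng * (hi - lo), lo), hi)
                     for xi, lo, hi in zip(x, low, high)]
            ct = cost(trial)
            if ct < c or random.random() < math.exp(-(ct - c) / t):
                x, c = trial, ct               # Metropolis acceptance
        t *= alpha                             # cooling
        rng = max(rng * alpha, range_min)      # never shrink below range_min
    return x, c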
Parallel Simulated Annealing with Differential Evolution Algorithm (psade)
Several simulated annealing processes run in parallel and interact through differential evolution. The parameters of the method are:
np | number of points in population. |
tmin | minimum absolute temperature value. |
rmin | minimum relative range for generating random moves. |
sfreq | save frequency (save entire population after every sfreq iterations). |
minCF | stopping cost function value. |
pL | local step probability. |
Checking the cost function in the axis directions through the current point (cost_profile)
The parameter of the method is:
resolution | number of points in each axis direction. |
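A minimal Python sketch of such a profile is given below: the cost function is evaluated at resolution equally spaced points along each axis through the current point, over the explicitly constrained range.

def cost_profile(cost, x, low, high, resolution=3):
    # Returns one list of (value, cost) pairs per coordinate axis.
    profiles = []
    for i in range(len(x)):
        samples = []
        for k in range(resolution):
            v = low[i] + k * (high[i] - low[i]) / max(resolution - 1, 1)
            xi = list(x)
            xi[i] = v
            samples.append((v, cost(xi)))
        profiles.append(samples)
    return profiles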
Evaluating the cost function across the whole parameter space (parameter_space)
The parameters of the method are:
outfile | output filename, where the results are stored. |
npts0 | number of points in the grid along the first parameter. |
npts1 | number of points in the grid along the second parameter. |
npts2 | number of points in the grid along the third parameter etc. |
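The Python sketch below illustrates the idea: the cost function is evaluated on a full grid with nptsN points along each parameter, and one line per grid point is written to the output file. The plain-text output format is an assumption made for the example.

from itertools import product

def parameter_space(cost, low, high, npts, outfile="opt.out"):
    # Full grid over the explicitly constrained parameter space.
    axes = [[lo + k * (hi - lo) / max(n - 1, 1) for k in range(n)]
            for lo, hi, n in zip(low, high, npts)]
    with open(outfile, "w") as f:
        for point in product(*axes):
            c = cost(list(point))
            f.write(" ".join("%g" % v for v in list(point) + [c]) + "\n")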
The command syntax
optimize [command]
command =
analysis [number delete | number expression]
| cost [expression]
| implicit [number delete | number expression]
| method [definition]
| parameter [number delete | number [name] [low value] [high value] [initial value]
[increment value] [mean value] [deviation value] [abstol value] [reltol value]
[lin | log]]
| options [initial_guess value] [number_of_iterations value] [stop_cost value]
[centering off | on] [discrete_space off | on] [setparams]
| reset [analysis] [cost] [implicit] [method] [parameter] [options] [population]
definition =
steepest_descent [r value{1.5}]
[method quadratic | golden | fibonacci]
[epsilon value{0.1}]
[transformation no | sin | arcctg]
[number_of_iterations value{100}]
[gradient0 expression]
[gradient1 expression]
[...]
| newton [r value{1.5}]
[method quadratic | golden | fibonacci]
[epsilon value{0.1}]
[transformation no | sin | arcctg]
[number_of_iterations value{100}]
| davidon_fletcher_powell [r value{1.5}]
[method quadratic | golden | fibonacci]
[epsilon value{0.1}]
[number_of_iterations value{100}]
[modification no | modified | first | second]
[transformation no | sin | arcctg]
[gradient0 expression]
[gradient1 expression]
[...]
| monte_carlo [direction no | quadratic | golden | fibonacci]
[distribution linear | normal]
[r value{1.5}]
[epsilon value{0.1}]
[number_of_iterations value{100}]
| grid_search [level 2 | 3]
[alpha value{1.3}]
[epsilon value{0.1}]
| axis_search [r value{1.5}]
[method quadratic | golden | fibonacci]
[epsilon value{0.1}]
[number_of_iterations value{100}]
| powell [r value{1.5}]
[method quadratic | golden | fibonacci]
[epsilon value{0.1}]
[number_of_iterations value{100}]
[transformation no | sin | arcctg]
| hooke_jeeves [alpha value{1.3}]
[epsilon value{0.1}]
| complex [k value]
[alpha value{1.3}]
[size value{0.1}]
[oscillation_detection off | on]
[nmirror value{1}]
[width value{1}]
| genetic [popsize value{10}]
[lchrom value{1}]
[maxgen value{10}]
[pcross value{0.6}]
[pmutation value{0.03}]
[scaling value{2}]
[coding binary | gray]
[elitism no | yes]
| simulated_annealing
[init_num_of_pts value{10}]
[tmin value{1e-6}]
[range_min value{1e-4}]
[moves value{10}]
[alpha value{0.985}]
| psade [np value{20}]
[tmin value{1e-6}]
[rmin value{1e-6}]
[sfreq value{100}]
[minCF value{-1e30}]
[pL value{0.1}]
| cost_profile [resolution value{3}]
| parameter_space [outfile filename{opt.out}]
[npts0 value{2}]
[npts1 value{2}]
[...]