2.2. pyopus.evaluator.performance — System performance evaluation

Inheritance diagram of pyopus.evaluator.performance

System performance evaluation module (PyOPUS subsystem name: PE)

A system description module is a fragment of simulated system description. Usually it corresponds to a file or a section of a library file.

Performance measure ordering is a list of performance measure names that defines the order of performance measures.

The heads data structure provides the list of simulators with available system description modules. The analyses data structure specifies the analyses that will be performed by the listed simulators. The corners data structure specifies the corners across which the systems will be evaluated. The measures data structure describes the performance measures which are extracted from simulation results.

The heads data structure is a dictionary with head name for key. The values are also dictionaries describing a simulator and the description of the system to be simulated with the following keys:

  • simulator - the name of the simulator to use (see the pyopus.simulator.simulatorClass() function for details on how this name is resolved to a simulator class)
  • settings - a dictionary specifying the keyword arguments passed to the simulator object’s constructor
  • moddefs - definition of system description modules
  • options - simulator options valid for all analyses performed by this simulator. This is a dictionary with option name for key.
  • params - system parameters valid for all analyses performed in this simulator. This is a dictionary with parameter name for key.
  • variables - Python variables available during the evaluation of all save directives, analysis commands, and measurements associated with this simulator.

The definitions of system description modules in the moddefs dictionary member are themselves dictionaries with system description module name for key. Values are dictionaries describing a system description module using the following keys:

  • file - file name in which the system description module is described
  • section - name of the file section in which the system description module can be found

Specifying only the file member translates into an .include simulator input directive (or its equivalent). If additionally the section member is also specified the result is a .lib directive (or its equivalent).
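To make the shape of this structure concrete, here is a minimal hypothetical heads entry. The simulator name, file names, section names, and option/parameter names are placeholders for illustration, not part of any real design:

```python
# A hypothetical heads data structure (all concrete names are placeholders).
heads = {
    'opus': {
        'simulator': 'SpiceOpus',        # resolved to a class via pyopus.simulator.simulatorClass()
        'settings': {'debug': 0},        # keyword arguments for the simulator constructor
        'moddefs': {
            # 'file' alone translates into an .include directive (or its equivalent)
            'def': {'file': 'circuit.inc'},
            # 'file' together with 'section' translates into a .lib directive
            'tm':  {'file': 'models.lib', 'section': 'tm'},
        },
        'options': {'method': 'trap'},   # simulator options for all analyses of this head
        'params': {'temperature': 25},   # system parameters for all analyses of this head
        'variables': {},                 # Python variables visible to saves, commands, and measures
    },
}
```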

The analyses data structure is a dictionary with analysis name for key. The values are also dictionaries describing an analysis using the following dictionary keys:

  • head - the name of the head describing the simulator that will be used for this analysis
  • modules - the list of system description module names that form the system description for this analysis
  • options - simulator options that apply only to this analysis. This is a dictionary with option name for key.
  • params - system parameters that apply only to this analysis. This is a dictionary with parameter name for key.
  • variables - Python variables available during the evaluation of all save directives, analysis commands, and measurements associated with this analysis.
  • saves - a list of strings which evaluate to save directives specifying what simulated quantities should be included in simulator’s output. See individual simulator classes in the pyopus.simulator module for the available save directive generator functions.
  • command - a string which evaluates to the analysis directive for the simulator. See individual simulator classes in the pyopus.simulator module for the available analysis directive generator functions.

The environment in which the strings in the saves member and the string in the command member are evaluated is simulator-dependent. See individual simulator classes in the pyopus.simulator module for details.

The environment in which the command string is evaluated has a member named param. It is a dictionary containing all system parameters defined for the analysis. It also has a member named var which is a dictionary containing all variables associated with the analysis. The var dictionary is also available during save directive evaluation.
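An analyses entry using the keys above might look like the following sketch. The save-directive and command strings are simulator-dependent; the generator calls written inside them here are illustrative placeholders only (see the individual simulator classes for the real generator functions):

```python
# A hypothetical analyses data structure; 'opus', module names, and the
# strings in saves/command are placeholders for illustration.
analyses = {
    'op': {
        'head': 'opus',                  # the head whose simulator runs this analysis
        'modules': ['def', 'tm'],        # system description modules forming the system
        'options': {'reltol': 1e-4},     # simulator options for this analysis only
        'params': {'rload': 100e3},      # system parameters for this analysis only
        'variables': {'npoints': 50},    # available via the var dictionary during evaluation
        'saves': ["v(['out'])"],         # strings that evaluate to save directives
        # The command environment provides the param and var dictionaries.
        'command': "op()",
    },
}
```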

The measures data structure is a dictionary with performance measure name for key. The values are also dictionaries describing individual performance measures using the following dictionary keys:

  • analysis - the name of the analysis that produces the results from which the performance measure’s value is extracted. Set it to None for dependent measures (measures whose value is computed from the values of other measures).
  • corners - the list of corner names across which the performance measure is evaluated. Corner indices obtained from the worstCornerIndex() method of normalization objects (defined in the pyopus.evaluator.cost module) can be converted to corner names by looking up the corresponding members of this list. If this list is omitted the measure is evaluated in all corners defined in the corners structure.
  • expression - a string specifying a Python expression that evaluates to the performance measure’s value
  • script - a string specifying a Python script that stores the performance measure’s value in a variable named __result
  • vector - a boolean flag which specifies that a performance measure’s value may be a vector. If it is False and the obtained performance measure value is not a scalar (or scalar-like) the evaluation is considered as failed. Defaults to False.
  • depends - an optional list of names of measures required for the evaluation of this performance measure. Specified for dependent performance measures.

If both expression and script are given, script is ignored.

If the analysis member is None the performance measure is a dependent performance measure and is evaluated after all other (independent) performance measures have been evaluated. Dependent performance measures can access the values of independent performance measures through the result data structure.

expression and script are evaluated in an environment with the following members:

  • m - a reference to the pyopus.evaluator.measure module providing a set of functions for extracting common performance measures from simulated response
  • np - a reference to the NumPy module
  • param - a dictionary with the values of system parameters that apply to the particular analysis and corner used for obtaining the simulated response from which the performance measure is being extracted.
  • var - a dictionary with the values of Python variables that apply to the particular analysis and corner used for obtaining the simulated response from which the performance measure is being extracted.
  • result - a dictionary of dictionaries available to dependent performance measures only. The first key is the performance measure name and the second key is the corner name. The values represent performance measure values. If a value is None the evaluation of the independent performance measure failed in that corner.
  • thisCorner - a string that reflects the name of the corner in which the dependent performance measure is currently under evaluation. Not available for independent performance measures.

Besides these members every simulator object provides additional members for accessing simulation results. See individual simulator classes in the pyopus.simulator module and the getGenerators() method of simulator objects for details.
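A measures structure combining an independent and a dependent measure could be sketched as follows. The expression and script bodies are placeholders; m, np, param, var, result, and thisCorner are the environment members described above:

```python
# Hypothetical measures: 'isup' is extracted from analysis results, while
# 'isup_worst' is a dependent measure computed from 'isup' across corners.
measures = {
    'isup': {
        'analysis': 'op',
        'corners': ['nominal', 'worst_power'],  # omit to evaluate in all corners
        'expression': "-i('vdd')",              # placeholder result-access expression
        'vector': False,                        # a non-scalar value counts as failure
    },
    'isup_worst': {
        'analysis': None,                       # None marks a dependent measure
        'depends': ['isup'],                    # measures required for evaluation
        # A script stores its value in __result; None entries in result[...]
        # mark corners where the independent measure failed.
        'script': (
            "vals=[v for v in result['isup'].values() if v is not None]\n"
            "__result=max(vals)"
        ),
    },
}
```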

The corners data structure is a dictionary with corner name for key. The values are also dictionaries describing individual corners using the following dictionary keys:

  • modules - the list of system description module names that form the system description in this corner
  • params - a dictionary with the system parameters that apply only to this corner
  • variables - Python variables available during the evaluation of all save directives, analysis commands, and measurements for this corner.

This data structure can be omitted by passing None to the PerformanceEvaluator class. In that case a corner named ‘default’ with no modules and no parameters is created.
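A corners structure with two corners might look like this; the corner, module, and parameter names are hypothetical:

```python
# Hypothetical corners data structure (all concrete names are placeholders).
corners = {
    'nominal': {
        'modules': ['tm'],                         # modules added in this corner
        'params': {'vdd': 1.8, 'temperature': 25}, # parameters for this corner only
        'variables': {},                           # variables for this corner only
    },
    'worst_power': {
        'modules': ['wp'],
        'params': {'vdd': 2.0, 'temperature': 100},
    },
}
```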

class pyopus.evaluator.performance.PerformanceEvaluator(heads, analyses, measures, corners=None, params={}, variables={}, activeMeasures=None, debug=0, spawnerLevel=1)

Performance evaluator class. Objects of this class are callable. The calling convention is object(paramDictionary) where paramDictionary is a dictionary of input parameter values. The argument can also be a list of dictionaries containing parameter values. The argument can be omitted (empty dictionary is passed).

heads, analyses, measures, and corners specify the heads, the analyses, the corners, and the performance measures. If corners are not specified, a default corner named default is created.

activeMeasures is a list of measure names that are evaluated by the evaluator. If it is not specified all measures are active. Active measures can be changed by calling the setActiveMeasures() method.

params is a dictionary of parameters that have the same value every time the object is called. They should not be passed in the paramDictionary argument. This argument can also be a list of dictionaries (dictionaries are joined to obtain one dictionary).

variables is a dictionary holding variables that have the same value in every corner and every analysis. These variables are also available during every performance measure evaluation in the var dictionary. This can also be a list of dictionaries (dictionaries are joined).

If debug is set to a nonzero value debug messages are generated at the standard output. Two debug levels are available (1 and 2). A higher debug value results in greater verbosity of the debug messages.

Objects of this class construct a list of simulator objects based on the heads data structure. Every simulator object performs the analyses which list the corresponding head under head in the analysis description.

Every analysis is performed across the set of corners obtained as the union of corners found in the descriptions of performance measures that list the corresponding analysis as their analysis.

The system description for an analysis in a corner is constructed from the system description modules specified in the corresponding entries of the corners and analyses data structures. The definitions of the system description modules are taken from the heads data structure entry corresponding to the head specified in the description of the analysis (analyses data structure).

System parameters for an analysis in a particular corner are obtained as the union of

  • the input parameters dictionary specified when an object of the PerformanceEvaluator class is called
  • the params dictionary specified at evaluator construction
  • the params dictionary of the heads data structure entry corresponding to the analysis
  • the params dictionary of the corners data structure entry corresponding to the corner
  • the params dictionary of the analyses data structure entry corresponding to the analysis

If a parameter appears in multiple dictionaries, the entries in the input parameter dictionary have the lowest priority and the entries in the params dictionary of the analyses have the highest priority.
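Conceptually the union amounts to a sequence of dictionary updates where later updates override earlier ones. This is a sketch of the priority rule only, not PyOPUS code:

```python
def merged_params(inputParams, evaluatorParams, headParams, cornerParams, analysisParams):
    # Later updates override earlier ones: the input parameter dictionary
    # has the lowest priority, the analysis params dictionary the highest.
    p = {}
    p.update(inputParams)
    p.update(evaluatorParams)
    p.update(headParams)
    p.update(cornerParams)
    p.update(analysisParams)
    return p

# The 'temperature' entry from the analyses dictionary wins over the input value.
merged = merged_params({'temperature': 25}, {}, {}, {}, {'temperature': 100})
```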

A similar priority order is applied to simulator options specified in the options dictionaries (the values from heads have the lowest priority and the values from analyses have the highest priority). The only difference is that no options can be specified at evaluator construction, because simulator options are always associated with a particular simulator (i.e. head).

Variables specified at evaluator construction have the lowest priority, followed by the values from heads, corners, and analyses dictionaries.

Independent performance measures (the ones with analysis not equal to None) are evaluated before dependent performance measures (the ones with analysis set to None).

The evaluation results are stored in a dictionary of dictionaries with performance measure name as the first key and corner name as the second key. None indicates that the performance measure evaluation failed in the corresponding corner.

Objects of this type store the number of analyses performed in the analysisCount member. The counter is reset at every call to the evaluator object.

A call to an object of this class returns a tuple holding the results dictionary and the analysisCount dictionary. The results dictionary is a dictionary of dictionaries where the first key represents the performance measure name and the second key represents the corner name. It holds the performance measure values across corners. If a value is None the performance measure evaluation failed in that corner. The return value is also stored in the results member of the PerformanceEvaluator object.
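The shape of the results structure can be illustrated with hypothetical values (the measure and corner names are placeholders):

```python
# Illustrative results structure: measure name -> corner name -> value.
# A None value marks a failed evaluation in that corner.
results = {
    'gain': {'nominal': 57.2, 'worst_power': None},
    'isup': {'nominal': 1.2e-3, 'worst_power': 1.9e-3},
}

# Collect the corners in which a given measure failed.
failed = [c for c, v in results['gain'].items() if v is None]
```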

If spawnerLevel is not greater than 1, evaluations are distributed across available computing nodes (that is unless task distribution takes place at a higher level). Every computing node evaluates one job group. See the cooperative module for details on parallel processing. More information on job groups can be found in the simulator module.

finalize()

Removes all intermediate simulator files and stops all interactive simulators.

formatResults(outputOrder=None, nMeasureName=10, nCornerName=6, nResult=12, nPrec=3)

Formats a string representing the results obtained with the last call to this object. Generates one line for every performance measure evaluation in a corner.

outputOrder (if given) specifies the order in which the performance measures are listed.

nMeasureName specifies the formatting width for the performance measure name.

nCornerName specifies the formatting width for the corner name.

nResult and nPrec specify the formatting width and the number of significant digits for the performance measure value.

getActiveMeasures()

Returns the names of the active measures.

getAnnotator()

Returns an object of the PerformanceAnnotator class which can be used as a plugin for iterative algorithms. The plugin takes care of cost function details (results member) propagation from the machine where the evaluation of the cost function takes place to the machine where the evaluation was requested (usually the master).

getCollector()

Returns an object of the PerformanceCollector class which can be used as a plugin for iterative algorithms. The plugin gathers performance information from the results member of the PerformanceEvaluator object across iterations of the algorithm.

getComputedMeasures()

Returns the names of all measures that are computed by the evaluator.

getParameters()

Returns the parameters dictionary.

getVariables()

Returns the variables dictionary.

resetCounters()

Resets analysis counters to 0.

setActiveMeasures(activeMeasures=None)

Sets the list of measures that are going to be evaluated. Specifying None as activeMeasures activates all measures.

setParameters(params)

Sets the parameters dictionary. Can handle a list of dictionaries.

setVariables(variables)

Sets the variables dictionary. Can handle a list of dictionaries.

simulators()

Returns the dictionary with head name for key holding the corresponding simulator objects.

pyopus.evaluator.performance.updateAnalysisCount(count, delta, times=1)

Updates the analysis counts in dictionary count by adding the values from dictionary delta. If count is not given the current count of every analysis is assumed to be zero.

Returns the updated dictionary count.
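Assuming the documented behavior (delta values added times times, with missing counts starting at zero), the function could be sketched as follows. This is a re-implementation sketch for illustration, not the actual PyOPUS source:

```python
def updateAnalysisCount(count, delta, times=1):
    # When no count dictionary is given, every analysis starts at zero
    # (assumption: "not given" is represented by None here).
    if count is None:
        count = {}
    for name, n in delta.items():
        count[name] = count.get(name, 0) + n * times
    return count

updateAnalysisCount(None, {'op': 1, 'ac': 2}, times=3)  # -> {'op': 3, 'ac': 6}
```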

class pyopus.evaluator.performance.PerformanceAnnotator(performanceEvaluator)

A subclass of the Annotator iterative algorithm plugin class. This is a callable object whose job is to

  • produce an annotation (details of the evaluated performance) stored in the performanceEvaluator object
  • update the performanceEvaluator object with the given annotation

Annotation is a copy of the results member of performanceEvaluator.

Annotators are used for propagating the details of the cost function from the machine where the evaluation takes place to the machine where the evaluation was requested (usually the master).

class pyopus.evaluator.performance.PerformanceCollector(performanceEvaluator)

A subclass of the Plugin iterative algorithm plugin class. This is a callable object invoked at every iteration of the algorithm. It collects the summary of the evaluated performance measures from the results member of the performanceEvaluator object (member of the PerformanceEvaluator class).

This class is also an annotator that collects the results at remote evaluation and copies them to the host where the remote evaluation was requested.

Let niter denote the number of stored iterations. The results structures are stored in a list where the index of an entry represents the iteration number. The list can be obtained from the performance member of the PerformanceCollector object.

Some iterative algorithms do not evaluate iterations sequentially. Such algorithms denote the iteration number with the index member. If the index is not present in the iterative algorithm object the internal iteration counter of the PerformanceCollector is used.

If iterations are not performed sequentially the performance list may contain gaps where no valid results structure is found. Such gaps are denoted by None.

reset()

Clears the performance member.
