Introducing nonlinear elements

Until now all circuit elements were linear, which made it possible to write down the system of linear equations directly from the circuit schematic using element footprints. When the elements are nonlinear, their constitutive relations become nonlinear equations. Take, for instance, a semiconductor diode. Its current (iD) is expressed with the voltage across its terminals (uD) by the following relation

where IS is the diode's saturation current and VT is the thermal voltage. Assume that all constitutive relations (with the exception of those of voltage sources) express device currents with branch voltages. These currents can then be substituted into the KCL equations. For voltage sources the constitutive relations are added to the system of nonlinear equations. The system of equations is now nonlinear and can be formulated as

where x is the vector of unknowns and gi is the i-th nonlinear function defining the i-th nonlinear equation. Often we use a shorthand notation by introducing a vector-valued function g which yields a vector with n components for every argument x.
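
To make the notation concrete, the diode's constitutive relation from above can be written as a small Python function. This is only an illustrative sketch; the saturation current and thermal voltage values below are placeholders, not values taken from the text.

    import math

    def diode_current(u_d, i_s=1e-14, v_t=0.025):
        # Shockley diode equation: i_D = I_S * (exp(u_D / V_T) - 1).
        # i_s (saturation current, A) and v_t (thermal voltage, V) are
        # placeholder values used only for this illustration.
        return i_s * (math.exp(u_d / v_t) - 1.0)

Expressions of this kind are exactly the nonlinear terms that end up in the components of g(x).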

Solving such systems of equations can be done efficiently by means of the Newton-Raphson algorithm.

The Newton-Raphson algorithm for 1-dimensional problems

Suppose we have a nonlinear equation with a single unknown x.

Let the approximate solution of this equation be x(i). To improve its accuracy we linearize the nonlinear equation at x(i) and find the solution of the linearized equation.

Here g'(x) denotes the first derivative of g(x). The new approximate solution is

The algorithm that repeatedly applies this iterative formula is referred to as the Newton-Raphson (NR) algorithm. It can be shown that under certain mild conditions the algorithm converges rapidly to the solution of the nonlinear equation. One iteration of the Newton-Raphson algorithm is depicted in Fig. 1.


Fig. 1: One iteration of the Newton-Raphson algorithm. The thin line represents the nonlinear function g(x) while the thick line is its linearization at x(i).

We illustrate the NR algorithm on an n=1 dimensional example. Suppose we are trying to solve

If we rewrite this equation as g(x)=0 we get

Suppose our initial approximate solution is x(0)=0. Then the following sequence of approximate solutions is produced by the algorithm:
iteration 1: 0.500000000000000
iteration 2: 0.443851671995364
iteration 3: 0.442854703829747
iteration 4: 0.442854401002417
iteration 5: 0.442854401002389
...

We see the algorithm converges rapidly. In only 5 iterations the result stabilizes to 12 significant digits (i.e. the fourth and the fifth iterate differ only in the 13th digit). The solution of the equation written with 15 significant digits is 0.442854401002389. We see the NR algorithm solved the equation to double precision (15 digits) in only 5 iterations.
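
The equation itself is not reproduced here, but the printed iterates are consistent with solving e^x + x = 2, i.e. g(x) = e^x + x - 2. Under that assumption the sequence above can be reproduced with a few lines of Python:

    import math

    # Stand-in problem consistent with the printed iterates (an assumption):
    # e^x + x = 2, rewritten as g(x) = e^x + x - 2 = 0.
    g  = lambda x: math.exp(x) + x - 2.0
    dg = lambda x: math.exp(x) + 1.0      # g'(x)

    x = 0.0                               # initial approximate solution x(0)
    for i in range(1, 6):
        x = x - g(x) / dg(x)              # Newton-Raphson update
        print(f"iteration {i}: {x:.15f}")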

What makes the NR algorithm so efficient? Mathematically it can be shown that the algorithm converges quadratically in the neighborhood of a solution. Quadratic convergence means that the error (i.e. the difference between the approximate solution x(i) and the exact solution x*) can be expressed as

for i that is large enough. Roughly speaking this means that the number of exact digits doubles with every iteration of the algorithm. This is true if the initial approximate solution is close to the exact solution of the problem. The algorithm can fail in several ways.

For optimal performance the function g must be twice continuously differentiable, the initial approximate solution must be close to the exact solution, and the first derivative of g must not be equal to zero on an interval containing the initial approximate solution x(0) and the exact solution.

When to stop?

The NR algorithm improves the approximate solution with every iteration. At some point the approximate solution becomes good enough. How do we know when to stop? Simulators usually stop the algorithm when the following condition is satisfied

Here er and ea are the relative and the absolute tolerance, respectively. The stopping criterion is based on the assumption that if two consecutive approximate solutions are close enough to each other they are also close enough to the exact solution. In SPICE the relative tolerance (reltol simulator parameter) is 10^-3. The absolute tolerance depends on the type of the unknown. If the unknown is a voltage 10^-6 is used (specified by the vntol simulator parameter). For currents the absolute tolerance is 10^-12 (specified by the abstol simulator parameter).
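
As a sketch, the stopping test for a single unknown can be written as below. The exact expression (here using the larger of the two consecutive iterates as the reference magnitude) is an assumption; simulators differ in such details.

    def converged(x_new, x_old, reltol=1e-3, abstol=1e-6):
        # Two consecutive approximate solutions are considered close enough
        # when their difference is small both relative to their magnitude
        # (reltol) and in absolute terms (abstol).
        return abs(x_new - x_old) <= reltol * max(abs(x_new), abs(x_old)) + abstol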

Generalizing the algorithm for n>1

To help us understand the algorithm for higher dimensional problems, let us find a geometric interpretation for the NR formula by first rewriting it as

The left-hand side is a linear function of x(i+1). In fact it is the linearization of g(x) in the neighborhood of x(i). The whole equation requires this linearization to be zero. When the linearization is performed close to the exact solution then it is almost equal to g(x) and solving the linearized equation produces a good approximation to the exact solution.

Now what is different when we have n unknowns? We can linearize n equations by computing the derivatives of the n left-hand sides. Let k denote the index of an unknown. One equation (g(x)=0) is linearized as

The linearized equation defines a plane in n-dimensional space. Therefore linearizing n nonlinear equations gives us n planes in n-dimensional space. The intersection of these n planes is the new approximate solution x(i+1). It can be obtained by solving the corresponding system of linear equations (obtained by linearizing the nonlinear system of equations at the previous approximate solution). We already know how to do this (by means of Gaussian elimination, or LU decomposition followed by forward and backward substitution).

We can write the linearized system of equations in matrix form as

where matrix G is the Jacobian of the system at x(i) and is defined as

The NR algorithm solves the following linear system to obtain the next approximate solution x(i+1)

The stopping condition is applied to every component of x independently. The algorithm stops when all components satisfy the stopping condition. In SPICE OPUS the stopping condition is slightly more elaborate. Three requirements must be met in order for the NR algorithm to stop.

The last two requirements prevent the algorithm from stopping when it is in fact oscillating around a solution. Because this last check can slow down convergence, users of SPICE OPUS can turn it off by setting the noconviter simulator parameter.

Usually the initial approximate solution is a vector of all zeros. In SPICE one can override this default initial approximate solution with the use of the .nodeset netlist directive.

In practice the algorithm has a limited number of iterations available for satisfying the stopping condition. This number is set by the itl1 parameter in SPICE (100 by default). If the algorithm fails to satisfy the stopping condition after itl1 iterations the analysis is considered to have failed and an error is reported.

Let us illustrate the generalization to n>1 by solving the following system of nonlinear equations with n=2 unknowns.

The exact solution of this system is x1=2 and x2=1. First, we rewrite the system as

The vector-valued function g(x) is then

Now we can express the Jacobian matrix

In the first iteration, assuming the initial vector x(0)=[x1(0), x2(0)]=[0, 0], we need to solve the following linear system

Resulting in

After iteratively applying the algorithm for 7 iterations we get the following (x1, x2) values:
iteration 1: (0.854842233008492, 1.072578883495754)
iteration 2: (1.772581077924714, 0.258541641916214)
iteration 3: (1.964213510583830, 1.014788796168930)
iteration 4: (1.999094978820735, 1.000282174876357)
iteration 5: (1.999999453215599, 1.000000137500636)
iteration 6: (1.999999999999811, 1.000000000000040)
iteration 7: (2.000000000000000, 1.000000000000000)
...

We can see that the number of digits of precision roughly doubles with each iteration. After 7 iterations the result is computed to 15 significant digits, i.e. full double precision.
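
A minimal Python/NumPy sketch of the n-dimensional iteration is given below. The two-equation system in the sketch is only a stand-in with the same solution (2, 1); it is not the system solved in the example above.

    import numpy as np

    def newton_raphson(g, jacobian, x0, reltol=1e-3, abstol=1e-6, maxiter=100):
        # Solve G(x_i) * dx = -g(x_i) and set x_{i+1} = x_i + dx; stop when
        # every component of the step satisfies the tolerance test.
        x = np.asarray(x0, dtype=float)
        for _ in range(maxiter):
            dx = np.linalg.solve(jacobian(x), -g(x))
            x_new = x + dx
            if np.all(np.abs(dx) <= reltol * np.maximum(np.abs(x_new), np.abs(x)) + abstol):
                return x_new
            x = x_new
        raise RuntimeError("Newton-Raphson did not converge")

    # Stand-in system (not the one above): x1^2 + x2^2 = 5, x1 = 2*x2.
    g = lambda x: np.array([x[0]**2 + x[1]**2 - 5.0, x[0] - 2.0 * x[1]])
    J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -2.0]])
    print(newton_raphson(g, J, x0=[1.0, 1.0]))    # converges to [2, 1]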

Element footprint of a two-pin nonlinear device

Let us illustrate the construction of the linearized system of equations that are used in one iteration of the NR algorithm. We start with a simple example: a semiconductor diode (Fig. 2).


Fig. 2: A semiconductor diode connected between nodes k and l.

The constitutive relation of a semiconductor diode expresses the diode current as a nonlinear function of the diode voltage.

The diode contributes its current to the KCL equations of nodes k and l (i.e. gk(x)=0 and gl(x)=0).

The diode branch voltage can be expressed with node potentials as

To obtain the diode's contribution to the Jacobian matrix we must compute the derivative of the diode current with respect to the node potentials. First, let us compute the derivative with respect to the diode voltage.

Here gD denotes the differential conductance of the diode (not to be mistaken with nonlinear functions gk and gl which correspond to the KCL equations of nodes k and l). The diode's contribution depends on two unknowns: vk and vl. The derivatives of the diode current with respect to these two unknowns are

When we linearize the two nonlinear KCL equations for nodes k and l the differential conductance of the diode is added to the Jacobian of the system into rows k and l (corresponding to the two KCL equations) and columns k and l (corresponding to node potentials of nodes k and l). Let us assume for now there are no voltage sources in the circuit so we don't have to apply MNA. Note that rows of the Jacobian correspond to KCL equations and columns correspond to unknowns (node potentials). Let i denote the iteration of the NR algorithm. The element footprint of a semiconductor diode in the Jacobian matrix is then

Note that gD(i) is computed from vk(i) and vl(i). What about the right-hand side of the system of linearized equations? A diode will contribute to the k-th and l-th row of the RHS vector. The contributions of the diode to the k-th and l-th nonlinear equation are

For the diode's contribution in the k-th row of the right-hand side of the linearized system of equations we get

The contribution to row l of the RHS vector is

Now we can revisit the linear resistor connected to nodes k and l, but this time we treat it like a nonlinear element with constitutive relation

We see that the differential conductance is R^-1 and does not depend on the current approximate solution. The contribution to the Jacobian matrix is therefore the same as the contribution to the coefficient matrix of a linear circuit and does not change between iterations of the NR algorithm. The contribution to the k-th row of the RHS vector is

Similarly, the contribution to the l-th row of the RHS vector is also 0. We see that the element footprint of a linear resistor does not change between iterations of the NR algorithm. In fact linear elements contribute to the Jacobian matrix in the same way as they do to the coefficient matrix of a linear circuit.

From the two examples we see that the Jacobian matrix of a nonlinear circuit in one iteration of the NR algorithm has the same role as the coefficient matrix of a linear circuit. The major difference between solving linear and nonlinear circuits is that the former requires solving only one system of linear equations while the latter requires solving multiple systems of linear equations (one for every iteration of the NR algorithm).
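
As an illustration, the diode footprint can be stamped into the Jacobian G and the RHS vector with a few lines of Python/NumPy, assuming the linear system is written as G * x(i+1) = rhs and that k and l index a reduced system from which the ground node has already been removed. These bookkeeping choices, and the model parameter values, are assumptions of the sketch.

    import numpy as np

    def stamp_diode(G, rhs, k, l, v, i_s=1e-14, v_t=0.025):
        # v holds the node potentials from the previous NR iteration;
        # i_s and v_t are placeholder model parameters.
        u_d = v[k] - v[l]                       # diode voltage u_D at iteration i
        i_d = i_s * (np.exp(u_d / v_t) - 1.0)   # diode current i_D at iteration i
        g_d = i_s / v_t * np.exp(u_d / v_t)     # differential conductance g_D

        # Jacobian footprint (rows and columns k and l)
        G[k, k] += g_d; G[k, l] -= g_d
        G[l, k] -= g_d; G[l, l] += g_d

        # RHS footprint: the constant part of the linearized diode current,
        # I = i_D - g_D * u_D, moves to the right-hand side with opposite sign.
        rhs[k] += g_d * u_d - i_d
        rhs[l] += i_d - g_d * u_d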

A simpler approach via element linearization

The approach described in the previous section seems complicated at first glance. Now let us take a different approach and show that it results in the same Jacobian and RHS contributions. Let us illustrate the approach on the semiconductor diode. The iD term in the system of nonlinear equations contributes to two KCL equations. The term itself depends nonlinearly on the diode voltage uD. Suppose we have the circuit's solution from the i-th iteration. Let us denote the corresponding diode voltage and current with uD(i) and iD(i), respectively. The solution of the i+1-th iteration is not known, but it must satisfy


Fig. 3: Linearization of the diode's constitutive relation. The nonlinear relation (thin line) is linearized around the circuit's solution obtained in the i-th iteration of the NR algorithm. The linearized relation is depicted by a thick line.

Now let us replace the diode's nonlinear constitutive relation with a linear one by computing its linearization at the last known circuit solution (i.e. uD(i)). If the new candidate solution uD(i+1) is close to the last one this will result in a small error which decreases as the algorithm converges to the circuit's solution (Fig. 3). The linearized relation can be written as

where gD(i) is the derivative of the nonlinear constitutive relation at uD(i).

We can rewrite the linearized constitutive relation as

Now let us interpret the linearized constitutive relation in terms of circuit elements. Note that the voltages and the currents in the circuit are the ones being computed in the i+1-th iteration. The values from the i-th iteration are constants from the point of view of the i+1-th iteration. The first two terms of the reorganized constitutive relation do not depend on the voltage from the i+1-th iteration. Therefore they are constant and represent an independent current source.


Fig. 4: Nonlinear diode in the i+1-th iteration of the NR algorithm (left) and its linearized model (right).

The last term depends on uD(i+1) and represents a linear resistor with conductance gD(i). The linearized constitutive relation can therefore be modelled as a linear circuit comprising one independent current source and one resistor (Fig. 4). Now let us construct the contribution of the linearized diode model to the system of linear equations (i.e. its element footprint). Suppose the diode is connected between nodes k and l. The contribution to the matrix of coefficients is then

which is in fact identical to the contribution of a diode to the Jacobian matrix in the i+1-th iteration. Similarly, we can construct the contribution to the RHS vector by first rewriting the expression for I(i).

The contribution to the RHS vector can now be written as

By comparing the obtained RHS contribution to the one from the previous section we can see that they are identical. The NR algorithm solves the circuit in the i+1-th iteration by first replacing all nonlinear elements with their linearizations at the solution of the i-th iteration and then solving the obtained linear circuit.
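
In code the companion model amounts to evaluating two numbers from the previous iterate. A minimal sketch (with placeholder model parameters):

    import math

    def diode_companion(u_d_prev, i_s=1e-14, v_t=0.025):
        # Values of the elements in the linearized diode model: a conductance
        # g_D in parallel with an independent current source I, both evaluated
        # at the previous diode voltage u_D^(i).
        i_d = i_s * (math.exp(u_d_prev / v_t) - 1.0)
        g_d = i_s / v_t * math.exp(u_d_prev / v_t)
        I   = i_d - g_d * u_d_prev
        return g_d, I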

Element footprint of a device with multiple pins

We are going to illustrate the construction of an element footprint for elements with multiple pins on an enhancement mode MOSFET with an n-type channel (NMOS) operating in the saturation region (Fig. 5). MOS transistors are 4-pin elements. To keep things simple we assume the bulk pin is connected to the source pin.


Fig. 5: An NMOS transistor.

An NMOS transistor operates in the saturation region when the following two conditions are satisfied.

where UT is the threshold voltage of the NMOS transistor. The currents flowing into the pins in the saturation region are given by

We can express branch voltages uGS and uDS with node potentials as

To construct the element footprint in the Jacobian matrix we first compute the partial derivatives of currents with respect to branch voltages.

Next, we express the partial derivatives of branch currents with respect to node potentials. There are 9 partial derivatives we need to compute. These derivatives can be expressed with g21 and g22.

Now we can construct the element footprint in the Jacobian matrix. An NMOS transistor contributes to the KCL equations of nodes k and l, in the columns corresponding to node potentials vj, vk, and vl. There is no contribution to the KCL equation of node j because iG=0.

An NMOS transistor does not contribute to the j-th row of the RHS vector because iG=0. The contribution to the k-th row is

The contribution to the l-th row of the RHS vector is

The superscripts indicate the iteration from whose results the respective quantity is obtained (i.e. g21(i) denotes the value of g21 computed from the results of the i-th iteration).

Linearization of a device with multiple pins

This time let us construct the element footprint of an enhancement mode NMOS transistor by linearizing its constitutive relations. Suppose the circuit's solution in the i-th iteration of the NR algorithm results in uGS(i) and uDS(i). We are going to linearize the gate and the drain current in the i+1-th iteration (iG(i+1) and iD(i+1)). The two unknowns are uGS(i+1) and uDS(i+1). Because we have two independent variables the linearization will comprise two partial derivative terms.

Clearly the gate current is zero so the gate can be modelled as an open circuit.

where


Fig. 6: Linearized NMOS transistor model for the NR algorithm.

Because iS=-iD the linearized NMOS model comprises three elements connected in parallel between the drain and the source: an independent current source I(i), a conductance g22(i), and a voltage-controlled current source with transconductance g21(i) controlled by uGS(i+1) (Fig. 6).
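
A sketch of the corresponding computation is given below. Because the text does not reproduce the MOSFET equations, the sketch assumes a simple square-law model with channel-length modulation, iD = (K/2)(uGS - UT)^2 (1 + lambda*uDS); the model and its parameter values are assumptions made for illustration only.

    def nmos_companion(u_gs, u_ds, K=2e-3, U_T=0.7, lam=0.02):
        # Assumed square-law model in saturation (illustrative only):
        #   i_D = (K/2) * (u_GS - U_T)^2 * (1 + lam * u_DS)
        i_d = 0.5 * K * (u_gs - U_T)**2 * (1.0 + lam * u_ds)
        g21 = K * (u_gs - U_T) * (1.0 + lam * u_ds)      # d(i_D)/d(u_GS)
        g22 = 0.5 * K * (u_gs - U_T)**2 * lam            # d(i_D)/d(u_DS)
        I   = i_d - g21 * u_gs - g22 * u_ds              # companion current source
        return g21, g22, I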

Constructing the linear system of equations for one iteration of the Newton-Raphson algorithm

Suppose we have a nonlinear circuit depicted in Fig. 7. Let us write down the system of nonlinear equations and then formulate the equations solved by the NR algorithm in one iteration.


Fig. 7: A simple MOSFET-based amplifier.

We assume the MOSFET is operating in the saturation region. The two currents of the MOSFET can be expressed as

The circuit has n=4 nodes and two independent voltage sources which introduce two additional unknowns. The system of equations will comprise 3 KCL equations and the constitutive relations of the two independent voltage sources. We have 5 unknowns: 3 nodal voltages (v1, v2, and v3) and two branch currents (iGG and iDD). The three KCL equations are

The constitutive relations of the two independent voltage sources are

After substituting the MOSFET's constitutive relations and expressing MOSFET branch voltages with nodal voltages we get the following nonlinear system of equations.

Let gk denote the left-hand side (LHS) of the k-th nonlinear equation. Then the partial derivatives (i.e. the elements of the Jacobian matrix) are

Let us assume the following ordering for the unknowns: v1, v2, v3, iGG, iDD. The Jacobian matrix in the i+1-th iteration of the NR algorithm computed from the results of the i-th iteration is then

or more briefly

The argument of functions gk is the vector of unknowns denoted by x. The RHS vector of the linear system of equations solved in the i+1-th iteration of the NR algorithm is then

After substituting the Jacobian, the solution of the i-th iteration, and the LHS of the nonlinear equations we get

which after simplification results in

The RHS vector has nonzero entries where independent sources contribute their element footprint and in equations that contain nonlinear terms. Strictly speaking all terms in a linear relation must be products of a constant and an unknown. All independent sources are in this respect nonlinear elements. Take for instance the constitutive relation of a resistor with resistance R (which by this strict definition is linear).

The relation consists only of linear terms (i.e. terms composed as a product of a constant and an unknown). On the other hand, the constitutive relation of an independent current source generating current I is

This relation is (strictly speaking) nonlinear as the second term (i.e. the constant I) is not a product of a constant and an unknown. The linear system of equations describing the circuit in Fig. 7 that is solved by the NR algorithm in the i+1-th iteration is

where

Usually all components of the initial approximate solution are set to 0 (i.e. v1(0)=v2(0)=v3(0)=iGG(0)=iDD(0)=0).

Constructing the linear system of equations via element linearization

The linear system of equations solved by the NR algorithm in one iteration can also be constructed via a simpler approach: element linearization. In this approach we replace all nonlinear elements with their linearized models. The values of the elements comprising a linearized model of a nonlinear element depend on the circuit's solution obtained in the previous iteration of the NR algorithm. If we replace all nonlinear elements in the circuit in Fig. 7 with their linearized models we obtain the circuit in Fig. 8. All unknowns in this circuit are denoted with a superscript (i+1) meaning that they are computed in the i+1-th iteration of the NR algorithm.


Fig. 8: Linearized circuit of a simple MOSFET-based amplifier in Fig. 7 used in one iteration of the NR algorithm.

For the circuit in Fig. 8 we can write down the system of equations with everything we learned in the first two lectures.

where the values of g21(i), g22(i), and I(i) are the values of the elements in the linearized MOSFET model.
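
Putting the pieces together, one NR-based operating point computation can be sketched as the loop below. The two stamping callbacks are assumptions used to keep the sketch generic: stamp_linear adds the footprints of linear elements and independent sources, while stamp_nonlinear adds the footprints of the linearized nonlinear elements evaluated at the previous iterate.

    import numpy as np

    def solve_nonlinear_circuit(stamp_linear, stamp_nonlinear, n,
                                reltol=1e-3, abstol=1e-6, maxiter=100):
        x = np.zeros(n)                        # default initial approximate solution
        for _ in range(maxiter):
            G = np.zeros((n, n))
            rhs = np.zeros(n)
            stamp_linear(G, rhs)               # footprints that never change
            stamp_nonlinear(G, rhs, x)         # footprints evaluated at x(i)
            x_new = np.linalg.solve(G, rhs)    # one linear solve per NR iteration
            if np.all(np.abs(x_new - x) <= reltol * np.maximum(np.abs(x_new), np.abs(x)) + abstol):
                return x_new
            x = x_new
        raise RuntimeError("operating point analysis failed to converge")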

Operating point analysis and DC sweep

With the knowledge we gained up to this point we can handle circuits with arbitrary linear and nonlinear resistive elements. The main property of resistive elements is that we can express the currents flowing into the pins of the element as (non)linear functions of the node potentials of the nodes to which the element is connected (or vice versa). These functions must not contain any derivatives or integrals with respect to time. Derivatives and integrals with respect to time are required for describing reactive elements (like capacitors and inductors).

Now let us assume we describe our reactive elements by means of derivatives with respect to time. We can always do that (i.e. convert an integral into a derivative) by choosing an appropriate independent variable in the formulation of the element's constitutive relation. Now suppose all derivative terms are equal to zero. This is the case when the voltages and the currents in the circuit no longer change. For stable circuits excited only by DC voltage and current sources we reach this state if we wait for a sufficient amount of time. We refer to this state as the circuit's operating point. For computing the circuit's operating point we need to consider only the resistive elements in the circuit. Therefore the above described algorithm can be used for finding the operating point of the circuit.

Often we are interested in how the operating point of a circuit changes if we change the DC value of the circuit's excitation (a voltage or current source). Such an analysis is also referred to as a DC operating point sweep or simply DC analysis. A DC analysis is much faster than the equivalent sequence of operating point analyses because the solution of the previous sweep point is used as the initial iterate for the NR algorithm solving the next sweep point. It provides a good initial guess and the NR algorithm requires only a few iterations to satisfy the stopping condition. Therefore SPICE provides a separate simulator parameter (itl2) for setting the limit on the number of NR iterations available for solving one point in a DC sweep. By default its value is set to 50.
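
The idea can be sketched as follows; solve_op(value, x0) is a hypothetical helper that runs the NR algorithm for one value of the swept source, starting from the initial approximate solution x0.

    def dc_sweep(solve_op, values):
        # Reuse the solution of the previous sweep point as the initial
        # approximate solution for the next one.
        solutions = []
        x0 = None                      # first point starts from the default guess
        for value in values:
            x0 = solve_op(value, x0)   # good initial guess -> few NR iterations
            solutions.append(x0)
        return solutions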

Convergence problems and how to solve them

The NR algorithm can exhibit convergence problems (slow convergence or even no convergence), particularly for circuits with strong nonlinearities. Simulators use various tricks to improve convergence.

Junction voltage limiting.

p-n junctions in diodes and transistors exhibit an exponential i(u) characteristic. This can result in large currents (and consequently large left-hand sides in the nonlinear KCL equations). The values can even exceed the maximum value allowed by double floating point precision. Once that happens IEEE floating point infinite values or even NaN (not a number) values can occur in the candidate solution. NaN values in particular spread like a "virus". Any (binary or unary) operation performed on a NaN value results in a NaN, so the NaNs quickly spread across the whole solution vector and make the result completely useless.

To avoid this the independent variables in the exponential functions (the branch voltages across p-n junctions) are limited to the interval [-voltagelimit, voltagelimit], where voltagelimit is a simulator parameter (10^30 V by default). If the branch voltage is smaller than -voltagelimit it is truncated to -voltagelimit. Similarly, if the branch voltage is greater than voltagelimit it is truncated to voltagelimit. The obtained value is then used for computing the p-n junction current and its derivative with respect to the node potentials. This procedure not only helps avoid infinite and NaN values, but also speeds up the convergence of the NR algorithm.
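
The basic clamping step can be sketched as:

    def limit_junction_voltage(u, voltagelimit=1e30):
        # Clamp the p-n junction branch voltage to [-voltagelimit, voltagelimit]
        # before evaluating the exponential (the default quoted above is used here).
        return max(-voltagelimit, min(u, voltagelimit))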

Damped Newton-Raphson algorithm.

The NR algorithm can produce an oscillating sequence of candidate solutions. This behavior can be eliminated to a great extent if the step taken by the algorithm is shortened. To simplify the presentation of this approach we assume a 1-dimensional problem (n=1). Let er and ea denote the relative and the absolute tolerance used in the stopping condition. Let x(i) and x(i+1) denote the previous and the new approximate solution. We define the step tolerance as

After every NR iteration the stopping condition is checked. If the condition is not satisfied the step is truncated according to the following formula

In the next NR iteration the truncated solution is used as the previous solution. The parameter s is the truncation factor specified by the sollim simulator parameter in SPICE OPUS (10 by default). The damped algorithm makes slow but steady progress. Due to this slow progress it often runs out of available iterations. Therefore the iteration limit in SPICE OPUS is increased to itl1 * sollimiter. The value of the sollimiter simulator parameter is 10 by default.

The damped NR algorithm is used only when the simulator detects convergence problems. By default the original NR algorithm is used.
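
One plausible form of the shortened step is sketched below; it takes only a fraction 1/s of the full NR step. This is an illustration of the idea, not the exact formula used by SPICE OPUS.

    def damped_step(x_prev, x_candidate, s=10.0):
        # Accept only a fraction of the step proposed by the NR iteration;
        # s plays the role of the truncation factor (sollim).
        return x_prev + (x_candidate - x_prev) / s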

Adding shunt resistors to the circuit.

If we connect resistors from every node to the ground we effectively add the inverse of the added resistance to the diagonal entries of the Jacobian. If the resistance is small enough the diagonal part begins to dominate the Jacobian. In practice many convergence problems are reduced if resistors with sufficiently small resistance (shunts) are added. If the resistance is not too small shunt resistors do not significantly alter the circuit's behavior. By default shunting is turned off. It can be enabled by specifying the shunt resistance with the rshunt simulator parameter.
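
In code the effect of shunting is a one-line modification of the Jacobian. The sketch below assumes the rows and columns of G correspond to node potentials only (no voltage-source currents), which keeps the example simple.

    import numpy as np

    def add_shunts(G, rshunt):
        # A resistor from every node to ground adds 1/rshunt to the
        # corresponding diagonal entry of the Jacobian.
        n = G.shape[0]
        G[np.arange(n), np.arange(n)] += 1.0 / rshunt
        return G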

Homotopy-based approaches

If we cannot solve a problem with the NR algorithm we try solving a much simpler problem first. In one iteration of the homotopy-based approach we slightly modify the problem so that it becomes more similar to the original (unsolvable) problem and apply the NR algorithm to this modified problem starting with the solution obtained from the simple problem. Usually we obtain good convergence and solve the problem successfully (because, after all, it is still a simple problem). In the next iteration we again change the problem a bit so that it now resembles the original problem even more. We use the last obtained solution as the initial solution and apply the NR algorithm again. We repeat this procedure until the modified problem becomes identical to the original problem. The last obtained solution is therefore the solution of the original problem.

There are many ways in which one can apply homotopy to difficult circuits. We briefly describe some of the approaches used by circuit simulators.

GMIN stepping

In GMIN stepping resistors are added between every node and the ground. But unlike shunt resistors, which are added permanently, the resistors in GMIN stepping are added only temporarily. We start by adding large resistors and try to solve the circuit. If we fail we decrease the resistances and try again. Sooner or later the resistors become small enough that their contributions dominate the diagonal of the Jacobian and the system becomes solvable. Now homotopy comes to the rescue as we have a simple problem that we can solve. In every iteration of GMIN stepping we increase the resistance of the resistors by a certain amount (the step size) and try to solve the circuit using the solution obtained in the previous iteration as the initial approximate solution. If we fail we try again, but with a smaller step size. After a successful iteration we increase the step size. Hopefully, after several iterations we manage to increase the resistances to such an extent that their effect becomes negligible (i.e. they become greater than 1/gmin). At this point we remove them and apply the NR algorithm one last time (with the last solution used as the initial approximate solution) to solve the original circuit. If this final NR run fails, GMIN stepping is considered to have failed.

In SPICE OPUS the value of gmin that is considered negligible is specified by the simulator parameters gmin (for AC and TRAN analysis) and gmindc (for operating point and DC analysis). The default value is 10^-12. The number of GMIN steps (for both decreasing and increasing the added resistances) is specified by the gminsteps simulator parameter. When the step size in GMIN stepping becomes too small (i.e. the progress of GMIN stepping slows down too much) the damped NR algorithm is used until a solution for the problematic iteration is found.
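
The control loop of GMIN stepping can be sketched as below. solve_with_gshunt(gshunt, x0) is a hypothetical helper that runs the NR algorithm on the circuit with a conductance gshunt added from every node to ground; the starting value and the step-size control are illustrative choices, not the exact SPICE OPUS algorithm.

    def gmin_stepping(solve_with_gshunt, gmin=1e-12, gstart=1e-2,
                      factor=10.0, maxsteps=50):
        x = solve_with_gshunt(gstart, None)        # simple, heavily shunted problem
        gshunt = gstart
        for _ in range(maxsteps):
            if gshunt <= gmin:                     # shunts are now negligible
                return solve_with_gshunt(0.0, x)   # remove them, solve once more
            try:
                x = solve_with_gshunt(gshunt / factor, x)
                gshunt /= factor
                factor *= 2.0                      # success: take a larger step
            except RuntimeError:
                factor = max(2.0, factor / 2.0)    # failure: take a smaller step
        raise RuntimeError("GMIN stepping failed")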

Source stepping

In source stepping the simple problem to solve is the circuit with all independent sources turned off (i.e. set to 0). In every iteration of source stepping we increase the values of the independent sources towards their true values by a certain step size. If the iteration is successful we increase the step size. If it fails we try again with a smaller step size. Eventually the independent sources reach their true values. At that point we have the solution of the original circuit.

In SPICE OPUS the number of source stepping iterations is limited to a value specified by the srcsteps parameter (10 by default). If the step size becomes too small the damped NR algorithm is used for the problematic iteration.
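
A simplified sketch with a fixed number of steps is shown below; solve_with_scaled_sources(alpha, x0) is a hypothetical helper that runs the NR algorithm with all independent sources multiplied by alpha. A real implementation adapts the step size on failure, as described above.

    def source_stepping(solve_with_scaled_sources, srcsteps=10):
        x = solve_with_scaled_sources(0.0, None)     # sources turned off: trivial
        for step in range(1, srcsteps + 1):
            alpha = step / srcsteps                  # ramp sources towards 1.0
            x = solve_with_scaled_sources(alpha, x)  # previous solution seeds NR
        return x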

Source lifting and cmin stepping

Most circuits contain reactive elements (capacitors and inductors). These elements are ignored in operating point and DC analysis (i.e. capacitors are removed and inductors become short circuits). On the other hand, reactive elements provide another possibility for applying the homotopy-based approach to the problem of computing the solution of a nonlinear circuit. If we analyze a stable circuit (including all reactive elements) in the time domain with all independent sources slowly ramped up from zero to their actual values, we can expect to reach the DC solution of the circuit if we simulate up to a sufficiently distant timepoint at which all derivatives with respect to time vanish. This approach does not work for oscillators.

Although at this point we don't understand how time-domain analysis of circuits containing reactive elements is performed, we can still outline the main idea of source lifting. In the time-domain analysis we assume the reactive elements initially store no energy (i.e. the initial voltages across capacitors and the initial currents flowing through inductors are all zero). Together with all independent sources set to zero we have a circuit that is trivial to solve. By ramping up independent sources we implicitly perform homotopy iterations as we go from timestep to timestep in time-domain analysis.

In SPICE OPUS the srclriseiter simulator parameter specifies the number of timesteps during which the ramping up of the sources is performed. If the srclrisetime simulator parameter is specified, ramping up is not performed on a timestep-by-timestep basis. Instead it is performed until the simulation reaches the time specified by srclrisetime. The srclminstep simulator parameter sets a lower bound on the timestep. srclmaxtime and srclmaxiter specify the time and the number of timepoints at which the time-domain analysis stops. If the values of the unknowns in the time-domain analysis stabilize within their respective tolerances and remain there for the number of timepoints specified by the srclconviter simulator parameter, the time-domain analysis is terminated earlier. The solution obtained at the final timepoint is used as the initial solution approximation in the NR algorithm for computing the DC solution of the circuit. If the NR algorithm fails to converge, source lifting is considered to have failed.

Fast changes of the unknowns can cause problems in the time-domain analysis. If source lifting fails and the cminsteps simulator parameter is set to a value greater than zero, capacitors are temporarily connected between the circuit's nodes and the ground node. The value of the capacitance is specified by the cmin simulator parameter. Source lifting is repeated with the modified circuit. If it fails again the values of the added capacitors are increased and the procedure is repeated. The number of repetitions is specified by the cminsteps simulator parameter. For problematic circuits one can disable the initial source lifting without added capacitors by setting the noinitsrcl simulator parameter.

Sometimes a circuit oscillates in time-domain analysis. This means that source lifting will most likely fail. For such circuits source lifting can be disabled by setting the nosrclift simulator parameter.

Fine-tuning the algorithms for achieving convergence in SPICE OPUS

Assigning numbers from 1 to 3 to simulator parameters gminpriority, srcspriority, and srclpriority sets the order in which GMIN stepping, source stepping, and source lifting are applied, respectively. For particularly troublesome circuits one can disable the initial NR algorithm and go straight to the advanced algorithms by setting the noopiter simulator parameter. The opdebug simulator parameter turns on the verbose mode in SPICE OPUS. This produces a lot of messages which can help debug convergence problems.

SPICE OPUS automatically tunes the parameters of its algorithms for solving the operating point of a circuit. This tuning can be disabled by setting the noautoconv simulator parameter.