Approximating the continuous time scale with discrete steps

Suppose we want to simulate the behavior of a circuit from t=0 to t=tend. Clearly, we cannot solve the circuit at every point of the continuous time scale. Therefore we divide the timescale into discrete steps. Let tk denote the timepoint where the latest solution of the circuit's equations was computed. The solution we are trying to compute corresponds to timepoint tk+1. Let hk=tk+1-tk denote the k-th timestep between tk and tk+1 (Fig. 1). In circuit simulation the timestep is usually not constant because the dynamics of the circuit's behavior can vary greatly over time. When the circuit's response changes quickly the timestep must be appropriately small. When the circuit "sleeps" and nothing interesting is happening with the voltages and currents the timestep can be large.


Fig. 1: Discretization of the timescale in time-domain analysis. Note that the timestep (h) is not constant.

Numerical integration of differential equations

As we learned in previous chapters, the system of equations describing the circuit's behavior in the time domain can be formulated as

where the vector-valued function g represents the resistive part of the circuit, the vector-valued function q represents the reactive part of the circuit, vector y represents the independent sources (excitations), and vector x represents the unknowns (node potentials and branch currents). This is a nonlinear ordinary differential equation (ODE). Until now we managed to avoid the derivative term. In operating point analysis and DC small-signal analysis it simply vanished. In AC small-signal analysis we transformed the equation into the frequency domain, which turned the derivative with respect to time into a simple multiplication by jω. But now we must face the music and deal with the derivative term.
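
For reference, a sketch of this formulation (the exact form is the one introduced in the earlier chapters; here the derivative acts on the charge/flux vector q):

g(x(t)) + d q(x(t))/dt = y(t)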

The usual approach is to approximate the derivative of a quantity with a weighted sum of its values and its derivatives at past timepoints tk, tk-1, ... This procedure is referred to as numerical integration of ODEs. In fact, we go even one step further. We also use the value of the quantity at the timepoint for which we are trying to solve the circuit (tk+1). If we do the latter the numerical integration is referred to as implicit numerical integration. In explicit numerical integration we avoid using quantities that have not been computed yet.

For the sake of simplicity we introduce the following notation. Let q̇ denote the derivative of q with respect to time. Subscripts denote timepoint indices, i.e. qk corresponds to timepoint tk. A very simple implicit integration algorithm is the backward Euler algorithm where the derivative is expressed with the last computed value and the one that is about to be computed.
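
In this notation the backward Euler formula reads (a sketch, consistent with the derivation given later in this section):

q̇k+1 ≈ (qk+1 - qk)/hk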


Fig. 2: Illustration of the backward Euler and the trapezoidal formula. The dashed lines represent tangents and secants. The expressions next to the dashed lines are their respective slopes.

Another simple algorithm is the trapezoidal algorithm, whose origins can be traced back to a finite difference approximation of the derivative in which the secant slope over the interval is equated with the average of the derivative values at its endpoints.

From here we can express the trapezoidal formula
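
Sketched in the notation introduced above, the two steps are

(qk+1 - qk)/hk ≈ (q̇k+1 + q̇k)/2
q̇k+1 ≈ 2·(qk+1 - qk)/hk - q̇k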

Fig. 2 illustrates the quantities that appear in the aforementioned integration algorithms.

Nonlinear reactive elements and their matrix footprints

If, for instance, the backward Euler formula is applied to the circuit equations, we obtain the following algebraic equation for the circuit's response at tk+1.

In this equation q(xk) and yk+1 are known (the former one can be computed from the solution at the previous timepoint and the latter one represents the values of circuit's excitations at the timepoint where we are solving the circuit). We can rearrange the equation by separating the known and the unknown terms.
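
A sketch of both forms (assuming the formulation of the circuit equations given above):

(q(xk+1) - q(xk))/hk + g(xk+1) = yk+1
q(xk+1)/hk + g(xk+1) - yk+1 = q(xk)/hk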

This is a nonlinear equation which can be solved by means of the Newton-Raphson algorithm. From this equation we can quickly deduce the matrix footprint of a linear capacitor. We start with the constitutive relation in which the charge is expressed in terms of the voltage as q=Cu. In the KCL part of the circuit equations the capacitor contributes a term of the form q̇ (the capacitor current). When we are solving at timepoint tk+1 we must express q̇k+1 with past values of the circuit's response and with qk+1 (after all, we are using implicit integration). Suppose we use the backward Euler formula, which yields


Fig. 3: Linear capacitor (left) and its model used for computing the transient response at one timestep (right).

Here uk+1 is the unknown and uk is the circuit's solution at the last computed timepoint. Because q̇ is the capacitor current we can interpret the obtained equation as a parallel connection of a resistor R and an independent current source I (Fig. 3, right).

From this we can conclude that the element footprint of a capacitor connected between nodes a and b for solving the circuit at timepoint tk+1 is

Similarly, the contribution to the right-hand side (contribution of the current source) is
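
Putting the pieces together, here is a sketch of the model and of its footprint. The sign convention assumed here is that the capacitor current i flows from node a to node b and u = va - vb; a particular simulator may organize the signs differently.

i = q̇k+1 ≈ C·(uk+1 - uk)/hk = uk+1/R + I,   R = hk/C,   I = -C·uk/hk
Matrix footprint: +C/hk added at (a,a) and (b,b), -C/hk added at (a,b) and (b,a)
Right-hand side contribution: -I added to row a, +I added to row b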

If the capacitor is nonlinear we drop the assumption that q=Cu. The backward Euler integration formula yields

Note that qk+1 is a nonlinear function of the capacitor voltage uk+1. Therefore a nonlinear capacitor can be interpreted as a nonlinear resistor whose current is equal to f(uk+1) in parallel with an independent current source I (Fig. 4, center). We can see that numerical integration converted a nonlinear capacitor into a nonlinear subcircuit.


Fig. 4: Nonlinear capacitor (left), its model after numerical integration (center), and the model used in one iteration of the NR algorithm that solves the circuit at tk+1 (right).

The NR algorithm is used for solving the resulting system of nonlinear equations. In one iteration of the NR algorithm the nonlinear resistor is linearized. Let superscripts denote NR iteration indices and let j denote the last completed iteration of the NR algorithm. Then the linearization of f(uk+1) in the (j+1)-th iteration of the NR algorithm is

Because the derivative of charge (q) with respect to voltage (u) is the differential capacitance c(u), we can write

This corresponds to an independent current source INR in parallel with a linear resistor R (Fig. 4, right) where

For a linear capacitor we have c(u)=C and q(u)=Cu which implies INR=0.
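
To make the preceding steps concrete, here is a sketch of the backward Euler model of a nonlinear capacitor and of its linearization. For brevity, u(j) denotes the capacitor voltage uk+1 obtained in the j-th NR iteration; the exact sign conventions are those of Fig. 4.

q̇k+1 ≈ (q(uk+1) - qk)/hk = f(uk+1) + I,   f(u) = q(u)/hk,   I = -qk/hk
f(u(j+1)) ≈ f(u(j)) + (c(u(j))/hk)·(u(j+1) - u(j))
R = hk/c(u(j)),   INR = (q(u(j)) - c(u(j))·u(j))/hk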

Now suppose we use the trapezoidal integration formula. The current flowing into a linear capacitor (q̇) is now expressed as

When the trapezoidal algorithm is used the circuit equivalent of a linear capacitor in one iteration of the Newton-Raphson algorithm is the same as the circuit in Fig. 3 (right). The only difference is that the resistance of the resistor is R=hk/(2C) and the current generated by the current source is I=-(2Cuk/hk+q̇k).
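
Sketched in the same way as for the backward Euler formula, the trapezoidal expression for the capacitor current is

q̇k+1 = 2C·(uk+1 - uk)/hk - q̇k = uk+1/R + I,   R = hk/(2C),   I = -(2C·uk/hk + q̇k)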

One might ask at this point: where do we get q̇k from? Fortunately the derivative is computed by the integration formula at every past timepoint for which the circuit's equations were solved. We just need to store it for later use. This leaves us in a bit of a dilemma: what do we do when we are solving the circuit's equations at the first timepoint? For the first timepoint we resort to integration formulae which do not require the knowledge of the derivative at past timepoints (e.g. the backward Euler formula). We can switch to a more advanced integration formula (like trapezoidal integration) at the second timepoint.

If the capacitor is nonlinear, the trapezoidal integration formula results in

This again corresponds to the model circuit in Fig. 4 (center) with the following values for its elements

After linearization is applied we arrive at the model circuit in Fig. 4 (right) which is used at tk+1 for computing the (j+1)-th iteration of the NR algorithm. The elements of the model circuit are
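
A sketch of both sets of element values, using the same shorthand u(j) as in the backward Euler case above:

q̇k+1 ≈ 2·(q(uk+1) - qk)/hk - q̇k = f(uk+1) + I,   f(u) = 2·q(u)/hk,   I = -(2qk/hk + q̇k)
R = hk/(2·c(u(j))),   INR = 2·(q(u(j)) - c(u(j))·u(j))/hk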

Again, we can see that INR vanishes for linear capacitors.

Generalized approach to numerical integration formulae

There exist many integration formulae. In numerical mathematics their purpose is to express the value of a quantity q at timepoint tk+1 with the values of the quantity at past timepoints and the values of its first derivative q̇ at past timepoints and at tk+1 (the latter is used only in implicit formulae). The following ansatz can be used for expressing an arbitrary integration formula.

Suppose q is a function of time. For most functions an integration formula provides only an approximation qk+1 to the actual value of q(tk+1). Exceptions are polynomials of order not exceeding n. An integration formula has order n if it is exact for polynomials of order up to and including n. By assuming that q(t) is a polynomial of the form αt^j we can compute the coefficients ai and bi.
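
One common way of writing such an ansatz is sketched below. Pulling the factor hk in front of the derivative terms is an assumption on our part; it is, however, consistent with the dimensionless coefficient values (a0=1, b0=b-1=1/2) obtained for the trapezoidal formula later in this section. Here p and r denote the largest indices of the nonzero a and b coefficients, respectively.

qk+1 = a0·qk + a1·qk-1 + ... + ap·qk-p + hk·(b-1·q̇k+1 + b0·q̇k + ... + br·q̇k-r)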

Derivation of the backward Euler formula

Suppose we want to derive an implicit integration formula of order n=1 (the backward Euler formula). Because a first-order polynomial has only two coefficients, the integration formula can have only two nonzero coefficients. We are constructing an implicit integration formula so b-1 will be nonzero. As the second nonzero coefficient we choose a0. The ansatz for the integration formula is therefore

For a polynomial q(t)=α we have

Substituting this in our ansatz yields the first equation for computing the coefficients b-1 and a0.

For a polynomial q(t)=αt we can write

By substituting this in the ansatz we get the second equation for computing coefficients b-1 and a0.

Equations can be simplified if we assume tk=0 which implies tk+1=hk. The system of equations is now

This system is already solved. The obtained numerical integration formula is therefore

By expressing q̇k+1 we get the backward Euler formula we introduced earlier.
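
Collecting the steps above (under the assumed form of the ansatz, with the factor hk in front of the derivative terms):

ansatz:      qk+1 = a0·qk + hk·b-1·q̇k+1
q(t) = α:    α = a0·α  ⇒  a0 = 1
q(t) = αt:   α·hk = a0·0 + hk·b-1·α  ⇒  b-1 = 1
result:      qk+1 = qk + hk·q̇k+1  ⇒  q̇k+1 = (qk+1 - qk)/hk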

Derivation of the trapezoidal formula

The trapezoidal formula is a second-order implicit formula (n=2). We obtain it if we assume the nonzero coefficients are a0, b0, and b-1. The ansatz for the formula is

By substituting q(t)=α and q(t)=αt in the ansatz we get the first two equations for computing the coefficients.

The third equation is obtained by substituting q(t)=αt² in the ansatz. For this purpose we first compute

By substituting this in the ansatz we obtain the third equation.

Again, we assume tk=0 and tk+1=hk which greatly simplifies the equations and yields the following linear system.

Solving the system yields a0=1, b0=1/2, and b-1=1/2. After substituting these values in the ansatz we get

from where the trapezoidal formula follows after solving for q̇k+1.
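
Written out under the same assumptions, the last two steps read

qk+1 = qk + hk·(q̇k + q̇k+1)/2
q̇k+1 = 2·(qk+1 - qk)/hk - q̇k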

Adams-Moulton integration formulae

An Adams-Moulton integration formula of order n is obtained if we choose b-1, ..., bn-2, and a0 to be the nonzero coefficients in the general integration formula ansatz. The backward Euler formula is in fact the Adams-Moulton formula of order n=1. The trapezoidal formula is the Adams-Moulton formula of order n=2.

Gear integration formulae

These formulae are also referred to as backward differentiation formulae (BDF). The BDF formula of order n is obtained if a0, ..., an-1 and b-1 are chosen as the nonzero coefficients in the general integration formula ansatz. The BDF formula of order n=1 is in fact the backward Euler formula.
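
As a concrete illustration (not part of the general derivation), for a constant timestep h the BDF formula of order n=2 takes the well-known form

q̇k+1 = (3qk+1 - 4qk + qk-1)/(2h)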

Single-step vs. multistep

If the integration formula involves only values at tk and tk+1 then it is a single-step formula. Such a formula depends only on the last step length (hk). The backward Euler formula and the trapezoidal formula are single-step integration formulae. Because their coefficients depend only on hk, they can be computed in advance.

Multistep formulae involve values at more than the previously mentioned two timepoints. If the step is constant (i.e. hk=hk-1=hk-2=...) their coefficients can be computed in advance. Unfortunately, in circuit simulators the step varies with the circuit's dynamics. Therefore the coefficients of multistep methods must be recomputed at every timestep. For a formula of order n this involves solving a linear system of n+1 equations.

How simulators apply numerical integration

A quantity (e.g. capacitor charge, inductor flux) must be numerically integrated to get rid of its derivative with respect to time and replace it with a weighted sum of past values of the quantity and its derivative. As we already know, the value of the quantity that is about to be computed (at tk+1) also takes part in this weighted sum if implicit numerical integration formulae are used. For this purpose the simulator stores past values of the quantity and of its derivative with respect to time. The length of this storage depends on the type and the maximal order of the integration algorithm used by the simulator.

If a multistep algorithm is used the simulator recomputes the integration formula coefficients before the Newton-Raphson algorithm starts solving the circuit for one timepoint. Because the last timestep (hk) is part of the integration formula ansatz, the coefficients must also be recomputed if the simulator decides to abandon a solution at a particular timepoint, reduce the timestep, and repeat the Newton-Raphson algorithm with a shorter timestep (this happens when the obtained solution is not accurate enough; we will discuss this issue later when we introduce the local truncation error).

What about explicit integration?

There also exist various explicit integration algorithms. For instance, if we select the nonzero coefficients to be b0, ..., bn-1 and a0 we obtain the Adams-Bashforth integration formula of order n. The forward Euler formula is in fact the Adams-Bashforth formula of order n=1.

Explicit integration methods tend to produce unstable results (the circuit response explodes) if the timestep (hk) is greater than the time constant of the circuit's response. Consequently the timestep must be kept small, even when the circuit's response does not change much. This is particularly problematic in "stiff" circuits with multiple time constants, of which one is small and one is large. Such circuits can be analyzed with explicit integration only when the timestep is smaller than the shortest time constant of the circuit. To see any meaningful results we must simulate the circuit for a multiple of the longest time constant. This means that a large number of timesteps must be taken. Because every step contributes some numerical error (we will later name it local truncation error) and the errors accumulate, the error at the final timepoint can become greater than the response itself which makes the obtained result useless.
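
A minimal example of this instability: suppose the circuit has a single time constant τ so that q̇ = -q/τ (e.g. a discharging RC circuit). The forward Euler formula qk+1 = qk + hk·q̇k then gives

qk+1 = (1 - hk/τ)·qk

which decays only for hk < 2τ and grows without bound for larger timesteps, i.e. roughly when the timestep exceeds the time constant.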

As a side note, let us mention that if we choose a0, ..., an to be the nonzero coefficients we obtain an n-th order polynomial extrapolation formula for computing the value of q at tk+1. The obtained formula is equivalent to constructing an n-th order interpolation polynomial that matches the quantity at timepoints tk-n, ..., tk and evaluating it at tk+1.

Local truncation error

In the process of deriving an integration formula of order n we assumed that the response q(t) is a polynomial of order not exceeding n. Now suppose the response is a polynomial of order exceeding n. As any sufficiently smooth function can be expressed in the form of a Taylor series (i.e. as a polynomial of infinite order), we can expect that the result produced by the integration algorithm will differ from the actual response q(t). Even if the values of q and q̇ are known exactly at past timepoints tk, tk-1, ..., we can expect that the value computed by the integration algorithm at tk+1 will differ from q(tk+1). This difference is referred to as the local truncation error (LTE) (see Fig. 5). LTE is expressed in terms of q (i.e. for capacitors this is the stored charge and for inductors the stored magnetic flux).


Fig. 5: Local truncation error εk+1 is the difference between the value of a quantity at tk+1 computed via numerical integration and its actual value under the assumption that the quantity and its derivative are known exactly for all earlier timepoints t≤tk.

Suppose the exact values of q are given by a Taylor series in the neighbourhood of tk

where q(j) denotes the j-th derivative of q with respect to time. The first derivative of the exact response is

The integration algorithm computes an approximation of q(tk+1) denoted by qk+1. We substitute q(tk) and q̇(tk) for qk and q̇k in the integration algorithm ansatz to obtain the approximate value of q(tk+1).

We separate the term corresponding to j=0.

and express the sum over j as a series comprising terms with powers of hk.

On the other hand, the exact value of the response at tk+1 can be expressed with the Taylor series

The LTE is defined as the difference between qk+1 and q(tk+1). Due to the way we expressed these two terms LTE can be expressed as a series comprising terms with powers of hk.

By comparing the last three expressions we get

When we were deriving the equations for computing coefficients ai and bi we required qk+1=q(tk+1) for q(t) that was a polynomial of order not exceeding n. This is possible only if coefficients Cj are zero for all j=0,1,2,...,n. In fact, the first n+1 equations obtained from the last two expressions are exactly those that are used for computing coefficients ai and bi. For an integration formula of order n the first nonzero coefficient in the series expressing the LTE is therefore Cn+1 which is also referred to as the error coefficient. The series expressing the LTE has infinitely many terms. For the purpose of estimating the LTE we keep only the term with the lowest power of hk.

Let us compute the error coefficient for the backward Euler formula, for which n=1 (the error coefficient is therefore the one with index j=n+1=2). Because the only nonzero coefficients in the formula are a0 and b-1 we have p=0 and r=-1. The error coefficient is therefore

The error coefficient for the trapezoidal formula (n=2, p=0, r=0) is obtained as
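
Without repeating the bookkeeping, the resulting leading terms of the LTE (with the sign convention εk+1 = qk+1 - q(tk+1) used above; the exact normalization of the coefficients Cj depends on how the series is written) are

backward Euler:  εk+1 ≈ (hk²/2)·q(2)(tk)
trapezoidal:     εk+1 ≈ (hk³/12)·q(3)(tk)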

Whenever the coefficients of the integration formula are recomputed the error coefficient must also be recomputed.

One piece is still missing before we can actually compute the LTE: the (n+1)-th derivative of q at tk. We can estimate it by applying polynomial interpolation of order n+1 to the latest n+2 values of q. Because the LTE is estimated after the Newton-Raphson algorithm solves the circuit at tk+1, the values of q used in the interpolation are qk-n, ..., qk+1. The (n+1)-th derivative of the obtained polynomial is constant and does not depend on time. A convenient way of computing it is to use divided differences. Suppose we have n+1 points (x0, y(x0)), ..., (xn, y(xn)). The Newton interpolation polynomial of order n that interpolates these points can be expressed as

where basis polynomials are defined as

Note that η0(x)=1. The coefficients αi are defined recursively with divided differences as

The n-th derivative of Nn(x) can be expressed as

Assuming values qk-n, ..., qk, qk+1 correspond to timepoints tk-n, ..., tk, tk+1 the LTE can be expressed as
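
As an illustration, here is a minimal Python sketch of this computation. The function and variable names are illustrative and are not taken from any particular simulator; t holds the timepoints tk-n, ..., tk+1 and q the corresponding values.

import math

def divided_differences(t, q):
    # Return the coefficients alpha_0, ..., alpha_m of the Newton interpolation
    # polynomial through the points (t[i], q[i]).
    alpha = list(q)
    m = len(t)
    for j in range(1, m):
        # After this pass alpha[i] holds the divided difference of order j
        # built from the points at t[i-j], ..., t[i].
        for i in range(m - 1, j - 1, -1):
            alpha[i] = (alpha[i] - alpha[i - 1]) / (t[i] - t[i - j])
    return alpha

def highest_derivative_estimate(t, q):
    # The m-th derivative of the interpolation polynomial through m+1 points
    # (a polynomial of order m) is constant and equals m! times the
    # highest-order divided difference.
    m = len(t) - 1
    return math.factorial(m) * divided_differences(t, q)[-1]

With the latest n+2 values of q this estimates the (n+1)-th derivative of q needed in the LTE expression.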

Timestep control

The transient analysis algorithm can be summarized as follows.

Integration order n;
Circuit solved up to and including tk;
Current timestep is hk;
Closest breakpoint in the future is at tbr;
Maximal allowed integration algorithm order is maxord;
Choose initial timestep h0;
n := 1;
k := 0;
t0 := 0;
while tk<tstop do
  // Are we crossing a breakpoint?
  if tk+hk>tbr then
     // Cut the timestep.
     hk := tbr - tk;
  endif
  Solve circuit at tk+hk;
  if NR algorithm converged slowly or failed to converge then
     // Timestep rejected, reduce algorithm order to 1.
     n := 1;
     Choose new timestep h* < hk;
     hk := h*;
  else
     // Timestep accepted, increase algorithm order, if possible.
      n := min(n+1, maxord);
     Compute new timestep h* so that LTE stays bounded;
     h* := min(2hk, h*);
     if h* is too small then
       Stop simulation (timestep too small);
     endif
      if tk+hk = tbr then
         // Integration order must be set to 1 after the circuit is solved at a breakpoint.
         n := 1;
         // Move on to the next breakpoint.
         Advance tbr to the next breakpoint;
      endif
     // Move to the next point on the timescale.
     tk+1 := tk+hk;
     hk+1 := h*;
     k := k+1;
  endif
endwhile

One iteration of the while loop tries to solve the circuit at tk+hk. The circuit's solution at tk is used by the NR algorithm as the initial approximate solution for solving the circuit at tk+hk. After the NR algorithm finishes there are two possible outcomes: the timestep is either rejected or accepted.

A timestep is rejected if the Newton-Raphson algorithm fails to converge or converges too slowly.

The timestep is reduced and the integration order is set to 1. In the next iteration of the while loop the simulator tries to solve the circuit with a shorter timestep.

If the timestep is not rejected, then it is accepted. With the circuit's solution at tk+1=tk+hk (and all previous stored solutions) a new timestep h* is computed in such manner that the LTE is kept bounded at tk+2=tk+1+h*. Index k is increased and the simulator advances to the next timepoint.
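
One common way of choosing h* is sketched below; the exact rule used by a particular simulator may differ. Because the leading term of the LTE is proportional to hk^(n+1), requiring that the estimated LTE stay below a prescribed bound εmax suggests

h* ≈ hk·(εmax/|εk+1|)^(1/(n+1))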


Fig. 6: Excitation (e.g. independent voltage source) with two breakpoints (left). Passing the nearest breakpoint (tk+1≥tb1) results in the timestep being cut so that tk+1=tb1 (right).

Sometimes the circuit's excitations contain discontinuities in their value or derivatives. Timepoints tb where such discontinuities occur are referred to as breakpoints. If a breakpoint lies in the interval tk < tb < tk+1 the timestep is shortened (cut) so that tk+1=tb. After the circuit's solution at a breakpoint is accepted the simulator reduces the integration algorithm order to 1.

Choosing the order of the integration formula

For the first timepoint an integration formula of order n=1 is used (backward Euler formula) which does not require the values of the derivative of q at past timepoints. Whenever a timepoint is rejected or a breakpoint is reached the order of the integration formula used in the next run of the Newton-Raphson algorithm is reduced to n=1.

Whenever a timepoint is accepted the order of the integration formula used for the next timepoint can be increased by 1. The order of the integration formula is limited. In SPICE OPUS when an Adams-Moulton formula is used the maximal allowed order is n=2. For Gear formulae the maximal allowed order is 6.

Predictor-corrector approach to numerical integration

The initial approximate solution used by the Newton-Raphson algorithm for solving the circuit at tk+1 is the circuit's solution at tk. This is assumed to be a good starting point, from which we expect the Newton-Raphson algorithm to reach convergence in only a few iterations. If, however, we have some means of obtaining a better initial approximate solution, we can expect even faster convergence.

The method for computing the initial approximate solution is also referred to as the predictor. The method used for performing numerical integration is also referred to as the corrector.

When an Adams-Moulton formula of order n is used in SPICE OPUS as the corrector the Adams-Bashforth explicit integration formula of order n is used as the predictor, but instead of applying it to components of q we apply it to components of the vector of unknowns (x). Recall that the nonzero coefficients are a0 and b0, ..., bn-1. The predicted solution is then expressed as

Because the derivative of the circuit's response with respect to time is usually not available, we have to compute it numerically with finite differences (i.e. divided differences of first order).
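
A sketch of the resulting predictor, with the derivatives of x replaced by first-order divided differences (e.g. ẋk ≈ (xk - xk-1)/hk-1):

xk+1(predicted) = a0·xk + hk·(b0·ẋk + b1·ẋk-1 + ... + bn-1·ẋk-n+1)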

When a Gear formula of order n is used as the corrector, simple polynomial extrapolation of order n is used as the predictor. Note that in our general approach to integration formulae we have to choose a0, ..., an as the nonzero coefficients to obtain the coefficients of polynomial extrapolation of order n. The predicted value is then computed as
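
In the same notation this predictor can be sketched as

xk+1(predicted) = a0·xk + a1·xk-1 + ... + an·xk-n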

When predictor-corrector numerical integration is used the LTE can be computed in a fairly simple way. The LTE of the predictor can be estimated as

where x(tk+1) denotes the exact response of the circuit. The LTE of the corrector, on the other hand, is given by

The difference between the value obtained from the predictor and the value obtained from the corrector (i.e. the result of the Newton-Raphson algorithm) is

By comparing the last two expressions we can see that they differ only by a constant factor. Therefore the LTE of the corrector can be expressed as
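
Spelled out as a sketch (Cpred and Ccorr denote the error coefficients of the predictor and of the corrector, while xpred and xcorr denote the predicted and the corrected solution at tk+1):

xpred - x(tk+1) ≈ Cpred·hk^(n+1)·x(n+1)(tk)
xcorr - x(tk+1) ≈ Ccorr·hk^(n+1)·x(n+1)(tk)
xpred - xcorr ≈ (Cpred - Ccorr)·hk^(n+1)·x(n+1)(tk)

so the LTE of the corrector can be estimated as

LTE of corrector ≈ (Ccorr/(Cpred - Ccorr))·(xpred - xcorr)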

The obtained expression for computing the LTE does not require the computation of divided differences.