
Circuit Analysis and Optimization


Circuit topology, constitutive relations, and nodal analysis

1st Lecture W1, October

In this lecture we are going to get acquainted with the concept of lumped circuits, Kirchhoff's current and voltage laws, and the constitutive relations of circuit elements. These equations form the mathematical model of a circuit. The number of equations and unknowns can be greatly reduced if we introduce nodal voltages (which results in the nodal analysis approach to circuit equations). This also brings some restrictions that we are going to loosen a bit in later lectures.

To make things simpler we focus on linear circuits for now. This makes it possible for us to write the equations in matrix form. By taking a long hard look at the coefficient matrix and the vector of right-hand values we observe simple patterns (element footprints) that enable us to construct the system of equations on the fly.



Modified nodal analysis

2nd Lecture W2, October

Nodal analysis has one great disadvantage. It cannot handle elements for which constitutive relations express branch voltages with branch currents (e.g. independent voltage source). In this lecture we are going to introduce modified nodal analysis. If we cannot explicitly express a branch current with branch voltages in some constitutive relation we simply keep that branch current as an unknown. To make sure the system of equations is fully determined we must add an additional equation for every branch current we decide to keep. This additional equation is the corresponding element's constitutive relation.

Now we can handle independent voltage sources, linear controlled voltage sources, and linear current controlled sources. Modified nodal analysis is the approach used in most circuit simulators today. With everything we learned up to now it is fairly easy to handle arbitrary linear elements in our equations. We demonstrate this with several examples: ideal transformer, ideal opamp with negative feedback, and inverting amplifier built with an opamp.



Solving systems of linear equations

3rd Lecture W3, October

Solving systems of linear equations is nothing new. Several approaches were developed in the past. For starters we take a look at Gaussian elimination. We examine its computational cost and show how it can fail. To improve the robustness of Gaussian elimination we introduce pivoting. Gaussian elimination leads to many unnecessary operations when it is used to solve multiple systems of equations with the same coefficient matrix (which is common in circuit simulation). To reduce the number of operations we introduce LU-decomposition followed by forward and backward substitution.

One could, of course, take a different approach by first inverting the coefficient matrix. But this approach has one serious disadvantage which becomes obvious when we introduce sparse matrices (i.e. matrices where most entries are zero). If the coefficient matrix is sparse, its inverse is generally not. This is not the case with LU-decomposition where we can obtain sparse L and U matrices.



Sparse matrices, solving large systems of linear equations

4th Lecture W1, November

Sparse matrices are matrices where most entries are zero. Due to their nature the coefficient matrices of circuit equations are sparse. This makes it possible to analyze large circuits without prohibitively large memory requirements. But there is a catch. When performing LU-decomposition of a sparse matrix we must make sure that as few new nonzero entries as possible are created during decomposition (fill-in). Unfortunately one cannot have both a small fill-in and a small numerical error. This is because avoiding fill-in dictates the choice of matrix pivots, which then cannot be chosen in a way that would result in minimal numerical error.



Nonlinear elements and the Newton-Raphson algorithm

5th Lecture W2, November

When we introduce nonlinear elements we can no longer write equations in matrix form. Instead they are now written as a list of nonlinear equations. We take a look at the nonlinear models of selected semiconductor elements (diodes, transistors). If the equations are twice continuously differentiable we can numerically solve them with the Newton-Raphson algorithm. The algorithm iteratively approaches the solution by linearizing the equations and solving the resulting linear system to produce a new approximation to the solution of the original nonlinear system. The linearized system can again be constructed by means of the element footprints approach.

We discuss the stopping conditions for the Newton-Raphson algorithm. The algorithm can also fail to converge to a solution. Several approaches to finding a solution in case of non-convergence are discussed. Strong nonlinearities in the element characteristics can also be the cause of non-convergence. Approaches for dealing with such elements are presented.


Introducing nonlinear elements

Until now all circuit elements were linear which made it possible for us to write down the system of linear equations directly from the circuit schematic using element footprints. When the elements are nonlinear their constitutive relations become nonlinear equations. Take for instance a semiconductor diode. Its current (iD) is expressed with the voltage across its terminals (uD) by the following relation

$i_D = I_S \left( e^{u_D / V_T} - 1 \right)$

where IS is the diode's saturation current and VT is the thermal voltage. Assuming all constitutive relations (with the exception of voltage sources) are in the form where device currents are expressed with branch voltages we can substitute them in the KCL equations (or add them to the system of equations if we are dealing with a voltage source). The system of equations becomes nonlinear. As such it can be formulated as

$f_i(x_1, x_2, \dots, x_n) = 0, \qquad i = 1, 2, \dots, n$

where x is the vector of unknowns and fi is the i-th nonlinear function defining the i-th nonlinear equation. Often we use a shorthand notation by introducing a vector-valued function g which yields a vector with n components for every argument x. The whole system then reads

$\mathbf{g}(\mathbf{x}) = \mathbf{0}$

Solving such systems of equations can be done efficiently by means of Newton-Raphson algorithm.

The Newton-Raphson algorithm for 1-dimensional problems
We illustrate the Newton-Raphson algorithm on a 1-dimensional (n=1) example. Suppose we are trying to solve

$e^x = 2 - x$

If we rewrite this equation as $f(x) = 0$ we see that

$f(x) = e^x + x - 2$

The Newton-Raphson algorithm is an iterative one. With every iteration it improves the solution. Suppose we start with the initial approximate solution x(i). Then the algorithm computes the new approximate solution x(i+1) as

$x^{(i+1)} = x^{(i)} - \frac{f(x^{(i)})}{f'(x^{(i)})}$

where f' is the derivative of f with respect to x. Suppose our initial approximate solution is x(0)=0. Then the following sequence of approximate solutions is produced by the algorithm:
0.500000000000000
0.443851671995364
0.442854703829747
0.442854401002417
0.442854401002389
...

We see the algorithm converges rapidly. In only 5 iterations the result stabilizes at 12 significant digits (i.e. differs in 13th digit between fourth and fifth iteration). The solution of the equation written with 15 significant digits is 0.442854401002389. We see the Newton-Raphson algorithm solved the equation to double precision (15 digits) in only 5 iterations.
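
The behavior is easy to reproduce. The following minimal Python sketch implements the iteration for the example above, together with the stopping condition discussed below (tolerances are set tight so the iteration runs down to double precision):

    import math

    def newton_raphson(f, df, x0, reltol=1e-12, abstol=1e-15, maxiter=100):
        """1-dimensional Newton-Raphson iteration."""
        x = x0
        for i in range(maxiter):
            x_new = x - f(x) / df(x)       # Newton-Raphson update
            print(x_new)                   # print the sequence of iterates
            # stop when two consecutive iterates are close enough
            if abs(x_new - x) <= reltol * max(abs(x_new), abs(x)) + abstol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence")

    # solve e^x = 2 - x, i.e. f(x) = e^x + x - 2 = 0, starting from x = 0
    solution = newton_raphson(lambda x: math.exp(x) + x - 2,
                              lambda x: math.exp(x) + 1,
                              0.0)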

To put the Newton-Raphson algorithm in a different perspective let us rewrite the iterative formula as

$f(x^{(i)}) + f'(x^{(i)}) \left( x^{(i+1)} - x^{(i)} \right) = 0$

This is a linear equation where x(i+1) is the unknown. The left-hand side is the linearization of f in the neighborhood of x(i), evaluated at x(i+1). The linearization can be obtained from the Taylor series expansion by keeping only the constant and the linear term. The Newton-Raphson algorithm obtains the new approximate solution by linearizing the nonlinear system of equations in the neighbourhood of the current approximate solution and then solving the obtained linear system.

What makes the Newton-Raphson algorithm so efficient? Mathematically it can be shown that the algorithm converges quadratically in the neighborhood of a solution. Quadratic convergence means that the error (i.e. the difference between the approximate solution x(i) and the exact solution x*) can be expressed as

$\left| x^{(i+1)} - x^* \right| \le C \left| x^{(i)} - x^* \right|^2$

for i that is large enough. Roughly speaking this means that the number of exact digits doubles with every iteration of the algorithm as i becomes large enough. This is true if the initial approximate solution is close to the exact solution of the problem. The algorithm can fail in several ways.

  • If the derivative of f becomes zero (i.e. at a stationary point) the algorithm fails due to division by zero.
  • The algorithm can be trapped in a cycle where the same approximate solutions are visited over and over again.
  • The algorithm can fail to converge if the derivative of f is not continuous.
  • If the solution of the equation is a multiple root (e.g. for f(x)=(x-1)² the solution x=1 has multiplicity 2) the algorithm converges slowly in the neighborhood of the solution.

For optimal performance the function f must be twice continuously differentiable, the initial approximate solution must be close to the exact solution, and the first derivative of f must not be equal to zero in an interval containing the initial approximate solution x(0) and the exact solution.

When to stop?

The Newton-Raphson algorithm improves the approximate solution with every iteration. At some point the approximate solution becomes good enough. How do we know when to stop? Simulators usually stop the algorithm when the following condition is satisfied

$\left| x^{(i+1)} - x^{(i)} \right| \le \varepsilon_r \max\left( |x^{(i+1)}|, |x^{(i)}| \right) + \varepsilon_a$

Here εr and εa are the relative and the absolute tolerance, respectively. The stopping criterion is based on the assumption that when two consecutive approximate solutions are close enough to each other they are also close enough to the exact solution. In SPICE the relative tolerance (reltol simulator parameter) is 10⁻³. The absolute tolerance depends on the type of the unknown. If the unknown is a voltage 10⁻⁶ is used (specified by the vntol simulator parameter). For currents the absolute tolerance is 10⁻¹² (specified by the abstol simulator parameter).

Generalizing the algorithm for n>1

To help us understand the algorithm for higher dimensional problems, let us find a geometric interpretation for the Newton-Raphson formula by first rewriting it as

$f(x^{(i)}) + f'(x^{(i)}) \left( x^{(i+1)} - x^{(i)} \right) = 0$

The left-hand side is a linear function of x(i+1). In fact it is the linearization of f(x) in the neighborhood of x(i). The whole equation requires this linearization to be zero. When the linearization is performed close to the exact solution then it is almost equal to f(x) and solving the linearized equation produces a good approximation to the exact solution.

Now what is different when we have n unknowns? We can linearize n equations by computing the derivatives of the n left-hand sides. Let k denote the index of an unknown. One equation (f(x)=0) is linearized as

$f(\mathbf{x}^{(i)}) + \sum_{k=1}^{n} \frac{\partial f}{\partial x_k} \bigg|_{\mathbf{x}^{(i)}} \left( x_k^{(i+1)} - x_k^{(i)} \right) = 0$

The linearized equation defines a plane in n-dimensional space. Therefore linearizing n nonlinear equations gives us n planes in n-dimensional space. The intersection of these n planes is the new approximate solution x(i+1). The intersection of n planes can be obtained by solving the corresponding system of linear equations (i.e. the system obtained by linearizing the nonlinear system of equations at the previous approximate solution). We already know how to do this (by means of Gaussian elimination or LU decomposition followed by forward and backward substitution).

We can write the linearized system of equations in matrix form as

$\mathbf{g}(\mathbf{x}^{(i)}) + \mathbf{F}(\mathbf{x}^{(i)}) \left( \mathbf{x}^{(i+1)} - \mathbf{x}^{(i)} \right) = \mathbf{0}$

where matrix F is the Jacobian of the system at x(i) and is defined as

$\mathbf{F} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}$

The Newton-Raphson algorithm solves the following linear system to obtain the next approximate solution x(i+1)

$\mathbf{F}(\mathbf{x}^{(i)}) \, \mathbf{x}^{(i+1)} = \mathbf{F}(\mathbf{x}^{(i)}) \, \mathbf{x}^{(i)} - \mathbf{g}(\mathbf{x}^{(i)})$

Again, this is equivalent to solving the linearized version of the original nonlinear system of equations.
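
As a minimal illustration (not the actual simulator code), one iteration for n unknowns boils down to a single linear solve. A dense Python sketch with a simplified componentwise stopping condition:

    import numpy as np

    def newton_raphson_nd(g, jacobian, x0, reltol=1e-3, abstol=1e-6, maxiter=100):
        """Solve g(x) = 0 by repeatedly solving the linearized system F dx = -g(x)."""
        x = np.asarray(x0, dtype=float)
        for i in range(maxiter):
            F = jacobian(x)                   # Jacobian at the current iterate
            dx = np.linalg.solve(F, -g(x))    # solve the linearized system
            x = x + dx
            # simplified componentwise stopping condition
            if np.all(np.abs(dx) <= reltol * np.abs(x) + abstol):
                return x
        raise RuntimeError("no convergence")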

The stopping condition is applied to every component of x independently. The algorithm stops when all components satisfy the stopping condition. In SPICE OPUS the stopping condition is slightly more elaborate. Three requirements must be met in order for the Newton-Raphson algorithm to stop.

  • The difference between candidate solutions from iterations i+1 and i must be within the given relative and absolute tolerances,
  • The difference between candidate solutions from iterations i and i-1 must be within the given relative and absolute tolerances, and
  • The last three approximate solutions must form a triangle in the n-dimensional space where the angle at the second point is smaller than 90°.
The last two requirements prevent the algorithm from stopping when it is in fact oscillating around a solution. Because this last check can slow down convergence users of SPICE OPUS can turn it off by setting the noconviter simulator parameter.

Usually the initial approximate solution is a vector of all zeros. In SPICE one can override this default initial approximate solution with the use of the .nodeset netlist directive.

In practice the algorithm has a limited number of iterations for satisfying the stopping condition. This number is set by the itl1 parameter in SPICE (100 by default). If the algorithm fails to satisfy the stopping condition after itl1 iterations the analysis is considered as failed and an error is reported.

Element footprints revisited

Let us illustrate the construction of the linearized system of equations that is used in one iteration of the Newton-Raphson algorithm. Let us start with a simple example: a semiconductor diode (Fig. 1).


Fig. 1: A semiconductor diode connected between nodes k and l.

The constitutive relation of a semiconductor diode expresses the diode current as a nonlinear function of the diode voltage.

$i_D = i_D(u_D) = I_S \left( e^{u_D / V_T} - 1 \right)$

The diode contributes its current to the KCL equations of nodes k and l (i.e. fk(x)=0 and fl(x)=0).

The diode branch voltage can be expressed with node potentials as

$u_D = v_k - v_l$

To obtain the diode's contribution to the Jacobian matrix we must compute the derivative of the diode current with respect to the node potentials. First, let us compute the derivative with respect to the diode voltage.

$\frac{d i_D}{d u_D} = \frac{I_S}{V_T} \, e^{u_D / V_T} = g_D(u_D)$

Here gD denotes the differential conductance of the diode. The diode's contribution depends on two unknowns: vk and vl. The derivatives of the diode current with respect to these two unknowns are

$\frac{\partial i_D}{\partial v_k} = g_D, \qquad \frac{\partial i_D}{\partial v_l} = -g_D$

When we linearize the two nonlinear KCL equations for nodes k and l the differential conductance of the diode is added to the Jacobian of the system in rows k and l (corresponding to the two KCL equations) and columns k and l (corresponding to the node potentials of nodes k and l). Let us assume for now there are no voltage sources in the circuit so we don't have to apply MNA. Note that rows of the Jacobian correspond to KCL equations and columns correspond to unknowns (node potentials). Let i denote the iteration of the Newton-Raphson algorithm. The element footprint of a semiconductor diode in the Jacobian matrix is then

$\begin{bmatrix} g_D^{(i)} & -g_D^{(i)} \\ -g_D^{(i)} & g_D^{(i)} \end{bmatrix}$ (rows k and l, columns k and l)

Note that gD(i) is computed from vk(i) and vl(i). What about the right-hand side of the system of linearized equations? From the linear system solved in iteration i we see that the RHS vector equals

$\mathbf{F}(\mathbf{x}^{(i)}) \, \mathbf{x}^{(i)} - \mathbf{g}(\mathbf{x}^{(i)})$

so a diode contributes to its k-th and l-th row.

For the diode's contribution to the k-th RHS we get

$g_D^{(i)} u_D^{(i)} - i_D(u_D^{(i)})$

The first term comes from the linearized system of equations and the second one is the contribution to the nonlinear system of equations. The contribution to the l-th RHS is

$-g_D^{(i)} u_D^{(i)} + i_D(u_D^{(i)})$
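
A minimal Python sketch of the diode footprint follows. The Jacobian J and the RHS vector are assumed to be preallocated, node indices k and l are assumed to be valid row/column indices (handling of the ground node is omitted), and the parameter values are placeholders:

    import numpy as np

    def stamp_diode(J, rhs, k, l, v, Is=1e-14, Vt=0.025):
        """Add the linearized diode footprint for the current iterate v."""
        uD = v[k] - v[l]                     # diode voltage from node potentials
        iD = Is * (np.exp(uD / Vt) - 1.0)    # diode current
        gD = Is / Vt * np.exp(uD / Vt)       # differential conductance
        # Jacobian footprint: rows k and l (KCL), columns k and l (potentials)
        J[k, k] += gD;  J[k, l] -= gD
        J[l, k] -= gD;  J[l, l] += gD
        # RHS: linearized term minus the nonlinear contribution
        rhs[k] += gD * uD - iD
        rhs[l] -= gD * uD - iD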

Now we can revisit the linear resistor connected to nodes k and l, but this time we treat it like a nonlinear element with constitutive relation

$i = \frac{u}{R} = \frac{v_k - v_l}{R}$

We see that the differential conductance is R⁻¹ and does not depend on the current approximate solution. The contribution to the Jacobian matrix is therefore the same as the contribution to the coefficient matrix of a linear circuit and does not change between iterations of the Newton-Raphson algorithm. The contribution to the k-th row of the RHS vector is

$R^{-1} u^{(i)} - R^{-1} u^{(i)} = 0$

Similarly, the contribution to the l-th row of the RHS vector is also 0. This is due to the linearized contribution of a linear resistor being identical to the "nonlinear" contribution (the two terms cancel each other out). We see that the element footprint of a linear resistor does not change between iterations of the Newton-Raphson algorithm. In fact linear elements contribute to the Jacobian matrix in the same way as they do to the coefficient matrix of a linear circuit.

From the two examples we see that the Jacobian matrix of a nonlinear circuit in one iteration of the Newton-Raphson algorithm has the same role as the coefficient matrix of a linear circuit. The major difference between solving linear and nonlinear circuits is that the former requires solving only one linear system of equations while the latter requires solving multiple systems of linear equations, where every linear system corresponds to the linearized circuit in one iteration of the Newton-Raphson algorithm.

Element footprint of a nonlinear element with multiple pins

We are going to illustrate the construction of an element footprint for elements with multiple pins on an NMOS transistor operating in the saturation region (Fig. 2). MOS transistors are 4-pin elements. To keep things simple we assume the bulk pin is connected to the source pin.


Fig. 2: A NMOS transistor.

An NMOS transistor is operating in the saturation region when the following two conditions are satisfied.

$u_{GS} > U_T, \qquad u_{DS} \ge u_{GS} - U_T$

where UT is the threshold voltage of the NMOS transistor. The currents flowing into the pins in the saturation region are given by

$i_G = 0, \qquad i_D = i_D(u_{GS}, u_{DS}), \qquad i_S = -i_D$

We can express branch voltages uGS and uDS with node potentials (gate node j, drain node k, source node l) as

$u_{GS} = v_j - v_l, \qquad u_{DS} = v_k - v_l$

To construct the element footprint in the Jacobian matrix we first compute the partial derivatives of the currents with respect to the branch voltages.

$g_{21} = \frac{\partial i_D}{\partial u_{GS}}, \qquad g_{22} = \frac{\partial i_D}{\partial u_{DS}}$

Next, we express the partial derivatives of branch currents with respect to node potentials. There are 9 partial derivatives we need to compute. These derivatives can be expressed with g21 and g22.

$\frac{\partial i_D}{\partial v_j} = g_{21}, \quad \frac{\partial i_D}{\partial v_k} = g_{22}, \quad \frac{\partial i_D}{\partial v_l} = -(g_{21} + g_{22}), \qquad \frac{\partial i_S}{\partial v_j} = -g_{21}, \quad \frac{\partial i_S}{\partial v_k} = -g_{22}, \quad \frac{\partial i_S}{\partial v_l} = g_{21} + g_{22}$

while all three derivatives of iG are zero.

Now we can construct the element footprint in the Jacobian matrix. An NMOS transistor contributes to the KCL equations of nodes k and l in columns corresponding to node potentials vj, vk, and vl. There is no contribution to the KCL equation of node j because iG=0.

$\begin{bmatrix} g_{21}^{(i)} & g_{22}^{(i)} & -(g_{21}^{(i)} + g_{22}^{(i)}) \\ -g_{21}^{(i)} & -g_{22}^{(i)} & g_{21}^{(i)} + g_{22}^{(i)} \end{bmatrix}$ (rows k and l, columns j, k, and l)

An NMOS does not contribute to the j-th row of the RHS vector due to iG=0. The contribution to the k-th row is

$g_{21}^{(i)} u_{GS}^{(i)} + g_{22}^{(i)} u_{DS}^{(i)} - i_D(u_{GS}^{(i)}, u_{DS}^{(i)})$

The contribution to the l-th row of the RHS vector is

$-\left( g_{21}^{(i)} u_{GS}^{(i)} + g_{22}^{(i)} u_{DS}^{(i)} - i_D(u_{GS}^{(i)}, u_{DS}^{(i)}) \right)$
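
The corresponding footprint routine might look as follows. Since the lecture does not fix a particular drain current model, the square-law expression used here (with hypothetical parameters K, UT, and lam) is only an assumption for illustration; the stamping pattern itself matches the footprint above:

    def stamp_nmos(J, rhs, j, k, l, v, K=2e-4, UT=0.5, lam=0.02):
        """NMOS footprint; gate node j, drain node k, source node l (bulk tied to
        source). Assumes a square-law model iD = (K/2)(uGS-UT)^2 (1 + lam*uDS)."""
        uGS = v[j] - v[l]
        uDS = v[k] - v[l]
        iD = 0.5 * K * (uGS - UT) ** 2 * (1.0 + lam * uDS)
        g21 = K * (uGS - UT) * (1.0 + lam * uDS)      # diD/duGS
        g22 = 0.5 * K * (uGS - UT) ** 2 * lam         # diD/duDS
        # rows k and l (KCL of drain and source), columns j, k, and l
        J[k, j] += g21;  J[k, k] += g22;  J[k, l] -= g21 + g22
        J[l, j] -= g21;  J[l, k] -= g22;  J[l, l] += g21 + g22
        rhs[k] += g21 * uGS + g22 * uDS - iD
        rhs[l] -= g21 * uGS + g22 * uDS - iD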

Operating point analysis and DC sweep in SPICE OPUS

With the knowledge we gained up to this point we can handle circuits with arbitrary linear and nonlinear resistive elements. The main property of resistive elements is that we can express the currents flowing into the pins of the element as (non)linear functions of the node potentials of the nodes to which the element is connected. These functions may not contain any derivatives or integrals with respect to time. Derivatives and integrals with respect to time are required for describing reactive elements (like capacitors and inductors), but we haven't reached that point yet.

Now let us assume we describe our reactive elements by means of derivatives with respect to time. We can always do that (i.e. convert an integral into a derivative) by choosing an appropriate independent variable in the formulation of the element's constitutive relation. Now suppose all derivative terms are equal to zero. This is the case when the voltages and the currents in the circuit no longer change. For stable circuits excited only by DC voltage and current sources we reach this state if we wait for a sufficient amount of time. We refer to this state as the circuit's operating point. For computing the circuit's operating point we need to consider only the resistive elements in the circuit. Reactive elements are ignored by removing all capacitors and replacing all inductors with short circuits. Therefore the above described algorithm can be used for finding the operating point of the circuit.

Often we are interested in how the operating point of a circuit changes if we change the DC value of the circuit's excitation voltages/currents. Such an analysis is also referred to as the DC operating point sweep or simply DC analysis. A DC analysis is much faster than the equivalent sequence of operating point analyses because the solution of the last sweep point is used as the initial iterate in the Newton-Raphson algorithm solving the next sweep point. This provides a good initial guess for the Newton-Raphson algorithm, which consequently requires only a few iterations to satisfy the stopping condition. Therefore SPICE provides a separate simulator parameter (itl2) for setting the limit on the number of Newton-Raphson iterations available for solving one point in a DC sweep. By default its value is set to 50.
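
The core idea of the sweep fits in a few lines of Python; solve_op is a hypothetical routine that runs the Newton-Raphson algorithm for a given source value and initial iterate:

    def dc_sweep(source_values, solve_op, x0):
        """DC sweep: reuse each solution as the initial iterate of the next point."""
        solutions, x = [], x0
        for value in source_values:
            x = solve_op(value, x)    # Newton-Raphson with a good initial guess
            solutions.append(x)
        return solutions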

Convergence problems and how to solve them

The Newton-Raphson algorithm can exhibit convergence problems (slow convergence or even no convergence), particularly for circuits with strong nonlinearities. Simulators use various tricks to achieve convergence.

Junction voltage limiting

p-n junctions in diodes and transistors exhibit an exponential i(u) characteristic. This can cause very large values of currents (and consequently large values of the nonlinear left-hand side g(x) in the KCL equations). The values can even exceed the maximum value representable in double precision floating point arithmetic. Once that happens IEEE floating point infinite values or even NaN (not a number) values can occur in the candidate solution. NaN values in particular spread like a "virus". Any (binary or unary) operation performed on a NaN value results in a NaN so the NaNs quickly spread across the whole solution vector and make the result completely useless.

To avoid this the independent variable in the exponential function (the branch voltage across a p-n junction) is limited to the interval [-voltagelimit, voltagelimit] where voltagelimit is a simulator parameter (10³⁰ V by default). If the branch voltage is smaller than -voltagelimit it is truncated to -voltagelimit. Similarly, if the branch voltage is greater than voltagelimit it is truncated to voltagelimit. The obtained value is then used for computing the p-n junction current and its derivative with respect to the node potentials. This procedure not only helps avoid infinite and NaN values, but also speeds up the convergence of the Newton-Raphson algorithm.
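
The limiting itself is a simple clamp. A minimal sketch (real simulators use more elaborate limiting schemes than a hard clamp; parameter values are placeholders):

    import math

    def limited_junction_current(uD, Is=1e-14, Vt=0.025, voltagelimit=1e30):
        """Evaluate the p-n junction current with the junction voltage clamped
        to the interval [-voltagelimit, voltagelimit]."""
        uD = max(-voltagelimit, min(uD, voltagelimit))   # junction voltage limiting
        return Is * (math.exp(uD / Vt) - 1.0)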

Damped Newton-Raphson algorithm

The Newton-Raphson algorithm can produce an oscillating sequence of candidate solutions. This behavior can be eliminated to a great extent if the step taken by the algorithm is shortened. To simplify the presentation of this approach we assume a 1-dimensional problem (n=1). Let εr and εa denote the relative and the absolute tolerance used in the stopping condition. Let x(i) and x(i+1) denote the previous and the new approximate solution. We define the step tolerance as

$\varepsilon = \varepsilon_r \left| x^{(i)} \right| + \varepsilon_a$

After every Newton-Raphson iteration the stopping condition is checked. If the condition is not satisfied the step is truncated according to the following formula

$x^{(i+1)} \leftarrow x^{(i)} + \mathrm{sign}\left( x^{(i+1)} - x^{(i)} \right) \cdot \min\left( \left| x^{(i+1)} - x^{(i)} \right|, \; s \, \varepsilon \right)$

In the next Newton-Raphson iteration the truncated solution is used as the previous solution. Parameter s is the truncation factor specified by the sollim simulator parameter in SPICE OPUS (10 by default). The truncated algorithm makes slow but steady progress. Due to this slow progress it often runs out of available iterations. Therefore the iteration limit in SPICE OPUS is increased to itl1 * sollimiter. The value of the sollimiter simulator parameter is 10 by default.
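
One plausible implementation of the step truncation is sketched below (the exact formula used by SPICE OPUS may differ):

    def truncate_step(x_prev, x_new, reltol=1e-3, abstol=1e-6, s=10.0):
        """Limit the length of a Newton-Raphson step to s times the step tolerance."""
        eps = reltol * abs(x_prev) + abstol              # step tolerance
        step = x_new - x_prev
        if abs(step) > s * eps:                          # step too long: truncate it
            step = s * eps * (1.0 if step > 0 else -1.0)
        return x_prev + step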

The damped Newton-Raphson algorithm is used only when the simulator detects convergence problems. By default the original Newton-Raphson algorithm is used.

Adding shunt resistors to the circuit

If we connect resistors from every node to the ground we effectively add diagonal entries equal to the inverse of the added resistance to the Jacobian. Therefore if the resistance is small enough the diagonal part begins to dominate the Jacobian. In practice many convergence problems are reduced if resistors with sufficiently small resistance (shunts) are added. If the resistance is not too small shunt resistors do not significantly alter the circuit's behavior. By default shunting is turned off. Shunting is turned on by specifying the shunt resistance with the rshunt simulator parameter.

Homotopy-based approaches

If we cannot solve a problem with the Newton-Raphson algorithm we try solving a much simpler problem first. In one iteration of the homotopy-based approach we slightly modify the problem so that it becomes more similar to the original (unsolvable) problem and apply the Newton-Raphson algorithm to this modified problem starting with the solution obtained from the simple problem. Usually we obtain good convergence and solve the problem successfully (because, after all, it is still a simple problem). In the next iteration we again change the problem a bit so that it now resembles the original problem even more. We use the last obtained solution as the initial solution and apply Newton-Raphson's algorithm. We iterate until the modified problem becomes identical to the original problem. The last obtained solution is therefore the solution of the original problem.

There are many ways to apply homotopy to difficult circuits. We briefly describe some of the approaches used by circuit simulators.

GMIN stepping

In GMIN stepping resistors are added between every node and the ground. But contrary to adding shunt resistors the resistors in GMIN stepping are added only temporarily. We start by adding large resistors and try to solve the circuit. If we fail we decrease the resistances and try again. We repeat this procedure until we successfully solve the circuit. Now homotopy comes to the rescue as we have our simple problem that we can solve. In every iteration of GMIN stepping we increase the resistances by a certain amount (step size) and try to solve the circuit by using the solution obtained from the previous iteration. If we fail we try again, but with a smaller step size. After a successful iteration we increase the step size. Hopefully, after several iterations we manage to increase the resistances to such an extent that their effect becomes negligible (i.e. their conductances become smaller than gmin). At this point we remove them and apply the Newton-Raphson algorithm one last time (with the last solution used as the initial approximate solution) to solve the original circuit. If this final Newton-Raphson run fails the GMIN stepping is considered as failed.
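
A rough Python sketch of the procedure follows; solve is a hypothetical routine that runs the Newton-Raphson algorithm with conductance g added from every node to ground and raises RuntimeError on non-convergence. The step-size control is deliberately simplified:

    import math

    def gmin_stepping(solve, x0, g=1e-3, gmin=1e-12, maxsteps=100):
        """Homotopy over the conductance g added from every node to ground."""
        x = None
        while x is None:                       # phase 1: find a solvable circuit
            try:
                x = solve(g, x0)
            except RuntimeError:
                g *= 10.0                      # heavier shunting, easier problem
        factor = 10.0
        for _ in range(maxsteps):              # phase 2: reduce g towards zero
            if g <= gmin:                      # added conductances are negligible
                return solve(0.0, x)           # final solve of the original circuit
            try:
                x = solve(g / factor, x)       # try a smaller conductance
                g /= factor
                factor *= 2.0                  # success: take larger steps
            except RuntimeError:
                factor = math.sqrt(factor)     # failure: take smaller steps
        raise RuntimeError("GMIN stepping failed")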

In SPICE OPUS the value of gmin that is considered negligible is specified by the gmin (for AC and TRAN analysis) and gmindc (for operating point and DC analysis) simulator parameters. The default value is 10⁻¹². The number of GMIN steps (for both decreasing and increasing the added resistances) is specified by the gminsteps simulator parameter. When the step size in GMIN stepping becomes too small (i.e. the progress of GMIN stepping slows down too much) the damped Newton-Raphson algorithm is used until a solution for the problematic iteration is found.

Source stepping

In source stepping the simple problem to solve is the circuit with all independent sources turned off (i.e. set to 0). In every iteration of source stepping we increase the values of independent sources towards their true values by a certain step size. If the iteration is successful we increase the step size. In the opposite case we try again with a smaller step size. Eventually the independent sources reach their true values. At that point we have the solution of the original circuit.

In SPICE OPUS the number of source stepping iterations is limited to a value specified by the srcsteps parameter (10 by default). If the step size becomes too small the damped Newton-Raphson algorithm is used for the problematic iteration.

Source lifting and cmin stepping

Most circuits contain reactive elements (capacitors and inductors). These elements are ignored in operating point and DC analysis (i.e. capacitors are removed and inductors become short circuits). On the other hand reactive elements provide another possibility for applying the homotopy-based approach to finding the operating point of a circuit. If we analyze a circuit (without ignoring reactive elements) in the time domain with all independent sources being slowly ramped up from zero to their actual values we can expect to reach the operating point of the circuit if we perform the simulation up to a sufficiently distant timepoint.

Although at this point we don't understand how time-domain analysis of circuits containing reactive elements is performed, we can still outline the main idea of source lifting. In the time-domain analysis we assume the reactive elements initially store no energy (i.e. the initial voltages across capacitors and the initial currents flowing through inductors are all zero). Together with all independent sources set to zero we have a circuit that is trivial to solve. By ramping up independent sources we implicitly perform homotopy iterations as we go from timestep to timestep in time-domain analysis.

In SPICE OPUS the simulator parameter srclriseiter specifies the number of timesteps during which ramping up of the sources is performed. If the srclrisetime simulator parameter is specified ramping up is not performed on a timestep to timestep basis. Instead it is performed until the simulation reaches the time specified by srclrisetime. The srclminstep simulator parameter sets a lower bound on the timestep. srclmaxtime and srclmaxiter specify the time and the number of timepoints at which the time-domain analysis stops. If the values of the unknowns in the time-domain analysis stabilize within their respective tolerances and remain there for the number of timepoints specified by the srclconviter simulator parameter the time-domain analysis is terminated earlier. The solution obtained at the final timepoint is used as the initial solution in the Newton-Raphson algorithm for computing the operating point of the circuit. If the Newton-Raphson algorithm fails to converge the source lifting is considered as failed.

Fast changes of the unknowns can cause problems in the time-domain analysis. If source lifting fails and the cminsteps simulator parameter is set to a value greater than zero capacitors are temporarily connected between circuit's nodes and the ground node. The value of the capacitance is specified by the cmin simulator parameter. Source lifting is repeated with the modified circuit. If it fails again the values of the added capacitors are increased and the procedure is repeated. The number of repetitions is specified by the cminsteps simulator parameter. For problematic circuits one can disable the initial source lifting without added capacitors by setting the noinitsrcl simulator parameter.

Sometimes a circuit oscillates in time-domain analysis. This means that source lifting will most likely fail. For such circuits source lifting can be disabled by setting the nosrclift simulator parameter.

Fine-tuning the algorithms for achieving convergence in SPICE OPUS

Assigning numbers from 1 to 3 to the simulator parameters gminpriority, srcspriority, and srclpriority sets the order in which GMIN stepping, source stepping, and source lifting are applied, respectively. For particularly troublesome circuits one can disable the initial Newton-Raphson algorithm and go straight to the advanced algorithms by setting the noopiter simulator parameter. The opdebug simulator parameter turns on the verbose mode in SPICE OPUS.

SPICE OPUS automatically tunes the parameters of its algorithms for solving the operating point of a circuit. This tuning can be disabled by setting the noautoconv simulator parameter.




DC small signal analysis

6th Lecture W3, November

One can interpret the elements of the coefficient matrix as conductances, resistances, and controlled sources. This interpretation results in the linearized circuit model. The linearized circuit model can be used for obtaining small signal properties of the circuit like gain, input impedance, and output impedance. If the signals are composed of a large DC component and a small perturbation we can treat the circuit as linear if we consider only the perturbations. We draw parallels between linear electronics and small signal DC analysis. We also relate small signal DC analysis results to the results of operating point sweeps.


Linearized circuit

Suppose we are interested in how much a circuit's operating point will change if we slightly change the circuit's excitation. In the previous chapter we formulated the nonlinear equations describing the circuit. Now let us split the nonlinear function into two parts. The first part is a function of x while the second part is independent of x and thus represents the independent sources in the circuit.

$\mathbf{f}(\mathbf{x}) = \mathbf{g}(\mathbf{x}) - \mathbf{y}$

Here g is a vector-valued function and y is the right-hand side of the nonlinear equations. The system of equations can now be written as

$\mathbf{g}(\mathbf{x}) = \mathbf{y}$

A small perturbation of y results in a small perturbation of g which can be formulated as

$\mathbf{g}(\mathbf{x} + \Delta\mathbf{x}) = \mathbf{y} + \Delta\mathbf{y}$

By taking the first two terms of the Taylor series for g and neglecting the rest we can write the system of equations as

$\mathbf{g}(\mathbf{x}) + \mathbf{G} \, \Delta\mathbf{x} = \mathbf{y} + \Delta\mathbf{y}$

where G is the Jacobian (i.e. the matrix of first derivatives) of g. Because g and f differ only by a constant term it is also equal to the Jacobian of the circuit equations (F). The Jacobian depends on the solution of the unperturbed circuit (x). Note that we can neglect higher order terms in the Taylor series because we assume the perturbation is small. Because the initial circuit equation (the one without perturbations) must still be satisfied the first and the second term cancel each other out and we are left with

$\mathbf{G} \, \Delta\mathbf{x} = \Delta\mathbf{y}$

We obtained a linear system of equations which corresponds to some linear circuit. By solving it we can express the perturbation of the circuit's operating point with the perturbation of the circuit's excitation. The process of formulating this system of linear equations is also referred to as circuit linearization.
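
In terms of linear algebra the linearized problem is a single linear solve. A minimal sketch (the construction of G and Δy from element footprints is assumed to happen elsewhere):

    import numpy as np

    def operating_point_perturbation(G, dy):
        """Solve the linearized system G * dx = dy for the perturbation dx of
        the operating point caused by the excitation perturbation dy."""
        return np.linalg.solve(G, dy)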


Fig. 1: A simple circuit with an MOS transistor.

For instance, take the simple circuit with an NMOS transistor depicted in Fig. 1. Let us write the equations of this circuit and then linearize them. The circuit has 4 nodes and 2 voltage sources. The list of unknowns includes the node potentials of nodes 1, 2, and 3, and the currents flowing into the two independent voltage sources. We assume the MOS transistor is operating in the saturation region. Its currents can be expressed as

$i_G = 0, \qquad i_D = i_D(u_{GS}, u_{DS})$

The three KCL equations and the two constitutive equations of the independent voltage sources are

After solving this system of equations we obtain the operating point of the circuit (specified by V1, V2, V3, IGG, and IDD). The operating point depends on the values of the two independent voltage sources VGG and VDD. Now suppose we perturb the first voltage source by ΔUGG while the second one is left unperturbed (ΔUDD=0). The operating point of the circuit changes to V1+ΔV1, V2+ΔV2, V3+ΔV3, IGG+ΔIGG, and IDD+ΔIDD. We can compute the perturbations of the solution (ΔV1, ΔV2, ΔV3, ΔIGG, and ΔIDD) from the linearized system of equations

where g21 and g22 are expressed as (see an example in the previous lecture)

$g_{21} = \frac{\partial i_D}{\partial u_{GS}} \bigg|_{U_{GS}, U_{DS}}, \qquad g_{22} = \frac{\partial i_D}{\partial u_{DS}} \bigg|_{U_{GS}, U_{DS}}$


Fig. 2: Linearized circuit obtained from the circuit in Fig. 1.

From the obtained linear system of equations we can reconstruct the linear circuit depicted in Fig. 2. This circuit is also referred to as the linearized circuit or the small signal model of the circuit. Conductance g22 and transconductance g21 represent the linearized model of the NMOS transistor. Linearized models of linear elements are identical to the respective linear elements (e.g. the resistor in Fig. 2). Independent sources that are not perturbed are set to zero (turned off) in the linearized circuit (independent current sources are removed, independent voltage sources are replaced with short circuits). The value of a perturbed independent source in the linearized circuit is equal to the respective perturbation (e.g. see UGG in Fig. 2).


Fig. 3: Linearized models (bottom) of nonlinear elements (top) for a diode (left), an NMOS transistor (center), and an NPN bipolar transistor (right).

Linearized models of nonlinear elements can be built in advance. The linearized circuit can be obtained by replacing nonlinear elements with their linearized models and by handling independent sources as mentioned in the previous paragraph. Fig. 3 depicts the linearized models of a diode, an NMOS transistor, and an NPN bipolar junction transistor. Parameter gD of the linearized model of a diode at operating point UD can be computed as

$g_D = \frac{d i_D}{d u_D} \bigg|_{U_D} = \frac{I_S}{V_T} \, e^{U_D / V_T}$

Parameters of the linearized model of an NMOS transistor are computed at the operating point defined by UGS and UDS as

$g_{21} = \frac{\partial i_D}{\partial u_{GS}} \bigg|_{U_{GS}, U_{DS}}, \qquad g_{22} = \frac{\partial i_D}{\partial u_{DS}} \bigg|_{U_{GS}, U_{DS}}$

Parameters of the linearized model of an NPN bipolar transistor are computed at the operating point defined by UBE and UCE as the partial derivatives of the base and collector currents

$g_{11} = \frac{\partial i_B}{\partial u_{BE}}, \quad g_{12} = \frac{\partial i_B}{\partial u_{CE}}, \quad g_{21} = \frac{\partial i_C}{\partial u_{BE}}, \quad g_{22} = \frac{\partial i_C}{\partial u_{CE}}$ (all evaluated at UBE and UCE)

Analysis of a linearized circuit reveals many important characteristics like current and voltage gain, input and output impedances, transconductance, etc. Take for instance the example in Fig. 2 which is in fact an amplifier. The input signal is generated by UGG and the output signal is observed as the node potential V2. If we compute ΔV2 and from there the quotient ΔV2/ΔUGG we obtain the gain of the amplifier for small signals (i.e. small perturbations of the operating point).

DC small signal analysis in SPICE OPUS

In SPICE OPUS the DC small signal analysis is called TF analysis. One has to specify two things at its invocation: the output signal (e.g. node potential, voltage between two nodes, or current of a voltage source) and the independent source (voltage or current) that provides the small signal perturbation. The analysis then computes three things. The first is the small signal gain (also referred to as the transfer function) whose type depends on the choice of the input and output. There are four possibilities: voltage gain, current gain, transconductance, and transimpedance. The analysis also computes the input impedance at the nodes of the circuit where the independent source providing the excitation is located and the output impedance at the circuit's output. The latter is computed correctly only if the output is a node potential or a voltage between two nodes.

Most of the time in TF analysis is spent computing the circuit's operating point. Once it is computed the Jacobian matrix describing the linearized circuit and its LU decomposition are already available. The simulator only constructs a right-hand side vector according to the specified input excitation and performs a forward and backward substitution to obtain the solution. From this solution the transfer function and the input impedance are computed. For computing the output impedance the simulator constructs another right-hand side vector and performs another forward and backward substitution. The output impedance is computed from the obtained result.
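
The linear-algebra core of this procedure can be sketched in a few lines: one LU decomposition of the Jacobian is reused for both right-hand sides (the construction of the RHS vectors from element footprints is assumed to happen elsewhere):

    from scipy.linalg import lu_factor, lu_solve

    def tf_solve(F, rhs_input, rhs_output):
        """Reuse one LU decomposition for the two TF right-hand side vectors."""
        lu_piv = lu_factor(F)                  # LU decomposition, done once
        x_in = lu_solve(lu_piv, rhs_input)     # gain and input impedance follow
        x_out = lu_solve(lu_piv, rhs_output)   # output impedance follows
        return x_in, x_out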




Small-signal analysis in the frequency domain

7th Lecture W1, December

We introduce the modelling of linear reactive elements (linear capacitors, inductors, and coupled inductors). We extend the notion of small-signal analysis to sinusoidal signals represented by complex numbers. The absolute value of such representation corresponds to the magnitude of the sinusoidal signal while the argument corresponds to its phase. We assume all signals in the circuit share the same frequency. Due to reactive elements the solution of the circuit depends on this frequency.

We extend linear reactive elements to nonlinear ones. We demonstrate the modelling of nonlinear capacitances on an example - semiconductor diode. Finally, we show how nonlinear elements are handled in small-signal frequency-domain analysis.


Complex representation of sinusoidal signals

In circuit analysis we are often interested in the circuit's response when all excitations are sinusoidal signals of the same frequency. Besides sinusoidal excitations we also allow DC excitations. The circuit's response to such a stimulus is composed of a transient response and a periodic signal. In stable circuits the transient response eventually dies off and we are left with a periodic signal superimposed on a DC component. If the magnitudes of the sinusoidal excitation signals are small the circuit's response (excluding the DC component) is sinusoidal even if the circuit is nonlinear. We refer to the sinusoidal part of this response as the small-signal sinusoidal response. The magnitudes and phases of the response signals provide us with many valuable insights into the circuit's behavior.

In unstable circuits we cannot observe the small-signal sinusoidal response because the transient response never dies off. We can, however, compute it with a simulator. Again, the computed response can provide many valuable insights into the circuit's behavior (like frequency response, stability, etc.).

 
Fig. 1: Three sinusoidal signals in time domain (left) and their complex representations (right).

Let us assume a sinusoidal signal with magnitude A and phase Φ.

$x(t) = A \cos(\omega t + \Phi)$

We can express it in terms of complex numbers as

$x(t) = \mathrm{Re}\left( X e^{j \omega t} \right), \qquad X = A e^{j \Phi}$

where j denotes the imaginary unit. We can see that, assuming the frequency ω is known, the signal is uniquely defined by the complex number X. The absolute value of X is equal to the magnitude of the signal while the argument of X is equal to the phase of the signal. Fig. 1 depicts 3 sinusoidal signals and the corresponding complex numbers as vectors in a complex plane. We refer to complex numbers that represent sinusoidal signals as complexors.

Capacitors and inductors in the frequency domain

By assuming all unknowns are sinusoidal signals of the same frequency we effectively moved our analysis to the frequency domain. Unknowns become complex numbers representing magnitudes and phases of sinusoidal signals. Now what do we gain by doing this? Take for instance the constitutive relation of a linear capacitor (Fig. 2, left).

$i_C = C \, \frac{d u_C}{d t}$


Fig. 2: Linear capacitor (left) and inductor (right).

After applying the Fourier transformation, which takes us into the frequency domain, this relation changes into

$I = j \omega C \, U$

where I and U are complexors representing the current and the voltage of a capacitor. Similarly, for an inductor (Fig. 2, right) with constitutive relation

$u_L = L \, \frac{d i_L}{d t}$

we have

$U = j \omega L \, I$


Fig. 3: Coupled inductors. When a current is flowing into the pin marked with a dot it is considered positive.

Another element to consider is the coupling between two inductors. Suppose inductors L1 and L2 are coupled with a coupling factor k<1. The constitutive relations of the two inductors are extended with an additional term representing the magnetic coupling.

$u_1 = L_1 \frac{d i_1}{d t} + M_{12} \frac{d i_2}{d t}, \qquad u_2 = M_{12} \frac{d i_1}{d t} + L_2 \frac{d i_2}{d t}$

where

$M_{12} = k \sqrt{L_1 L_2}$

In the frequency domain the constitutive relations become

$U_1 = j\omega L_1 \, I_1 + j\omega M_{12} \, I_2, \qquad U_2 = j\omega M_{12} \, I_1 + j\omega L_2 \, I_2$

Frequency-domain analysis of linear circuits

Assuming all signals in the circuit are sinusoidal greatly simplifies circuit analysis. The constitutive relations of capacitors and inductors become algebraic equations. Instead of solving a system of differential equations we are confronted with a system of linear equations. The unknowns and the coefficients are now complex. Capacitors and inductors are described in a manner equivalent to ordinary resistors, i.e. the current is proportional to the voltage. The coefficient of proportionality is complex and depends on the frequency (ω). With this in mind we can immediately derive the element footprint of a capacitor (Fig. 2, left) in the coefficient matrix.

$\begin{bmatrix} j\omega C & -j\omega C \\ -j\omega C & j\omega C \end{bmatrix}$ (rows and columns correspond to the capacitor's two nodes)

Handling of inductors is somewhat more complicated. For a single inductor we could easily express the current with the voltage. But for coupled inductors this would require inverting a matrix. Furthermore, expressing the current explicitly with the voltage in the time domain would require solving a system of differential equations. Therefore most simulators choose to take a different path. Instead of explicitly expressing the device current they introduce a new unknown for every inductor - its current. This way inductors are handled in a manner similar to voltage sources. The constitutive relation of an inductor becomes the additional equation that makes the system fully determined.

The element footprint of an inductor (Fig. 2, right) is

$\begin{bmatrix} 0 & 0 & +1 \\ 0 & 0 & -1 \\ +1 & -1 & -j\omega L \end{bmatrix}$

(the first two rows are the KCL equations of the inductor's two nodes, the third row is the constitutive relation; the first two columns correspond to the node potentials, the third to the inductor current)

For coupled inductors (Fig. 3) the coupling appears in the constitutive relations of the two inductors, but not in the KCL part of the circuit equations.

The element footprint of a pair of coupled inductors is

$\begin{bmatrix} 0 & 0 & 0 & 0 & +1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & +1 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ +1 & -1 & 0 & 0 & -j\omega L_1 & -j\omega M_{12} \\ 0 & 0 & +1 & -1 & -j\omega M_{12} & -j\omega L_2 \end{bmatrix}$

(the first four rows are the KCL equations of the inductors' nodes and the last two rows are the constitutive relations; the first four columns correspond to the node potentials, the last two to the inductor currents)

Coupled inductors do not contribute to the right-hand side vector. We can see that the element footprint is identical to the element footprint of the two inductors with the addition of the jωM12 term to both constitutive relations.
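
The three footprints translate into stamping routines almost verbatim. In the sketch below Y is a preallocated complex coefficient matrix; k and l are node indices and m, m1, m2 are the rows/columns assigned to the inductor currents (index bookkeeping is assumed to be handled elsewhere):

    def stamp_capacitor(Y, k, l, C, omega):
        """Frequency-domain footprint of a linear capacitor between nodes k and l."""
        yC = 1j * omega * C
        Y[k, k] += yC;  Y[k, l] -= yC
        Y[l, k] -= yC;  Y[l, l] += yC

    def stamp_inductor(Y, k, l, m, L, omega):
        """Inductor between nodes k and l with its current as additional unknown m."""
        Y[k, m] += 1.0                 # inductor current in the KCL of node k
        Y[l, m] -= 1.0                 # ... and of node l
        Y[m, k] += 1.0                 # constitutive relation:
        Y[m, l] -= 1.0                 #   v_k - v_l - j*omega*L*i = 0
        Y[m, m] -= 1j * omega * L

    def stamp_coupling(Y, m1, m2, M12, omega):
        """Magnetic coupling between the inductor currents m1 and m2."""
        Y[m1, m2] -= 1j * omega * M12
        Y[m2, m1] -= 1j * omega * M12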


Fig. 4: Model of non-ideal transformer. The coupling factor between the two inductors is given by k.

Let us illustrate frequency-domain analysis of a linear circuit with an example (Fig. 4). The circuit is a model of a non-ideal transformer with imperfect magnetic coupling k<1, winding resistance, and winding capacitance. Two input signals are generated by the two current sources. The circuit has 5 nodes. Due to this the list of unknowns contains 4 node potentials. Additionally, two currents are introduced into the list of unknowns by the two inductors in the circuit. With our knowledge of element footprints we can write the system of equations by inspection.

Nonlinear capacitors and inductors

All capacitors and inductors in this chapter were linear. Now we are going to introduce nonlinear capacitors and inductors. The former model charge storage in semiconductor devices, while the latter can be used for modelling coils with nonlinear cores.


Fig. 5: A linear capacitor (left), a nonlinear capacitor (center), and the linearized model of a nonlinear capacitor (right).

A capacitor stores charge (Fig. 5, left). The plate connected to the node with the higher potential holds positive charge (q), while the opposite plate holds an equally large negative charge (-q). For linear capacitors the charge is proportional to the voltage across the capacitor.

$q = C \, u_C$

For nonlinear capacitors this relation is nonlinear. With respect to modelling there are two kinds of nonlinear capacitors: the ones where the charge can be expressed as a univariate function of the voltage (voltage-dependent capacitors) and the ones where the voltage can be expressed as a univariate function of the charge (charge-dependent capacitors). Because in most real-world cases the nonlinear function is a bijective map (and thus its inverse exists) both approaches are feasible for most real-world nonlinear capacitors. We are going to focus on voltage-dependent nonlinear capacitors because this approach fits well with the modified nodal analysis. For a voltage-dependent nonlinear capacitor (Fig. 5, center) we can write

$q = q(u_C)$

For both linear and nonlinear capacitors charge conservation must be honored and therefore we can express the current flowing through a capacitor as

$i_C = \frac{d q}{d t}$

If we assume the stored charge depends only on the voltage (but not on time itself) we can write

$i_C = \frac{d q}{d u_C} \, \frac{d u_C}{d t} = c(u_C) \, \frac{d u_C}{d t}$

Here c(uC) denotes the differential capacitance which is voltage-dependent. For a linear capacitor the differential capacitance is equal to the capacitor's total capacitance (i.e. stored charge divided by the voltage). For a nonlinear capacitor it does not make sense to define the total capacitance because the charge is not proportional to the voltage.

Now suppose we slightly perturb the voltage of a nonlinear capacitor from UC to UC+ΔuC. How much does the charge change?

$q + \Delta q = q(U_C + \Delta u_C) \approx q(U_C) + c(U_C) \, \Delta u_C$

Because the first term on the left-hand side cancels out the first term on the right-hand side we have

$\Delta q = c(U_C) \, \Delta u_C$

We can see that a nonlinear capacitor behaves as a linear capacitor if we consider only small changes in voltage and charge. Now suppose the voltage is composed of a DC component UC and a small sinusoidal component given by complexor Uc. Because for small perturbations the nonlinear capacitor behaves as a linear capacitor with capacitance equal to the differential capacitance at UC, the small-signal model of a nonlinear capacitor is the linear capacitor in Fig. 5 (right). A small sinusoidal voltage results in a small sinusoidal current flowing through a nonlinear capacitor which can be expressed as

$I_c = j \omega \, c(U_C) \, U_c$


Fig. 6: A semiconductor diode (left), its operating point (center), and its linearized model (right).

An example of a nonlinear capacitor is the capacitance of a semiconductor diode (Fig. 6, left). Its operating point (Fig. 6, center) is given by voltage UD which drives a DC current ID through the diode. The differential conductance of a diode (gD) depends on the operating point and was computed in one of the previous lectures. The differential capacitance of a diode also depends on the voltage and consists of three components: the depletion capacitance which is dominant for reverse polarization (UD<0), the diffusion capacitance which dominates when the diode starts to conduct significant currents, and a linear capacitance due to overlap effects. The depletion capacitance can be expressed with the voltage as

$c_{depl}(u_D) = \frac{C_0}{\left( 1 - u_D / V_J \right)^M}$

where C0, M, and VJ are diode model parameters. The diffusion capacitance, on the other hand, depends on the resistive current flowing through the diode.

$c_{diff}(u_D) = \tau \, g_D(u_D) = \tau \, \frac{I_S}{V_T} \, e^{u_D / V_T}$

IS and τ are diode model parameters, and VT is the thermal voltage.



Fig. 7: Differential capacitance (full line), depletion capacitance (dotted line), and diffusion capacitance (dashed line) of a semiconductor diode with respect to operating point voltage in linear (top) and logarithmic (bottom) scale.

The differential capacitance of a diode (Fig. 7) is

$c(u_D) = c_{depl}(u_D) + c_{diff}(u_D) + C_{ov}$

where the last term represents the overlap capacitance (which in turn is independent of the operating point).
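
For illustration, the three components can be evaluated as follows. This is a sketch with placeholder parameter values; the depletion term as written is only valid for uD < VJ (real models smooth it near VJ):

    import math

    def diode_differential_capacitance(uD, C0=1e-12, M=0.5, VJ=0.7,
                                       tau=1e-9, Is=1e-14, Vt=0.025, Cov=0.0):
        """Differential capacitance: depletion + diffusion + overlap."""
        c_depl = C0 / (1.0 - uD / VJ) ** M            # depletion capacitance
        c_diff = tau * Is / Vt * math.exp(uD / Vt)    # diffusion capacitance
        return c_depl + c_diff + Cov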


Fig. 8: Symbol of a MOSFET (left) and its linearized model including differential capacitances (right).

As another example, let us consider the MOS transistor. If we simplify it a lot, the charge storage in a MOSFET can be described with two differential capacitances (cgs and cgd) whose values depend on the operating point (UGS and UDS). We are not going to introduce the equations for computing these two capacitances because they are out of this lecture's scope. But nevertheless, let us note that in this simplified model the charge stored between gate and drain depends only on the gate-to-drain voltage (UGD=UGS-UDS) and the charge stored between gate and source depends only on the gate-to-source voltage (UGS). Thus we can write

$q_{GD} = q_{GD}(u_{GD})$

and

$q_{GS} = q_{GS}(u_{GS})$

A MOSFET and its linearized (small-signal) model including the differential capacitances are depicted in Fig. 8.

Nonlinear inductances can be handled in a similar manner. Two kinds of nonlinear inductors exist with respect to the modelling approach. For current-dependent inductors the magnetic flux (Φ) can be expressed as a univariate function of the current. For flux-dependent inductors the current can be expressed as a univariate function of the magnetic flux. Again, in most practical cases both approaches can be applied as the mapping between flux and current is a bijective one. Here we are going to introduce the former one, where the magnetic flux is a nonlinear function of the current.

The voltage across an inductor can then be expressed as

$u_L = \frac{d \Phi}{d t}$

Assuming the flux depends only on the current, but not on time itself, we can write

$u_L = \frac{d \Phi}{d i_L} \, \frac{d i_L}{d t} = l(i_L) \, \frac{d i_L}{d t}$

By introducing differential inductance l (which in turn depends on the operating point) we arrive at the small-signal model of a nonlinear inductor which is a linear inductor with inductance equal to the differential inductance l(IL) where IL is the operating point current flowing through the inductor. We leave the rest of the derivation to the interested user, as nonlinear inductors are not common in modern integrated circuits.

Within this framework for a nonlinear inductor we can handle a linear inductor by writing

$\Phi(i_L) = L \, i_L$

Equations of nonlinear circuits revisited

Modified nodal analysis of circuits that comprise nonlinear capacitors results in a nonlinear system of equations of the form

$\mathbf{g}(\mathbf{x}) + \frac{d}{dt} \, \mathbf{q}(\mathbf{x}) = \mathbf{y}$

Here nonlinear vector-valued function g and vector y represent the resistive part of the circuit and its excitations, while q is a vector valued function expressing the total charge stored by the capacitors connected to a particular node in the circuit. Note that every component of q is a nonlinear function of the circuit's unknowns. In this way the stored charge can depend on an arbitrary node potential which results in nonlinear transcapacitances where the stored charge does not depend solely on the voltage between the capacitor's pins but also on other voltages in the circuit.

Within the framework of this equation we can also handle nonlinear inductors for which the magnetic flux is expressed as a function of the inductor's current. The inductor's current must be one of the circuit's unknowns. This current appears as a term in the corresponding KCL equations. The constitutive relation of the nonlinear inductor appears as an additional nonlinear equation in vector valued function q.


Fig. 9: A nonlinear circuit - resonant amplifier with a MOS transistor.

Let us illustrate this by writing down the nonlinear equations of the circuit in Fig. 9, which contains nonlinear resistive and capacitive elements. The gate current of a MOS transistor has no resistive component and can be expressed as

$i_G = \frac{d}{dt} \, q_{GS}(u_{GS}) + \frac{d}{dt} \, q_{GD}(u_{GD})$

The drain current has a resistive component we introduced in our previous lecture and a capacitive component arising from the charge stored between gate and drain.

$i_D = i_D(u_{GS}, u_{DS}) - \frac{d}{dt} \, q_{GD}(u_{GD})$

Note that the negative sign of the capacitive term comes from the fact that the charge stored on the gate side of cgd is considered as positive while the charge stored on the drain side is considered as negative. The expression for the drain current is valid if the MOS is operating in the saturation region.

The circuit has 4 nodes which result in 4 KCL equations. The two independent voltage sources and the inductor add three more unknowns to the system of equations (iSRC, iDD and iL). With the knowledge we gathered up to now the KCL equations can be written down by inspection.

Three additional equations are obtained from the constitutive relations of the independent voltage sources and the inductor.

After gathering the resistive terms (terms that do not include derivatives with respect to time) and the reactive terms (the ones that include derivatives with respect to time) in two vectors we obtain the following system of equations.

The first term on the left-hand side corresponds to the nonlinear resistive part of the circuit (vector-valued function g) while the second term represents the nonlinear reactive part of the circuit (vector-valued function q). The right-hand side represents the circuit's excitation (vector y). Note that the last component of q represents the negative of the inductor's magnetic flux because we are modelling inductors by adding new unknowns (inductor currents) and expressing their constitutive relations in the same manner as we did with capacitors.

Small-signal frequency-domain analysis of nonlinear circuits

Suppose the circuit's excitations are composed of a DC component and a small sinusoidal component. Therefore we can write

$\mathbf{y}(t) = \mathbf{y}_{DC} + \mathbf{y}_{AC} \, e^{j \omega t}$

Complex vector yAC comprises complexors representing small sinusoidal excitations superimposed on the DC excitations specified by vector yDC. Because the sinusoidal excitations are small we can linearize the circuit and assume its response is of the form

$\mathbf{x}(t) = \mathbf{x}_{DC} + \mathbf{x}_{AC} \, e^{j \omega t}$

where the magnitudes of the components in complex vector xAC are small. We already know how to linearize the resistive part of the circuit.

$\mathbf{g}(\mathbf{x}_{DC} + \mathbf{x}_{AC} \, e^{j\omega t}) \approx \mathbf{g}(\mathbf{x}_{DC}) + \mathbf{G} \, \mathbf{x}_{AC} \, e^{j\omega t}$

Matrix G is the Jacobian of the vector valued function g.

Now let us linearize the reactive part.

$\mathbf{q}(\mathbf{x}_{DC} + \mathbf{x}_{AC} \, e^{j\omega t}) \approx \mathbf{q}(\mathbf{x}_{DC}) + \mathbf{C} \, \mathbf{x}_{AC} \, e^{j\omega t}$

Matrix C is the Jacobian of the vector valued function q.

It contains the differential capacitances of the reactive elements. Computing the derivative of q with respect to time eliminates the first term, yielding

$\frac{d}{dt} \, \mathbf{q} \approx j \omega \, \mathbf{C} \, \mathbf{x}_{AC} \, e^{j\omega t}$

By taking into account both linearizations the system of equations becomes

$\mathbf{g}(\mathbf{x}_{DC}) + \mathbf{G} \, \mathbf{x}_{AC} \, e^{j\omega t} + j \omega \, \mathbf{C} \, \mathbf{x}_{AC} \, e^{j\omega t} = \mathbf{y}_{DC} + \mathbf{y}_{AC} \, e^{j\omega t}$

Because g(xDC)=yDC must be satisfied (i.e. xDC is the circuit's operating point) we can simplify the equation to

$\mathbf{G} \, \mathbf{x}_{AC} \, e^{j\omega t} + j \omega \, \mathbf{C} \, \mathbf{x}_{AC} \, e^{j\omega t} = \mathbf{y}_{AC} \, e^{j\omega t}$

which finally yields

$\left( \mathbf{G} + j \omega \, \mathbf{C} \right) \mathbf{x}_{AC} = \mathbf{y}_{AC}$

To simplify notation we define the matrix of coefficients for obtaining the circuit's small signal response.

$\mathbf{Z}(\omega) = \mathbf{G} + j \omega \, \mathbf{C}$

For linear circuits the expression in parentheses is actually the matrix of coefficients of the circuit's linear system of equations obtained via modified nodal analysis.
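
The frequency sweep itself is then a loop over ω. A minimal dense sketch (G, C, and the RHS vector are assumed to be assembled elsewhere from element footprints):

    import numpy as np

    def ac_sweep(G, C, y_ac, omegas):
        """Solve (G + j*omega*C) x_ac = y_ac for every frequency in omegas.
        G and C are the resistive and reactive Jacobians at the operating point."""
        return [np.linalg.solve(G + 1j * w * C, y_ac) for w in omegas]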

To illustrate the small signal analysis of nonlinear circuits let us write down the system of equations for the circuit in Fig. 9. Assume the unknowns are ordered in the following manner: first the 4 node potentials, followed by iSRC, iDD, and iL. First, let us obtain the equations by computing the Jacobians of the resistive and the reactive part. The former is

Because g21 and g22 are differential conductances of the MOS transistor, which is a nonlinear element, they depend on the operating point. Let V1, V2, V3, V4, ISRC, IDD, and IL denote the operating point of the circuit. Then g21 and g22 can be computed as

$g_{21} = \frac{\partial i_D}{\partial u_{GS}} \bigg|_{\mathrm{OP}}, \qquad g_{22} = \frac{\partial i_D}{\partial u_{DS}} \bigg|_{\mathrm{OP}}$

The Jacobian of the reactive part of the circuit is

The differential capacitances of the MOS transistor can be computed as

We assume the excitation provided by the voltage source consists of a DC component U1 and a small sinusoidal signal represented by the complexor U1AC. In our example the signal source has no DC component (U1 = 0). The power supply, on the other hand, has no small sinusoidal component. All signals representing the response also consist of a DC component (i.e. the operating point of the circuit given by V1, V2, V3, V4, ISRC, IDD, and IL) and a small sinusoidal component represented by a complexor (V1AC, V2AC, V3AC, V4AC, ISRCAC, IDDAC, and ILAC).

The operating point is determined by solving the nonlinear system of equations where the term representing the reactive part of the circuit (q) is removed. This means that in operating point analysis capacitors are removed and inductors are replaced with short circuits. Only the DC part of excitations is considered in operating point analysis.

After the operating point is obtained the differential capacitances of nonlinear elements can be computed and matrix C built. Note that the entries in matrix C belonging to linear elements do not depend on the operating point. Now we can build the system of equations for small signal analysis. First let us assemble the matrix of coefficients (Z).

The system of equations is then

From the resulting system of equations we can see that the matrix and the right-hand side can be assembled with the element footprint approach. Linear elements are handled in the same manner as we described at the beginning of this lecture. The footprints of nonlinear elements can be derived from their linearized models (e.g. see Fig. 6 (right) for the linearized model of a semiconductor diode). The right-hand side contributions of the independent sources generating nonzero sinusoidal excitations are built in the same manner as the contributions of those sources in operating point analysis, except that their value is now equal to the complexor that represents the sinusoidal signal.

Small-signal frequency-domain analysis in SPICE OPUS

In SPICE the small-signal frequency-domain analysis is called AC analysis. At invocation the frequency range and step must be specified. The magnitudes and optional phases of sinusoidal excitations are specified in the circuit description by passing the ac parameter to independent sources. Note that the ac value is used only in small-signal frequency-domain analysis.

The simulator first computes the operating point of the circuit. In this computation all reactive elements are disabled (capacitors are removed and inductors replaced with short circuits). The operating point is used for computing the Jacobians of the resistive and the reactive part of the circuit. The approach via element footprints is used for quickly assembling these two matrices. The right-hand side of the system of equations for small-signal frequency-domain analysis is assembled based on the values of the ac parameter passed to the independent sources (again with the element footprint approach). The system of equations is then solved for all frequencies specified by the range and the step parameters at analysis invocation. The linearized models are computed only once because the equations differ between two frequency points only in the value of ω.

AC analysis is quite fast. It requires only one LU decomposition per frequency point. For a moderate number of frequency points a large part of the time spent by the analysis is used for computing the operating point.
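To make the procedure concrete, here is a minimal sketch of such an AC sweep in Python with NumPy. The matrices G and C (the Jacobians of the resistive and reactive part at the operating point) and the right-hand side yac are assumed to be assembled already, e.g. via element footprints; all names are illustrative and not SPICE OPUS internals.

    import numpy as np

    def ac_sweep(G, C, yac, freqs):
        # Solve (G + j*omega*C) x = yac at every frequency point.
        # G, C ... real Jacobians of the resistive and reactive part,
        #          evaluated at the operating point (assumed given)
        # yac  ... complex right-hand side built from the ac values
        #          of the independent sources
        solutions = []
        for f in freqs:
            omega = 2 * np.pi * f
            Z = G + 1j * omega * C                     # matrix of coefficients
            solutions.append(np.linalg.solve(Z, yac))  # one LU per point
        return np.array(solutions)

Note that only Z changes between frequency points; G, C, and yac are built once.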



imagegoeshere

Small-signal noise modelling and analysis

8th Lecture W2, December

We briefly introduce the relevant aspects of noise modelling and analysis in linear circuits. Various types of noise appearing in electronic circuits are presented and characterized. Noise models of selected circuit elements are presented. We introduce small-signal noise analysis as a special case of AC analysis where signals are represented with power spectrum densities. We conclude with the computation of output and equivalent input noise contributions.


Noise as a signal

One way to classify signals is according to their energy and power. Let x(t) denote a signal. Signals with finite energy satisfy

As time approaches positive or negative infinity such signals must approach zero. Because of this, periodic signals do not belong to this class of signals. Instead they often satisfy a less strict requirement, i.e. they have finite power.

where T denotes the period of the signal. Noise signals are generated by random processes. We are going to focus on noise signals with finite power. First of all, noise signals are random signals. The same noise source can generate infinitely many realizations of the same noise signal. Therefore observing the dependence of the signal on time does not deliver much useful information. Let x(t) denote a realization of a noise signal. All realizations of a particular noise signal have some common properties. Mathematically these properties can be formulated with the correlation function. The correlation function of two signals x(t) and y(t) is defined as

where E[.] denotes expectation, i.e. the mean value computed across all possible realizations of the two signals. If x(t) and y(t) are generated by two stationary random processes the correlation function depends only on τ. If the two processes are also ergodic (i.e. their statistical properties can be obtained from a sufficiently long realization of the noise signal) then averaging over all realizations can be replaced with averaging over time.

Most noise signals we meet in practice are ergodic. If we choose y(t)=x(t) the correlation function is also referred to as the autocorrelation function.

The Fourier transform of the autocorrelation function is also referred to as the power spectrum density.

where f denotes the frequency. The integral of the power spectrum density is equal to the signal's power (Parseval's theorem).

The autocorrelation function is symmetric (cxx(-τ)=cxx(τ)). Therefore the power spectrum density is an even function (i.e. Sxx(-f)=Sxx(f)), so it is sufficient to know its values for nonnegative frequencies. With this in mind we introduce the one-sided power spectrum density.

Parseval's theorem for one-sided power spectrum density can be written as

We introduced this one-sided power spectrum density because it is used for noise characterization in circuit simulators. The power spectrum density also has a physical meaning. Suppose we pass a noise signal through an ideal bandpass filter with pass-band between f1 and f2, and measure the RMS value of the output signal (RMS). The filter eliminates all frequencies outside the pass band. The following relation between the measured RMS value and the power spectrum density holds
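In other words, the squared RMS value equals the integral of Sxx+(f) over the pass-band from f1 to f2. The following sketch checks the full-band version of this relation (Parseval's theorem) numerically; the sampling rate and segment length are arbitrary choices.

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fs = 1e4                              # sampling frequency (arbitrary)
    x = rng.normal(0.0, 1.0, 1 << 16)     # white noise with power 1
    f, S = welch(x, fs=fs, nperseg=1024)  # one-sided PSD estimate
    print(np.sum(S) * (f[1] - f[0]))      # integral of S+, approx. 1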

Common types of noise and their spectra
Thermal noise (Johnson-Nyquist noise)

This type of noise arises due to the chaotic movement of electrons in a conductor. Every resistor generates thermal noise. It can be modelled with a current source i(t) in parallel with the resistor where the noise originates (Fig. 1, right). Note that the polarity of the source is not important because the power spectrum density does not change if we reverse the current.


Fig. 1: Ideal resistor (left) and a resistor that generates thermal noise (right).

The current source produces thermal noise with the following one-sided power spectrum density

where h is the Planck constant, k is the Boltzmann constant, T is the absolute temperature, and R is the resistance of the resistor. Because at room temperature the exponent in the exponential term is small for frequencies below 6 THz, we can simplify the formula to

We can see the power spectrum density does not depend on frequency across a very wide range. Therefore we refer to this noise as "white".

Shot noise

This type of noise occurs because the electric current consists of a flow of discrete charges (electrons). Its power spectrum density does not depend on temperature or frequency. In a diode this type of noise is modelled by a current source in parallel with the p-n junction (Fig. 2, right, represented by current source is(t)).


Fig. 2: Ideal diode (left) and a diode with current sources representing flicker noise if and shot noise is (right).

The power spectrum density of the current source representing shot noise is

where q is the electron charge and I is the current flowing across the p-n junction. Shot noise is also "white".

Flicker noise

The power spectrum density of flicker noise is inversely proportional to the frequency. This type of noise is also referred to as 1/f noise or pink noise. In a semiconductor diode flicker noise is modelled by a current source in parallel with the p-n junction (Fig. 2, right, represented by current source if(t)). The power spectrum density of the current source is

where Kf and Af are two constants that characterize the flicker noise, and I is the current flowing through the p-n junction. Noise signals with power spectrum densities proportional to 1/f are also deemed pink noise.
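The spectra introduced in this section are straightforward to evaluate numerically. A sketch, assuming the familiar simplified forms 4kT/R (thermal noise current source in parallel with the resistor), 2qI (shot noise), and Kf·I^Af/f (flicker noise); the values of Kf and Af below are purely illustrative.

    from scipy.constants import k, e      # Boltzmann constant, electron charge

    def thermal_psd(R, T=300.0):
        # One-sided PSD (A²/Hz) of the thermal noise current source,
        # simplified white form 4kT/R.
        return 4 * k * T / R

    def shot_psd(I):
        # One-sided PSD (A²/Hz) of shot noise, 2qI.
        return 2 * e * abs(I)

    def flicker_psd(I, f, Kf=1e-14, Af=1.0):
        # One-sided PSD (A²/Hz) of flicker noise, Kf * I**Af / f.
        return Kf * abs(I)**Af / f

    print(thermal_psd(1e3))               # 1 kOhm at 300 K: approx. 1.66e-23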


Fig. 3: A current source for modelling channel thermal noise and flicker noise in a MOS transistor.

We can model noise by introducing noise sources in arbitrary semiconductor devices. Take, for instance, a MOS transistor (Fig. 3). The channel thermal noise and flicker noise can be modelled with a single current source connected between the drain and the source terminal. We are not going to introduce the expression for the power spectrum density of this source because it exceeds the scope of this lecture. Let us only note that the power spectrum density depends on the currents and voltages at the transistor's terminals.

Modelling the noise generated by circuit elements

All resistances and semiconductor devices are sources of noise. The noise generated by circuit elements is modelled by current sources. For shot and flicker noise the power spectrum density of the noise current source depends on the current flowing through the device. A single semiconductor element typically has several such noise sources. A diode, for instance, has three: shot noise and flicker noise generated by the current flowing across the p-n junction, and the thermal noise of the contact resistance. A bipolar transistor has five noise sources: shot noise and flicker noise due to the device current, and the thermal noise of the emitter, base, and collector contact resistances.

Small-signal transfer function

Before we proceed to computing the output noise of a circuit we must introduce the notion of a small-signal transfer function. Suppose there is an independent source in the circuit that generates a small-signal sinusoidal excitation characterized by complexor X. Let us assume this complexor does not depend on the frequency. The excitation results in small sinusoidal responses observed all over the circuit. Let Y denote the complexor representing one such response observed at a selected point in the circuit. The small-signal transfer function from the independent source to the selected point where the response is observed is defined as

Note that the transfer function depends on the frequency. It can be computed from the results of the small-signal frequency-domain analysis (i.e. AC analysis in SPICE) by simply dividing the observed response by the complexor representing the excitation. The unit of the transfer function depends on the type of the observed response (voltage or current) and the type of the excitation (independent voltage or current source). If the circuit is excited by a voltage source and the observed response is a current then the unit of the transfer function is A/V. If, on the other hand, the response is also a voltage, the transfer function is a dimensionless quantity (since it is a ratio of two voltages).

Noise in linear systems

Nonlinear circuits behave as linear if we consider only the small signal sinusoidal excitations and the corresponding response. The steady-state response of a linear system excited by an independent sinusoidal source is sinusoidal and can be represented by a complexor (Y). This complexor can be computed from the complexor (X) representing the excitation and the small-signal transfer function as

where f denotes the frequency of the excitation. If the input signal is a small noise signal with power spectrum density Sxx+(f) then the output of the linear system is also a noise signal with power spectrum density Syy+(f). The output power spectrum density can be computed as

Now suppose a linear system is excited by two independent sources X1 and X2. If we observe the response of the system to X1 while X2 is disabled (i.e. set to zero) the response can be expressed as

where H1 is the transfer function from the first independent source to the output of the system. Similarly, if X1 is disabled and the system is excited only by X2 the response is

where H2 is the transfer function from the second independent source to the output of the system. Now suppose the system is excited simultaneously by both independent sources. The response of the system can then be expressed as

This property is also referred to as superposition. With this tool in our hands we could compute the time-domain response of a circuit excited by individual noise sources and then obtain the response of the circuit excited by all noise sources simultaneously by adding up these partial responses. If, however, we are interested in the power spectrum density of the response we need one more piece of the puzzle.

So how do we treat noise signals that are obtained by summing two noise signals x1(t) and x2(t)? Things are quite simple if the two signals are uncorrelated, i.e. the correlation function satisfies

In real-world circuits the noise sources representing various types of noise generated by a circuit element are uncorrelated. Similarly, noise sources modelling the noise generated by two circuit elements are also uncorrelated. Let S11+(f) and S22+(f) denote the power spectrum densities of x1(t) and x2(t), respectively. Then the power spectrum density of y(t)=x1(t)+x2(t) is

Uncorrelated noise signals remain uncorrelated even after they are transformed by a linear system. Putting together all we have learned so far: if a linear system is excited by two uncorrelated noise sources with power spectrum densities S11+(f) and S22+(f), the power spectrum density of the output noise can be expressed as

where H1 and H2 are transfer functions from the two noise sources to the circuit's output. This formula can be generalized to an arbitrary number of noise sources and is the basis for small-signal noise analysis in circuit simulators.
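A minimal sketch of this superposition rule (the transfer functions H and source PSDs S are assumed to have been computed elsewhere; names are illustrative):

    import numpy as np

    def output_noise_psd(H, S):
        # Total one-sided output PSD of uncorrelated noise sources:
        # sum over i of |H_i|² * S_i, where H_i is the transfer function
        # from source i to the output and S_i its one-sided PSD.
        H = np.asarray(H)
        S = np.asarray(S)
        return float(np.sum(np.abs(H)**2 * S))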

Small-signal noise analysis

We illustrate small-signal noise analysis with an example. Take, for instance, the nonlinear circuit from the previous chapter (Fig. 4). This time we have added the noise sources modelling the noise generated by the two resistors and the MOS transistor.


Fig. 4: A nonlinear circuit with noise sources (in1, in2, and in3).

The first step of small-signal noise analysis is the computation of the circuit's operating point. In this computation all noise sources are disabled (set to 0). The obtained operating point is used for computing the power spectrum density of the noise generated by the MOS transistor, represented by in3. Two noise sources are independent of the operating point (the thermal noise of the two resistors, represented by in1 and in2). The operating point is also used for computing the linearized models of nonlinear elements (i.e. the MOS transistor). After the linearization is complete the frequency-domain system of equations for the linearized circuit is assembled. The coefficient matrix (Z) for the circuit in Fig. 4 was constructed in the previous lecture.

The output noise comprises contributions from all three noise sources. To compute these contributions we first need to compute the transfer functions from these noise sources to the output signal. For that purpose the small-signal analysis described in the previous chapter is used. The procedure we are going to describe computes the power spectrum density of output noise at a single frequency.

To compute the transfer function from in1 to v3 we must construct the corresponding system of equations for a circuit where the only sinusoidal excitation is coming from in1. The coefficient matrix of this system does not depend on the noise source because independent sources contribute only to the right-hand side of the system. The only thing we need to construct is the right-hand side, which is easy, as we already know the matrix footprint of an independent current source. If we set the complexor representing the AC current generated by in1 to 1, the complexor of the response observed at v3 will be identical to the transfer function we want to compute. This may seem strange because in the previous chapter we discussed how the excitations for small-signal frequency-domain analysis must be small signals. On the other hand, small-signal analysis computes the solution of a linear circuit. For linear circuits arbitrary magnitudes can be used for excitations; by increasing the excitation of a linear circuit the output signal increases by the same factor. Of course, in real-world circuits such large excitations don't result in small sinusoidal perturbations of the operating point. But then again, real-world circuits are not linear and cannot handle arbitrary magnitudes of excitation. With this in mind we obtain the following system of equations

By solving for V3AC we obtain the desired transfer function denoted by H1. Similarly for the transfer function from in2 to v3 the system of equations is

By solving for V3AC we obtain the transfer function denoted by H2. Finally, to obtain the transfer function from in3 to v3 we must solve

By solving for V3AC we obtain the transfer function denoted by H3. Because matrix Z depends on the frequency (ω), all required transfer functions at a single frequency can be computed with only one LU decomposition, which is shared by all transfer functions at that ω. Each transfer function then requires only one forward and one backward substitution.
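This is easy to express with SciPy's LU routines. A sketch, assuming the complex matrix Z at one frequency and the unit-excitation right-hand sides (the footprints of the noise current sources) are already assembled:

    from scipy.linalg import lu_factor, lu_solve

    def transfer_functions(Z, rhs_list, out_idx):
        # One LU decomposition of Z serves all transfer functions at this
        # frequency; each right-hand side then costs only one forward and
        # one backward substitution (lu_solve).
        lu_piv = lu_factor(Z)
        return [lu_solve(lu_piv, b)[out_idx] for b in rhs_list]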

Now suppose S11+, S22+, and S33+ denote the power spectrum densities of in1, in2, and in3 at the chosen frequency for which the transfer functions H1, H2, and H3 were computed. The output power spectrum density (Sout+) observed at the circuit's output (node potential v3) is then

The units of the power spectrum density are V²/Hz if the observed output signal is a voltage. If the output signal is a current the power spectrum density is in A²/Hz.

Equivalent input noise

Sometimes we are interested in how much noise (i.e. what power spectrum density) must be injected at the circuit's input to reproduce the noise power spectrum density observed at the circuit's output, assuming all noise sources in the circuit are disabled. This equivalent input noise power spectrum density represents the aggregation of all the circuit's noise sources at its input.

To compute it, we need to define the circuit's input (i.e. the independent source that produces the input signal). Suppose for the circuit in Fig. 4 this is uSRC(t). First, we need to compute the transfer function from this source to the circuit's output (let us assume this is again v3) defined as V3AC/USRCAC. We can obtain this transfer function by setting USRCAC to 1 while disabling all other independent sources. This produces the following system of equations (which depends on frequency ω).

After solving this system the resulting V3AC corresponds to the transfer function, which we denote by H. Because a linear system transforms a power spectrum density by multiplying it with the squared magnitude of the transfer function, the equivalent input noise power spectrum density is obtained as

The computation of H and Sout+ needs to be repeated for every frequency at which we would like to compute the equivalent input noise power spectrum density. If the input signal source is a voltage source the equivalent input noise power spectrum density is in V²/Hz. If the input source is a current source the result is in A²/Hz.
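The last step is a one-liner; a sketch (Sout+ and H as computed above):

    def equivalent_input_noise(S_out, H):
        # Equivalent input noise PSD: output noise PSD divided by the
        # squared magnitude of the input-to-output transfer function.
        return S_out / abs(H)**2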

Small-signal noise analysis in SPICE OPUS

When performing noise analysis one has to specify the output signal (which can be any node potential or current appearing in the system of equations as an unknown), the input voltage source or current source (for computing the equivalent input noise), and the frequency range (in the same manner as for AC analysis). The simulator produces two groups of results. The first one comprises output and equivalent input noise power spectrum densities along with output noise power spectrum density contributions corresponding to individual noise sources. The second group of results comprises integrals of noise spectra over the whole simulated frequency range.



imagegoeshere

Time-domain analysis

9th Lecture W3, December

To simulate a circuit in the time domain we first divide the time scale into discrete equidistant points. The circuit is then solved at sequential timepoints from the first one to the last one. Reactive elements are handled by expressing the time derivatives in their constitutive relations with approximations based on circuit solutions at past timepoints and the solution we are currently computing (implicit integration). Several integration algorithms are presented. The local truncation error (LTE) is introduced and we show how it can be kept low by selecting an appropriate timestep. We replace the fixed timestep with a variable one to obtain the time-domain analysis approach used in modern circuit simulators. A variable timestep complicates the integration algorithm because its coefficients must be recomputed for every new timestep. We show how this is achieved with selected numerical integration algorithms. Finally, we introduce the predictor-corrector approach to solving the circuit in the time domain.


Approximating the time scale with discrete steps

Suppose we want to simulate the behavior of a circuit from t1 to t2. Clearly, we cannot solve the circuit for all timepoints. Therefore we divide the time scale into discrete steps. Let tk denote the timepoint where the latest solution of the circuit's equations was computed. The solution we are trying to compute corresponds to timepoint tk+1. Let hk=tk+1-tk denote the k-th timestep between tk and tk+1. In circuit simulation the timestep is usually not constant because the dynamics of the circuit's behavior can vary greatly over time. When the circuit's response changes quickly the timestep is kept appropriately small. When the circuit "sleeps" and nothing interesting is happening with the voltages and currents the timestep can be much greater.

Numerical integration of differential equations

As we learned in previous chapters the system of equations describing the circuit's behavior in time domain can be formulated as

where vector-valued function g represents the resistive part of the circuit, vector-valued function q represents the reactive part of the circuit, vector y represents the independent sources (excitations), and vector x represents the unknowns (node potentials and branch currents introduced by the modified nodal approach to circuit equations). This is a nonlinear ordinary differential equation (ODE). Until now we managed to avoid the derivative term. In operating point analysis and DC small-signal analysis it simply vanished. In AC small-signal analysis we transformed the equation to frequency domain which changed the derivative term to simple multiplication with jω. But now we must face the music and deal with the derivative term.

The usual approach is to approximate the derivative of a quantity with a weighted sum of the quantity and its derivatives at past timepoints tk, tk-1, ... (where they are already known). This procedure is referred to as numerical integration of ODEs. In fact, we go one step further: in this weighted sum we also allow the value of the quantity at the timepoint for which we are trying to solve the circuit (tk+1). If we do the latter the numerical integration is referred to as implicit numerical integration, as opposed to explicit numerical integration, where we avoid using quantities that have not been computed yet.

For the sake of simplicity we introduce the following notation. Let q̇ denote the derivative of q with respect to time. Subscripts denote timepoint indices, i.e. qk corresponds to timepoint tk. A very simple implicit integration algorithm is the backward Euler algorithm where the derivative is expressed with the last computed value and the one that is about to be computed.

Another simple algorithm is the trapezoidal algorithm, which originates from the assumption that the finite-difference approximation of the derivative equals the average of the derivatives at the endpoints of the interval.

From here we can express the trapezoidal formula

Nonlinear reactive elements and their matrix footprints
If, for instance, the backward Euler formula is applied to the circuit equations we obtain the following algebraic equation for the circuit's response at tk+1.

In this equation q(xk) and yk+1 are known (the former can be computed from the solution at the previous timepoint and the latter is the value of the circuit's excitations at the timepoint where we are solving the circuit).

This is a nonlinear equation which can be solved by applying the Newton-Raphson algorithm. From this equation we can quickly deduce the matrix footprint of a linear capacitor. We start with its constitutive relation where the charge is expressed with the voltage as q = C v. In circuit equations the capacitor contributes a term of the form q̇. When we are solving at timepoint tk+1 we must express q̇k+1 with past values of the circuit's response and qk+1 (after all, we are using implicit integration). Suppose we use the backward Euler formula, which yields


Fig. 1: Linear capacitor (left) and its model used for computing the transient response at one timestep (right).

Here vk+1 is the unknown and vk is the circuit's solution at the last computed timepoint. Because q̇ is the capacitor current we can interpret the obtained equation as a parallel connection of a resistor with resistance R=hk/C and an independent current source I=-C vk/hk (Fig. 1). From this we can conclude that the matrix footprint of a capacitor connected between nodes a and b for solving the circuit at timepoint tk+1 is

Similarly, the contribution to the right-hand side (originating from the current source) is
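As a sanity check of the companion model, here is a sketch computing the step response of an RC lowpass (series resistor R, capacitor C to ground), where the capacitor is replaced at every timepoint by the conductance C/hk in parallel with the companion current source; the component values are illustrative:

    import numpy as np

    R, C, h, steps, u = 1e3, 1e-6, 1e-4, 50, 1.0
    v = 0.0                          # capacitor voltage at t = 0
    for k in range(steps):
        G = 1 / R + C / h            # 1x1 nodal coefficient "matrix"
        rhs = u / R + C * v / h      # source and companion current terms
        v = rhs / G                  # solve the single nodal equation
    print(v, u * (1 - np.exp(-steps * h / (R * C))))  # vs. exact response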

Now suppose we use the trapezoidal integration formula. The capacitor current (q̇) is now expressed as

One might ask at this point: where do we get q̇k from? Fortunately the derivative is computed using the integration formula at every past timepoint for which the circuit's equations were solved. We just need to store it for later use. This leaves us in a bit of a dilemma - what to do when we are solving the circuit's equations at the first timepoint? Well, in that case we resort to integration formulae which do not require the knowledge of the derivative at past timepoints (e.g. the backward Euler formula). We can switch to a more advanced integration formula (like trapezoidal integration) at the second timepoint.

Generalized approach to numerical integration formulae

There exist many integration formulae. In numerical mathematics their purpose is to express the value of a quantity q at timepoint tk+1 with the values of the quantity and its first derivative q̇ at past timepoints and tk+1 (the latter is used only in implicit formulae). The following ansatz can be used for expressing an arbitrary integration formula.

Suppose q is a function of time. For most functions an integration formula is only approximate in the sense that qk+1 is only approximately equal to the right-hand side of the ansatz. An exception to this are polynomials of order not exceeding n. An integration formula has order n if it is exact for polynomials of order up to and including n. By assuming q(t) is a polynomial of the form αt^j we can derive the coefficients ai and bi.

Suppose we want to derive an implicit integration formula of order n=1 (the backward Euler formula). Because a polynomial of first order has only two coefficients the integration formula can have only two nonzero coefficients. Because we chose to construct an implicit formula, b-1 will be nonzero. As the second nonzero coefficient we choose a0. The ansatz for the integration formula is therefore

For a polynomial q(t)=α we have

Substituting this in our ansatz yields the first equation for computing the coefficients b-1 and a0.

For a polynomial q(t)=αt we can write

By substituting this in the ansatz we get the second equation for computing coefficients b-1 and a0.

Equations can be simplified if we assume tk=0 which implies tk+1=hk. The system of equations is now

This system is already in solved form. The obtained numerical integration formula is therefore

By expressing q̇k+1 we get the backward Euler formula we introduced earlier.

The trapezoidal formula is a second-order implicit formula (n=2). We obtain it if we assume the nonzero coefficients are a0, b0, and b-1. The ansatz for the formula is

By substituting q(t)=α and q(t)=αt in the ansatz we get the first two equations for computing the coefficients.

The third equation is obtained by substituting q(t)=αt² in the ansatz. For this purpose we first compute

By substituting this in the ansatz we obtain the third equation.

Again, we assume tk=0 and tk+1=hk which greatly simplifies the equations and yields the following linear system.

Solving the system yields a0=1, b0=1/2, and b-1=1/2. After substituting these values in the ansatz we get

from where the trapezoidal formula follows after solving for q̇k+1.
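Both derivations follow the same recipe: pick the nonzero coefficients, require exactness for the polynomials 1, t, t², ..., and solve a small linear system. A sketch that automates this (our own helper, not a simulator routine; index -1 in b_idx stands for tk+1, i.e. an implicit formula):

    import numpy as np

    def integration_coeffs(a_idx, b_idx, t_next, t_past):
        # Coefficients of the ansatz q[k+1] = sum a_i q[k-i] + sum b_i qdot[k-i],
        # made exact for q(t) = t**j, j = 0 .. (number of coefficients - 1).
        # t_past[i] is the timepoint t[k-i]; t_next is t[k+1].
        t_of = lambda i: t_next if i == -1 else t_past[i]
        m = len(a_idx) + len(b_idx)
        A = np.zeros((m, m))
        rhs = np.zeros(m)
        for j in range(m):
            A[j, :len(a_idx)] = [t_of(i)**j for i in a_idx]
            A[j, len(a_idx):] = [j * t_of(i)**(j - 1) if j > 0 else 0.0
                                 for i in b_idx]
            rhs[j] = t_next**j
        c = np.linalg.solve(A, rhs)
        return c[:len(a_idx)], c[len(a_idx):]

    # trapezoidal formula: nonzero a0, b0, b-1; tk = 0, tk+1 = hk = 1
    a, b = integration_coeffs([0], [0, -1], t_next=1.0, t_past=[0.0])
    print(a, b)  # a0 = 1, b0 = 0.5, b-1 = 0.5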

Adams-Moulton integration formulae

An Adams-Moulton integration formula of order n is obtained if we choose b-1, ..., bn-2, and a0 to be the nonzero coefficients in the general integration formula ansatz. The backward Euler formula is in fact the Adams-Moulton formula of order n=1. The trapezoidal formula is the Adams-Moulton formula of order n=2.

Gear integration formulae

These formulae are also referred to as backward differentiation formulae (BDF). The BDF formula of order n is obtained if a0, ..., an-1 and b-1 are chosen as the nonzero coefficients in the general integration formula ansatz. The BDF formula of order n=1 is in fact the backward Euler formula.

Single-step vs. multistep

If the integration formula involves only values at tk and tk+1 then it is a single-step formula. Such a formula depends only on the last step length (hk). The backward Euler formula and the trapezoidal formula are single-step integration formulae. Because their coefficients depend only on hk, they can be computed in advance.

Multistep formulae involve values at more than the previously mentioned two timepoints. If the step is constant (i.e. hk=hk-1=hk-2=...) their coefficients can be computed in advance. Unfortunately in circuit simulators the step varies with the circuit's dynamics. Therefore the coefficients of multistep methods must be recomputed at every timestep. For a formula of order n this involves solving a linear system of n+1 equations (i.e. like we did for the backward Euler and the trapezoidal formula).
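With a helper like integration_coeffs above, this recomputation amounts to one small solve per timestep. For example, a second-order Gear (BDF) formula with unequal steps hk and hk-1 (illustrative values):

    # nonzero coefficients a0, a1, b-1; tk = 0, tk+1 = hk, tk-1 = -hk1
    hk, hk1 = 0.1, 0.2
    a, b = integration_coeffs([0, 1], [-1], t_next=hk, t_past=[0.0, -hk1])
    # for hk = hk1 this reproduces the classic constant-step BDF2
    # coefficients a0 = 4/3, a1 = -1/3, b-1 = 2*hk/3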

How simulators apply numerical integration

A quantity (i.e. capacitor charge, inductor flux) must be numerically integrated to get rid of its derivative with respect to time and replace it with a weighted sum of past values of the quantity and its derivative. As we already know, the value of the quantity that is about to be computed (at tk+1) also takes part in this weighted sum if the numerical integration formula is an implicit one. For this purpose the simulator stores past values of the quantity and its derivative with respect to time. The length of this storage depends on the type and maximal order of the integration algorithm used by the simulator.

If a multistep algorithm is used the simulator recomputes the integration formula coefficients before the Newton-Raphson algorithm starts solving the circuit for one timepoint. Because the last timestep (hk) is part of the integration formula ansatz, the coefficients must also be recomputed if the simulator decides to abandon a solution at a particular timepoint, reduce the timestep, and repeat the Newton-Raphson algorithm with this shorter timestep (this happens when the obtained solution is not accurate enough; we will discuss this later when we introduce the local truncation error).

What about explicit integration?

There also exist various explicit integration algorithms. For instance, if we select the nonzero coefficients to be b0, ..., bn-1 and a0 we obtain the Adams-Bashforth integration formula of order n. The forward Euler formula is in fact the Adams-Bashforth formula of order n=1.

Explicit integration methods tend to produce unstable results (the circuit response explodes) if the timestep (hk) is greater than the time constant of the circuit's response. Consequently the timestep must be kept small, even when the circuit's response does not change much. This is particularly problematic in "stiff" circuits with multiple time constants, of which one is small and one is large. Such circuits can be analyzed with explicit integration only when the timestep is smaller than the shortest time constant of the circuit. Because every step contributes some numerical error (we will later name it the local truncation error), many steps must be computed, and the accumulated error can eventually become greater than the response itself.

As a side note let us mention that if we choose a0, ..., an as the nonzero coefficients we obtain an n-th order polynomial extrapolation formula for computing the value of q at tk+1. The obtained formula is equivalent to constructing the n-th order interpolation polynomial that matches the quantity at timepoints tk-n, ..., tk and evaluating it at tk+1.

Local truncation error

In the process of deriving an n-th order integration formula we assumed that the response q(t) is a polynomial of order not exceeding n. Now suppose the response is a polynomial of order exceeding n. As any sufficiently smooth function can be expressed in the form of a Taylor series (i.e. as a polynomial of infinite order), we can expect that the result produced by the integration algorithm will differ from the actual response q(t). Even if the values of q and q̇ are known exactly at past timepoints tk, tk-1, ..., the value computed by the integration algorithm at tk+1 will differ from q(tk+1). This difference is referred to as the local truncation error (LTE). LTE is expressed in terms of q (i.e. for capacitors this is the stored charge and for inductors the stored magnetic flux).

Suppose the exact values of q are given by a Taylor series in the neighbourhood of tk as

where q(j) denotes the j-th derivative of q with respect to time. The first derivative of the exact response is

The integration algorithm computes an approximation of q(tk+1) denoted by qk+1. We substitute q(tk) and q̇(tk) for qk and q̇k in the integration algorithm ansatz to obtain this approximate value of q(tk+1) computed by the integration algorithm.

We separate the term corresponding to j=0

and express the sum over j as a series comprising terms with powers of hk.

On the other hand, the exact value of the response at tk+1 can be expressed with the Taylor series

The LTE is defined as the difference between qk+1 and q(tk+1). Due to the way we expressed these two terms, the LTE can be expressed as a series comprising terms with powers of hk.

By comparing the last three expressions we get

When we were deriving the equations for computing the coefficients ai and bi we required qk+1=q(tk+1) for every q(t) that is a polynomial of order not exceeding n. This is possible only if the coefficients Ci are zero for all i=1,2,...,n. In fact, the n equations given by the last two expressions are exactly those that are used for computing the coefficients ai and bi. For an integration formula of order n the first nonzero coefficient in the series expressing the LTE is therefore Cn+1, which is also referred to as the error coefficient. The series expressing the LTE has infinitely many terms. For the purpose of estimating the LTE we keep only the term with the lowest power of hk.

Let us compute the error coefficient for the backward Euler formula which has n=1 (this implies j=n+1=2). Because the only nonzero coefficients in the formula are a0 and b-1 we have p=0 and r=-1. The error coefficient is therefore

The error coefficient for the trapezoidal formula (n=2, p=0, r=0) is obtained as

Whenever the coefficients of the integration formula are recomputed the error coefficient must also be recomputed. One piece is still missing before we can actually compute the LTE - the (n+1)-th derivative of q at tk. We can estimate it by applying polynomial interpolation of order n+1 to the latest n+2 values of q. Because the LTE is estimated after the Newton-Raphson algorithm solves the circuit at tk+1, the values of q used in the interpolation are qk-n, ..., qk+1. The (n+1)-th derivative of the obtained interpolation polynomial is constant and does not depend on time. A convenient way of computing it is to use divided differences. Suppose we have n+1 points (x0, y(x0)), ..., (xn, y(xn)). The Newton interpolation polynomial of order n that interpolates these points can be expressed as

where basis polynomials are defined as

Note that η0(x)=1. The coefficients αi are defined recursively with divided differences as

The n-th derivative of Nn(x) can be expressed as

Assuming values qk-n, ..., qk, qk+1 correspond to timepoints tk-n, ..., tk, tk+1 the LTE can be expressed as
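A sketch of this estimate via divided differences (the timepoints t and values q are assumed given; the highest-order divided difference times n! gives the n-th derivative of the interpolating polynomial):

    import numpy as np
    from math import factorial

    def top_divided_difference(t, q):
        # Highest-order divided difference of the points (t[i], q[i]),
        # computed with the usual recursive table.
        t = np.asarray(t, dtype=float)
        d = np.asarray(q, dtype=float)
        for level in range(1, len(t)):
            d = (d[1:] - d[:-1]) / (t[level:] - t[:-level])
        return d[0]

    def derivative_estimate(t, q):
        # (len(t)-1)-th derivative of the Newton interpolation polynomial
        # through the given points; it is constant in time.
        n = len(t) - 1
        return factorial(n) * top_divided_difference(t, q)

    print(derivative_estimate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0]))  # q = t², gives 2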

Timestep control

The initial approximate solution used by the Newton-Raphson algorithm for solving the circuit at tk+1 is the circuit's solution at tk. If certain conditions are satisfied the solution at timepoint tk+1 is accepted and the simulator proceeds with computing the response at the next timepoint tk+2. But this is not always the case. The computed timepoint is rejected if

  • the Newton-Raphson algorithm fails to converge
  • the Newton-Raphson algorithm converges slowly which indicates that the initial approximate solution is far from the obtained solution

If timepoint tk+1 is rejected the simulator reduces the timestep hk and repeats the Newton-Raphson algorithm at timepoint tk+1=tk+hk.

Sometimes the circuit's excitations contain discontinuities in their value or derivatives. Timepoints tb where such discontinuities occur are referred to as breakpoints. If a breakpoint lies on the time interval tk < t < tk+1 the timestep is shortened so that tk+1=tb. In case of a breakpoint the simulator behaves in the same manner as when a timepoint is rejected.

When a timepoint is accepted the new timestep is computed from the LTE estimate in such a manner that the LTE is kept bounded. This also means that if the LTE is small the timestep is increased.
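A common rule of thumb (not necessarily the exact rule used in SPICE OPUS) exploits the fact that the LTE of an order-n formula grows with hk to the power n+1, so the step is scaled to make the predicted LTE hit the allowed bound:

    def new_timestep(h, lte, lte_max, n):
        # LTE ~ h**(n+1) for a formula of order n, so scaling h by
        # (lte_max/|lte|)**(1/(n+1)) drives the LTE toward the bound.
        return h * (lte_max / abs(lte)) ** (1.0 / (n + 1))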

Choosing the order of the integration formula

For the first timepoint an integration formula of order n=1 is used (the backward Euler formula), which does not require the values of the derivative of q at past timepoints. Whenever a timepoint is rejected, and at every breakpoint, the order of the integration formula used in the next run of the Newton-Raphson algorithm is reduced to n=1.

Whenever a timepoint is accepted the order of the integration formula used for the next timepoint can be increased by 1. The order of the integration formula is limited. In SPICE OPUS when an Adams-Moulton formula is used the maximal allowed order is n=2. For Gear formulae the maximal allowed order is 6.

Predictor-corrector approach to numerical integration

The initial approximate solution used by the Newton-Raphson algorithm for solving the circuit at tk+1 is the circuit's solution at tk. This is assumed to be a good starting point for which we expect the Newton-Raphson algorithm to require only a few iterations to reach convergence. If, however, we have a means for obtaining a better initial approximate solution we can expect even faster convergence.

The method for computing the initial approximate solution is also referred to as the predictor. The method used for performing numerical integration is also referred to as the corrector.

In SPICE OPUS, when an Adams-Moulton formula of order n is used as the corrector, the Adams-Bashforth explicit integration formula of order n is used as the predictor; but instead of applying it to the components of q we apply it to the components of the vector of unknowns (x). Recall that the nonzero coefficients are a0 and b0, ..., bn-1. The predicted solution is then expressed as

Because usually the derivative of the circuit's response with respect to time is not available we have to compute it numerically with finite differences (i.e. divided differences of first order).

When SPICE OPUS uses a Gear formula of order n as the corrector simple polynomial extrapolation of order n is used as the predictor. Note that in our general approach to integration formulae we have to choose a0, ..., an to obtain the coefficients of polynomial extrapolation of order n. The predicted value is then computed as

When predictor-corrector numerical integration is used the LTE can be computed in a fairly simple way. The LTE of the predictor can be expressed as

where x(tk+1) denotes the exact response of the circuit. The LTE of the corrector, on the other hand, is given by

The difference between the value obtained from the predictor and the value obtained from the corrector (i.e. the result of the Newton-Raphson algorithm) is

By comparing the last two expressions we can see that they differ only by a constant factor. Therefore the LTE can be expressed as

The obtained expression for computing the LTE does not require us to use divided differences and is therefore much simpler to compute.
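Keeping only the leading term of both LTE series, a sketch of this estimate (C_pred and C_corr denote the error coefficients of the predictor and the corrector):

    import numpy as np

    def corrector_lte(x_pred, x_corr, C_pred, C_corr):
        # x_pred - x_corr is approx. (C_pred - C_corr) * h**(n+1) * x^(n+1),
        # so the corrector's LTE is approx.
        # C_corr / (C_pred - C_corr) * (x_pred - x_corr).
        diff = np.asarray(x_pred) - np.asarray(x_corr)
        return C_corr / (C_pred - C_corr) * diff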



imagegoeshere

Circuit optimization

10th Lecture W1, January

We introduce optimization algorithms for finding the minimum of a function of many variables. A short overview of available algorithms is presented. We show how design requirements for a circuit can be formally defined. A designer tunes these requirements by changing parameters of selected elements (design parameters). Constraints are imposed on the design parameters due to the nature of the circuit. These constraints can significantly reduce the number of design parameters. To automate the design process we introduce the cost function which is then minimized by an optimization algorithm to find circuits that satisfy design requirements. A live demonstration of the approach is given.


About the course

This is an intermediate-level course on analog circuit simulation and optimization. It is a compulsory course in the first semester of the Master’s degree curriculum “Electronics”. The aim is to introduce students to the theoretical background of analog circuit simulation. The course also involves laboratory work in the advanced field of circuit simulation and optimization with SPICE OPUS.


Staff

Lecturer:

prof. dr. Árpád Bürmen

Teaching Assistant:

as. Žiga Rojec


Required knowledge

Basics of Electromagnetics, Physics, Mathematics I, II, III