International Course
In this lecture we are going to get acquainted with the concept of lumped circuits, Kirchhoff's current and voltage laws, and the constitutive relations of circuit elements. Together these equations form the mathematical model of a circuit. The number of equations and unknowns can be greatly reduced by introducing nodal voltages (which results in the nodal analysis approach to circuit equations). This also brings some restrictions that we are going to relax in later lectures.
To keep things simple we focus on linear circuits for now. This makes it possible to write the equations in matrix form. By taking a close look at the coefficient matrix and the vector of right-hand side values we observe simple patterns (element footprints) that can be used for constructing the system of equations directly from the circuit's schematic.
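The footprint idea can be sketched in a few lines of code. The circuit below is a hypothetical example (a 1 mA current source into node 1, a 1 kΩ resistor between nodes 1 and 2, and a 2 kΩ resistor from node 2 to ground); the stamping pattern for a resistor, however, is the standard one.

```python
# Sketch of nodal analysis with element "footprints" (stamps).
# Example circuit (assumed): 1 mA current source into node 1,
# R1 = 1 kΩ between nodes 1 and 2, R2 = 2 kΩ from node 2 to ground.

def stamp_resistor(G, a, b, R):
    """Add the conductance footprint of a resistor between nodes a and b
    (node 0 is ground and has no row/column)."""
    g = 1.0 / R
    if a: G[a-1][a-1] += g
    if b: G[b-1][b-1] += g
    if a and b:
        G[a-1][b-1] -= g
        G[b-1][a-1] -= g

n = 2                                # two non-ground nodes
G = [[0.0]*n for _ in range(n)]      # conductance matrix
i = [0.0]*n                          # right-hand side (source currents)

stamp_resistor(G, 1, 2, 1e3)
stamp_resistor(G, 2, 0, 2e3)
i[0] += 1e-3                         # 1 mA flowing into node 1

# Solve the 2x2 system G v = i with Cramer's rule.
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
v1 = (i[0]*G[1][1] - i[1]*G[0][1]) / det
v2 = (G[0][0]*i[1] - G[1][0]*i[0]) / det
print(v1, v2)                        # node voltages: 3.0 V and 2.0 V
```

Every element contributes its footprint independently, which is why the system can be assembled directly from the schematic without ever writing the equations by hand.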
Nodal analysis has one great disadvantage: it cannot handle elements whose branch currents cannot be expressed explicitly in terms of branch voltages (e.g. the independent voltage source). In this lecture we introduce modified nodal analysis. If a branch current cannot be expressed explicitly with branch voltages we simply keep that branch current as an unknown. To make sure the system of equations remains fully determined we must add an additional equation for every branch current we decide to keep: the corresponding element's constitutive relation.
Now we can handle independent voltage sources as well as linear voltage-controlled and current-controlled sources. Modified nodal analysis is the approach used in most circuit simulators today. With everything we learned up to now it is fairly easy to handle arbitrary linear elements in our equations. We demonstrate this with several examples: an ideal transformer, an ideal opamp with negative feedback, and an inverting amplifier built with an opamp.
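A minimal sketch of the modified nodal analysis idea, for an assumed example circuit: a 5 V source at node 1, R1 = 1 kΩ from node 1 to node 2, and R2 = 4 kΩ from node 2 to ground. The source current iV is kept as an extra unknown, and the source's constitutive relation v1 = 5 V supplies the extra equation.

```python
import numpy as np

# Modified nodal analysis sketch (assumed example circuit):
# 5 V source at node 1, R1 = 1 kΩ (node 1 to 2), R2 = 4 kΩ (node 2 to gnd).
# Unknowns: v1, v2, and the source branch current iV.

g1, g2 = 1 / 1e3, 1 / 4e3
A = np.array([
    [ g1,     -g1,  1.0],   # KCL at node 1 (iV flows from node 1 into the source)
    [-g1, g1 + g2,  0.0],   # KCL at node 2
    [1.0,     0.0,  0.0],   # constitutive relation of the source: v1 = 5
])
b = np.array([0.0, 0.0, 5.0])

v1, v2, iV = np.linalg.solve(A, b)
print(v1, v2, iV)           # 5.0 V, 4.0 V, and the source branch current
```

Note how the extra row and column appear only because we kept iV as an unknown; all other elements stamp the matrix exactly as in plain nodal analysis.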
Solving systems of linear equations is nothing new. Several approaches were developed in the past. For starters we take a look at Gaussian elimination. We examine its computational cost and show how it can fail. To improve the robustness of Gaussian elimination we introduce pivoting. Gaussian elimination leads to many unnecessary operations when it is used for solving multiple systems of equations with the same coefficient matrix (which is common in circuit simulation). To reduce the number of operations we introduce LU-decomposition followed by backward and forward substitution.
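The whole pipeline can be sketched compactly. The code below is a minimal LU decomposition with partial pivoting (Doolittle scheme, factors stored in place), followed by forward and backward substitution; factoring once and reusing L and U for several right-hand sides is exactly what saves work in simulation.

```python
# Minimal LU decomposition with partial pivoting, plus the
# forward/backward substitution that reuses the factors for any b.

def lu_factor(A):
    """In-place LU with partial pivoting; returns the row permutation."""
    n = len(A)
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            perm[k], perm[p] = perm[p], perm[k]
        for r in range(k + 1, n):
            A[r][k] /= A[k][k]                 # multiplier, stored in L
            for c in range(k + 1, n):
                A[r][c] -= A[r][k] * A[k][c]
    return perm

def lu_solve(A, perm, b):
    n = len(A)
    x = [b[perm[r]] for r in range(n)]         # apply the row permutation
    for r in range(n):                         # forward substitution (L)
        for c in range(r):
            x[r] -= A[r][c] * x[c]
    for r in reversed(range(n)):               # backward substitution (U)
        for c in range(r + 1, n):
            x[r] -= A[r][c] * x[c]
        x[r] /= A[r][r]
    return x

A = [[2.0, 1.0], [4.0, 3.0]]
perm = lu_factor(A)
x = lu_solve(A, perm, [3.0, 7.0])   # the same factors work for any b
print(x)                            # solution of 2x+y=3, 4x+3y=7: [1.0, 1.0]
```

Factoring costs O(n³) operations while each substitution costs only O(n²), which is why reusing the factorization pays off when many right-hand sides share the same coefficient matrix.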
Sparse matrices are matrices in which most entries are zero. Coefficient matrices corresponding to real-world circuits are sparse, which makes it possible to analyze large circuits without prohibitively large memory requirements. But there is a catch: LU-decomposition of a sparse matrix must be organized so that as few new nonzero entries (fill-in) as possible are created during the decomposition. Unfortunately one cannot have both a small fill-in and a small numerical error, because avoiding fill-in dictates the choice of pivots, which can then no longer be chosen to minimize the numerical error.
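How strongly the pivot order matters can be shown on a toy example: an "arrowhead" matrix that is dense in its first row and column. The sketch below performs symbolic elimination (tracking only the nonzero pattern, not the values) and counts the fill-in under two orderings.

```python
# Fill-in sketch on a toy "arrowhead" pattern: diagonal plus a dense
# first row and first column. Eliminating the dense node first fills
# the whole remaining matrix; eliminating it last creates no fill-in.

def fill_in(order, n=5):
    """Count new nonzeros created by symbolic Gaussian elimination
    when pivots are taken in the given order."""
    nz = {(i, j) for i in range(n) for j in range(n)
          if i == j or i == 0 or j == 0}       # arrowhead nonzero pattern
    created = 0
    eliminated = set()
    for k in order:
        for r in range(n):
            if r in eliminated or r == k or (r, k) not in nz:
                continue                        # row r is not updated
            for c in range(n):
                if c in eliminated or c == k:
                    continue
                if (k, c) in nz and (r, c) not in nz:
                    nz.add((r, c))              # a new nonzero: fill-in
                    created += 1
        eliminated.add(k)
    return created

print(fill_in([0, 1, 2, 3, 4]))   # dense node first: 12 fill-ins
print(fill_in([4, 3, 2, 1, 0]))   # dense node last: 0 fill-ins
```

The ordering that avoids all fill-in here ignores the numerical values entirely, which illustrates the trade-off: a pivot chosen for sparsity reasons may be a poor pivot numerically.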
When we introduce nonlinear elements we can no longer write equations in matrix form. Instead they are now written as a list of nonlinear equations. If the equations are twice continuously differentiable we can numerically solve them with the Newton-Raphson algorithm. The algorithm iteratively approaches the solution by linearizing the equations and solving the resulting linear system to produce an improved approximation to the solution of the original nonlinear system.
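A one-dimensional sketch of the Newton-Raphson iteration, on an assumed circuit: a 2 V source behind a 1 kΩ resistor driving a diode to ground with i = Is·(exp(v/Vt) − 1). We solve the KCL equation f(v) = 0 at the diode node.

```python
import math

# Newton-Raphson on a single nonlinear node equation (assumed circuit):
# 2 V source, 1 kΩ series resistor, diode to ground.
# KCL at the diode node: (Vs - v)/R - Is*(exp(v/Vt) - 1) = 0.

Vs, R = 2.0, 1e3
Is, Vt = 1e-12, 0.025

def f(v):
    return (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)

def df(v):
    return -1.0 / R - (Is / Vt) * math.exp(v / Vt)

v = 0.6                       # initial guess near the diode's knee
for _ in range(50):
    step = f(v) / df(v)
    v -= step                 # linearize, solve, repeat
    if abs(step) < 1e-12:
        break

print(v)                      # operating point, roughly 0.53 V
```

Each pass through the loop is one linearization: the tangent to f at the current approximation is solved exactly, and its root becomes the next approximation.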
We take a look at the nonlinear models of selected semiconductor elements (the diode and the MOSFET). The linearized system can again be constructed by means of the element footprints approach. The Newton-Raphson algorithm can fail to converge, and strong nonlinearities in the element characteristics are a common cause. Several approaches for dealing with such elements and for finding a solution despite convergence problems are presented.
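The footprint view of linearization can be sketched with the diode's companion model: at every iteration the diode is stamped like a conductance geq in parallel with a current source ieq. The circuit is again an assumed example (2 V source behind 1 kΩ, diode to ground), with the source and resistor stamped as their Norton equivalent.

```python
import math

# Companion-model sketch: inside each Newton-Raphson iteration the
# diode contributes a linearized footprint, geq = di/dv in parallel
# with ieq = i(v) - geq*v. Assumed circuit: 2 V source, 1 kΩ, diode.

Vs, R = 2.0, 1e3
Is, Vt = 1e-12, 0.025

v = 0.6
for _ in range(100):
    i_d = Is * (math.exp(v / Vt) - 1.0)   # diode current at the current v
    geq = (Is / Vt) * math.exp(v / Vt)    # slope of the characteristic
    ieq = i_d - geq * v                   # companion current source
    # One-node linear system: (1/R + geq) * v_new = Vs/R - ieq
    v_new = (Vs / R - ieq) / (1.0 / R + geq)
    if abs(v_new - v) < 1e-12:
        v = v_new
        break
    v = v_new

print(v)   # same operating point as plain Newton-Raphson (~0.53 V)
```

The iterates are identical to those of the plain Newton-Raphson algorithm; the companion model merely reorganizes the same linearization so that it can be stamped into the nodal matrix like any linear element.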
One can interpret the elements of the coefficient matrix as conductances, resistances, and controlled sources. This interpretation results in the linearized circuit model. The linearized circuit model can be used for obtaining small-signal properties of the circuit like gain, input impedance, and output impedance. If the signals are composed of a large DC component and a small perturbation we can treat the circuit as linear if we consider only the perturbations. We draw parallels between linear electronics and small-signal DC analysis.
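A small sketch of this linearity with respect to perturbations, on an assumed source–resistor–diode circuit: around the DC operating point the diode behaves like a conductance gd = dI/dV, so the circuit becomes a linear divider for small signals. We compare the linearized gain with a finite-difference check on the full nonlinear solution.

```python
import math

# Small-signal sketch (assumed circuit): Vs source, R = 1 kΩ, diode to
# ground. Around the operating point the diode is just a conductance
# gd, so the small-signal gain is a simple conductance divider.

Is, Vt, R = 1e-12, 0.025, 1e3

def solve(Vs, v=0.6):
    """Newton-Raphson for the diode node voltage."""
    for _ in range(100):
        f = (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)
        df = -1.0 / R - (Is / Vt) * math.exp(v / Vt)
        v -= f / df
    return v

v0 = solve(2.0)                          # DC operating point
gd = (Is / Vt) * math.exp(v0 / Vt)       # diode small-signal conductance
gain = (1.0 / R) / (1.0 / R + gd)        # linearized gain dv/dVs

dv = (solve(2.0 + 1e-6) - v0) / 1e-6     # finite-difference check
print(gain, dv)                          # the two nearly agree
```

For a 1 µV perturbation the full nonlinear response and the linearized model agree to many digits, which is precisely why the linearized circuit can stand in for the real one in small-signal analysis.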
We introduce the modelling of linear reactive elements (linear capacitors, inductors, and coupled inductors). We extend the notion of small-signal analysis to sinusoidal signals represented by complex numbers where the absolute value corresponds to the magnitude of the sinusoidal signal while the argument corresponds to its phase. We assume all signals in the circuit share the same frequency. Due to reactive elements the solution of the circuit depends on the signal frequency.
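The complex-number representation can be sketched on a hypothetical RC low-pass (R = 1 kΩ, C = 159 nF): solving the circuit at one frequency reduces to complex arithmetic, and the magnitude and argument of the result are the gain and phase of the output sinusoid.

```python
import cmath
import math

# Phasor sketch: RC low-pass with transfer function H = 1/(1 + jwRC).
# At the corner frequency f = 1/(2*pi*R*C) the magnitude drops to
# 1/sqrt(2) and the phase lags by 45 degrees. Values are illustrative.

R, C = 1e3, 159e-9
f = 1.0 / (2 * math.pi * R * C)          # corner frequency, about 1 kHz
w = 2 * math.pi * f

H = 1.0 / (1.0 + 1j * w * R * C)         # complex voltage transfer
print(abs(H))                            # magnitude: ~0.7071
print(math.degrees(cmath.phase(H)))      # phase: ~-45 degrees
```

Sweeping f and re-evaluating H is all a frequency-domain (AC) analysis does: one complex linear solve per frequency point.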
We extend the handling of linear reactive elements to nonlinear elements. We demonstrate the modelling of nonlinear capacitors on several examples - a semiconductor diode, a bipolar transistor, and a MOSFET. Finally, we show how nonlinear elements are handled in small-signal frequency-domain analysis.
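As a minimal sketch of the idea: the standard depletion-capacitance formula Cj(v) = Cj0/(1 − v/φ)^m is evaluated at the DC operating point, and the resulting value is then treated as an ordinary linear capacitor in the small-signal analysis. The parameter values below are illustrative, not taken from any particular device.

```python
# Nonlinear (junction) capacitance in small-signal analysis: evaluate
# the depletion formula at the operating point, then use the value as
# a linear capacitor. Parameter values are assumed for illustration.

Cj0, phi, m = 1e-12, 0.7, 0.5    # zero-bias capacitance, built-in potential

def Cj(v):
    return Cj0 / (1.0 - v / phi) ** m

print(Cj(0.0))     # 1 pF at zero bias
print(Cj(-5.0))    # noticeably smaller under reverse bias
```

The same recipe applies to the other charge-based nonlinearities of the diode, the bipolar transistor, and the MOSFET: differentiate the charge with respect to voltage at the operating point.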
We briefly introduce the relevant aspects of noise modeling and analysis in linear circuits. We characterize various types of noise appearing in electronic circuits. Noise models of selected circuit elements are presented. We introduce small-signal noise analysis as a special case of AC analysis where signals are represented by power spectral densities. We conclude with the computation of the output noise and the equivalent input noise.
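A worked sketch of output-noise computation for an assumed RC low-pass: the resistor's thermal noise PSD 4kTR is shaped by |H(f)|² and integrated over frequency, which should reproduce the classic kT/C result.

```python
import math

# Output noise sketch: thermal noise PSD 4kTR of the resistor, shaped
# by the RC low-pass |H(f)|^2 = 1/(1 + (2*pi*f*R*C)^2), integrates to
# kT/C. Component values are assumed for illustration.

k, T = 1.380649e-23, 300.0
R, C = 1e3, 1e-9

def out_psd(f):
    return 4 * k * T * R / (1.0 + (2 * math.pi * f * R * C) ** 2)

# Trapezoidal integration over a band wide enough to capture the tail.
fmax, n = 5e8, 500_000
df = fmax / n
total = sum(out_psd(i * df) for i in range(1, n)) * df
total += 0.5 * (out_psd(0) + out_psd(fmax)) * df

print(total, k * T / C)    # both about 4.14e-12 V^2
```

The numerical integral matches the analytic kT/C value to well under a percent; in an actual noise analysis the PSD at each frequency comes from the AC solution of the circuit rather than a closed-form |H(f)|².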
To simulate a circuit in the time domain we first divide the time axis into discrete points. The circuit equations are then solved at each timepoint. Reactive elements are handled by expressing the derivatives with respect to time through approximations based on the circuit solutions at past timepoints and the one currently being computed (implicit integration). Several integration algorithms are presented. The local truncation error (LTE) is explained and we show how it can be kept low by adjusting the timestep. In modern circuit simulators the timestep is variable. We show how the coefficients of the numerical integration algorithms can be computed when the timestep is not constant. The timestep control algorithm and the choice of the order of the integration algorithm are explained. Finally, the predictor-corrector approach to numerical integration is introduced.
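Implicit integration can be sketched on an assumed RC circuit charging from a voltage step, dv/dt = (Vs − v)/(RC). The backward-Euler rule v[n+1] = v[n] + h·(Vs − v[n+1])/(RC) references the derivative at the new timepoint, so it must be solved for v[n+1]; a fixed timestep is used here for simplicity.

```python
import math

# Backward-Euler sketch (assumed circuit): RC network charging from a
# 1 V step, dv/dt = (Vs - v)/(R*C). The implicit update is solved for
# v[n+1] at every timepoint; the timestep is held fixed for clarity.

Vs, R, C = 1.0, 1e3, 1e-6
tau = R * C
h = tau / 100                 # timestep, 1% of the time constant
v, t = 0.0, 0.0
while t < 5 * tau:
    # Solve v_new = v + h*(Vs - v_new)/tau for v_new:
    v = (v + h * Vs / tau) / (1.0 + h / tau)
    t += h

exact = Vs * (1.0 - math.exp(-t / tau))
print(v, exact)               # numerical vs analytic solution
```

With h = τ/100 the global error after five time constants is on the order of 10⁻⁴; halving h roughly halves the error, which is the first-order behaviour that motivates higher-order rules and LTE-driven timestep control.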
We introduce optimization algorithms for finding the minimum of a function of many variables. A short overview of available algorithms is presented. We show how design requirements for a circuit can be formally defined. A designer tunes these requirements by changing parameters of selected elements (design parameters). Constraints are imposed on the design parameters due to the nature of the circuit. These constraints can significantly reduce the number of design parameters. To automate the design process we introduce the cost function which is then minimized by an optimization algorithm to find circuits that satisfy design requirements. A live demonstration of the approach is given.
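The structure of the approach can be sketched on a toy design problem (assumed, not from the lecture): choose R2 of a divider with R1 = 1 kΩ so that the gain R2/(R1 + R2) hits a target of 0.25. The design requirement is folded into a scalar cost function, which a simple golden-section search then minimizes; real tools use more capable algorithms, but the overall structure is the same.

```python
import math

# Optimization sketch (toy design problem): one design parameter R2,
# one requirement (divider gain = 0.25), cost = squared violation.

R1, target = 1e3, 0.25

def cost(R2):
    gain = R2 / (R1 + R2)
    return (gain - target) ** 2          # squared requirement violation

# Golden-section search for the minimum on the interval [1 Ω, 10 kΩ].
lo, hi = 1.0, 1e4
invphi = (math.sqrt(5) - 1) / 2
a = hi - invphi * (hi - lo)
b = lo + invphi * (hi - lo)
for _ in range(100):
    if cost(a) < cost(b):
        hi, b = b, a
        a = hi - invphi * (hi - lo)
    else:
        lo, a = a, b
        b = lo + invphi * (hi - lo)

R2 = (lo + hi) / 2
print(R2)        # close to the analytic optimum of 333.3 Ω
```

In a real design flow each cost evaluation involves one or more full circuit simulations, and constraints on the design parameters restrict the search space exactly as described above.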
This is a compulsory course in the first semester of the Master’s degree curriculum “Electronics”. The aim is to introduce students to the theoretical background of analog circuit simulation. The course also involves laboratory work in the advanced field of circuit simulation and optimization with SPICE OPUS.
Prerequisites: Basics of Electromagnetics; Physics; Mathematics I, II, and III; Basics of Programming.