
EE263 Homework 8: Examples and Solutions for Linear Dynamical Systems Problems







In this article, we will discuss solutions to Homework 8 of EE263, Stanford's introductory course on linear dynamical systems. The homework covers topics such as FIR filters, eigenvalues, impulse responses, linear approximations, orthogonal bases, and halfspaces. We will show how to apply concepts and techniques from linear algebra and matrix theory to solve these problems.







FIR filter with small feedback




An FIR filter is a type of digital filter with a finite impulse response, meaning that its output depends only on a finite number of past input values. In this problem, we consider a cascade of 100 one-sample delays, with a small feedback loop added to the system. We will express this system as a linear dynamical system, find its eigenvalues, and compare its impulse responses with and without feedback.


Expressing the system as a linear dynamical system




A linear dynamical system is a system that can be described by a set of linear equations of the form x(t+1) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), where x(t) is the state vector, u(t) is the input vector, y(t) is the output vector, and A, B, C, and D are constant matrices. To express the FIR filter as a linear dynamical system, we need to identify these matrices.


For the system without feedback, we can take A as a 100 by 100 matrix with zeros everywhere except for ones on the subdiagonal (the diagonal below the main diagonal). This matrix shifts the state vector one step down at each time step. We can take B as a 100 by 1 vector with a one in the first entry and zeros elsewhere, so the input feeds into the first state variable at each time step. We can take C as a 1 by 100 row vector with a one in the last entry and zeros elsewhere, so the output reads the last state variable at each time step. We can take D as zero, since there is no direct feedthrough from input to output. The matrices are shown below:


A = [ 0  0  ...  0  0
      1  0  ...  0  0
      0  1  ...  0  0
      ...
      0  0  ...  1  0 ]    (100 by 100)

B = [ 1
      0
      ...
      0 ]    (100 by 1)

C = [ 0  0  ...  0  1 ]    (1 by 100)

D = 0


For the system with feedback, we can take Af as A with an additional entry of alpha (the feedback gain) at the top right corner. This entry adds a scaled version of the output value to the first state variable at each time step. We can take Bf as B, Cf as C, and Df as D since they are unchanged from the system without feedback. The matrix Af is shown below:


Af = [ 0  0  ...  0  alpha
       1  0  ...  0  0
       0  1  ...  0  0
       ...
       0  0  ...  1  0 ]    (100 by 100)
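As a concrete check, here is a minimal numpy sketch that builds these matrices. The value alpha = 1e-5 is an assumed illustrative feedback gain, not taken from the problem statement:

import numpy as np

n = 100        # number of one-sample delays
alpha = 1e-5   # assumed small feedback gain (illustrative)

A = np.zeros((n, n))
A[1:, :-1] = np.eye(n - 1)    # ones on the subdiagonal: shifts the state down

B = np.zeros((n, 1))
B[0, 0] = 1.0                 # input enters the first state variable

C = np.zeros((1, n))
C[0, -1] = 1.0                # output reads the last state variable

D = 0.0

Af = A.copy()
Af[0, -1] = alpha             # feedback entry in the top right corner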


Finding the eigenvalues of A and Af




The eigenvalues of a matrix are the values of lambda that satisfy the equation Ax = lambda x, where x is a nonzero vector. They are also the roots of the characteristic polynomial of the matrix, which is det(A - lambda I), where I is the identity matrix. The eigenvalues of a matrix determine some of its properties, such as stability, controllability, and observability.


To find the eigenvalues of A, we can use either the determinant formula or the transfer function method. The determinant formula gives det(A - lambda I) = (-lambda)^100, which has the single root lambda = 0 with multiplicity 100. This means that A has one eigenvalue, zero, repeated 100 times. In fact, A is a single 100 by 100 Jordan block, the canonical form of a matrix with a repeated eigenvalue and only one independent eigenvector.


The transfer function method gives us another way to find the eigenvalues of A. The transfer function of a linear dynamical system is the ratio of the output to the input in the frequency domain, given by H(z) = C(zI - A)^(-1)B + D. The poles of the transfer function are the values of z that make the denominator zero, and these are eigenvalues of A. The transfer function of the system without feedback is H(z) = z^(-100), which has a pole at z = 0 with multiplicity 100. This confirms the result from the determinant formula.


To find the eigenvalues of Af, we can use either method as well. The determinant formula gives det(Af - lambda I) = lambda^100 - alpha (since 100 is even), whose roots are the 100 complex 100th roots of alpha: lambda_k = alpha^(1/100) e^(i 2 pi k / 100) for k = 0, ..., 99. These are complex numbers that lie on a circle of radius alpha^(1/100) centered at the origin. The transfer function method gives H(z) = z^(-100) / (1 - alpha z^(-100)) = 1 / (z^100 - alpha), whose poles are the same 100 distinct points on that circle.
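We can check the feedback case numerically with the matrices from the sketch above (the eigenvalues of A itself are exactly zero, since A is nilpotent). For the assumed alpha = 1e-5, the predicted radius is 10^(-0.05), about 0.891:

# Continuing the numpy sketch above
eig_Af = np.linalg.eigvals(Af)
print(np.abs(eig_Af))       # 100 moduli, all ~alpha**(1/n)
print(alpha ** (1 / n))     # predicted radius: ~0.891 for alpha = 1e-5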


Comparing the impulse responses of the system




The impulse response of a system is the output when the input is a unit impulse, which is a signal that has value one at time zero and zero elsewhere. The impulse response characterizes the behavior of the system and can be used to find the output for any input by convolution.


The impulse response of the system without feedback is very simple. It is nothing more than a single unit impulse at t = 100. The system simply delays the input signal by 100 samples.


The impulse response of the system with feedback is slightly more complicated. If we put in a unit impulse at t = 0, a unit impulse comes out at t = 100. Then at t = 200 another impulse comes out with amplitude alpha, at t = 300 another with amplitude alpha^2, and so on. In other words, we get a very faint echo with a period of 100. For small alpha the first echo is barely detectable, and the second and subsequent ones are negligible for any practical purpose. Thus, the feedback has essentially no effect on the impulse response.
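A quick simulation, continuing the numpy sketch above, makes this concrete:

# Impulse response of the feedback system over 400 samples
T = 400
x = np.zeros(n)
h = np.zeros(T)
for t in range(T):
    u = 1.0 if t == 0 else 0.0     # unit impulse at t = 0
    h[t] = C[0] @ x                # y(t) = C x(t), with D = 0
    x = Af @ x + B[:, 0] * u       # x(t+1) = Af x(t) + B u(t)

print(h[100], h[200], h[300])      # ~1, ~alpha, ~alpha^2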


This impulse response is not what you might have expected from the eigenvalues. Even for a tiny alpha, the eigenvalue magnitude alpha^(1/100) is close to one (for alpha = 10^(-5) it is about 0.89), and most of the eigenvalues are complex, so we might expect slowly decaying oscillations in the output. But the impulse response is much simpler than the eigenvalues suggest.


We have seen that the feedback has essentially no effect on the impulse response of the system, but it causes an extreme change in the eigenvalues. Why? There are many answers, but the most important message is this: the poles can give you a good qualitative feel for the impulse response when there are just a few of them, but when there are many, they need not. The extreme sensitivity of the eigenvalues to a change in a single entry is due to the Jordan form of A: perturbing one corner entry of an n by n Jordan block by alpha moves the eigenvalues by alpha^(1/n), which is close to one even when alpha is tiny.


Selected EE263 Homework Problems




Price elasticity of demand




Price elasticity of demand is a measure of how responsive the quantity demanded of a good or service is to changes in its price. It is defined as the percentage change in quantity demanded divided by the percentage change in price. For example, if the price of a product increases by 10% and the quantity demanded decreases by 20%, then the price elasticity of demand is -20%/10% = -2.


In this problem, we are given a nonlinear function that relates the quantity demanded q to the price p: q = 1000(1 + p)^(-2). We are asked to find a linear approximation of this function around p = 1, and use it to estimate the price elasticity of demand at p = 1.


To find a linear approximation of a nonlinear function f(x) around a point x = a, we can use the first-order Taylor expansion: f(x) ≈ f(a) + f'(a)(x - a). In our case, f(p) = q = 1000(1 + p)^(-2), so we need to find f(1) and f'(1). We have f(1) = 1000(1 + 1)^(-2) = 250, and f'(p) = -2000(1 + p)^(-3), so f'(1) = -2000(1 + 1)^(-3) = -2000/8 = -250. Therefore, the linear approximation is q ≈ 250 - 250(p - 1), or q ≈ -250p + 500.


To estimate the price elasticity of demand at p = 1, we can use the linear approximation. Suppose the price increases by a small amount, say delta p = 0.01. Then the change in quantity is delta q ≈ -250 * 0.01 = -2.5, so the percentage change in quantity demanded is delta q / q = -2.5 / 250 = -1%, while the percentage change in price is delta p / p = 0.01 / 1 = 1%. The price elasticity of demand is therefore epsilon = (delta q / q) / (delta p / p) ≈ -1. Equivalently, epsilon = f'(1) * p / f(1) = (-250)(1) / 250 = -1.
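The same numbers drop out of a few lines of Python, using a finite difference in place of the derivative:

def q(p):
    return 1000.0 * (1.0 + p) ** -2     # demand as a function of price

p0, dp = 1.0, 1e-6
slope = (q(p0 + dp) - q(p0)) / dp       # numerical f'(1), ~ -250
elasticity = slope * p0 / q(p0)         # (dq/dp) * (p/q), ~ -1

print(q(p0), slope, elasticity)         # 250.0  ~-250.0  ~-1.0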


This means that demand is unit-elastic at p = 1: a small percentage increase in price produces roughly the same percentage decrease in quantity demanded, according to the linear approximation. However, this may not be accurate for large changes in price, since the nonlinear function may behave quite differently from the linear one far from p = 1.


Color perception




Color perception is a complex phenomenon that involves the interaction of physical, biological, and psychological factors. One way to model color perception is to use color vectors: three-dimensional vectors that represent the intensities of the red, green, and blue components of a color. For example, the color vector [1, 0, 0] represents pure red, while the color vector [0.5, 0.5, 0] represents a dim yellow (equal parts red and green).


In this problem, we are given four color vectors that correspond to four primary colors: c1 = [0.9, 0.8, 0], c2 = [0.7, 0.9, 0], c3 = [0.6, 0.7, 0], and c4 = [0.5, 0.6, 0]. We are asked to find an orthogonal basis for the subspace spanned by these color vectors.


An orthogonal basis for a subspace is a set of linearly independent vectors that are mutually orthogonal (i.e., their dot product is zero) and span the subspace (i.e., any vector in the subspace can be written as a linear combination of them). One way to find an orthogonal basis for a subspace spanned by a set of vectors is to use the Gram-Schmidt process, which iteratively constructs an orthogonal basis from an arbitrary basis.


The Gram-Schmidt process works as follows. Let v1, v2, ..., vn be a set of vectors that span a subspace. Then we can construct an orthonormal basis u1, u2, ..., uk (with k at most n) by the following steps:


  • Set u1 = v1 / ||v1||.



  • For k = 2, ..., n, project vk onto the subspace spanned by the u's found so far, and subtract the projection from vk. This gives a residual vector orthogonal to all of them. If the residual is nonzero, normalize it and add it to the basis; if it is zero, vk is linearly dependent on the previous vectors and contributes no new basis vector. (A code sketch of this procedure follows the list.)
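Here is a minimal numpy sketch of the procedure just described, including the check for dependent vectors:

import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    # Returns an orthonormal basis for span(vectors), skipping
    # vectors that are linearly dependent on the ones before them.
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for u in basis:
            w = w - (w @ u) * u          # subtract projection onto u
        if np.linalg.norm(w) > tol:      # keep only nonzero residuals
            basis.append(w / np.linalg.norm(w))
    return basis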



In our case, we can take v1 = c1, v2 = c2, v3 = c3, and v4 = c4 and apply the Gram-Schmidt process. Note that all four color vectors have a zero blue component, so they lie in a plane, and at most two of them can be linearly independent. The process therefore produces only two nonzero basis vectors. The steps are shown below:


u1 = v1 / norm(v1) = [0.9, 0.8, 0] / sqrt(0.9^2 + 0.8^2) = [0.7474, 0.6644, 0]

r2 = v2 - (v2 . u1) u1 = [0.7, 0.9, 0] - 1.1211 * [0.7474, 0.6644, 0] = [-0.1379, 0.1552, 0]
u2 = r2 / norm(r2) = [-0.6644, 0.7474, 0]

r3 = v3 - (v3 . u1) u1 - (v3 . u2) u2 = [0, 0, 0]
r4 = v4 - (v4 . u1) u1 - (v4 . u2) u2 = [0, 0, 0]

The residuals for v3 and v4 are zero because c3 and c4 already lie in the plane spanned by u1 and u2, so they contribute no new basis vectors and the process stops.


Therefore, an orthonormal basis for the subspace spanned by the four color vectors consists of just two vectors, and the subspace has dimension 2:


u1 = [0.7474, 0.6644, 0]
u2 = [-0.6644, 0.7474, 0]
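Running the gram_schmidt sketch from above on the four color vectors confirms this:

c = [np.array([0.9, 0.8, 0.0]), np.array([0.7, 0.9, 0.0]),
     np.array([0.6, 0.7, 0.0]), np.array([0.5, 0.6, 0.0])]

basis = gram_schmidt(c)
print(len(basis))    # 2: the four colors span only a two-dimensional subspace
print(basis[0])      # ~[ 0.7474, 0.6644, 0]
print(basis[1])      # ~[-0.6644, 0.7474, 0]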


Halfspace




A halfspace is a subset of a vector space defined by a single linear inequality of the form a . x ≤ beta, where a is a nonzero vector and beta is a scalar.


In this problem, we are given a matrix A and a vector b, and asked when the solution set of Ax ≤ b (the inequality holding componentwise) can be represented by a single inequality a . x ≤ beta, that is, as a halfspace. We are also asked to find the dimension of this halfspace and whether it contains the origin.


To find such a representation, we can use the singular value decomposition (SVD) of A, which factors A as A = U S V^T, where U and V have orthonormal columns and S is diagonal with nonnegative entries.


Using the SVD of A, we can rewrite Ax ≤ b as U S V^T x ≤ b. A single-inequality representation is possible when A has rank one, so that A = sigma u v^T for a single singular value sigma and unit vectors u and v; the system then reads sigma (v . x) u ≤ b.


Now, assuming the entries of u are positive, the i-th row imposes v . x ≤ b_i / (sigma u_i), and the tightest of these constraints determines the set. We can therefore choose a = sigma v and beta = min_i (b_i / u_i), where min denotes the minimum over the entries. Then Ax ≤ b holds if and only if a . x ≤ beta, so the solution set is exactly the halfspace defined by a and beta.


To find the dimension of the halfspace a . x ≤ beta, it helps to recall the notion of an affine subspace.


An affine subspace is a subset of a vector space that is closed under affine combinations, which are linear combinations with coefficients that sum to one. An affine subspace can be written as x0 + W, where x0 is a fixed point and W is a linear subspace.


The affine hull of a halfspace a . x ≤ beta is the smallest affine subspace that contains it. Since a halfspace contains an open ball, and an open ball in R^n contains n + 1 affinely independent points, the affine hull of a halfspace is all of R^n.


The dimension of the affine hull of a halfspace in R^n is therefore n: a halfspace is a full-dimensional set.


To check whether the halfspace contains the origin, we just need to plug in x = 0 and see if it satisfies the inequality. Since a . 0 = 0, the halfspace contains the origin if and only if beta ≥ 0; in terms of the original data, A0 = 0 ≤ b holds if and only if every entry of b is nonnegative.
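The construction is easy to test in a few lines of numpy. This is a sketch under the rank-one assumption above, and the particular u, v, sigma, and b are made-up illustrative values, not data from the problem:

import numpy as np

# Made-up rank-one data for illustration
u = np.array([0.6, 0.8])          # unit vector (entries positive)
v = np.array([1.0, 0.0])          # unit vector
sigma = 2.0
A = sigma * np.outer(u, v)        # rank-one A = sigma * u v^T
b = np.array([1.2, 2.0])

# Single-inequality representation a . x <= beta
a = sigma * v
beta = np.min(b / u)              # tightest row constraint: min_i b_i / u_i

print(a, beta)                    # [2. 0.]  2.0
print(beta >= 0)                  # True: the halfspace contains the origin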


Conclusion




In this article, we have discussed solutions to Homework 8 of EE263, Stanford's introductory course on linear dynamical systems. We have shown how to apply concepts and techniques from linear algebra and matrix theory to solve problems related to FIR filters, eigenvalues, impulse responses, linear approximations, orthogonal bases, and halfspaces. We hope this article has been helpful and informative.


FAQs




  • Q: What are some applications of linear dynamical systems?



  • A: Linear dynamical systems are widely used to model various phenomena in science, engineering, economics, and other fields. Some examples are circuits, mechanical systems, population dynamics, control systems, signal processing, image processing, data analysis, and optimization.



  • Q: What are some advantages and disadvantages of using linear dynamical systems?



  • A: Some advantages of using linear dynamical systems are that they are easy to analyze and manipulate using matrix operations and algebraic methods. They also have well-established properties and results that can be used to understand their behavior and performance. Some disadvantages of using linear dynamical systems are that they may not capture the complexity and nonlinearity of real-world systems. They may also require simplifying assumptions and approximations that may not be valid or accurate.



  • Q: How can we deal with nonlinear dynamical systems?



  • A: There are several ways to deal with nonlinear dynamical systems. One way is to linearize them around an equilibrium point or a trajectory using Taylor series expansion or perturbation methods. This gives us a linear approximation that can be analyzed using linear techniques. Another way is to use numerical methods such as Euler's method or Runge-Kutta method to simulate the system and obtain approximate solutions. A third way is to use qualitative methods such as phase portraits or bifurcation diagrams to study the global behavior and stability of the system.



  • Q: What are some resources for learning more about linear dynamical systems?



  • A: A good starting point is the EE263 course material itself: the lecture notes, homework sets, and recorded lectures are available from Stanford. Textbooks on linear systems and applied linear algebra cover the same material in more depth.




