Question 1 |
The state equation of a second order system is
\dot{x}(t)=A x(t), x(0) is the initial condition.
Suppose \lambda_{1} and \lambda_{2} are two distinct eigenvalues of A and v_{1} and v_{2} are the corresponding eigenvectors. For constants \alpha_{1} and \alpha_{2}, the solution, x(t), of the state equation is
\sum_{i=1}^{2} \alpha_{i} e^{\lambda_{i} t} v_{i} | |
\sum_{i=1}^{2} \alpha_{i} e^{2 \lambda_{i} t} v_{i} | |
\sum_{i=1}^{2} \alpha_{i} e^{3 \lambda_{i} t} v_{i} | |
\sum_{i=1}^{2} \alpha_{i} e^{4 \lambda_{i} t} v_{i} |
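Since A v_{i}=\lambda_{i} v_{i}, each term satisfies \frac{d}{d t}\left(e^{\lambda_{i} t} v_{i}\right)=\lambda_{i} e^{\lambda_{i} t} v_{i}=A\left(e^{\lambda_{i} t} v_{i}\right), so x(t)=\sum_{i=1}^{2} \alpha_{i} e^{\lambda_{i} t} v_{i} solves the state equation. The sketch below is only an illustrative numerical check, not part of the original question: the matrix A and the constants \alpha_{i} are arbitrary choices, and a finite-difference derivative of the candidate solution is compared with A x(t).

```python
import numpy as np

# Sketch: verify that x(t) = sum_i alpha_i * exp(lambda_i * t) * v_i solves
# x_dot = A x for a sample 2x2 matrix with distinct eigenvalues.
# A and the constants alpha are arbitrary choices for illustration.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # eigenvalues -1 and -2 (distinct)
lam, V = np.linalg.eig(A)               # columns of V are the eigenvectors v_i
alpha = np.array([1.5, -0.7])           # arbitrary constants alpha_1, alpha_2

def x(t):
    # candidate solution: sum_i alpha_i * exp(lambda_i * t) * v_i
    return V @ (alpha * np.exp(lam * t))

t, h = 0.8, 1e-6
x_dot = (x(t + h) - x(t - h)) / (2 * h)          # central-difference derivative
print(np.allclose(x_dot, A @ x(t), atol=1e-5))   # True: x(t) satisfies x_dot = A x
```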
Question 2 |
Let x be an n \times 1 real column vector with length l=\sqrt{x^{T} x}. The trace of the matrix P=x x^{T} is
l^{2} | |
\frac{l^{2}}{4} | |
l | |
\frac{l^{2}}{2} |
Question 2 Explanation:
Given,
l=\sqrt{x^{T} x}, P=\left(x x^{T}\right)_{n \times n}
Let
\begin{aligned} (x)_{n \times 1} & =\left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ \vdots \\ x_{n} \end{array}\right] \\ l & =\sqrt{x^{T} x}=\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+\ldots+x_{n}^{2}} \\ P & =x x^{T}=\left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ \vdots \\ x_{n} \end{array}\right]\left[x_{1}\, x_{2}\, x_{3} \ldots x_{n}\right] \\ & =\left[\begin{array}{cccc} x_{1}^{2} & x_{1} x_{2} & \cdots & x_{1} x_{n} \\ x_{2} x_{1} & x_{2}^{2} & \cdots & x_{2} x_{n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n} x_{1} & x_{n} x_{2} & \cdots & x_{n}^{2} \end{array}\right] \end{aligned}
Trace of P=x_{1}^{2}+x_{2}^{2}+\ldots+x_{n}^{2}=l^{2}
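As an illustrative numerical cross-check (the vector size n and the random entries below are arbitrary choices, not from the question):

```python
import numpy as np

# Sketch: for a random n x 1 column vector x, confirm trace(x x^T) = x^T x = l^2.
# The size n and the random entries are arbitrary illustration choices.
n = 5
x = np.random.rand(n, 1)                 # n x 1 column vector
P = x @ x.T                              # n x n outer product
l = np.linalg.norm(x)                    # length l = sqrt(x^T x)
print(np.isclose(np.trace(P), l**2))     # True: trace of x x^T equals l^2
```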
Question 3 |
Let the sets of eigenvalues and eigenvectors of a matrix B be \left\{\lambda_{k} \mid 1 \leq k \leq n\right\} and \left\{v_{k} \mid 1 \leq k \leq n\right\}, respectively. For any invertible matrix P, the sets of eigenvalues and eigenvectors of the matrix A, where B=P^{-1} A P, respectively, are
\left\{\lambda_{k} \operatorname{det}(A) \mid 1 \leq k \leq n\right\} and \left\{P v_{k} \mid 1 \leq k \leq n\right\} | |
\left\{\lambda_{k} \mid 1 \leq k \leq n\right\} and \left\{v_{k} \mid 1 \leq k \leq n\right\} | |
\left\{\lambda_{k} \mid 1 \leq k \leq n\right\} and \left\{P v_{k} \mid 1 \leq k \leq n\right\} | |
\left\{\lambda_{k} \mid 1 \leq k \leq n\right\} and \left\{P^{-1} v_{k} \mid 1 \leq k \leq n\right\} |
Question 3 Explanation:
\begin{aligned} B & =P^{-1} A P \\ \Rightarrow \quad A & =P B P^{-1} \end{aligned}
\Rightarrow A and B are called similar matrices.
\Rightarrow Both A and B have the same set of eigenvalues,
but the eigenvectors of A and B are different.
Let B X=\lambda X
\Rightarrow \quad\left(P^{-1} A P\right) X=\lambda X
\Rightarrow \quad A(P X)=\lambda(P X)
\therefore The eigenvectors of A are P X.
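An illustrative numerical check of the same argument, using arbitrary random matrices A and P (the size and the seed are example choices): it confirms that B=P^{-1} A P has the same eigenvalues as A, and that an eigenvector v of B gives an eigenvector P v of A.

```python
import numpy as np

# Sketch: for B = P^{-1} A P, check that A and B share eigenvalues and
# that B v = lambda v implies A (P v) = lambda (P v).
# A, P and the seed are arbitrary illustration choices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))            # generically invertible
B = np.linalg.inv(P) @ A @ P

lam_A = np.sort_complex(np.linalg.eigvals(A))
lam_B, V = np.linalg.eig(B)
print(np.allclose(lam_A, np.sort_complex(lam_B)))     # same eigenvalue set

k = 0
v = V[:, k]                                           # eigenvector of B
print(np.allclose(A @ (P @ v), lam_B[k] * (P @ v)))   # P v is an eigenvector of A
```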
Question 4 |
The rate of increase of a scalar field f(x, y, z)=x y z in the direction v=(2,1,2) at the point (0,2,1) is
\frac{2}{3} | |
\frac{4}{3} | |
2 | |
4 |
Question 4 Explanation:
\begin{aligned}
f(x, y, z) & =x y z \\
\overline{\nabla f} & =\hat{i} f_{x}+\hat{j} f_{y}+\hat{k} f_{z} \\
& =\hat{i}(y z)+\hat{j}(x z)+\hat{k}(x y) \\
\overline{\nabla f}_{(0,2,1)} & =\hat{i}(2)+0 \hat{j}+0 \hat{k}
\end{aligned}
Directional derivative,
\begin{aligned} D \cdot D & =\overline{\nabla f} \cdot \frac{\bar{a}}{|\bar{a}|} \\ & =(2 \hat{i}+0 \hat{j}+0 \hat{k}) \cdot \frac{(2 \hat{i}+\hat{j}+2 \hat{k})}{\sqrt{2^{2}+1^{2}+2^{2}}}=\frac{4}{\sqrt{9}}=\frac{4}{3} \end{aligned}
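As a rough cross-check, the directional derivative can also be approximated by a central difference along the unit direction; the sketch below (the step size h is an arbitrary choice) reproduces the value \frac{4}{3}.

```python
import numpy as np

# Sketch: approximate the rate of change of f(x,y,z) = x*y*z at (0,2,1)
# along the unit vector in the direction (2,1,2) with a central difference.
# The step size h is an arbitrary illustration choice.
f = lambda p: p[0] * p[1] * p[2]
p0 = np.array([0.0, 2.0, 1.0])
v = np.array([2.0, 1.0, 2.0])
u = v / np.linalg.norm(v)                # unit direction, |v| = 3

h = 1e-6
dd = (f(p0 + h * u) - f(p0 - h * u)) / (2 * h)
print(dd)                                # approximately 4/3 = grad(f) . v/|v|
```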
Question 5 |
Let v_{1}=\left[\begin{array}{l}1 \\ 2 \\ 0\end{array}\right] and v_{2}=\left[\begin{array}{l}2 \\ 1 \\ 3\end{array}\right] be two vectors. The value of the coefficient \alpha in the expression v_{1}=\alpha v_{2}+e, which minimizes the length of the error vector e, is
\frac{7}{2} | |
-\frac{2}{7} | |
\frac{2}{7} | |
-\frac{7}{2} |
Question 5 Explanation:
\begin{aligned}
e & =v_{1}-\alpha v_{2} \\
& =(\hat{i}+2 \hat{j}+0 \hat{k})-\alpha(2 \hat{i}+\hat{j}+3 \hat{k}) \\
& =(1-2 \alpha) \hat{i}+(2-\alpha) \hat{j}+(0-3 \alpha) \hat{k} \\
|e| & =\sqrt{(1-2 \alpha)^{2}+(2-\alpha)^{2}+(-3 \alpha)^{2}} \\
|e|^{2} & =5+14 \alpha^{2}-8 \alpha, \text { which is minimum when } \frac{\partial|e|^{2}}{\partial \alpha}=28 \alpha-8=0 \\
\alpha & =\frac{2}{7}
\end{aligned}
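Equivalently, minimizing |v_{1}-\alpha v_{2}| is a one-dimensional least-squares problem, so \alpha=\frac{v_{2}^{T} v_{1}}{v_{2}^{T} v_{2}}=\frac{4}{14}=\frac{2}{7}. The sketch below simply confirms this value numerically.

```python
import numpy as np

# Sketch: the alpha minimizing |v1 - alpha*v2| is the projection coefficient
# alpha = (v2 . v1) / (v2 . v2); confirm it equals 2/7 for the given vectors.
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([2.0, 1.0, 3.0])
alpha = np.dot(v2, v1) / np.dot(v2, v2)  # = 4/14 = 2/7
print(alpha, np.isclose(alpha, 2 / 7))   # 0.2857..., True
```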