
Definition of Vectors

There are three ways to define vectors: (1) geometrically, (2) analytically, and (3) axiomatically.

  • Geometric Approach
    In this approach, vectors are defined as directed line segments or arrows. Vector addition, subtraction, and multiplication by real numbers are defined geometrically. This method is limited to vectors in \(\mathbb{R}^{3}\).
  • Analytic Approach
    In this approach, vectors in \(n\)-dimensional space are entities that can be added to each other and multiplied by scalars. Specifically, a vector is an ordered set of n (real) numbers1 \[\mathbf{a}=(a_{1},a_{2},\cdots,a_{n}).\] The numbers \(a_{1},\cdots,a_{n}\) are called the components of \(\mathbf{a}.\) The sum of two vectors \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{n})\) and \(\mathbf{b}=(b_{1},\cdots,b_{n})\) is defined by \[\mathbf{a}+\mathbf{b}=(a_{1}+b_{1},\cdots,a_{n}+b_{n}),\] and the product of the vector \(\mathbf{a}\) by the scalar (= number) \(\lambda\) means \[\lambda\mathbf{a}=(\lambda a_{1},\cdots,\lambda a_{n}).\] The properties of the vector operations can now be deduced from the properties of numbers. We use this approach in this course.

    • The “zero vector” all of whose components are zero is denoted by \(\mathbf{0}\): \[\mathbf{0}=(0,\cdots,0).\]
    • Two vectors \(\mathbf{a}\) and \(\mathbf{b}\) are called parallel or collinear if one of them is a scalar multiple of the other; that is, there exists a number \(c\), such that \(\mathbf{a}=c\mathbf{b}\) or \(\mathbf{b}=c\mathbf{a}\). Note that the zero vector is parallel to all vectors.
  • Axiomatic Approach
    In this approach, the nature of vectors is not specified. Instead, we say vectors are elements of a set that can be added together and multiplied by numbers. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Such an algebraic system is called a linear vector space. In this approach, functions can be vectors. For more information, see any intermediate or advanced book on linear algebra.

Free, Sliding, and Bound Vectors in Physics

In mathematics it is often said that the location of a vector is immaterial; only its magnitude and direction matter. This, however, is not always true in physics, and it creates confusion for students when they take physics courses.

  • Some vector quantities can be moved about in space provided their length and direction are kept unchanged. These vectors are called free vectors.
    For example, if a body undergoes a uniform translational motion, its velocity is a free vector and it can be placed anywhere as long as its direction and its magnitude are preserved. A torque that is applied to a rigid body is another example of a free vector.
  • However, a force that is applied to a rigid body is not a free vector. It is clear that if its line of action does not pass through the center of mass, it can rotate the body or tip it over (Figure [fig:free-sliding](a,b)). But a force applied to a rigid body can be freely moved along its line of action (Figure [fig:free-sliding](c)). Vectors associated with a specific line of action are called sliding vectors.
  • If the body is deformable instead of being rigid, the effect of a force depends on its point of action in addition to its line of action (Figure [fig:bound]). Such vectors are called bound vectors.
    Figure [fig:bound]: The effect of a force on a deformable body depends on where it is applied as well as on its line of action and its magnitude.

The dot product

The dot product (or the inner product) of two vectors \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{n})\) and \(\mathbf{b}=(b_{1},\cdots,b_{n})\), denoted by \(\mathbf{a}\cdot\mathbf{b}\), is defined by \[\mathbf{a}\cdot\mathbf{b}=a_{1}b_{1}+\cdots+a_{n}b_{n}=\sum_{i=1}^{n}a_{i}b_{i}.\] Other notations for the dot product are \(\langle\mathbf{a},\mathbf{b}\rangle\) and \(\langle\mathbf{a}|\mathbf{b}\rangle\). The latter makes more sense because in quantum physics, to indicate that \(\mathbf{a}\) is a row vector, we write it as \(\langle\mathbf{a}|\) and to indicate that \(\mathbf{b}\) is a column vector, we write it as \(|\mathbf{b}\rangle\). Thus \[\langle\mathbf{a}|\mathbf{b}\rangle=[a_{1}\cdots a_{n}]\left[\begin{array}{c} b_{1}\\ \vdots\\ b_{n} \end{array}\right]=a_{1}b_{1}+\cdots+a_{n}b_{n}.\]
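As a quick numerical illustration, here is a minimal sketch of the componentwise definition, assuming NumPy is available (the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

# componentwise definition: a1*b1 + ... + an*bn
dot_manual = sum(a_i * b_i for a_i, b_i in zip(a, b))

# built-in dot product (same as a @ b)
dot_np = np.dot(a, b)

print(dot_manual, dot_np)   # 8.0 8.0
```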

The Length or Norm of a Vector

The norm (or length) of a vector \(\mathbf{a}\), denoted by \(\vert\vert\mathbf{a}\vert\vert\), \(\vert\mathbf{a}\vert\), or \(a\), is defined by \[\vert\mathbf{a}\vert=\sqrt{\mathbf{a}\cdot\mathbf{a}}=\sqrt{a_{1}^{2}+\cdots+a_{n}^{2}}.\]

  • Each nonzero vector has a positive length (or norm). The only vector whose length is zero is the zero vector \(\mathbf{0}\).
  • For any number \(\lambda\): \[|\lambda\mathbf{a}|=|\lambda||\mathbf{a}|.\]

The Angle between Two Vectors

If \(\mathbf{a},\) \(\mathbf{b}\in\mathbb{R}^{n}\) (\(n=2\) or 3), we can show \[\mathbf{a}\cdot\mathbf{b}=|\mathbf{a}||\mathbf{b}|\cos\theta\] where \(\theta\) is the angle between the two vectors. If \(n>3\), the above equation is used as the definition of the angle between two vectors.

We define two vectors \(\mathbf{a}\) and \(\mathbf{b}\) to be perpendicular (or we also say orthogonal) and write \(\mathbf{a}\perp\mathbf{b}\) if \[\mathbf{a\cdot b}=0.\]

  • The zero vector is perpendicular (and parallel) to all vectors.
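The norm, the angle formula, and the orthogonality test can be checked numerically with a short sketch (NumPy assumed; the vectors are arbitrary illustrations):

```python
import numpy as np

a = np.array([3.0, 0.0, 4.0])
b = np.array([0.0, 2.0, 0.0])

norm_a = np.sqrt(np.dot(a, a))      # |a| = sqrt(a . a) = 5.0
norm_b = np.linalg.norm(b)          # library equivalent, 2.0

# angle from  a . b = |a||b| cos(theta)
cos_theta = np.dot(a, b) / (norm_a * norm_b)
theta = np.arccos(cos_theta)        # pi/2 here, since a . b = 0

is_orthogonal = np.isclose(np.dot(a, b), 0.0)   # True: a is perpendicular to b
```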

Unit vectors

We say a vector \(\mathbf{a}\) is a unit vector if its norm (length) is 1, i.e. \(|\mathbf{a}|\) = 1.

For any nonzero vector \(\mathbf{a}\), a unit vector in its direction, \(\mathbf{e}_{\mathbf{a}}\), is defined by \[\mathbf{e}_{\mathbf{a}}=\frac{1}{|\mathbf{a}|}\mathbf{a}.\] For any vector \(\mathbf{a}\in\mathbb{R}^{n},\) we can write \[\mathbf{a}=|\mathbf{a}|\,\mathbf{e}_{\mathbf{a}}.\] If \(\mathbf{a}=\mathbf{0}\), \(\mathbf{e}_{\mathbf{a}}\) is an arbitrary unit vector. We can interpret the above equation as follows: any nonzero vector is determined by a positive number \(|\mathbf{a}|\) (its length) and a direction \(\mathbf{e}_{\mathbf{a}}\). The direction of the zero vector is arbitrary.

The unit vectors along the coordinate axes are \[\mathbf{e}_{1}=(1,0,\cdots,0),\quad\mathbf{e}_{2}=(0,1,0,\cdots,0),\quad\cdots,\quad\mathbf{e}_{n}=(0,0,\cdots,0,1).\] In \(\mathbb{R}^{3}\), they are often denoted by \(\mathbf{i}\), \(\mathbf{j}\), and \(\mathbf{k}\): \[\mathbf{i}=\mathbf{e}_{1}=(1,0,0),\quad\mathbf{j}=\mathbf{e}_{2}=(0,1,0),\quad\mathbf{k}=\mathbf{e}_{3}=(0,0,1).\] We note that for any vector \(\mathbf{a}=(a_{1},\cdots,a_{n})\), we can write \[\mathbf{a}=a_{1}\mathbf{e}_{1}+\cdots+a_{n}\mathbf{e}_{n}=\sum_{i=1}^{n}a_{i}\mathbf{e}_{i}.\] The \(i\)-th component of \(\mathbf{a}\) is the dot product of \(\mathbf{a}\) and the \(i\)-th unit vector: \[a_{i}=\mathbf{a}\cdot\mathbf{e}_{i}.\] Specifically, if \({\bf a}=(a_{1},a_{2},a_{3})\) then \[a_{1}={\bf a\cdot i},\quad a_{2}={\bf a\cdot j},\quad a_{3}={\bf a\cdot k}.\]
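A minimal sketch of normalization and of reading off components with the standard unit vectors (NumPy assumed; the vector is an arbitrary example):

```python
import numpy as np

a = np.array([2.0, -2.0, 1.0])

# unit vector in the direction of a (a must be nonzero); |a| = 3 here
e_a = a / np.linalg.norm(a)                        # (2/3, -2/3, 1/3)

# a = |a| e_a
assert np.allclose(a, np.linalg.norm(a) * e_a)

# the i-th component is the dot product with the i-th standard unit vector
e1 = np.array([1.0, 0.0, 0.0])
assert np.isclose(np.dot(a, e1), a[0])
```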

Projections

Let \(\mathbf{F}\) and \(\mathbf{d}\) be two vectors in \(\mathbb{R}^{n}\) with \(\mathbf{d\neq0}\). The component of \(\mathbf{F}\) along \(\mathbf{d}\), denoted by \(\text{comp}_{\mathbf{d}}\mathbf{F}\), is the number \[\text{comp}_{\mathbf{d}}\mathbf{F}=|\mathbf{F}|\cos\theta=|\mathbf{F}|\,\frac{\mathbf{F}\cdot\mathbf{d}}{|\mathbf{F}|\,|\mathbf{d}|},\] or \[\text{comp}_{\mathbf{d}}\mathbf{F}=\frac{\mathbf{F}\cdot\mathbf{d}}{|\mathbf{d}|}.\]

  • Note that because \(\cos\theta<0\) for \(\pi/2<\theta\leq\pi\) , then \(\text{comp}_{{\bf d}}{\bf F}\) is negative if the angle between \({\bf F}\) and \({\bf d}\) is obtuse.

The projection of \(\mathbf{F}\) onto \(\mathbf{d}\), denoted by \(\text{proj}_{\mathbf{d}}\mathbf{F}\), is the vector \[\text{proj}_{\mathbf{d}}\mathbf{F}=(\text{comp}_{\mathbf{d}}\mathbf{F})\,\mathbf{e}_{\mathbf{d}}=\frac{\mathbf{F}\cdot\mathbf{d}}{|\mathbf{d}|}\,\frac{\mathbf{d}}{|\mathbf{d}|}=\frac{\mathbf{F}\cdot\mathbf{d}}{\mathbf{d}\cdot\mathbf{d}}\,\mathbf{d}.\] See Figure [fig:projection].
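Both formulas translate directly into a short sketch (NumPy assumed; the vectors are arbitrary illustrations):

```python
import numpy as np

F = np.array([3.0, 4.0, 0.0])
d = np.array([1.0, 0.0, 0.0])        # d must be nonzero

# scalar component of F along d:  F . d / |d|
comp = np.dot(F, d) / np.linalg.norm(d)

# vector projection of F onto d:  (F . d / d . d) d
proj = (np.dot(F, d) / np.dot(d, d)) * d

print(comp)   # 3.0
print(proj)   # [3. 0. 0.]
```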

Matrices

By a matrix, we mean a rectangular array of the form \[\left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array}\right]=[a_{ij}]\in\mathbb{R}^{m\times n}\tag{{*}}\]

  • Matrices are usually symbolized using upper-case letters. For example we write \[\mathbf{A}=\left[\begin{array}{ccc} -1 & 3 & 0\\ 0.7 & 2 & -1 \end{array}\right].\tag{{**}}\]
  • If a matrix has \(m\) rows and \(n\) columns, we call it an \(m\) by \(n\) (or \(m\times n\)) matrix. An \(n\times n\) matrix is called a square matrix.
  • The entry in the \(i\)-th row and \(j\)-th column of a matrix \(\mathbf{A}\) is denoted by \(a_{ij}\) and is called the \(ij\)-entry or \(ij\)-component of the matrix. Alternative notations for that entry are \(\mathbf{A}(i,j)\) and \(\mathbf{A}_{i,j}\).
  • \(\mathbf{A}(:,i)\) or \(\mathbf{A}_{:,i}\) denotes the \(i\)-th column of \(\mathbf{A}\).
  • \(\mathbf{A}(i,:)\) or \(\mathbf{A}_{i,:}\) denotes the \(i\)-th row of \(\mathbf{A}\).
  • If \(\mathbf{A}=[a_{ij}],\mathbf{B}=[b_{ij}]\in\mathbb{R}^{m\times n}\), and \(\lambda\) is a number then \[\mathbf{A}+\mathbf{B}=[a_{ij}+b_{ij}],\quad\lambda\mathbf{A}=[\lambda a_{ij}].\]
  • If \(\mathbf{A}=[a_{ij}]\) is an \(m\times n\) matrix and \(\mathbf{B}=[b_{ij}]\) is an \(n\times p\) matrix, then \(\mathbf{C}=\mathbf{AB}\) is an \(m\times p\) matrix such that \[C_{ij}=A(i,:)\cdot B(:,j).\] (A short numerical sketch of these matrix operations is given after this list.)
  • Note that matrix multiplication is not commutative; that is, in general \(\mathbf{AB}\neq\mathbf{BA}\).
  • The zero matrix is an \(m\times n\) matrix whose components are all zero.
  • If all entries outside the main diagonal of a square matrix are zero, it is called a diagonal matrix. \[\left[\begin{array}{cccc} d_{1} & 0 & \cdots & 0\\ 0 & d_{2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & d_{n} \end{array}\right]\]
  • An \(n\times n\) diagonal matrix with ones on the main diagonal (and zeros elsewhere) is called the identity matrix and is denoted by \(\mathbf{I}\) or to emphasize its size by \(\mathbf{I}_{n}\). \[\mathbf{I}_{n}=\left[\begin{array}{cccc} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 1 \end{array}\right].\]
  • If \(\mathbf{X}\) and the zero matrix \(\mathbf{O=[}0\mathbf{]}\) are \(n\times p\) matrices, then \[\mathbf{X+O=X.}\]
  • For any \(n\times p\) matrix \(\mathbf{X}\) \[\mathbf{I}_{n}\mathbf{X}=\mathbf{X}.\]
  • If we have a system of linear equations \[\begin{cases} \begin{array}{c} a_{11}x_{1}+a_{12}x_{2}+\cdots+a_{1n}x_{n}=b_{1}\\ \vdots\\ a_{m1}x_{1}+a_{m2}x_{2}+\cdots+a_{mn}x_{n}=b_{m} \end{array}\end{cases}\] we can rewrite it as \[\left[\begin{array}{ccc} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{m1} & \cdots & a_{mn} \end{array}\right]\left[\begin{array}{c} x_{1}\\ \vdots\\ x_{n} \end{array}\right]=\left[\begin{array}{c} b_{1}\\ \vdots\\ b_{m} \end{array}\right]\] or \[\mathbf{Ax=b}\mathbf{.}\] We say that there are m equations and n unknowns, or n variables.
  • Let \(\mathbf{A}=[a_{ij}]\) be an \(m\times n\) matrix. The \(n\times m\) matrix \(\mathbf{B}=[b_{ji}]\) such that \(b_{ji}=a_{ij}\) is called the transpose of \(\mathbf{A}\), and is also denoted by \(\mathbf{A}^{T}\) or \(\mathbf{A}'\). In fact, to obtain the transpose of a matrix, we switch the row and column indices of the matrix. Note \[(\mathbf{A}^{T})^{T}=\mathbf{A}.\]
  • Remark \[(\mathbf{AB})^{T}=\mathbf{B}^{T}\mathbf{A}^{T},\quad(\mathbf{A}+\mathbf{B})^{T}=\mathbf{A}^{T}+\mathbf{B}^{T}\]
  • A matrix \(\mathbf{A}\) is called symmetric if \[\mathbf{A}=\mathbf{A}^{T}.\] This definition implies that \(A\) is symmetric only if it is a square matrix.
  • A matrix \(\mathbf{A}\) is called skew symmetric if \[\mathbf{A}=-\mathbf{A}^{T}\]
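A minimal NumPy sketch of the operations listed above. The matrix \(\mathbf{A}\) is the one from (**); the matrix \(\mathbf{B}\) is made up for this illustration:

```python
import numpy as np

A = np.array([[-1.0, 3.0, 0.0],
              [ 0.7, 2.0, -1.0]])        # 2 x 3 matrix, as in (**)
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 4.0]])               # 3 x 2 matrix (made up for the example)

C = A @ B                                # 2 x 2 product: C[i, j] = A[i, :] . B[:, j]
assert np.isclose(C[0, 1], np.dot(A[0, :], B[:, 1]))

At = A.T                                 # transpose: row and column indices switched
S = At @ A                               # A^T A is always symmetric
assert np.allclose(S, S.T)

I3 = np.eye(3)                           # identity matrix I_3
assert np.allclose(I3 @ B, B)            # I_3 B = B
```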

Determinant

The determinant of a square matrix \(\mathbf{A}\) is denoted by \(det(\mathbf{A})\) or by \(|A|\). Let \(\mathbf{A}\) be a 2 by 2 matrix \[A=\left[\begin{array}{cc} a & b\\ c & d \end{array}\right].\] We define \(det(A)\) to be \(ad-bc\):

\[\left|\begin{array}{cc} a & b\\ c & d \end{array}\right|=ad-bc.\] The higher-order determinants are reduced to the lower-order determinants. For example,

\[\left|\begin{array}{ccc} a_{1} & a_{2} & a_{3}\\ b_{1} & b_{2} & b_{3}\\ c_{1} & c_{2} & c_{3} \end{array}\right|=a_{1}\left|\begin{array}{cc} b_{2} & b_{3}\\ c_{2} & c_{3} \end{array}\right|-a_{2}\left|\begin{array}{cc} b_{1} & b_{3}\\ c_{1} & c_{3} \end{array}\right|+a_{3}\left|\begin{array}{cc} b_{1} & b_{2}\\ c_{1} & c_{2} \end{array}\right|\]

and

\[\begin{aligned} \left|\begin{array}{cccc} a_{1} & a_{2} & a_{3} & a_{4}\\ b_{1} & b_{2} & b_{3} & b_{4}\\ c_{1} & c_{2} & c_{3} & c_{4}\\ d_{1} & d_{2} & d_{3} & d_{4} \end{array}\right| = & \,a_{1}\left|\begin{array}{ccc} b_{2} & b_{3} & b_{4}\\ c_{2} & c_{3} & c_{4}\\ d_{2} & d_{3} & d_{4} \end{array}\right|-a_{2}\left|\begin{array}{ccc} b_{1} & b_{3} & b_{4}\\ c_{1} & c_{3} & c_{4}\\ d_{1} & d_{3} & d_{4} \end{array}\right|\\ & +a_{3}\left|\begin{array}{ccc} b_{1} & b_{2} & b_{4}\\ c_{1} & c_{2} & c_{4}\\ d_{1} & d_{2} & d_{4} \end{array}\right|-a_{4}\left|\begin{array}{ccc} b_{1} & b_{2} & b_{3}\\ c_{1} & c_{2} & c_{3}\\ d_{1} & d_{2} & d_{3} \end{array}\right|\end{aligned}\]
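The expansion along the first row translates directly into a short recursive function. This is a teaching sketch (NumPy assumed), not an efficient algorithm; in practice one uses a library routine such as numpy.linalg.det:

```python
import numpy as np

def det(M):
    """Determinant by cofactor expansion along the first row (teaching sketch)."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    if n == 2:
        return M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(M[1:, :], j, axis=1)    # remove row 0 and column j
        total += (-1) ** j * M[0, j] * det(minor)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det(A), np.linalg.det(A))                   # both give -3.0 (up to rounding)
```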

Properties of determinants:

If \(A\) and \(B\) are two \(n\times n\) matrices then:

  • If an entire row or an entire column of \(A\) contains only zeros, then \(det(A)=0\).
  • \(det(AB)=det(A)\,det(B)\)
  • \(det(A^{T})=det(A)\)
  • If we add a multiple of one row to another, the determinant does not change \[det\left[\begin{array}{ccc} \text{\textemdash} & r_{1} & \text{\textemdash}\\ \text{\textemdash} & r_{2} & \text{\textemdash}\\ \text{\textemdash} & r_{3} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{n} & \text{\textemdash} \end{array}\right]=det\left[\begin{array}{ccc} \text{\textemdash} & r_{1} & \text{\textemdash}\\ \text{\textemdash} & r_{2}+\alpha r_{1} & \text{\textemdash}\\ \text{\textemdash} & r_{3} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{n} & \text{\textemdash} \end{array}\right]\]
    Similarly adding a multiple of one column to another does not change the determinant.
  • If we multiply all entries of a row of \(A\) by a number \(\alpha\), then the determinant of the new matrix is \(\alpha\,det(A)\) \[det\left[\begin{array}{ccc} \text{\textemdash} & r_{1} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & \alpha r_{i} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{n} & \text{\textemdash} \end{array}\right]=\alpha\,det\left[\begin{array}{ccc} \text{\textemdash} & r_{1} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{i} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{n} & \text{\textemdash} \end{array}\right]\]
  • \(det(\alpha A)=\alpha^{n}det(A)\) for any scalar \(\alpha\)
  • If we exchange two rows with each other, the sign of the determinant will switch \[det\left[\begin{array}{ccc} \text{\textemdash} & \textcolor{red}{r_{1}} & \text{\textemdash}\\ \text{\textemdash} & {\normalcolor \textcolor{blue}{r_{2}}} & \text{\textemdash}\\ \text{\textemdash} & r_{3} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{n} & \text{\textemdash} \end{array}\right]=-det\left[\begin{array}{ccc} \text{\textemdash} & \textcolor{blue}{r_{2}} & \text{\textemdash}\\ \text{\textemdash} & \textcolor{red}{r_{1}} & \text{\textemdash}\\ \text{\textemdash} & r_{3} & \text{\textemdash}\\ \vdots & \vdots & \vdots\\ \text{\textemdash} & r_{n} & \text{\textemdash} \end{array}\right]\]
    Similarly, if we swap two columns, the sign of the determinant will switch.

Inverse of a matrix

  • Let \(\mathbf{A}\) be an \(n\times n\) matrix. An inverse for \(\mathbf{A}\) is a matrix \(\mathbf{B}\) such that \[\mathbf{AB}=\mathbf{BA}=\mathbf{I}_{n}.\] The inverse of \(\mathbf{A}\), if it exists, is unique and is denoted by \(\mathbf{A}^{-1}\) or inv(\(\mathbf{A}\)).
  • If \(\mathbf{A}\) is a square matrix and invertible, then to solve the inhomogeneous equation \(\mathbf{Ax}=\mathbf{b}\) we multiply both sides on the left by \(\mathbf{A}^{-1}\): \[\mathbf{A}^{-1}\mathbf{Ax}=\mathbf{Ix}=\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}.\]
  • If \(\mathbf{A}\) is a square matrix and \(det(\mathbf{A})\neq0\), then \(\mathbf{A}\) is invertible. In fact,
    \(\mathbf{A}^{-1}\) exists \(\iff\) \(det(\mathbf{A})\neq0\).
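A minimal sketch of solving \(\mathbf{Ax}=\mathbf{b}\) this way (NumPy assumed; the matrix and right-hand side are arbitrary examples):

```python
import numpy as np

A = np.array([[4.0, -1.0],
              [2.0,  1.0]])
b = np.array([3.0, 3.0])

assert not np.isclose(np.linalg.det(A), 0.0)   # det(A) = 6, so A is invertible

A_inv = np.linalg.inv(A)
x = A_inv @ b                                  # x = A^{-1} b
assert np.allclose(A @ x, b)

# numerically, solving directly is preferred over forming the inverse
x2 = np.linalg.solve(A, b)
assert np.allclose(x, x2)
```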

Orthogonal Matrices

An orthogonal matrix is a square matrix whose columns (or rows) are orthonormal vectors; namely \[\mathbf{A}(:,i)\cdot\mathbf{A}(:,j)=\begin{cases} 0 & \text{if }i\neq j\\ 1 & \text{if }i=j \end{cases}\]

If \(\mathbf{A}\) is an orthogonal matrix then \[\mathbf{A}\mathbf{A}^{T}=\mathbf{A}^{T}\mathbf{A}=\mathbf{I}\] or equivalently \[\mathbf{A}^{T}=\mathbf{A}^{-1}.\]
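A rotation matrix is a standard example of an orthogonal matrix; a quick numerical check (NumPy assumed, arbitrary angle):

```python
import numpy as np

theta = 0.3                                        # any rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # 2-D rotation matrix

# columns are orthonormal, so A^T A = A A^T = I and A^T = A^{-1}
assert np.allclose(A.T @ A, np.eye(2))
assert np.allclose(A @ A.T, np.eye(2))
assert np.allclose(A.T, np.linalg.inv(A))
```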

The Cross Product

We have seen that the inner product (also called the dot product or the scalar product) is an algebraic operation that takes two vectors in \(\mathbb{R}^{n}\) and returns a single number. It is natural to ask how we can define the product of two vectors such that the output is another vector. Such an operation is called the cross product. Let \({\bf {a}}\) and \({\bf {b}}\) be two vectors in \(\mathbb{R}^{3}\). The cross product of \({\bf {a}}\) and \({\bf {b}}\), denoted by \({\bf {a}\times{\bf {b}}}\), is another vector in \(\mathbb{R}^{3}\) that has the following properties:

  1. \({\bf a}\times{\bf b}\) is perpendicular to both \({\bf {a}}\) and \({\bf {b}}\). \[({\bf a}\times{\bf b})\perp{\bf a}\quad\text{and}\quad({\bf a}\times{\bf b})\perp{\bf b}.\]
  2. The direction of \({\bf {a}\times{\bf {b}}}\) is determined by the right-hand rule: if the fingers of the right hand are curled from \({\bf {a}}\) toward \({\bf {b}}\), then the thumb points in the direction of \({\bf {a}\times{\bf {b}}}\).
  3. The length of \({\bf {a}\times{\bf {b}}}\) is the product of lengths of \({\bf {a}}\) and \({\bf {b}}\) and the sine of the angle \(\theta\) between \({\bf {a}}\) and \({\bf {b}}\). Namely \[|{\bf a}\times{\bf b}|=|{\bf a}||{\bf b}|\sin\theta.\]

We can show the following definition of the cross product satisfies these properties: \[\mathbf{a\times b=\left|\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ a_{1} & a_{2} & a_{3}\\ b_{1} & b_{2} & b_{3} \end{array}\right|}.\]
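A short numerical check of properties 1 and 3 (NumPy assumed; the vectors are arbitrary illustrations):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])

c = np.cross(a, b)                     # the determinant formula above

# c is perpendicular to both a and b
assert np.isclose(np.dot(c, a), 0.0)
assert np.isclose(np.dot(c, b), 0.0)

# |a x b| = |a||b| sin(theta)
cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
sin_t = np.sqrt(1.0 - cos_t**2)
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * sin_t)
```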

The Equation of a Line

If \(P\) is a point in the plane or space and \({\bf v\neq{\bf 0}}\), then there is only one line that passes through \(P\) and is parallel to \({\bf v}\). The following set describes the points on that line: \[\{\overrightarrow{OP}+t{\bf v}\ |\ t\text{ is a real number}\}\] When \(t=0\), we obtain the point \(P\) itself.
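A minimal sketch of this parametric description (NumPy assumed; the point, direction, and function name are illustrative only):

```python
import numpy as np

P = np.array([1.0, 2.0, 3.0])      # a point on the line
v = np.array([0.0, 1.0, -1.0])     # direction vector, v != 0

def point_on_line(t):
    """Return OP + t v, the point of the line for the parameter value t."""
    return P + t * v

print(point_on_line(0.0))          # t = 0 gives P itself
print(point_on_line(2.0))          # another point on the line
```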

Distance from a Point to a Line in 3-Space

The distance between a point \(P\) and a line \(L\) that contains the vector \({\bf v}\) is \(|\overrightarrow{PH}|\) where \(\overrightarrow{PH}\) is perpendicular to \(\mathbf{v}\). To find \(|\overrightarrow{PH}|\), choose a point \(Q\) on \(L\). Then \[|\overrightarrow{PH}|=|\overrightarrow{PQ}|\sin\theta\] where \(\theta\) is the angle between \(\overrightarrow{PQ}\) and \({\bf v}\). But \[|\overrightarrow{PQ}\times{\bf v}|=|\overrightarrow{PQ}|\ |{\bf v}|\ \sin\theta.\] Therefore \[\begin{aligned} |\overrightarrow{PH}| & =|\overrightarrow{PQ}|\sin\theta\\ & =|\overrightarrow{PQ}|\frac{|\overrightarrow{PQ}\times{\bf v}|}{|\overrightarrow{PQ}|\ |{\bf v}|}\\ & =\frac{|\overrightarrow{PQ}\times{\bf v}|}{|{\bf v}|}\end{aligned}\]
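The final formula is one line of code (NumPy assumed; the point and line below are made up for the example):

```python
import numpy as np

P = np.array([1.0, 1.0, 1.0])      # the point
Q = np.array([0.0, 0.0, 0.0])      # any point on the line L
v = np.array([1.0, 0.0, 0.0])      # direction vector of L

PQ = Q - P
distance = np.linalg.norm(np.cross(PQ, v)) / np.linalg.norm(v)
print(distance)                    # sqrt(2): distance from (1,1,1) to the x-axis
```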

Distance from a Point to a Plane

The distance \(D\) between a point \(P(x_{0},y_{0},z_{0})\) and the plane \(\Sigma\) \[\Sigma:\quad ax+by+cz+d=0\] is equal to \[D=\frac{|ax_{0}+by_{0}+cz_{0}+d|}{\sqrt{a^{2}+b^{2}+c^{2}}}.\] The distance between \(P\) and \(\Sigma\) is \(|\overrightarrow{PH}|\) where \(\overrightarrow{PH}\parallel{\bf n}=(a,b,c)\). To find \(|\overrightarrow{PH}|\), let’s choose a point \(Q(x_{1},y_{1},z_{1})\) on \(\Sigma\). Then \[|\overrightarrow{PH}|=|\overrightarrow{QP}|\ |\cos\theta|\] where \(\theta\) is the angle between \(\overrightarrow{QP}\) and \({\bf n}\). But \[\cos\theta=\frac{\overrightarrow{QP}\cdot{\bf n}}{|\overrightarrow{QP}|\ |{\bf n}|}.\] Thus \[\begin{aligned} |\overrightarrow{PH}| & =|\overrightarrow{QP}|\ |\cos\theta|\\ & =|\overrightarrow{QP}|\frac{|\overrightarrow{QP}\cdot{\bf n}|}{|\overrightarrow{QP}|\ |{\bf n}|}\\ & =\frac{|\overrightarrow{QP}\cdot{\bf n}|}{|{\bf n}|}\end{aligned}\] Let’s simplify the above equation: \[\overrightarrow{QP}=(x_{0}-x_{1},y_{0}-y_{1},z_{0}-z_{1})\] and \[\begin{aligned} \overrightarrow{QP}\cdot{\bf n} & =a(x_{0}-x_{1})+b(y_{0}-y_{1})+c(z_{0}-z_{1})\\ & =ax_{0}+by_{0}+cz_{0}-(ax_{1}+by_{1}+cz_{1}).\end{aligned}\] Because \(Q(x_{1},y_{1},z_{1})\) is on the plane, \[ax_{1}+by_{1}+cz_{1}+d=0\] or \[ax_{1}+by_{1}+cz_{1}=-d.\] Therefore \[\begin{aligned} \overrightarrow{QP}\cdot{\bf n} & =ax_{0}+by_{0}+cz_{0}-(ax_{1}+by_{1}+cz_{1})\\ & =ax_{0}+by_{0}+cz_{0}+d\end{aligned}\] and finally, because \(|{\bf n}|=\sqrt{a^{2}+b^{2}+c^{2}}\), we have \[\begin{aligned} D & =\frac{|\overrightarrow{QP}\cdot{\bf n}|}{|{\bf n}|}\\ & =\frac{|ax_{0}+by_{0}+cz_{0}+d|}{\sqrt{a^{2}+b^{2}+c^{2}}}.\end{aligned}\]
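A minimal numerical sketch of the final formula (NumPy assumed; the plane and point are made up for the example):

```python
import numpy as np

# plane Sigma: 2x - y + 2z + 3 = 0, point P(1, 0, -1)
a, b, c, d = 2.0, -1.0, 2.0, 3.0
x0, y0, z0 = 1.0, 0.0, -1.0

D = abs(a * x0 + b * y0 + c * z0 + d) / np.sqrt(a**2 + b**2 + c**2)
print(D)   # |2 - 0 - 2 + 3| / 3 = 1.0
```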

Linear Span

Let \(S=\{\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\}\) be a nonempty set of \(k\) vectors in \(\mathbb{R}^{n}\) (\(0<k,\) \(\mathbf{v}_{i}\in\mathbb{R}^{n}\)) . Let \(c_{1},\cdots,c_{k}\) be numbers. An expression of type \[c_{1}\mathbf{v}_{1}+\cdots+c_{k}\mathbf{v}_{k}\] is called a linear combination of \(\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\). The numbers \(c_{1},\cdots,c_{k}\) are then called the coefficients of the linear combination.

If \(\mathbf{x}\in\mathbb{R}^{n}\) can be represented as a linear combination of \(\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\), namely \[\mathbf{x}=c_{1}\mathbf{v}_{1}+\cdots+c_{k}\mathbf{v}_{k}\] we say the set \(S\) spans the vector \(\mathbf{x}\).

The set containing all linear combinations of \(\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\) is called the linear span of \(S\) and is denoted by \(L(S)\), namely \[L(S)=\{c_{1}\mathbf{v}_{1}+\cdots+c_{k}\mathbf{v}_{k}|\ c_{1}\in\mathbb{R},\cdots,c_{k}\in\mathbb{R}\}.\]

Let \(S=\{\mathbf{v}\}\), where \(\mathbf{v}\neq\mathbf{0}\) is a vector in \(\mathbb{R}^{3}\). Then \(L(S)=\{c\mathbf{v}|\ c\in\mathbb{R}\}\) is the line passing through the origin in the direction of \(\mathbf{v}.\)

Let \(\mathbf{u}\) and \(\mathbf{v}\) be two non-parallel vectors in space and \(S=\{\mathbf{u},\mathbf{v}\}\). Then \(L(S)=\{c_{1}\mathbf{u}+c_{2}\mathbf{v}|\ c_{1}\in\mathbb{R},\ c_{2}\in\mathbb{R}\}\) is the plane passing through the origin generated by \(\mathbf{u}\) and \(\mathbf{v}\).

Every set \(S=\{\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\}\) spans the zero vector, since \(0\mathbf{v}_{1}+\cdots+0\mathbf{v}_{k}=\mathbf{0}\); in other words, \(\mathbf{0}\in L(S)\).

Definition: A set \(S=\{\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\}\) of vectors in \(\mathbb{R}^{n}\) is said to span \(\mathbf{x}\) uniquely if it spans \(\mathbf{x}\) and if the coefficients of the linear combination are uniquely determined; that is, if \[\mathbf{x}=c_{1}\mathbf{v}_{1}+\cdots+c_{k}\mathbf{v}_{k}=d_{1}\mathbf{v}_{1}+\cdots+d_{k}\mathbf{v}_{k}\] implies \(c_{i}=d_{i}\) for \(i=1,\cdots,k\).

Theorem: A set S spans every vector in L(S) uniquely if and only if S spans the zero vector uniquely.
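Whether a given vector \(\mathbf{x}\) lies in \(L(S)\) can be checked numerically by solving for the coefficients, for example with a least-squares solve (a sketch assuming NumPy; the vectors are arbitrary illustrations):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
x  = np.array([2.0, 3.0, 5.0])

# columns of V are the vectors of S; we look for c with V c = x
V = np.column_stack([v1, v2])
c = np.linalg.lstsq(V, x, rcond=None)[0]

in_span = np.allclose(V @ c, x)
print(in_span, c)        # True, coefficients c1 = 2, c2 = 3
```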

Linear Independence

A set \(S=\{\mathbf{v}_{1},\cdots,\mathbf{v}_{k}\}\) which spans the zero vector uniquely is said to be a linearly independent set of vectors. Otherwise, S is called linearly dependent.
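For column vectors, linear independence is equivalent to the matrix whose columns are the vectors of \(S\) having full column rank, so it can be tested numerically (a sketch assuming NumPy; the vectors are arbitrary illustrations):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = np.array([2.0, 1.0, 0.0])       # v3 = v1 + v2, so S is linearly dependent

V = np.column_stack([v1, v2, v3])
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)                   # False: a nonzero combination gives the zero vector
```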

Change of Basis

Use of the coordinate matrix of a vector \(\mathbf{v}\) relative to an ordered basis \(\mathcal{B}\), \[\mathbf{v}=\left[\begin{array}{c} x_{1}\\ \vdots\\ x_{n} \end{array}\right],\] is often more convenient than writing it as an \(n\)-tuple \((x_{1},\cdots,x_{n})\) of coordinates. We often write the coordinate matrix as \[\left[\mathbf{v}\right]_{\mathcal{B}}\] to indicate its dependence on the basis. If \(\mathcal{B}\) is the standard basis, then we often drop the subscript and simply write \([\mathbf{v}]\).

The question is how the coordinates of \(\mathbf{v}\) in two different ordered bases are related.

Let \(\mathcal{B}=\{\mathbf{u}_{1},\cdots,\mathbf{u}_{n}\}\) and \(\mathcal{B}'=\{\mathbf{u}_{1}',\cdots,\mathbf{u}_{n}'\}\) be two ordered bases of \(\mathbb{R}^{n}\). There is a unique and invertible \(n\times n\) matrix \(P\) such that \[\left[\mathbf{v}\right]_{\mathcal{B}'}=P^{-1}[\mathbf{v}]_{\mathcal{B}},\] for every vector \(\mathbf{v}\) in \(\mathbb{R}^{n}\). The \(i\)-th column of \(P\) is the coordinate matrix of the \(i\)-th vector of \(\mathcal{B}'\) relative to the basis \(\mathcal{B}\); that is, \[P=\left[\begin{array}{ccc} | & & |\\{} [\mathbf{u}_{1}']_{\mathcal{B}} & \cdots & [\mathbf{u}_{n}']_{\mathcal{B}}\\ | & & | \end{array}\right]\]
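A minimal sketch of this change of coordinates (NumPy assumed; the second basis is made up for the example, and \(\mathcal{B}\) is taken to be the standard basis):

```python
import numpy as np

# B is the standard basis of R^2; B' = {u1', u2'} is an example basis
u1p = np.array([1.0, 1.0])
u2p = np.array([-1.0, 1.0])

# columns of P are the B'-vectors written in the basis B (here, the standard basis)
P = np.column_stack([u1p, u2p])

v_B = np.array([3.0, 1.0])                 # coordinates of v relative to B
v_Bp = np.linalg.solve(P, v_B)             # [v]_{B'} = P^{-1} [v]_B
print(v_Bp)                                # [ 2. -1.]
```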

Eigenvalue and eigenvector

Let \(\mathbf{A}\) be a square matrix. We say \(\mathbf{x}\neq\mathbf{0}\) is an eigenvector of \(\mathbf{A}\) if there exists a number \(\lambda\) such that \[\mathbf{A}\mathbf{x}=\lambda\mathbf{x}.\] In this case \(\lambda\) is called an eigenvalue of \(\mathbf{A}\).

Example: \[A=\left[\begin{array}{cc} 4 & -1\\ 2 & 1 \end{array}\right]\] Now consider two vectors \[\mathbf{v}_{1}=\left[\begin{array}{c} -1\\ 1 \end{array}\right]\quad\mathbf{v}_{2}=\left[\begin{array}{c} 2\\ 4 \end{array}\right].\] What is the influence of \(A\) on these vectors? \[\mathbf{y}_{1}=\mathbf{Av_{\mathbf{1}}}=\left[\begin{array}{cc} 4 & -1\\ 2 & 1 \end{array}\right]\left[\begin{array}{c} -1\\ 1 \end{array}\right]=\left[\begin{array}{c} -5\\ -1 \end{array}\right]\] and \[\mathbf{y}_{2}=\mathbf{A}\mathbf{v}_{2}=\left[\begin{array}{cc} 4 & -1\\ 2 & 1 \end{array}\right]\left[\begin{array}{c} 2\\ 4 \end{array}\right]=\left[\begin{array}{c} 4\\ 8 \end{array}\right].\] Note that \(\mathbf{y}_{1}\nparallel\mathbf{v}_{1}\) but \(\mathbf{y}_{2}\parallel\mathbf{v}_{2}\) (more specifically \(\mathbf{y}_{2}=2\mathbf{v}_{2})\). This means in the first case, we get a new vector with a different direction when compared to the original vector. In the second case, the new vector has the same direction as the original one. The vector \(\mathbf{v}_{2}\) is called an eigenvector of \(\mathbf{A}\) and 2 is called an eigenvalue of \(\mathbf{A}\). Eigen is the German word for “self.”

  • A matrix might not have any real eigenvalue; for example, a rotation matrix in \(\mathbb{R}^{2}\) has no real eigenvalue unless \(\theta=0\) or \(\theta=\pi\).
  • In \(\mathbb{R}^{3}\), one is an eigenvalue of any rotation matrix. Geometrically, this means every rotation preserves one direction, which is the axis of rotation.

How to determine eigenvalues and eigenvectors

Let \(\mathbf{A}\) be an \(n\times n\) square matrix and \(\mathbf{x}\) an \(n\times1\) vector. To find \(\lambda\) and \(\mathbf{x}\) such that \(\mathbf{Ax}=\lambda\mathbf{x}\), we put \(\mathbf{x=Ix}\) where \(\mathbf{I}\) is the identity matrix of order \(n\). Then we can write \[\mathbf{Ax}=\lambda\mathbf{Ix}\quad\Rightarrow\quad(\mathbf{A}-\lambda\mathbf{I})\mathbf{x}=\mathbf{0}.\] This equation has a nontrivial solution (\(\mathbf{x}\neq\mathbf{0}\)) if and only if \(\mathbf{A}-\lambda\mathbf{I}\) is not invertible, which means \[det(\mathbf{A}-\lambda\mathbf{I})=0.\]
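A minimal numerical check using the matrix from the example above (NumPy assumed; np.linalg.eig solves the characteristic equation for us):

```python
import numpy as np

A = np.array([[4.0, -1.0],
              [2.0,  1.0]])

# np.linalg.eig solves det(A - lambda I) = 0 and returns eigenpairs
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                      # 3.0 and 2.0 (in some order)

# each column of 'eigenvectors' satisfies A x = lambda x
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```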


  1. The elements of \(\mathbb{R}^{n}\) can be interpreted in two ways: as points or as vectors. The difference between vectors and points is that vectors permit “linear operations”: \(\mathbf{a}+\mathbf{b}\) and \(\lambda\mathbf{a}\). Points cannot be added. Adding the positions of London and Paris would have no geometrical meaning, at least no meaning independent of the special coordinate system used.↩︎