Linear Algebra Basics

Author: JasonJe | Published 2018-11-01 18:13

    Linear Algebra

    Vectors

    \vec{\text x} = \left[ \begin{matrix} x_1\\ x_2\\ \vdots \\ x_n\\ \end{matrix} \right]

    1. Linear dependence and linear independence

    • A set of vectors is linearly dependent

    if there exist real numbers a_1, a_2, \cdots, a_n, not all zero, such that:

    \sum_{i = 1} ^{n} a_i \vec{\text v_i} = \vec{\text 0}

    That is, at least one of the vectors can be written as a linear combination of the others.

    • A set of vectors is linearly independent

    if and only if the equality below holds only when a_i = 0 for every i=1, 2, \cdots, n:

    \sum_{i = 1} ^{n} a_i\vec{\text v_i} = \vec{\text 0}

    In this case, any vector \vec{\beta} expressible in terms of the set (and not itself a member of it) has a unique linear representation by the vectors of the set.
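
    A quick numerical check of linear dependence is to stack the vectors into a matrix and compare its rank with the number of vectors; the rank also equals the dimension of their span. A minimal sketch with illustrative vectors (not from the text above):

    import numpy as np
    
    # The set is linearly dependent iff the rank is smaller than the number of vectors.
    vectors = np.array([[1, 2, 3],
                        [2, 4, 6],   # equals 2 * the first row, so the set is dependent
                        [0, 1, 1]])
    rank = np.linalg.matrix_rank(vectors)
    print(rank, rank < len(vectors))
    
    2 True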

    2. Dimension of a vector space

    The maximum number of linearly independent vectors contained in a vector space is called the dimension of the vector space.

    3. Dot product (inner product) of vectors

    \vec{\text{u}}\cdot \vec{\text{v}} = u_{1}v_{1} + u_{2}v_{2} + \cdots + u_{n}v_{n} = |\vec{\text{u}}||\vec{\text{v}}| \cos(\vec{\text{u}}, \vec{\text{v}} ) = \vec{\text{u}}^\text T\vec{\text{v}} = \vec{\text{v}}^\text T\vec{\text{u}}

    import numpy as np
    
    u, v = np.array([1, 2, 3]), np.array([4, 5, 6])
    
    uv = u.dot(v)
    uv = np.dot(u, v)
    uv
    
    32
    

    4. Cross product (outer product) of three-dimensional vectors

    \vec{\text{w}} = \vec{\text{u}} \times \vec{\text{v}} = \left| \begin{matrix} \vec{\text{i}} & \vec{\text{j}} & \vec{\text{k}} \\ u_x & u_y & u_z \\ v_x & v_y & v_z \end{matrix} \right| \\ = (u_yv_z - u_zv_y)\vec{\text{i}} - (u_xv_z - u_zv_x)\vec{\text{j}} + (u_xv_y - u_yv_x)\vec{\text{k}}

    where \vec{\text{i}}, \vec{\text{j}}, \vec{\text{k}} are the unit vectors along the x, y, z axes.

    \vec{\text{u}} = u_x\vec{\text{i}} + u_y\vec{\text{j}} + u_z\vec{\text{k}}, \\ \vec{\text{v}} = v_x\vec{\text{i}} + v_y\vec{\text{j}} + v_z\vec{\text{k}}

    • The cross product of \vec{\text{u}} and \vec{\text{v}} is perpendicular to the plane spanned by \vec{\text{u}} and \vec{\text{v}}, with direction given by the right-hand rule;

    • The magnitude of the cross product equals the area of the parallelogram spanned by \vec{\text{u}} and \vec{\text{v}};

    • \vec{\text{u}} \times \vec{\text{v}} = - \vec{\text{v}} \times \vec{\text{u}}

    • \vec{\text{u}} \times (\vec{\text{v}} \times \vec{\text{w}}) = (\vec{\text{u}} \cdot \vec{\text{w}})\vec{\text{v}} - (\vec{\text{u}} \cdot \vec{\text{v}})\vec{\text{w}}

    import numpy as np
    
    u, v = np.array([1, 2, 3]), np.array([4, 5, 6])
    
    uv = np.cross(u, v)
    uv
    
    array([-3,  6, -3])
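
    The properties above can be spot-checked numerically. A minimal sketch, reusing the same \vec{\text{u}} and \vec{\text{v}}:

    import numpy as np
    
    u, v = np.array([1, 2, 3]), np.array([4, 5, 6])
    w = np.cross(u, v)
    print(np.dot(w, u), np.dot(w, v))           # both 0: w is perpendicular to u and to v
    print(np.array_equal(np.cross(v, u), -w))   # anti-commutativity: v x u = -(u x v)
    
    0 0
    True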
    

    5. Mixed product (scalar triple product) of three-dimensional vectors

    \left[ \vec{\text{u}}\ \vec{\text v}\ \vec{\text w} \right] = (\vec{\text u}\times\vec{\text v})\cdot \vec{\text w} = \vec{\text u} \cdot (\vec{\text v} \times \vec{\text w}) = \left| \begin{matrix} u_x & u_y & u_z \\ v_x & v_y & v_z \\ w_x & w_y & w_z \end{matrix} \right| = \left| \begin{matrix} u_x & v_x & w_x \\ u_y & v_y & w_y \\ u_z & v_z & w_z \end{matrix} \right|

    • Its physical meaning: the volume of the parallelepiped whose edges are \vec{u}, \vec{v}, \vec{w}. When \vec{u}, \vec{v}, \vec{w} form a right-handed system, this signed volume is positive.
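
    A minimal NumPy sketch of the mixed product, computed both as (\vec{\text u} \times \vec{\text v}) \cdot \vec{\text w} and as a 3\times 3 determinant (the vectors are illustrative values):

    import numpy as np
    
    u, v, w = np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 0])
    
    mixed = np.dot(np.cross(u, v), w)           # (u x v) . w
    vol = np.linalg.det(np.array([u, v, w]))    # the same value as a 3x3 determinant
    print(mixed, np.isclose(mixed, vol))
    
    27 True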

    6. Dyadic product of three-dimensional vectors

    \vec{\text u}\vec{ \text v} = \left[ \begin{matrix} u_xv_x & u_xv_y & u_xv_z \\ u_yv_x & u_yv_y & u_yv_z \\ u_zv_x & u_zv_y & u_zv_z \\ \end{matrix} \right]

    Also written \vec{\text u} \otimes \vec{\text v} or \vec{\text u}\vec{\text v}^\text T.

    import numpy as np
    
    u, v = np.array([1, 2, 3]), np.array([4, 5, 6])
    
    uv = np.outer(u, v)
    uv
    
    array([[ 4,  5,  6],
           [ 8, 10, 12],
           [12, 15, 18]])
    

    7. Gram-Schmidt orthogonalization

    Let \alpha_1, \alpha_2, \dots, \alpha_m (m\leq n) be a linearly independent set of vectors in R^n. If we set

    \beta_1 = \alpha_1 \\ \beta_2 = \alpha_2 - \frac{\left\langle \alpha_2, \beta_1\right\rangle}{\left\langle\beta_1, \beta_1\right\rangle}\beta_1 \\ \beta_m = \alpha_m - \frac{\left\langle \alpha_m, \beta_1 \right\rangle}{\left\langle \beta_1, \beta_1 \right\rangle}\beta_1 - \frac{\left\langle \alpha_m, \beta_2 \right\rangle}{\left\langle \beta_2, \beta_2 \right\rangle}\beta_2 - \dots - \frac{\left\langle \alpha_m, \beta_{m-1} \right\rangle}{\left\langle \beta_{m-1}, \beta_{m -1}\right\rangle}\beta_{m-1}

    then \beta_1, \beta_2, \dots, \beta_m is an orthogonal set of vectors. If we further set

    e_i = \frac{\beta_i}{|| \beta_i ||}(i = 1, 2, \dots, m)

    we obtain an orthonormal set of vectors e_1, e_2, \dots, e_m that is equivalent to \alpha_1, \alpha_2, \dots, \alpha_m.

    import numpy as np
    
    A = np.array([[1,1,6],  ## np.linalg.qr orthonormalizes the columns of A
                  [1,2,4],
                  [1,3,2]])
    
    q, r = np.linalg.qr(A)
    q, r
    
    (array([[-5.77350269e-01,  7.07106781e-01,  4.08248290e-01],
            [-5.77350269e-01,  5.55111512e-17, -8.16496581e-01],
            [-5.77350269e-01, -7.07106781e-01,  4.08248290e-01]]),
     array([[-1.73205081, -3.46410162, -6.92820323],
            [ 0.        , -1.41421356,  2.82842712],
            [ 0.        ,  0.        ,  0.        ]]))
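
    For comparison with np.linalg.qr, here is a direct implementation of the recursion above. A minimal sketch (the gram_schmidt helper is mine, not a NumPy function); only the first two columns of A are orthonormalized, because the third column is a linear combination of the first two:

    import numpy as np
    
    def gram_schmidt(A):
        """Orthonormalize the columns of A with the classical Gram-Schmidt recursion."""
        Q = np.zeros(A.shape)
        for i in range(A.shape[1]):
            beta = A[:, i].astype(float)
            for j in range(i):                        # subtract the projections onto earlier e_j
                beta -= (A[:, i] @ Q[:, j]) * Q[:, j]
            Q[:, i] = beta / np.linalg.norm(beta)     # normalize beta_i to get e_i
        return Q
    
    A = np.array([[1, 1, 6],
                  [1, 2, 4],
                  [1, 3, 2]])
    Q = gram_schmidt(A[:, :2])
    print(np.allclose(Q.T @ Q, np.eye(2)))   # the columns of Q are orthonormal
    
    True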
    

    Matrices

    \text{A} = \left[ \begin{matrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots &\vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{matrix} \right]

    1. Matrix operations

    • Addition \text{A} + \text{B} = \left[ \begin{matrix} a_{11} + b_{11} & a_{12} + b_{12} & \dots & a_{1n} + b_{1n}\\ a_{21} + b_{21} & a_{22} + b_{22} & \dots & a_{2n} + b_{2n}\\ \vdots & \vdots & \ddots &\vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn} \end{matrix} \right], where \text{A} and \text{B} are both m\times n matrices

    • Scalar multiplication k\text{A} = \left[ \begin{matrix} ka_{11} & ka_{12} & \dots & ka_{1n}\\ ka_{21} & ka_{22} & \dots & ka_{2n} \\ \vdots & \vdots & \ddots &\vdots \\ ka_{m1} & ka_{m2} & \cdots & ka_{mn} \end{matrix} \right]

    • Matrix product \text{A}\text{B} = \left[ \begin{matrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots &\vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{matrix} \right] \left[ \begin{matrix} b_{11} & b_{12} & \dots & b_{1s} \\ b_{21} & b_{22} & \dots & b_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \dots & b_{ns} \end{matrix} \right] \\ = \left[ \begin{matrix} a_{11}b_{11} + a_{12}b_{21} + \dots + a_{1n}b_{n1} & a_{11}b_{12} + a_{12}b_{22} + \dots + a_{1n}b_{n2} & \dots & a_{11}b_{1s} + a_{12}b_{2s} + \dots + a_{1n}b_{ns} \\ a_{21}b_{11} + a_{22}b_{21} + \dots + a_{2n}b_{n1} & a_{21}b_{12} + a_{22}b_{22} + \dots + a_{2n}b_{n2} & \dots & a_{21}b_{1s} + a_{22}b_{2s} + \dots + a_{2n}b_{ns} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}b_{11} + a_{m2}b_{21} + \dots + a_{mn}b_{n1} & a_{m1}b_{12} + a_{m2}b_{22} + \dots + a_{mn}b_{n2} & \dots & a_{m1}b_{1s} + a_{m2}b_{2s} + \dots + a_{mn}b_{ns} \\ \end{matrix} \right], where \text{A} is m\times n and \text{B} is n\times s

    • Element-wise product (Hadamard product) \text{A} \cdot \text{B} = \left[ \begin{matrix} a_{11} b_{11} & a_{12} b_{12} & \dots & a_{1n} b_{1n}\\ a_{21} b_{21} & a_{22} b_{22} & \dots & a_{2n} b_{2n}\\ \vdots & \vdots & \ddots &\vdots \\ a_{m1} b_{m1} & a_{m2} b_{m2} & \cdots & a_{mn} b_{mn} \end{matrix} \right]

    • Kronecker product \text{A} \otimes \text{B} =\left[ \begin{matrix} a_{11}\text{B} & a_{12}\text{B} & \dots & a_{1n}\text{B}\\ a_{21}\text{B} & a_{22}\text{B} & \dots & a_{2n}\text{B}\\ \vdots & \vdots & \ddots &\vdots \\ a_{m1}\text{B} & a_{m2}\text{B} & \cdots & a_{mn}\text{B} \end{matrix} \right]
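
    A minimal NumPy sketch of the five operations above, using two small illustrative matrices:

    import numpy as np
    
    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    
    print(A + B)          # addition
    print(2 * A)          # scalar multiplication
    print(A @ B)          # matrix product
    print(A * B)          # Hadamard (element-wise) product
    print(np.kron(A, B))  # Kronecker product
    
    [[ 6  8]
     [10 12]]
    [[2 4]
     [6 8]]
    [[19 22]
     [43 50]]
    [[ 5 12]
     [21 32]]
    [[ 5  6 10 12]
     [ 7  8 14 16]
     [15 18 20 24]
     [21 24 28 32]]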

    2. \text{A}^T, \text{A}^{-1}, \text{A}^*

    \text{A}^T: the transpose;
    \text{A}^{-1}: the inverse of the matrix;
    \text{A}^*: the adjugate matrix.

    Inverse matrix

    • \text{A}^{-1} = \frac{1}{|\text{A}|}\text{A}^*

    • (\text{A}^{-1})^{-1} = \text{A}

    • (\text{A}\text{B})^{-1} = \text{B}^{-1}\text{A}^{-1}

    • (k\text{A})^{-1} = \frac{1}{k}\text{A}^{-1}

    • (\text{A}^{n})^{-1} = \text{A}^{-n}

    • (\text{A}^{n})^{-1} = (\text{A}^{-1})^{n}

    • \text{A}^{-1}\text{A} = \text{A}\text{A}^{-1} = \text{I}_{n}


    • (\text{A}^T)^T = \text{A}

    • (\text{AB})^T = \text{B}^T\text{A}^T

    • (k\text{A})^T = k\text{A}^T

    • (\text{A} \pm \text{B})^T = \text{A}^T \pm \text{B}^T

    • (\text{A}^*)^* = |\text{A}|^{n-2}\text{A}(n\geq 3)

    • (\text{A}\text{B})^* = \text{B}^*\text{A}^*

    • (k\text{A})^* = k^{n-1}\text{A}^* (n\geq 2)

    • (\text{A}^{-1})^T = (\text{A}^{T})^{-1}

    • (\text{A}^{-1})^* = (\text{A}^{*})^{-1}

    • (\text{A}^*)^T = (\text{A}^T)^*

    • \text{A}\text{A}^* = \text{A}^*\text{A} = |\text{A}|\text{E}

    • |\text{A}^*|= |\text{A}|^{n - 1}(n \geq 2)

    • If \text{A} is invertible, then \text{A}^* = |\text{A}|\text{A}^{-1} and (\text{A}^*)^{-1}=\frac{1}{|\text{A}|}\text{A}

    • If \text{A} is an n\times n matrix, then r(\text{A}^*) = \begin{cases} n, & r(\text{A}) = n \\ 1, & r(\text{A}) = n-1 \\ 0, & r(\text{A}) < n-1 \end{cases}

    import numpy as np
    
    # A = np.mat(np.random.randint(10, size = (3,3)))   # a random 3x3 matrix would also work
    A = np.mat([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])
    print(A)
    print(A.T) # transpose
    print(A.I) # inverse
    print(np.linalg.det(A)) # determinant
    print(np.dot(np.linalg.det(A), A.I)) # adjugate, computed as |A| * A^{-1}
    
    [[-3  2 -5]
     [-1  0 -2]
     [ 3 -4  1]]
    [[-3 -1  3]
     [ 2  0 -4]
     [-5 -2  1]]
    [[ 1.33333333 -3.          0.66666667]
     [ 0.83333333 -2.          0.16666667]
     [-0.66666667  1.         -0.33333333]]
    -6.0
    [[-8. 18. -4.]
     [-5. 12. -1.]
     [ 4. -6.  2.]]
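
    The identity \text{A}\text{A}^* = |\text{A}|\text{E} from the list above can be verified for this matrix. A minimal sketch, again computing the adjugate as |\text{A}|\text{A}^{-1}:

    import numpy as np
    
    A = np.mat([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])
    A_adj = np.linalg.det(A) * A.I                                 # adjugate
    print(np.allclose(A * A_adj, np.linalg.det(A) * np.eye(3)))   # A A* = |A| E
    
    True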
    

    3. Rank of a matrix

    • r(\text{A}) = row rank = column rank

    • r(\text{A}_{m\times n})\leq \min(m, n)

    • \text{A}\neq 0 \Rightarrow r(\text{A}) \geq 1

    • r(\text{A} \pm \text{B}) \leq r(\text{A}) + r(\text{B})

    • Elementary transformations do not change the rank of a matrix

    • r(\text{A}) + r(\text{B}) - n \leq r(\text{A}\text{B}) \leq \min(r(\text{A}), r(\text{B})); in particular, if \text{A}\text{B}=\text{O}, then r(\text{A}) + r(\text{B}) \leq n

    • If \text{A}^{-1} exists, then r(\text{AB})=r(\text{B}); if \text{B}^{-1} exists, then r(\text{AB}) = r(\text{A}); if r(\text{A}_{m\times n}) = n, then r(\text{AB})=r(\text{B}); if r(\text{B}_{n\times s}) = n, then r(\text{AB}) = r(\text{A})

    • r(\text{A}_{m\times n}) = n \Leftrightarrow \text{A}x = 0 has only the zero solution

    import numpy as np
    A = np.array([[1, 1], [2, 2]]) # rank = 1
    # A = np.array([[1, 2], [3, 4]]) # rank = 2
    A_rank = np.linalg.matrix_rank(A)
    A_rank
    
    1
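
    A quick numerical check of the product bound r(\text{AB}) \leq \min(r(\text{A}), r(\text{B})) from the list above; a minimal sketch with illustrative matrices:

    import numpy as np
    
    A = np.array([[1, 1], [2, 2]])   # rank 1
    B = np.array([[1, 2], [3, 4]])   # rank 2
    r = np.linalg.matrix_rank
    print(r(A @ B), min(r(A), r(B)))
    
    1 1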
    

    4. Eigenvalues and eigenvectors

    • Let \text{A} be an n\times n matrix. If there exist a scalar \lambda and a nonzero n-dimensional vector \vec{\text{X}} such that \text{A}\vec{\text{X}} = \lambda \vec{\text{X}}, then \lambda is called an eigenvalue of \text{A} and \vec{\text{X}} a corresponding eigenvector.

    • If \lambda is an eigenvalue of \text{A}, then: k\text{A} has eigenvalue k\lambda; a\text{A}+b\text{E} has eigenvalue a\lambda + b; \text{A}^2 has eigenvalue \lambda^2; \text{A}^m has eigenvalue \lambda^m; f(\text{A}) has eigenvalue f(\lambda); \text{A}^{T} has eigenvalue \lambda; \text{A}^{-1} has eigenvalue \lambda^{-1}; \text{A}^{*} has eigenvalue \frac{|\text{A}|}{\lambda}

    • If \lambda_1,\lambda_2,\cdots,\lambda_n are the n eigenvalues of \text{A}, then \sum_{i=1}^n \lambda_i = \sum_{i = 1}^{n}a_{ii} = \text{tr}(\text{A}) and \prod_{i=1}^n \lambda_i = |\text{A}|

    • If \lambda_1,\lambda_2,\cdots,\lambda_s are s eigenvalues of \text{A} with corresponding eigenvectors \vec{a_1},\vec{a_2},\cdots,\vec{a_s}, and \vec{\text{a}} = k_1\vec{a_1} + k_2\vec{a_2} + \cdots + k_s\vec{a_s}, then \text{A}^n\vec{\text{a}} = k_1\lambda_1^n\vec{a_1} + k_2\lambda_2^n\vec{a_2} + \cdots + k_s\lambda_s^n\vec{a_s}

    import numpy as np
    
    A = np.mat([[-1, 1, 0], [-4, 3, 0], [1, 0, 2]])
    eigenvalue, featurevector = np.linalg.eig(A)
    eigenvalue, featurevector
    
    (array([2., 1., 1.]), matrix([[ 0.        ,  0.40824829,  0.40824829],
             [ 0.        ,  0.81649658,  0.81649658],
             [ 1.        , -0.40824829, -0.40824829]]))
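
    The trace and determinant relations above can be checked for this matrix; a minimal sketch, reusing A as a plain array:

    import numpy as np
    
    A = np.array([[-1, 1, 0], [-4, 3, 0], [1, 0, 2]])
    eigenvalue = np.linalg.eigvals(A)
    print(np.isclose(eigenvalue.sum(), np.trace(A)))        # sum of eigenvalues equals the trace
    print(np.isclose(eigenvalue.prod(), np.linalg.det(A)))  # product of eigenvalues equals |A|
    
    True
    True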
    

    5. Similar matrices

    • Let \text{A}, \text{B} be n\times n matrices. If there exists an invertible matrix \text{P} such that \text{P}^{-1}\text{AP}=\text{B}, then \text{B} is called a similar matrix of \text{A}, and \text{A} is said to be similar to \text{B}, written \text{A} \sim \text{B}.
      Applying \text{P}^{-1}\text{AP} to \text{A} is called a similarity transformation of \text{A}, and the invertible matrix \text{P} is called the similarity transformation matrix.
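
    Similar matrices share the same eigenvalues (and hence the same trace and determinant). A minimal sketch with illustrative matrices and an arbitrary invertible \text{P}:

    import numpy as np
    
    A = np.array([[2., 1.], [0., 3.]])
    P = np.array([[1., 1.], [0., 1.]])   # any invertible matrix works
    B = np.linalg.inv(P) @ A @ P         # B = P^{-1} A P, so A ~ B
    print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B))))
    
    True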
