
Tutorial: Matrix Concepts in Machine Learning with Formulas and Examples

1. Determinant

Definition

The determinant of a square matrix $A \in \mathbb{R}^{n \times n}$ is a scalar value that measures how the linear transformation represented by $A$ scales volume, and it tells us whether the matrix is invertible.

Formula (2x2 matrix)

$$\det(A) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$

Example

$$A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \Rightarrow \det(A) = 2 \cdot 4 - 3 \cdot 1 = 8 - 3 = 5$$

Since $\det(A) \neq 0$, the matrix is invertible.
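The hand computation above can be checked numerically. A minimal sketch with NumPy (assuming `numpy` is installed):

```python
import numpy as np

# The 2x2 example matrix from above
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

det_A = np.linalg.det(A)  # ad - bc = 2*4 - 3*1 = 5
print(det_A)

# A non-zero determinant means A is invertible
print(not np.isclose(det_A, 0.0))  # True
```

Note that `np.linalg.det` works in floating point, so compare against zero with a tolerance rather than `==`.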


2. Invertibility

A matrix $A$ is invertible if and only if $\det(A) \neq 0$.

Inverse Formula (2x2 matrix)

For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$:

$$A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Example

$$A^{-1} = \frac{1}{5} \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 0.8 & -0.6 \\ -0.2 & 0.4 \end{bmatrix}$$
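The same inverse can be computed and verified with NumPy (a sketch; the key sanity check is that $A A^{-1} = I$):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

A_inv = np.linalg.inv(A)
print(A_inv)  # [[ 0.8 -0.6]
              #  [-0.2  0.4]]

# Sanity check: A @ A_inv should be the identity matrix
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```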


3. Cholesky Decomposition

The Cholesky decomposition applies only to symmetric positive definite matrices $A$:

$$A = LL^T$$

where $L$ is a lower triangular matrix.

Example

$$A = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} \Rightarrow L = \begin{bmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{bmatrix}$$

Check: $LL^T = \begin{bmatrix} 4 & 2 \\ 2 & 1 + 2 \end{bmatrix} = A$.

Used in sampling from multivariate Gaussians and in optimization.
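NumPy provides the factorization directly via `np.linalg.cholesky`, which returns the lower triangular factor. A quick check of the worked example:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)  # lower triangular factor
print(L)  # [[2, 0], [1, sqrt(2)]]

# Verify the factorization reproduces A
print(np.allclose(L @ L.T, A))  # True
```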


4. Eigenvalues and Eigenvectors

Definition

For a matrix $A$, if

$$A\mathbf{v} = \lambda\mathbf{v}, \quad \mathbf{v} \neq \mathbf{0}$$

then $\lambda$ is an eigenvalue and $\mathbf{v}$ is an eigenvector.

Characteristic Equation

$$\det(A - \lambda I) = 0$$

Example

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \quad \det(A - \lambda I) = \begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (2-\lambda)^2 - 1 = 0$$

$$(2-\lambda)^2 = 1 \Rightarrow \lambda = 1, 3$$
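For a symmetric matrix like this one, `np.linalg.eigh` computes eigenvalues (in ascending order) and orthonormal eigenvectors. A sketch verifying the derivation:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)  # eigh: for symmetric/Hermitian matrices
print(eigvals)  # [1. 3.]

# Each column of eigvecs satisfies A v = lambda v
v = eigvecs[:, 1]
print(np.allclose(A @ v, eigvals[1] * v))  # True
```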


5. Orthogonal Matrix

A matrix $Q$ is orthogonal if:

$$Q^T Q = QQ^T = I$$

Example

$$Q = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \Rightarrow Q^T Q = I$$

Orthogonal transformations preserve vector lengths and the angles between vectors.
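Both properties are easy to check numerically. A small sketch using the reflection matrix from the example:

```python
import numpy as np

Q = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # a reflection across the x-axis; orthogonal

# Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(2)))  # True

# Orthogonal matrices preserve lengths: ||Q x|| == ||x||
x = np.array([3.0, 4.0])
print(np.linalg.norm(Q @ x), np.linalg.norm(x))  # 5.0 5.0
```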


6. Diagonalization

A matrix $A$ is diagonalizable if it can be written as:

$$A = PDP^{-1}$$

where $D$ is a diagonal matrix of eigenvalues and the columns of $P$ are the corresponding eigenvectors.

Example

$$A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} \Rightarrow A \text{ is already diagonal}$$


7. SVD (Singular Value Decomposition)

Every matrix $A \in \mathbb{R}^{m \times n}$ can be written as:

$$A = U \Sigma V^T$$

Where:

  • $U \in \mathbb{R}^{m \times m}$: left singular vectors
  • $\Sigma \in \mathbb{R}^{m \times n}$: diagonal matrix of singular values
  • $V \in \mathbb{R}^{n \times n}$: right singular vectors

Example

$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix} \Rightarrow \text{SVD gives } U, \Sigma, V^T$$
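For this symmetric positive definite example the singular values coincide with the eigenvalues (4 and 2). A sketch with `np.linalg.svd`, which returns the singular values as a 1-D array in descending order:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

U, s, Vt = np.linalg.svd(A)  # s: singular values, descending
print(s)  # [4. 2.]

# Reconstruct A = U Sigma V^T
Sigma = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))  # True
```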


8. Dimensionality Reduction

PCA via SVD

  1. Center the data.
  2. Compute the SVD of the centered data: $X = U\Sigma V^T$.
  3. Reduce to $k$ dimensions: $X_k = U_k \Sigma_k$.

Example (2D -> 1D)

Data:

$$X = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \Rightarrow \text{PCA picks the axis with the highest variance}$$
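The three steps above can be sketched in a few lines of NumPy. Here rows of `X` are treated as samples (the two hypothetical points $(2,0)$ and $(0,2)$), and the data is reduced to $k = 1$ dimension:

```python
import numpy as np

# Toy data: rows are samples (hypothetical 2-D points)
X = np.array([[2.0, 0.0],
              [0.0, 2.0]])

# 1. Center the data
Xc = X - X.mean(axis=0)

# 2. SVD of the centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# 3. Project onto the top k = 1 principal direction: X_k = U_k Sigma_k
k = 1
X_k = U[:, :k] * s[:k]  # equivalently Xc @ Vt[:k].T
print(X_k.shape)  # (2, 1)
```

With only two points the principal axis runs along $(1, -1)$ after centering, and each sample projects to a single coordinate of magnitude $\sqrt{2}$.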


Summary Mind Map

```mermaid
graph TD
  Determinant -->|tests| Invertibility
  Invertibility -->|used in| Cholesky
  Determinant -->|used in| Eigenvalues
  Eigenvalues -->|determines| Eigenvectors
  Eigenvectors -->|constructs| OrthogonalMatrix
  OrthogonalMatrix -->|used in| Diagonalization
  Eigenvectors -->|used in| Chapter10
  Diagonalization -->|used in| SVD
  SVD -->|used in| Chapter10
  Cholesky -->|used in| Chapter6
  Eigenvalues -->|used in| Diagonalization

  classDef green fill:#cfe,stroke:#333,stroke-width:1px;
  Chapter6["Chapter 6\nProbability\n& distributions"]:::green
  Chapter10["Chapter 10\nDimensionality\nreduction"]:::green
```

This tutorial covered essential matrix operations in machine learning and statistics. Understanding these topics is crucial for deeper areas like PCA, Gaussian models, optimization, and neural network training.

