Tutorial: Matrix Concepts in Machine Learning with Formulas and Examples
1. Determinant
Definition
The determinant of a square matrix $A \in \mathbb{R}^{n \times n}$ is a scalar that measures how the linear transformation represented by $A$ scales volume, and it indicates whether the matrix is invertible.
Formula (2×2 matrix)
$$\det(A) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$
Example
$$A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \Rightarrow \det(A) = 2 \cdot 4 - 3 \cdot 1 = 8 - 3 = 5$$
Since $\det(A) = 5 \neq 0$, the matrix is invertible.
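As a quick sanity check, here is a minimal NumPy sketch of the example above:

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]])

# det(A) = 2*4 - 3*1 = 5
print(np.linalg.det(A))  # ~5.0 (floating point)
```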
2. Invertibility
A matrix $A$ is invertible if and only if $\det(A) \neq 0$.
Inverse Formula (2×2 matrix)
$$A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
Example
$$A^{-1} = \frac{1}{5} \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 0.8 & -0.6 \\ -0.2 & 0.4 \end{bmatrix}$$
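The same inverse can be computed and verified numerically, as in this short sketch:

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]])

A_inv = np.linalg.inv(A)
print(A_inv)
# [[ 0.8 -0.6]
#  [-0.2  0.4]]

# sanity check: A @ A_inv should be the identity matrix
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```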
3. Cholesky Decomposition
The Cholesky decomposition applies only to symmetric positive definite matrices $A$:
$$A = LL^T$$
where $L$ is a lower triangular matrix.
Example
$$A = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} \Rightarrow L = \begin{bmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{bmatrix}$$
Used in sampling from multivariate Gaussians and in optimization.
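A minimal NumPy sketch of the example above, including the Gaussian-sampling use case mentioned:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# np.linalg.cholesky returns the lower-triangular factor L with A = L @ L.T
L = np.linalg.cholesky(A)
print(L)
# [[2.         0.        ]
#  [1.         1.41421356]]   # 1.41421356 ≈ sqrt(2)

print(np.allclose(L @ L.T, A))  # True

# sampling from N(0, A): transform standard normal draws z ~ N(0, I)
z = np.random.randn(2)
x = L @ z  # x has covariance A
```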
4. Eigenvalues and Eigenvectors
Definition
For a square matrix $A$, if
$$Av = \lambda v$$
for some nonzero vector $v$, then $\lambda$ is an eigenvalue of $A$ and $v$ is a corresponding eigenvector.
Characteristic Equation
$$\det(A - \lambda I) = 0$$
Example
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \quad \det(A - \lambda I) = \begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (2-\lambda)^2 - 1 = 0$$
$$(2-\lambda)^2 = 1 \Rightarrow \lambda = 1,\ 3$$
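A quick NumPy check of this example (`np.linalg.eig` returns the eigenvalues and a matrix whose columns are eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # [3. 1.] (order may vary)

# verify A v = lambda v for the first eigenpair
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))  # True
```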
5. Orthogonal Matrix
A matrix $Q$ is orthogonal if:
$$Q^T Q = QQ^T = I$$
Example
$$Q = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \Rightarrow Q^T Q = I$$
Orthogonal matrices preserve vector lengths and angles under transformation, which is why they appear in rotations and reflections.
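A small sketch verifying orthogonality and the length-preservation property for the example matrix:

```python
import numpy as np

Q = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # a reflection across the x-axis

print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q is orthogonal

# orthogonal matrices preserve lengths: ||Q x|| == ||x||
x = np.array([3.0, 4.0])
print(np.linalg.norm(x), np.linalg.norm(Q @ x))  # 5.0 5.0
```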
6. Diagonalization
A matrix $A$ is diagonalizable if it can be written as:
$$A = PDP^{-1}$$
where $D$ is a diagonal matrix of eigenvalues and the columns of $P$ are the corresponding eigenvectors.
Example
$$A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$$ is already diagonal (take $P = I$).
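For a less trivial case, here is a minimal NumPy sketch that diagonalizes the matrix from the eigenvalue example and verifies $A = PDP^{-1}$ numerically:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigvals)           # diagonal matrix of eigenvalues

print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```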
7. SVD (Singular Value Decomposition)
Every matrix $A \in \mathbb{R}^{m \times n}$ can be written as:
$$A = U \Sigma V^T$$
Where:
- $U \in \mathbb{R}^{m \times m}$: orthogonal matrix of left singular vectors
- $\Sigma \in \mathbb{R}^{m \times n}$: diagonal matrix of singular values
- $V \in \mathbb{R}^{n \times n}$: orthogonal matrix of right singular vectors
Example
$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix} \Rightarrow \text{SVD gives } U, \Sigma, V^T \text{ with singular values } \sigma_1 = 4,\ \sigma_2 = 2$$
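A short NumPy sketch computing the factors for this example and reconstructing $A$ from them:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# np.linalg.svd returns U, the singular values, and V^T
U, S, Vt = np.linalg.svd(A)
print(S)  # [4. 2.]

# reconstruct A from its factors
print(np.allclose(U @ np.diag(S) @ Vt, A))  # True
```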
8. Dimensionality Reduction
PCA via SVD
1. Center the data.
2. Compute the SVD of the centered data: $X = U \Sigma V^T$.
3. Reduce to $k$ dimensions: $X_k = U_k \Sigma_k$.
Example (2D → 1D)
Data:
$$X = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$$
After centering, PCA picks the single axis of highest variance; here the two samples lie along the direction $(1, -1)/\sqrt{2}$, so projecting onto that axis keeps all the variance in one dimension.
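A minimal sketch of the three steps above in NumPy, assuming rows of $X$ are samples and columns are features:

```python
import numpy as np

# toy data: rows are samples, columns are features
X = np.array([[2.0, 0.0],
              [0.0, 2.0]])

# 1) center the data
Xc = X - X.mean(axis=0)

# 2) SVD of the centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# 3) project onto the top k = 1 principal direction
k = 1
Xk = U[:, :k] * S[:k]  # equivalent to Xc @ Vt[:k].T
print(Vt[0])           # principal axis, ~[0.707, -0.707] up to sign
print(Xk)              # 1-D coordinates of each sample
```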
This tutorial covered essential matrix operations in machine learning and statistics. Understanding these topics is crucial for deeper areas like PCA, Gaussian models, optimization, and neural network training.