Peter Bloem's PCA Introduction Series

  • It's one of the best linear algebra tutorials I've looked at
    <http://peterbloem.nl/blog/pca>
  • PCA has strong similarities with autoencoders: a linear autoencoder trained to minimize reconstruction error learns the same subspace as PCA
  • Minimizing reconstruction error == maximizing variance, by Pythagoras ($a^2 = b^2 + c^2$): total variance = captured variance + reconstruction error (see the variance-split sketch after this list)
    • Best fit == perpendicular projection
    • Combined (not unique) solution vs. greedy iterative solution
  • An orthonormal matrix is a rotation/flipping-only transformation (see the orthonormal-matrix sketch below)
    • Data normalization and basis transformations
  • The great human-face dataset example
  • Spectral theorem
  • We then looked at eigenvectors, and showed that the eigenvectors of the data covariance 𝐒 arise naturally when we imagine that our data was originally decorrelated with unit variance in all directions. To me, this provides some intuition for why PCA works so well when it does. We can imagine that our data was constructed by sampling independent latent variables 𝐳 and then mixing them up linearly (see the latent-mixing sketch after this list).
  • SVD (closely tied to PCA: the right singular vectors of the centered data matrix are the eigenvectors of 𝐒; checked in the same sketch)
  • Part 3 - Proof of the Spectral Theorem (statement reproduced below)
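
A minimal numpy sketch of the variance-split bullet above (my own construction, not code from the post; the data is synthetic): project centered data onto the top principal direction and check that total variance = captured variance + reconstruction error, which is why minimizing one maximizes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.2]])
X -= X.mean(axis=0)                     # center the data

S = X.T @ X / len(X)                    # covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)    # eigenvalues in ascending order
v = eigvecs[:, -1]                      # top principal direction (unit norm)

proj = X @ v                            # scalar projections onto v
recon = np.outer(proj, v)               # perpendicular projection back into R^3
total_var = np.mean(np.sum(X**2, axis=1))
captured = np.mean(proj**2)
error = np.mean(np.sum((X - recon)**2, axis=1))

# a^2 = b^2 + c^2: the total variance is fixed, so minimizing the
# reconstruction error is the same as maximizing the captured variance.
print(np.isclose(total_var, captured + error))  # True
```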
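
A sketch for the orthonormal-matrix bullet (again my own, using an arbitrary random matrix): an orthonormal $Q$ satisfies $Q^T Q = I$, preserves lengths, and has determinant $\pm 1$, so it can only rotate and/or flip.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal matrix
x = rng.normal(size=3)

print(np.allclose(Q.T @ Q, np.eye(3)))                       # Q^T Q = I
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # lengths preserved
print(np.isclose(abs(np.linalg.det(Q)), 1.0))                # det = +1 (rotation) or -1 (flip)
```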
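
The latent-mixing intuition and the SVD bullet can be checked together. In this sketch (an assumed setup: independent unit-variance latents 𝐳 mixed by a random matrix), the right singular vectors of the centered data matrix coincide with the eigenvectors of the covariance 𝐒, and the squared singular values divided by $n$ give its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(1000, 3))        # independent latents, unit variance
A = rng.normal(size=(3, 3))           # arbitrary linear mixing
X = Z @ A.T
X -= X.mean(axis=0)

S = X.T @ X / len(X)                  # data covariance
eigvals, eigvecs = np.linalg.eigh(S)  # ascending eigenvalues

U, sing, Vt = np.linalg.svd(X, full_matrices=False)

# Squared singular values / n match the eigenvalues of S (descending order),
# and the right singular vectors match the eigenvectors up to sign.
print(np.allclose(sing**2 / len(X), eigvals[::-1]))
print(np.allclose(np.abs(Vt), np.abs(eigvecs[:, ::-1].T)))
```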
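
For reference, the standard statement of the spectral theorem that Part 3 proves: every real symmetric matrix diagonalizes in an orthonormal basis, with real eigenvalues.

```latex
\[
  S = S^{\top} \in \mathbb{R}^{n \times n}
  \;\implies\;
  S = Q \Lambda Q^{\top},
  \qquad Q^{\top} Q = I, \quad
  \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n), \quad
  \lambda_i \in \mathbb{R}.
\]
```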

  • Determinant - Inflation: $|\det A|$ is the factor by which a linear map inflates volume
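
A small sketch of the inflation idea (assumed example, not from the post): for a $2 \times 2$ matrix, $|\det A|$ is the area of the image of the unit square, i.e. the parallelogram spanned by the columns of $A$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

a, b = A[:, 0], A[:, 1]                # images of the basis vectors e1, e2
area = abs(a[0] * b[1] - a[1] * b[0])  # 2D cross product = parallelogram area

print(area, np.linalg.det(A))          # both 6.0; a negative det would mean a flip
```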

  • Non-trivial null space of $A - \lambda I$, equivalently $\det(A - \lambda I) = 0$
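
A quick numeric check (my own example): $\lambda$ is an eigenvalue exactly when $A - \lambda I$ has a non-trivial null space, which forces its determinant to zero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, vecs = np.linalg.eigh(A)          # eigenvalues 1 and 3

M = A - lam[0] * np.eye(2)             # singular when lambda is an eigenvalue
print(np.isclose(np.linalg.det(M), 0.0))         # det(A - lambda I) = 0
print(np.allclose(M @ vecs[:, 0], np.zeros(2)))  # eigenvector lies in the null space
```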

  • Complex numbers and multiplication
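
A small sketch (my own example): multiplying by a complex number $z = a + bi$ acts on the plane exactly like the rotation-scaling matrix with rows $(a, -b)$ and $(b, a)$, rotating by $\arg z$ and scaling by $|z|$.

```python
import numpy as np

z = 1.0 + 1.0j                         # |z| = sqrt(2), arg z = 45 degrees
w = 2.0 - 1.0j

R = np.array([[z.real, -z.imag],
              [z.imag,  z.real]])      # matrix form of "multiply by z"

print(z * w)                           # (3+1j)
print(R @ np.array([w.real, w.imag]))  # [3. 1.], the same point in the plane
```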
