I think this is spot on. Depending on what you're doing, a matrix can be:
- A linear transformation
- A basis set of column vectors
- A set of equations (rows) to be solved
- (your example: parity equations for coding theory)
- The covariance of elements in a vector space
- The Hessian of a function for numerical optimization
- The adjacency representation of a graph
- Just a 2D image (compression algorithms)
... (I'm sure there are plenty of others)
For some of these, the matrix is really just a high-dimensional number. You rarely (if ever?) think of the covariance in a Kalman filter as a linear transform, but you still need to take its eigenvectors if you want to draw uncertainty ellipses.
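To make the ellipse point concrete, here's a minimal numpy sketch (the 2x2 covariance P is made up for illustration): the eigenvectors give the ellipse's axis directions, and the square roots of the eigenvalues give its semi-axis lengths.

```python
import numpy as np

# Made-up 2x2 Kalman-filter covariance, just for illustration.
P = np.array([[4.0, 1.5],
              [1.5, 1.0]])

# eigh is the right call for symmetric matrices; eigenvalues come back ascending.
eigvals, eigvecs = np.linalg.eigh(P)

semi_axes = np.sqrt(eigvals)                      # 1-sigma ellipse semi-axis lengths
angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # orientation of the major axis
print(semi_axes, np.degrees(angle))
```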
The first three can reasonably be thought of as defining linear transformations. For a linear system of equations A x = b, x is an unknown vector in the input space that A maps to b.
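A minimal numpy illustration of that reading (the system itself is made up):

```python
import numpy as np

# Made-up 2x2 system: A maps the unknown input vector x to b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)     # recover the input that A sends to b
assert np.allclose(A @ x, b)  # A applied to x really is b: A acts as a linear map
print(x)
```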
Both covariance matrices and Hessians are more naturally thought of as tensors, not matrices (and therefore not linear transformations). That is, they take in two vectors as input and produce a single real number as output.
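The two readings look different even in code, although the same array backs both (a minimal sketch with made-up numbers):

```python
import numpy as np

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # made-up symmetric matrix (covariance/Hessian-like)
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

w = S @ u           # matrix-as-linear-map: vector in, vector out
scalar = u @ S @ v  # matrix-as-(0,2)-tensor: two vectors in, one real number out
print(w, scalar)
```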
As for the graph adjacency matrix, it can actually be thought of as a linear transformation on the vector space whose basis vectors correspond to the nodes of the graph. Linear combinations of these basis vectors correspond to probability distributions over the graph (if properly normalized).
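A minimal sketch of that reading (the 3-node graph is made up): column-normalizing the adjacency matrix gives a random-walk transition matrix, which maps probability distributions over nodes to probability distributions over nodes.

```python
import numpy as np

# Made-up adjacency matrix of an undirected 3-node graph.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# Normalize each column so it sums to 1: a random-walk transition matrix.
T = A / A.sum(axis=0, keepdims=True)

p = np.array([1.0, 0.0, 0.0])  # start with all probability mass on node 0
for _ in range(10):
    p = T @ p                  # one step of the walk is just a linear map
print(p, p.sum())              # still a probability distribution
```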
2D images... Yes, these cannot really be interpreted as linear transformations. But I'd say these aren't really matrices in the mathematical sense.
If you squint hard enough, you can see all of them as linear transformations (even the 2D images :-).
I politely disagree about covariance and Hessians. I can squint and say that the Hessian provides a change in gradient when multiplied by a delta vector. Similarly for covariance... Or you could look at it as one half of the dot product for a Bhattacharyya distance, which is just a product of three matrices (row vector, square matrix, col vector). No need for tensors yet.
That is unless you decide to squint hard enough to see everything as tensors! :-)
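For what it's worth, both squints fit in a few lines of numpy (all numbers made up):

```python
import numpy as np

H = np.array([[2.0, 0.3],
              [0.3, 1.0]])  # made-up Hessian (or covariance-like matrix)
dx = np.array([0.1, -0.2])

# Squint 1: the Hessian as a linear map from a step to a change in gradient.
dgrad = H @ dx

# Squint 2: a row-vector * square-matrix * column-vector product, the shape of
# the quadratic terms in Mahalanobis/Bhattacharyya-style distances (the real
# formulas put an inverse covariance in the middle; H here is just for shape).
q = dx @ H @ dx
print(dgrad, q)
```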
Great points. I wrote my comment in response to the article claiming to be an intuitive guide to linear algebra, not an intuitive guide to matrices. According to wikipedia:
> Linear algebra is the branch of mathematics concerning linear equations, linear functions, and their representations in vector spaces through matrices. [0]
The Venn diagram of linear algebra and matrices definitely has a lot of non-overlap, some of which your list covers. This article should be renamed to be about matrices and not linear algebra, because the latter isn't really its subject.
A covariance matrix naturally transforms from the measured space to a space where things are approximately unit Gaussian distributed. This is the exact analogue of the z-transform (standardization) in the 1D case.
This can be useful in, say, exotic options trading - a natural unit of measurement is how many ‘vols’ an underlier has moved, e.g. a 10-vol move is very large.
Not really the covariance matrix itself, though, but its Cholesky decomposition (which exists because a covariance matrix is symmetric positive (semi)definite; otherwise you could construct a linear combination with negative variance).
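A numpy sketch of the whitening direction (the covariance is made up): solving against the Cholesky factor sends correlated samples to approximately unit, uncorrelated ones, the multivariate analogue of the z-score.

```python
import numpy as np

rng = np.random.default_rng(0)

Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])  # made-up covariance
L = np.linalg.cholesky(Sigma)   # Sigma = L @ L.T

X = rng.multivariate_normal(mean=[0, 0], cov=Sigma, size=100_000)

Z = np.linalg.solve(L, X.T).T   # whitening: z = L^{-1} x
print(np.cov(Z.T))              # approximately the identity matrix
```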
Useful stuff.
And vice versa, btw: take iid RVs with unit variance, hit them with the Cholesky factor, and you have the desired covariance. Used all over Monte Carlo, finance, and so on.
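The same trick in numpy, going the other way (target covariance made up):

```python
import numpy as np

rng = np.random.default_rng(0)

Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])        # made-up target covariance
L = np.linalg.cholesky(Sigma)         # Sigma = L @ L.T

W = rng.standard_normal((100_000, 2)) # iid samples with unit variance
X = W @ L.T                           # hit them with the Cholesky factor
print(np.cov(X.T))                    # approximately Sigma
```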
Well, it depends on what "the space" is. Every set of vectors (in a common ambient space) spans some space—often called the column space of a matrix, if they are the column vectors of the matrix.
Sure, every set of vectors spans the space it spans. But the requirement that a basis span the space refers to the space it's in, not the space it spans (otherwise every linearly independent set of vectors would be a basis, spanning the space it spans). I could go on :-)
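For reference, the textbook definition we're both leaning on: a basis of a given space V must span that V, not merely its own span. In LaTeX:

```latex
B \subseteq V \text{ is a basis of } V
\iff B \text{ is linearly independent and } \operatorname{span}(B) = V.
```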