
Volumes and determinants

Determinants also relate to volumes of geometric objects. It may seem silly, but let's look at the one-dimensional setting first. (In general, if you don't understand the 1-d setting, then you should start your learning there.)

In $\mathbb{R}^1 = \mathbb{R}$, a vector is just a scalar, an element of $\mathbb{R}$. The length of the vector $a$ is $|a|$. In 1-d, length is volume. So the volume of the line segment $P(a)$ spanned by $a$ is $$\operatorname{Vol}(P(a)) = |\det(a)|.$$ The determinant is a signed volume in 1-d.
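For instance, if $a = -3$, then $P(a)$ is the segment from $0$ to $-3$, which has length $3 = |\det(-3)|$; the sign of the $1 \times 1$ determinant simply records whether $a$ points in the positive or negative direction.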

With some confidence from one dimension, we turn to 2-d. Let's start with a $2 \times 2$ matrix $$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$ The columns give us two vectors $$\mathbf{v}_1 = \begin{pmatrix} a \\ c \end{pmatrix}, \ \mathbf{v}_2 = \begin{pmatrix} b \\ d \end{pmatrix}.$$ The vectors $\mathbf{v}_1, \mathbf{v}_2$ span a parallelogram $P = P(A)$.

Let's check that $$\operatorname{Vol}(P(A)) = |\det(A)|.$$

Neither the volume nor the determinant depends on where we place the parallelogram in the plane, so assume that one vertex is at the origin.

Let's rotate $P$ so that $\mathbf{v}_1$ lies along the $x$-axis.

Notice that $$\mathbf{v}_1^\prime = \begin{pmatrix} a^\prime \\ 0 \end{pmatrix}, \ \mathbf{v}_2^\prime = \begin{pmatrix} b^\prime \\ d^\prime \end{pmatrix},$$ so for the rotated matrix $\det A^\prime = a^\prime d^\prime$, whose absolute value is exactly the volume of the rotated parallelogram $P^\prime$ (base $\times$ height).

To finish, we want to see that the determinant also does not change when rotating in $\mathbb{R}^2$. However, a rotation is a linear transformation! So it can be represented by a matrix using the standard basis. For rotating counter-clockwise through an angle $\theta$ from the $x$-axis, the matrix is $$R_\theta := \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}.$$ Since $A^\prime = R_\theta A$, we get $\det A^\prime = \det(R_\theta)\det(A)$ and $$\det R_\theta = \cos^2 \theta + \sin^2 \theta = 1.$$ So the determinant is also unchanged.

We can conclude that $\operatorname{Vol} P = |\det(A)|$ in 2-d, just as in 1-d.
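As a concrete check of both steps, take $$A = \begin{pmatrix} 3 & 1 \\ 4 & 2 \end{pmatrix}, \qquad \det A = 3 \cdot 2 - 1 \cdot 4 = 2.$$ The rotation sending $\mathbf{v}_1 = (3,4)$ to the $x$-axis is $R = \tfrac{1}{5}\begin{pmatrix} 3 & 4 \\ -4 & 3 \end{pmatrix}$, and it sends $\mathbf{v}_2 = (1,2)$ to $(11/5, 2/5)$. The rotated parallelogram has base $5$ and height $2/5$, so its area is $5 \cdot \tfrac{2}{5} = 2 = |\det A|$, as claimed.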

We feel pretty good about making the following statement.

For any $n \times n$ matrix $A$, we have $$\operatorname{Vol} P(A) = |\det(A)|$$ where $P(A)$ is the parallelepiped spanned by the column vectors of $A$.

In three dimensions, $P(A)$ is the parallelepiped, a skewed box, spanned by the three column vectors of $A$.
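For an upper-triangular example, take $$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix}.$$ The first two columns span a parallelogram of area $1 \cdot 2 = 2$ in the $xy$-plane, and the third column sits at height $3$ above that plane, so $\operatorname{Vol} P(A) = 2 \cdot 3 = 6 = |\det(A)|$.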

For higher dimensions, we rely on our intuition from dimensions 1 through 3.

The proof of $\operatorname{Vol} P(A) = |\det(A)|$ can go one of two ways:

  • We can brute-force things along the lines of the two-dimensional argument above, using explicit formulas and manipulations to match up the results.
  • Or we can reach for the power of abstraction and use the properties that completely capture the behavior of $\operatorname{Vol}$ and $|\det(A)|$.

Proposition. If $f,g : \operatorname{Mat}_{n \times n}(\mathbb{R}) \to \mathbb{R}$ are two functions which both satisfy (writing $\phi$ for either $f$ or $g$)

  • $\phi(L(i,j,c)A) = \phi(A)$,
  • $\phi(P(i,j)A) = \phi(A)$,
  • $\phi(S(i,c)A) = |c|\,\phi(A)$,
  • $\phi(I_n) = 1$,
  • and $\phi(A) = 0$ for singular $A$,

then $f = g$.

Proof.

From Gaussian Elimination, we know that any matrix can be written as $A = C U$ where $C$ is a product of the elementary matrices $L(i,j,c)$, $S(i,c)$, and $P(i,j)$ and $U$ is in reduced row echelon form. Applying the properties repeatedly, $\phi(A) = |c_1 \cdots c_k|\,\phi(U)$ where $c_1,\ldots,c_k$ are the scalings appearing in $C$; this factor depends only on the factorization, not on $\phi$. We have seen that $A$ is invertible if and only if $U = I_n$. Thus, the properties completely determine $\phi(A)$ for invertible $A$.
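For example (assuming the convention that $S(i,c)$ scales row $i$ by $c$ and $L(i,j,c)$ adds $c$ times row $i$ to row $j$), the matrix $A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ factors as $$A = S(1,2)\, S(2,3)\, L(2,1,\tfrac{1}{2})\, I_2,$$ so the properties force $$\phi(A) = |2| \cdot |3| \cdot \phi(I_2) = 6,$$ which matches both $|\det(A)|$ and the area of the parallelogram spanned by the columns of $A$.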

For non-invertible $A$, we know that $\phi(A) = 0$.

Corollary. For any $n \times n$ matrix $A$, we have $$\operatorname{Vol} P(A) = |\det(A)|$$ where $P(A)$ is the parallelepiped spanned by the column vectors of $A$.

Proof.

Both $\operatorname{Vol}(P(-))$ and $|\det(-)|$ satisfy the conditions of the previous proposition.

We have already seen that $\det(A)$ satisfies all the conditions, with the exception that $\det(P(i,j)A) = -\det(A)$. But the absolute value doesn't change.

For the volume, we know it doesn't change under rigid motions. Let $\mathbf{v}_1,\ldots,\mathbf{v}_n$ be the row vectors of $A$. (Left multiplication by elementary matrices operates on rows; since $\det(A) = \det(A^T)$, proving the statement for the parallelepiped spanned by the rows gives it for the columns as well.) Then, we have an intuitively clear formula $$\operatorname{Vol}(P(A)) = \operatorname{ht}(\mathbf{v}_i)\operatorname{Vol}(P_i)$$ where $P_i$ is the parallelepiped spanned by all the vectors $\mathbf{v}_1,\ldots,\mathbf{v}_n$ except $\mathbf{v}_i$ and $\operatorname{ht}(\mathbf{v}_i)$ is the height of $\mathbf{v}_i$ above that parallelepiped. (We should really prove this too but we will rely a bit on "geometric intuition" here and below.)
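In 2-d, for instance, if $\mathbf{v}_1 = (5,0)$ and $\mathbf{v}_2 = (1,2)$, then $P_2$ is the segment from the origin to $(5,0)$ with $\operatorname{Vol}(P_2) = 5$, and $\operatorname{ht}(\mathbf{v}_2) = 2$, so the formula gives $\operatorname{Vol}(P(A)) = 2 \cdot 5 = 10$, agreeing with $\left|\det \begin{pmatrix} 5 & 0 \\ 1 & 2 \end{pmatrix}\right| = 10$.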

From this formula, we see that $\operatorname{Vol}(P(P(i,j)A)) = \operatorname{Vol}(P(A))$ and $\operatorname{Vol}(P(S(i,c)A)) = |c|\operatorname{Vol}(P(A))$ using induction on $n$.

For $\operatorname{Vol}(P(L(i,j,c)A)) = \operatorname{Vol}(P(A))$, we note that $\mathbf{v}_j + c\mathbf{v}_i$ has the same height above $P_j$ as $\mathbf{v}_j$.
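Continuing the 2-d example above, replacing $\mathbf{v}_2 = (1,2)$ by $\mathbf{v}_2 + 3\mathbf{v}_1 = (16,2)$ shears the parallelogram but keeps its height above the base segment $P_2$ equal to $2$, so the area remains $2 \cdot 5 = 10$.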

Finally, the volume of $P(I_n)$ is the volume of the unit cube in $n$ dimensions, which is $1$. And if $A$ is singular, then its rows span at most an $(n-1)$-dimensional subspace, so $P(A)$ must have volume $0$.

This corollary tells us that the determinant can also be viewed as a way to provide signed volumes in all dimensions. This is useful for multi-variable integration, for example, which is why you find determinants showing up in change of variable formulae for multi-variable integrals.
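For example, under the polar coordinate change of variables $x = r\cos\theta$, $y = r\sin\theta$, the Jacobian matrix is $$\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}$$ with determinant $r$, which is why $dx\,dy$ becomes $r\,dr\,d\theta$: a small coordinate rectangle of size $dr \times d\theta$ is mapped to a region of area approximately $r\,dr\,d\theta$.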