Differential Forms

$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\C}{\mathbb{C}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$

Motivation

Differential forms are the mathematics for doing integration on oriented surfaces. They abstract and simplify concepts like divergence, gradient and curl, and unify Green’s theorem, Gauss’ theorem and others. It’s quite elegant, and useful for electrodynamics and complex analysis, among other things.

Overview

In a 1D integral like $\int_0^1f(x)dx$, we have a thing we are integrating over (the real unit interval) and a thing we are integrating, namely $f(x)$. The integral is a sort of inner-product-like “pairing” which combines these two objects to obtain a real number. We could write $\langle [0,1], f \rangle \in \mathbb{R}$.

In $n$ dimensions, the objects that we want to integrate over are manifolds $M$ (picture the surface of a potato: that’s a 2-manifold), and the objects that we are integrating are differential forms, which we tend to write with Greek letters like $\alpha$. More on differential forms below, but briefly: $\alpha \in \Omega^{k}(M)$ is a differential $k$-form on an $n$-manifold $M$, where $\Omega^{k}(M) := M \to \Lambda^k(M)$ and $\Lambda^k(M)$ is the vector space of antisymmetric $k$-multilinear maps on the tangent space (here $\RR^n$).

As in 1D, there’s a pairing of $n$-manifolds and differential $n$-forms to obtain a real number: that’s integration. Again, we could write $\langle M, \alpha \rangle \in \mathbb{R}$.

Manifolds form a category $\mathcal{M}$, with the maps being smooth functions between manifolds. Differential $k$-forms are organized by a contravariant functor $F^k$ on this category, which takes a manifold $M$ to $\Omega^{k}(M)$ and acts on maps as follows:

$$ (F^kf)(\omega)(H_1,\ldots,H_k) = \omega(df(H_1),\ldots,df(H_k)) $$

The image of a smooth map $f \in \mathcal{M}$ under this functor, namely $F^kf$, is called the pullback; or rather, $f^*\omega := F^kf(\omega)$ is called the pullback of $\omega$ along $f$.

Integration is invariant under the pullback, in the following sense:

$$ \int_{\phi(M)}\alpha = \int_{M} \phi^*\alpha $$

This means we can calculate integrals on an $n$-manifold $M$ by pulling back along a map $\mathbb{R}^n \to M$, and thus reducing the problem to one of integration on $\mathbb{R}^n$, which can be done by e.g. Riemann integration.

For $F^k$ and $F^{k+1}$, there is a natural transformation $F^k \to F^{k+1}$ known as the exterior derivative $d$, to be defined concretely below. More precisely, the components of the natural transformation, which take $k$-forms to $(k+1)$-forms, are each called $d$. Being maps between vector spaces, these components are linear, and we’ll also see a form of the product rule.

There is also a map from $k$-manifolds to $(k-1)$-manifolds, sending a manifold to its boundary. This operator is called $\partial$. Note that $\partial \circ \partial = \emptyset$: boundaries have no boundary.

The culmination of all of this is Stokes’ theorem, which establishes a relationship between $\partial$ and $d$:

$$ \langle \partial M, \alpha \rangle = \langle M, d\alpha \rangle $$

or

$$ \int_{\partial M} \alpha = \int_{M} d\alpha $$

The exterior derivative is the “transpose” of the boundary operator. Note that in 1D, this is the fundamental theorem of calculus: $\int_{[a,b]}df = f(b)-f(a)$, where $a$ and $b$ form the boundary of the interval, and the integral is a sum. In higher dimensions, it encompasses Green’s theorem, Stokes’ (other) theorem, and the divergence theorem.

Since $\partial \circ \partial = \emptyset$,

$$\int_{M} dd\alpha = \int_{\partial \partial M} \alpha = 0 $$

which, since it holds for every $M$, means that $d \circ d = 0$.

So forms in the image of $d$ (which we call exact) are in the kernel of $d$ (which we call closed). The converse is not always true, and the extent to which it fails is measured by something called the de Rham cohomology, which I don’t know about.

The naturality condition, namely the commutativity of $d$ with the pullback $F^kf$, is important. Using this fact, we can show:

$$ \int_{\gamma}df = \int_{[a,b]} \gamma^{*}df = \int_{[a,b]} d(\gamma^{*}f) = \int_{[a,b]} d(f\circ\gamma) = f(\gamma(b))-f(\gamma(a))$$

which is to say that exact 1-forms are path-independent, which comes up in e.g. thermodynamics.
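As a quick sanity check, here is a small sympy sketch (the function $f$ and the two polynomial paths are arbitrary choices): both paths run from $(0,0)$ to $(1,1)$, so both integrals of $df$ come out to $f(1,1)-f(0,0)$.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# An arbitrary smooth f on R^2 (so df is exact) and two paths from (0,0) to (1,1).
f = x**2 * y + y**3
paths = [(t, t), (t**2, t**5)]        # both parametrized by t in [0, 1]

for gamma in paths:
    # gamma^* df = d(f o gamma) = (f o gamma)'(t) dt; integrate over [0, 1].
    f_along = f.subs({x: gamma[0], y: gamma[1]})
    print(sp.integrate(sp.diff(f_along, t), (t, 0, 1)))   # both print 2 = f(1,1) - f(0,0)
```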

Working in a basis

$k$-forms on $\RR^n$ themselves form a vector space, with a basis formed from wedge products of the coordinate 1-forms, so that any $k$-form $\omega$ can be written:

$$\omega = \sum_{1\leq i_1 \lt \ldots \lt i_k \leq n} A_{i_1,\ldots,i_k}(y)\,dy_{i_1}\wedge\ldots\wedge dy_{i_k}$$

This means that we are summing over all strictly increasing multi-indices $1\leq i_1 \lt \ldots \lt i_k \leq n$, i.e. over the $k$-element subsets of $\{1,\ldots,n\}$ rather than over all permutations. For instance, the 2-forms on $\RR^3$ have basis $dy_1\wedge dy_2$, $dy_1\wedge dy_3$, $dy_2\wedge dy_3$.

A top form is an $n$-form on $\RR^n$. Top forms live in a one-dimensional vector space, spanned by $dx_1\wedge\ldots\wedge dx_n$.

Further, there are various algebraic operations, like the Hodge star and contraction, which act on these spaces of antisymmetric multilinear maps in important ways. The Hodge star takes a $k$-form on an $m$-dimensional manifold to an $(m-k)$-form on that manifold. A special case is the cross product on $\RR^3$.

The most important operation is the exterior product $\wedge$, which takes an $n$-form and an $m$-form, takes their tensor product, and antisymmetrizes it (there’s a canonical way to do this), to produce an $(n+m)$-form. $\wedge$ has the property that $\alpha\wedge\alpha = 0$ for any 1-form $\alpha$, which turns out to be enormously useful in calculations.
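Here is a minimal numpy sketch of this antisymmetrization in the simplest case, two 1-forms on $\RR^3$ represented as covectors (the particular vectors are arbitrary):

```python
import numpy as np

def wedge_1forms(a, b):
    """Wedge of two 1-forms a, b (given as covectors): the 2-form
    (a ^ b)(u, v) = a(u) b(v) - a(v) b(u)."""
    return lambda u, v: (a @ u) * (b @ v) - (a @ v) * (b @ u)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.0, 4.0])
u = np.array([0.5, -1.0, 2.0])
v = np.array([1.0, 1.0, 0.0])

print(wedge_1forms(a, a)(u, v))     # 0.0: alpha ^ alpha = 0 for a 1-form
w = wedge_1forms(a, b)
print(w(u, v), w(v, u))             # antisymmetric in its arguments
```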

Pullback in a basis

Suppose

$$\omega = \sum_{1\leq i_1 \lt \ldots \lt i_p \leq n} A_{i_1,\ldots,i_p}(y)\,dy_{i_1}\wedge\ldots\wedge dy_{i_p}$$

Then

$$f^{*}\omega = \sum_{1\leq i_1 \lt \ldots \lt i_p \leq n} A_{i_1,\ldots,i_p}(f(x))\,df_{i_1}\wedge\ldots\wedge df_{i_p}$$

This is an extremely useful formula when actually calculating anything. For instance, take a $1$-form $\alpha$ on $[a,b]$, which necessarily is of the form $f(x)dx$. Then for any $\phi : [c,d]\to[a,b]$, $\phi^{*}\alpha = f(\phi(y))\phi'(y)dy$. As shown below, this means that integrals are pullback invariant (by using the substitution formula).
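For instance, a sympy sketch in 1D with arbitrary choices of $f$ and $\phi$ (here $\phi(y)=y^2$, mapping $[0,1]$ onto $[0,1]$), which also checks the pullback invariance of the integral:

```python
import sympy as sp

x, y = sp.symbols('x y')

# alpha = f(x) dx on [0, 1]; phi maps [0, 1] onto [0, 1].
f = sp.exp(x)                  # coefficient of alpha (arbitrary choice)
phi = y**2                     # an arbitrary smooth, orientation-preserving map

# Coefficient of phi^* alpha = f(phi(y)) phi'(y) dy
pullback_coeff = f.subs(x, phi) * sp.diff(phi, y)

# Integration is invariant under the pullback (substitution formula):
print(sp.integrate(f, (x, 0, 1)))                 # E - 1
print(sp.integrate(pullback_coeff, (y, 0, 1)))    # E - 1
```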

Exterior derivative in a basis

First recall the total derivative, which acts on plain old functions and is also written $d$. Take $f : U \subseteq V \to W$. Then $df$ assigns to each point $x \in U$ a linear map $df_x : V_x \to W_{f(x)}$, where $V_x$ denotes a copy of $V$ attached at $x$ (and similarly $W_{f(x)}$).

By the chain rule:

$$df = \sum_{i=1}^n \frac{\partial f}{\partial x_i}dx_i $$

We take the $dx_i$ literally here, as differential 1-forms.

For $\omega = \sum_{i_1\lt \ldots \lt i_k}a_{i_1\ldots i_k} dx_{i_1}\wedge\ldots \wedge dx_{i_k}$

$$ d\omega = \sum_{i_1\lt \ldots \lt i_k}da_{i_1\ldots i_k}\wedge dx_{i_1}\wedge\ldots \wedge dx_{i_k} $$
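A sympy sketch of this formula for a 1-form on $\RR^2$ (the coefficients $P, Q$ are arbitrary), together with a check that $d(df)=0$ reduces to the equality of mixed partials:

```python
import sympy as sp

x, y = sp.symbols('x y')

# omega = P dx + Q dy; then d(omega) = dP ^ dx + dQ ^ dy = (dQ/dx - dP/dy) dx ^ dy,
# using dx ^ dx = dy ^ dy = 0 and dy ^ dx = -dx ^ dy.
P, Q = x * y**2, sp.sin(x * y)
print(sp.diff(Q, x) - sp.diff(P, y))      # coefficient of dx ^ dy in d(omega)

# d(df) = (d^2f/dydx - d^2f/dxdy) dx ^ dy = 0 by equality of mixed partials.
f = sp.exp(x) * sp.cos(y) + x**3 * y
print(sp.simplify(sp.diff(f, x, y) - sp.diff(f, y, x)))   # 0
```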

Divergence, gradient, and curl

These are usually defined as operations on vector or scalar fields, but can be expressed in terms of the exterior derivative. This makes working in other coordinates and seeing basic formulas a lot easier:

First let $U = \lambda x.\ \langle \cdot, x \rangle$ (sending a vector field to the 1-form given by pairing against it) and its inverse $L$ witness the isomorphism between vector fields and 1-forms. Then, on $\RR^3$, for a function $f$ (note that a function is a 0-form) and a vector field $F$:

$$ \nabla f = (L\circ d)(f) $$

$$ \nabla \times F = (L\circ \star \circ d\circ U)(F) $$

$$ \nabla \cdot F = (\star \circ d \circ \star \circ U)(F) $$

Note that

$$\nabla \cdot (\nabla \times F) = (\star \circ d \circ \star \circ U\circ L\circ \star \circ d\circ U)(F)$$

$$ = (\star \circ d \circ \star \circ \star \circ d\circ U)(F) = (\star \circ d \circ d\circ U)(F) = 0 $$

since $U \circ L = \mathrm{id}$, $\star \circ \star = \mathrm{id}$ on $\RR^3$, and $d^2 = 0$. Similarly:

$$ \nabla \times (\nabla f) = (L\circ \star \circ d\circ U\circ L\circ d)(f) = (L\circ \star \circ d\circ d)(f) = 0 $$
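The same identities can be checked coordinate-wise; here is a sympy sketch on $\RR^3$ using the usual component formulas for grad, div and curl rather than the operator compositions above (the choices of $f$ and $F$ are arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

def curl(F):
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

f = sp.sin(x * y) + z**3
F = [x * y * z, sp.exp(y) * z, sp.cos(x) + y**2]

print(sp.simplify(div(curl(F))))                  # 0
print([sp.simplify(c) for c in curl(grad(f))])    # [0, 0, 0]
```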

Worked example

As a simple example, take: $\omega_0 = \frac{x dy-ydx}{x^2+y^2}$

This is a 1-form on $\RR^2\setminus\{0\}$, i.e. on the punctured plane.


Here is another very important example, the pullback of $\omega_0$ above along $f(r,\theta)=(r\cos\theta, r\sin\theta)$:

$$ f^*\omega_0 = \frac{r\cos\theta(\sin\theta\, dr+r\cos\theta\, d\theta) - r\sin\theta(\cos\theta\, dr-r\sin\theta\, d\theta)}{r^2} $$ $$= \frac{r^2\cos^2\theta\, d\theta+r^2\sin^2\theta\, d\theta}{r^2} = d\theta $$
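The same computation done mechanically in sympy, treating the 1-form via its two coefficient functions (a sketch):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
X, Y = r * sp.cos(theta), r * sp.sin(theta)

# omega_0 = (x dy - y dx) / (x^2 + y^2): substitute x = X, y = Y and expand
# dx, dy in terms of dr and dtheta.
dX_dr, dX_dth = sp.diff(X, r), sp.diff(X, theta)
dY_dr, dY_dth = sp.diff(Y, r), sp.diff(Y, theta)
denom = X**2 + Y**2

coeff_dr = sp.simplify((X * dY_dr - Y * dX_dr) / denom)
coeff_dtheta = sp.simplify((X * dY_dth - Y * dX_dth) / denom)
print(coeff_dr, coeff_dtheta)   # 0 1, i.e. f^* omega_0 = dtheta
```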

Example:

$\omega_0 = \frac{x dy-ydx}{x^2+y^2}$

We have already calculated its pullback along polar coordinates; since $\gamma(t)=(\cos(t),\sin(t))$ is just the $r=1$ slice, $\gamma^{*}\omega_0 = dt$, and so:

$$ \int_{\gamma}\omega_0 = \int_0^{2\pi}\gamma^{*}\omega_0 = \int_0^{2\pi} dt = 2\pi $$
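Or, checking this loop integral directly in sympy by writing out $\gamma^{*}\omega_0$ (a sketch):

```python
import sympy as sp

t = sp.symbols('t')
gx, gy = sp.cos(t), sp.sin(t)            # gamma(t), the unit circle

# gamma^* omega_0 = (x y' - y x') / (x^2 + y^2) dt, evaluated along gamma
integrand = (gx * sp.diff(gy, t) - gy * sp.diff(gx, t)) / (gx**2 + gy**2)
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)))   # 2*pi
```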

Vector fields

Any vector field $v: V\to V$ is naturally associated with a first-order differential operator:

$$ D_v(f)(x) = df_x(v(x)) $$

For the constant vector field $v_i$ which assigns to every point the $i$-th basis vector (dual to the coordinate function $x_i$), $D_{v_i}(f) = \frac{\partial f}{\partial x_i}$. A lot is hidden in the clever choice of notation here.

In fact, we can write any vector field $v$ in terms of a “basis” of partial derivatives:

$$ v = \sum_{i=1}^na_i\frac{\partial}{\partial x_i} $$

where $a_i = x_i \circ v$ is the $i$-th component function of $v$.
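A small sympy sketch comparing this with the directional-derivative definition $D_v(f)(x) = df_x(v(x))$ at a sample point (the field $v$, the function $f$ and the point are all arbitrary choices):

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')

# A vector field v on R^2 via its component functions a_i = x_i o v, and a test function f.
a = [x1 * x2, sp.sin(x1)]
f = x1**2 * sp.exp(x2)

# D_v(f) = sum_i a_i * df/dx_i
D_v_f = sum(ai * sp.diff(f, xi) for ai, xi in zip(a, (x1, x2)))

# Compare with the directional-derivative definition df_x(v(x)) at a sample point p.
p = {x1: sp.Rational(1, 2), x2: 2}
vp = [ai.subs(p) for ai in a]
curve = f.subs({x1: p[x1] + t * vp[0], x2: p[x2] + t * vp[1]})
print(sp.simplify(sp.diff(curve, t).subs(t, 0) - D_v_f.subs(p)))   # 0
```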

Invariance of integral under pullback

The integrals in the following are Riemann integrals, and the standard change of variables formula reads:

$$ \int_{[a,b]} f = \int_{\phi^{-1}([a,b])} (f\circ \phi)|\phi'| $$

Now take $\alpha = f\,dx$ and suppose $\phi$ is orientation preserving, so that $|\phi'| = \phi'$. Then, writing $[c,d] = \phi^{-1}([a,b])$ (and with Riemann integrals in the middle expressions):

$$ \int_{[a,b]} \alpha = \int_{[a,b]} f = \int_{\phi^{-1}([a,b])} (f\circ \phi)|\phi'| = \int_{[c,d]}\phi^{*}\alpha $$

Both the integral of differential $n$-forms and its invariance under pullback extend to $n\gt 1$.
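As a 2D illustration (a sympy sketch with an arbitrary integrand): pulling a 2-form on the unit disk back along polar coordinates picks up the Jacobian determinant, and the value of the integral is unchanged.

```python
import sympy as sp

x, y = sp.symbols('x y')
r, theta = sp.symbols('r theta', positive=True)

# An arbitrary 2-form f dx ^ dy on the unit disk D.
f = 1 + x**2 + y**2

# Direct Riemann integral over D in Cartesian coordinates.
direct = sp.integrate(f, (y, -sp.sqrt(1 - x**2), sp.sqrt(1 - x**2)), (x, -1, 1))

# Pull back along phi(r, theta) = (r cos theta, r sin theta): the coefficient
# picks up the Jacobian determinant of phi (here r), as in the substitution formula.
X, Y = r * sp.cos(theta), r * sp.sin(theta)
jac = sp.Matrix([[sp.diff(X, r), sp.diff(X, theta)],
                 [sp.diff(Y, r), sp.diff(Y, theta)]]).det()
pulled = sp.simplify(f.subs({x: X, y: Y}) * jac)
polar = sp.integrate(pulled, (r, 0, 1), (theta, 0, 2 * sp.pi))

print(sp.simplify(direct - polar))   # 0  (both equal 3*pi/2)
```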

Closed and exact forms

By the fact that $d^2=0$ (see above), we know that any exact form is closed. The converse holds on star-shaped domains (domains containing a point from which every other point can be reached by a straight line within the domain). The proof (p118) is very nice and uses a lot of elementary results.

An obvious consequence of the above is that integrals of exact forms over loops are $0$. This doesn’t contradict the result that $\int_{\gamma}\omega_0=2\pi$, since $\omega_0$ is not exact on the punctured plane.

Special cases of Stokes’ theorem

Stokes’ theorem is very general, and a number of common formulae turn out to just be special cases. The divergence theorem is one example, another is Green’s theorem. The setting of Green’s theorem is a bounded domain $B$ in $\RR^2$, with boundary $\Gamma$. Consider some $1$-form on the domain which must take the form $P_1dx_1+P_2dx_2$. Then, by Stokes’ theorem:

$$\int_B \left(\frac{\partial P_2}{\partial x_1} - \frac{\partial P_1}{\partial x_2} \right) dx_1\wedge dx_2 = \int_{\Gamma}P_1dx_1+P_2dx_2$$

This is very useful in science and engineering, since it lets you calculate an area integral by integrating around the boundary instead. The divergence theorem is the analogous statement in 3D: for a bounded domain $B \subset \RR^3$ with boundary $\partial B$,

$$ \int_{\partial B} P_1\,dx_2\wedge dx_3 + P_2\,dx_3\wedge dx_1 + P_3\,dx_1\wedge dx_2 = \int_B \left(\frac{\partial P_1}{\partial x_1} + \frac{\partial P_2}{\partial x_2} + \frac{\partial P_3}{\partial x_3}\right)dx_1\wedge dx_2 \wedge dx_3 $$

In fact, the fundamental theorem of calculus is itself a special case, where the domain is an interval, and the boundary is the two end points.
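Going back to Green’s theorem, here is a sympy sketch on the unit square $B=[0,1]^2$, with an arbitrary choice of $P_1, P_2$ and the boundary $\Gamma$ traversed counterclockwise:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')

P1, P2 = x1 * x2**2, sp.sin(x1) + x2        # arbitrary choice of 1-form P1 dx1 + P2 dx2
curl2d = sp.diff(P2, x1) - sp.diff(P1, x2)

# Area integral over the unit square B = [0,1] x [0,1]
area = sp.integrate(curl2d, (x1, 0, 1), (x2, 0, 1))

# Boundary integral over the four edges, traversed counterclockwise:
# each entry is (parametrization of the edge, constant tangent vector).
edges = [((t, 0), (1, 0)),        # bottom
         ((1, t), (0, 1)),        # right
         ((1 - t, 1), (-1, 0)),   # top
         ((0, 1 - t), (0, -1))]   # left
boundary = sum(
    sp.integrate(P1.subs({x1: p[0], x2: p[1]}) * v[0]
                 + P2.subs({x1: p[0], x2: p[1]}) * v[1], (t, 0, 1))
    for p, v in edges)

print(sp.simplify(area - boundary))   # 0
```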

Borsuk’s theorem

For $D^2$ the 2D unit disk, suppose we have a (smooth) $f: D^2 \to \partial D^2$ with $f(x)=x$ for every $x\in\partial D^2$, i.e. a retraction of the disk onto its boundary. This is impossible. Proceeding by contradiction, first note that for $\omega_0=\frac{x dy-ydx}{x^2+y^2}$:

$$ d\omega_0 = 0$$

(by the calculation above) so that $df^{*}\omega_0=f^{*}d\omega_0=0$, which means that $f^{*}\omega_0$ is closed, and since it lives on the star-shaped domain $D^2$, also exact. This, in turn, means that

$$ \oint_{\partial D^2} f^{*}\omega_0 = 0 $$

since it is the integral of an exact form around a loop.

But since $f$ is the identity on $\partial D^2$ and integrals are invariant under pullbacks, this would mean that

$$ \oint_{\partial D^2} \omega_0 = 0 $$

in contradiction to

$$ \oint_{\partial D^2} \omega_0 = 2\pi $$

which is shown above.

Brouwer’s fixed point theorem

Suppose we had a function $f: D^2\to D^2$ with no fixed point. Then for any $x$, we could define $F(x)$ as the point where the ray starting at $f(x)$ and passing through $x$ meets the boundary. Since boundary points are sent to themselves, $F$ would violate Borsuk’s theorem. Contradiction.
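For concreteness, here is a small numpy sketch of the construction of $F$: the map $f$ below is an arbitrary constant map (chosen only so that $f(x) \neq x$ at the sample points), and $F(x)$ is found by solving a quadratic for the point where the ray from $f(x)$ through $x$ meets the unit circle.

```python
import numpy as np

def retract(x, fx):
    """Point where the ray from f(x) through x meets the unit circle.

    Solves |fx + s*(x - fx)| = 1 for the root s >= 1, so points already
    on the boundary (s = 1) are sent to themselves."""
    d = x - fx
    a, b, c = d @ d, 2 * (fx @ d), fx @ fx - 1
    s = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    return fx + s * d

f = lambda x: np.array([0.3, 0.1])    # arbitrary constant map; f(x) != x at the samples

for x in [np.array([0.0, 0.0]), np.array([0.2, -0.5]), np.array([0.6, 0.8])]:
    Fx = retract(x, f(x))
    # Fx is always on the unit circle, and (0.6, 0.8) is fixed since it lies on the boundary.
    print(Fx, np.linalg.norm(Fx))
```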

de Rham theorem

Consider a closed 1-form $\omega$ on the punctured plane. The de Rham theorem says that $\omega=\lambda\frac{x dy-ydx}{x^2+y^2}+df$ for some $\lambda \in \RR$ and smooth function $f$.

Consider $H^1(\RR^2\setminus\{0\})$, the first de Rham cohomology group, defined as the space of closed 1-forms on the punctured plane modulo exact forms. That is, two closed forms are equivalent in $H^1$ if their difference is exact.

A consequence of the de Rham theorem is that $H^1$ is one-dimensional. To see this, choose $\omega\in \ker d$, so that by de Rham we have $\omega=\lambda\frac{x dy-ydx}{x^2+y^2}+df$. Then $df = \omega-\lambda\frac{x dy-ydx}{x^2+y^2}$, which means $\omega-\lambda\frac{x dy-ydx}{x^2+y^2}\in \operatorname{im}\, d$.

But that means that $[\omega]$, i.e. the equivalence class of $\omega$ in $H^1$, lies in the span of $\left[\frac{x dy-ydx}{x^2+y^2}\right]$.
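Concretely, $\lambda$ can be read off by integrating around the unit circle: $\oint df = 0$ over a loop while $\oint\omega_0 = 2\pi$, so $\lambda = \frac{1}{2\pi}\oint\omega$. A sympy sketch, with an arbitrary closed form constructed to have $\lambda = 3$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

omega0 = [-y / (x**2 + y**2), x / (x**2 + y**2)]    # (dx-coefficient, dy-coefficient)
df = [sp.diff(x**2 * y, x), sp.diff(x**2 * y, y)]   # an exact piece, f = x^2 y
omega = [3 * w0 + e for w0, e in zip(omega0, df)]   # a closed form with lambda = 3

# Integrate omega around the unit circle gamma(t) = (cos t, sin t) and divide by 2*pi.
gx, gy = sp.cos(t), sp.sin(t)
integrand = (omega[0].subs({x: gx, y: gy}) * sp.diff(gx, t)
             + omega[1].subs({x: gx, y: gy}) * sp.diff(gy, t))
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)) / (2 * sp.pi))   # 3
```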