For a vector field $\mathbf{F} \in C^1(V, \mathbb{R}^n)$ defined on a domain $V \subseteq \mathbb{R}^n$, a Helmholtz decomposition is a pair of vector fields $\mathbf{G}$ and $\mathbf{R}$ such that:
$$\mathbf{F}(\mathbf{r}) = \mathbf{G}(\mathbf{r}) + \mathbf{R}(\mathbf{r}), \qquad \mathbf{G}(\mathbf{r}) = -\nabla \Phi(\mathbf{r}), \qquad \nabla \cdot \mathbf{R}(\mathbf{r}) = 0.$$
Here, $\Phi$ is a scalar potential, $\nabla \Phi$ is its gradient, and $\nabla \cdot \mathbf{R}$ is the divergence of the vector field $\mathbf{R}$. The irrotational vector field $\mathbf{G}$ is called a gradient field and $\mathbf{R}$ is called a solenoidal field or rotation field. This decomposition does not exist for all vector fields and is not unique.[8]
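For example, the planar field $\mathbf{F}(x,y,z) = (x - y,\; y + x,\; 0)$ admits the decomposition
$$\mathbf{F} = -\nabla \Phi + \mathbf{R}, \qquad \Phi(x,y,z) = -\tfrac{1}{2}(x^2 + y^2), \qquad \mathbf{R}(x,y,z) = (-y,\; x,\; 0),$$
since $-\nabla \Phi = (x, y, 0)$ and $\nabla \cdot \mathbf{R} = 0$; here $\mathbf{R} = \nabla \times \mathbf{A}$ with $\mathbf{A} = \big(0,\, 0,\, -\tfrac{1}{2}(x^2 + y^2)\big)$.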
Many physics textbooks restrict the Helmholtz decomposition to the three-dimensional space $\mathbb{R}^3$ and limit its application to vector fields that decay sufficiently fast at infinity or to bump functions that are defined on a bounded domain. Then, a vector potential $\mathbf{A}$ can be defined such that the rotation field is given by $\mathbf{R} = \nabla \times \mathbf{A}$, using the curl of a vector field.[16]
Let $\mathbf{F}$ be a vector field on a bounded domain $V \subseteq \mathbb{R}^3$, which is twice continuously differentiable inside $V$, and let $S$ be the surface that encloses the domain $V$. Then $\mathbf{F}$ can be decomposed into a curl-free component and a divergence-free component as follows:[17]
$$\mathbf{F} = -\nabla \Phi + \nabla \times \mathbf{A},$$
where
$$\Phi(\mathbf{r}) = \frac{1}{4\pi} \int_V \frac{\nabla' \cdot \mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V' - \frac{1}{4\pi} \oint_S \hat{\mathbf{n}}' \cdot \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}S',$$
$$\mathbf{A}(\mathbf{r}) = \frac{1}{4\pi} \int_V \frac{\nabla' \times \mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V' - \frac{1}{4\pi} \oint_S \hat{\mathbf{n}}' \times \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}S',$$
and $\nabla'$ denotes differentiation with respect to $\mathbf{r}'$.
Suppose we have a vector function $\mathbf{F}(\mathbf{r})$ of which we know the curl, $\nabla \times \mathbf{F}$, and the divergence, $\nabla \cdot \mathbf{F}$, in the domain and the fields on the boundary. Writing the function using the delta function in the form
$$\delta^3(\mathbf{r} - \mathbf{r}') = -\frac{1}{4\pi} \nabla^2 \frac{1}{|\mathbf{r} - \mathbf{r}'|},$$
where $\nabla^2$ is the Laplace operator, we have
$$\mathbf{F}(\mathbf{r}) = \int_V \mathbf{F}(\mathbf{r}')\, \delta^3(\mathbf{r} - \mathbf{r}')\, \mathrm{d}V' = -\frac{1}{4\pi} \int_V \mathbf{F}(\mathbf{r}')\, \nabla^2 \frac{1}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V'.$$
Now, changing the meaning of $\nabla^2$ to the vector Laplacian operator, we can move $\mathbf{F}(\mathbf{r}')$ to the right of the operator, since it does not depend on $\mathbf{r}$, and pull $\nabla^2$ in front of the integral:
$$\mathbf{F}(\mathbf{r}) = -\frac{1}{4\pi} \nabla^2 \int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V'
= -\frac{1}{4\pi} \left[ \nabla \left( \nabla \cdot \int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V' \right) - \nabla \times \left( \nabla \times \int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V' \right) \right],$$
where we have used the vector Laplacian identity
$$\nabla^2 \mathbf{a} = \nabla (\nabla \cdot \mathbf{a}) - \nabla \times (\nabla \times \mathbf{a}).$$
The remaining steps use differentiation/integration with respect to $\mathbf{r}'$ by $\nabla'$ and $\mathrm{d}V'$, together with the fact that the kernel depends only on $\mathbf{r} - \mathbf{r}'$, so that
$$\nabla \frac{1}{|\mathbf{r} - \mathbf{r}'|} = -\nabla' \frac{1}{|\mathbf{r} - \mathbf{r}'|};$$
they lead to the expressions for $\Phi$ and $\mathbf{A}$ given above.
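Explicitly, for the scalar part (the vector part is handled in the same way), integration by parts gives
$$\nabla \cdot \int_V \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V'
= \int_V \mathbf{F}(\mathbf{r}') \cdot \nabla \frac{1}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V'
= -\int_V \mathbf{F}(\mathbf{r}') \cdot \nabla' \frac{1}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V'
= \int_V \frac{\nabla' \cdot \mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V' - \oint_S \hat{\mathbf{n}}' \cdot \frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}S' = 4\pi\, \Phi(\mathbf{r}),$$
so the first bracketed term equals $-\nabla \Phi(\mathbf{r})$, and the curl term similarly yields $\nabla \times \mathbf{A}(\mathbf{r})$.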
The term "Helmholtz theorem" can also refer to the following. Let C be a solenoidal vector field and d a scalar field on R3 which are sufficiently smooth and which vanish faster than 1/r2 at infinity. Then there exists a vector field F such that
if additionally the vector field $\mathbf{F}$ vanishes as $r \to \infty$, then $\mathbf{F}$ is unique.[18]
In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type.[18] The proof is by a construction generalizing the one given above: we set
$$\mathbf{F} = \nabla\big(\mathcal{G}(d)\big) - \nabla \times \big(\mathcal{G}(\mathbf{C})\big),$$
where $\mathcal{G}$ represents the Newtonian potential operator. (When acting on a vector field, such as $\nabla \times \mathbf{F}$, it is defined to act on each component.)
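As a quick check, using the convention that the Newtonian potential satisfies $\nabla^2 \mathcal{G}(f) = f$ for sufficiently decaying $f$:
$$\nabla \cdot \mathbf{F} = \nabla^2 \mathcal{G}(d) = d, \qquad
\nabla \times \mathbf{F} = -\nabla \times \nabla \times \mathcal{G}(\mathbf{C}) = \nabla^2 \mathcal{G}(\mathbf{C}) - \nabla\big(\nabla \cdot \mathcal{G}(\mathbf{C})\big) = \mathbf{C},$$
where the last step uses $\nabla \cdot \mathcal{G}(\mathbf{C}) = \mathcal{G}(\nabla \cdot \mathbf{C}) = 0$, since $\mathbf{C}$ is solenoidal.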
The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose $\Omega$ is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field $\mathbf{u} \in (L^2(\Omega))^3$ has an orthogonal decomposition:[19][20][21]
$$\mathbf{u} = \nabla \varphi + \nabla \times \mathbf{A},$$
where $\varphi$ is in the Sobolev space $H^1(\Omega)$ of square-integrable functions on $\Omega$ whose partial derivatives, defined in the distribution sense, are square integrable, and $\mathbf{A} \in H(\operatorname{curl}, \Omega)$, the Sobolev space of square-integrable vector fields with square-integrable curl.
For a slightly smoother vector field $\mathbf{u} \in H(\operatorname{curl}, \Omega)$, a similar decomposition holds:
$$\mathbf{u} = \nabla \varphi + \mathbf{v},$$
where $\varphi \in H^1(\Omega)$ and $\mathbf{v} \in (H^1(\Omega))^3$.
Note that in the theorem stated here, we have imposed the condition that if $\mathbf{F}$ is not defined on a bounded domain, then $\mathbf{F}$ shall decay sufficiently fast at infinity. Thus, the Fourier transform of $\mathbf{F}$, denoted here as $\hat{\mathbf{F}}$, is guaranteed to exist. We apply the convention
$$\mathbf{F}(\mathbf{r}) = \iiint \hat{\mathbf{F}}(\mathbf{k})\, e^{i \mathbf{k} \cdot \mathbf{r}}\, \mathrm{d}V_k.$$
The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of the same dimension.
Now consider the following scalar and vector fields:
$$\hat{\Phi}(\mathbf{k}) = \frac{i\, \mathbf{k} \cdot \hat{\mathbf{F}}(\mathbf{k})}{\|\mathbf{k}\|^2}, \qquad
\hat{\mathbf{A}}(\mathbf{k}) = \frac{i\, \mathbf{k} \times \hat{\mathbf{F}}(\mathbf{k})}{\|\mathbf{k}\|^2},$$
$$\Phi(\mathbf{r}) = \iiint \hat{\Phi}(\mathbf{k})\, e^{i \mathbf{k} \cdot \mathbf{r}}\, \mathrm{d}V_k, \qquad
\mathbf{A}(\mathbf{r}) = \iiint \hat{\mathbf{A}}(\mathbf{k})\, e^{i \mathbf{k} \cdot \mathbf{r}}\, \mathrm{d}V_k.$$
Hence
$$\mathbf{F}(\mathbf{r}) = \iiint \Big( -i \mathbf{k}\, \hat{\Phi}(\mathbf{k}) + i \mathbf{k} \times \hat{\mathbf{A}}(\mathbf{k}) \Big)\, e^{i \mathbf{k} \cdot \mathbf{r}}\, \mathrm{d}V_k = -\nabla \Phi(\mathbf{r}) + \nabla \times \mathbf{A}(\mathbf{r}).$$
A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component.[22] This terminology comes from the following construction: Compute the three-dimensional Fourier transform $\hat{\mathbf{F}}$ of the vector field $\mathbf{F}$. Then decompose this field, at each point $\mathbf{k}$, into two components, one of which points longitudinally, i.e. parallel to $\mathbf{k}$, the other of which points in the transverse direction, i.e. perpendicular to $\mathbf{k}$. So far, we have
$$\hat{\mathbf{F}}(\mathbf{k}) = \hat{\mathbf{F}}_t(\mathbf{k}) + \hat{\mathbf{F}}_l(\mathbf{k}),$$
$$\mathbf{k} \cdot \hat{\mathbf{F}}_t(\mathbf{k}) = 0, \qquad \mathbf{k} \times \hat{\mathbf{F}}_l(\mathbf{k}) = \mathbf{0}.$$
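Explicitly, the two components are obtained by projecting $\hat{\mathbf{F}}(\mathbf{k})$ onto and orthogonally to the direction of $\mathbf{k}$ (this standard projector form is also used in the numerical sketch further below):
$$\hat{\mathbf{F}}_l(\mathbf{k}) = \frac{\mathbf{k}\, \big(\mathbf{k} \cdot \hat{\mathbf{F}}(\mathbf{k})\big)}{\|\mathbf{k}\|^2}, \qquad
\hat{\mathbf{F}}_t(\mathbf{k}) = \hat{\mathbf{F}}(\mathbf{k}) - \hat{\mathbf{F}}_l(\mathbf{k}) = -\frac{\mathbf{k} \times \big(\mathbf{k} \times \hat{\mathbf{F}}(\mathbf{k})\big)}{\|\mathbf{k}\|^2}.$$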
Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive:
$$\mathbf{F}(\mathbf{r}) = \mathbf{F}_t(\mathbf{r}) + \mathbf{F}_l(\mathbf{r}), \qquad
\nabla \cdot \mathbf{F}_t(\mathbf{r}) = 0, \qquad
\nabla \times \mathbf{F}_l(\mathbf{r}) = \mathbf{0}.$$
Since $\nabla \times (\nabla \Phi) = \mathbf{0}$ and $\nabla \cdot (\nabla \times \mathbf{A}) = 0$, we obtain
$$\mathbf{F}_t = \nabla \times \mathbf{A} = \frac{1}{4\pi} \nabla \times \int_V \frac{\nabla' \times \mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V', \qquad
\mathbf{F}_l = -\nabla \Phi = -\frac{1}{4\pi} \nabla \int_V \frac{\nabla' \cdot \mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}V',$$
so this is indeed the Helmholtz decomposition.[23]
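The Fourier-space construction above translates directly into a numerical procedure for periodic, sampled fields. The following is a minimal sketch using NumPy's FFT; the function name, grid size, and test field are illustrative choices, not part of the cited sources.

```python
import numpy as np

def helmholtz_decomposition_fft(F, spacing=1.0):
    """Split a periodic, sampled 3D vector field F (shape (3, N, N, N)) into
    a longitudinal (curl-free) and a transverse (divergence-free) part by
    projecting its Fourier coefficients onto the direction of k."""
    F_hat = np.fft.fftn(F, axes=(1, 2, 3))

    n = F.shape[1]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=spacing)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.stack([kx, ky, kz])          # wave vectors, shape (3, N, N, N)
    k2 = np.sum(k * k, axis=0)
    k2[0, 0, 0] = 1.0                   # avoid 0/0 at k = 0 (handled below)

    # longitudinal projection: k (k . F_hat) / |k|^2
    k_dot_F = np.sum(k * F_hat, axis=0)
    F_long_hat = k * (k_dot_F / k2)
    F_long_hat[:, 0, 0, 0] = 0.0        # leave the k = 0 (mean) part in the transverse field
    F_trans_hat = F_hat - F_long_hat

    F_long = np.real(np.fft.ifftn(F_long_hat, axes=(1, 2, 3)))
    F_trans = np.real(np.fft.ifftn(F_trans_hat, axes=(1, 2, 3)))
    return F_long, F_trans

# usage sketch: a smooth periodic test field on [0, 2*pi)^3
n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
F = np.stack([np.sin(Y), np.cos(X) * np.sin(Z), np.zeros_like(X)])
F_long, F_trans = helmholtz_decomposition_fft(F, spacing=x[1] - x[0])
assert np.allclose(F, F_long + F_trans)
```

By construction the two parts sum exactly to the input field; the handling of the $\mathbf{k} = 0$ mode (the mean of the field) is a convention, since a constant field is both curl-free and divergence-free.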
The generalization to $d$ dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions.
Let $\mathbf{F}$ be a vector field on a bounded domain $V \subseteq \mathbb{R}^d$ which decays faster than $|\mathbf{r}|^{-\delta}$ for $|\mathbf{r}| \to \infty$ and $\delta > 2$.
The scalar potential is defined similarly to the three-dimensional case as:
$$\Phi(\mathbf{r}) = -\int_V K(\mathbf{r}, \mathbf{r}')\, \nabla' \cdot \mathbf{F}(\mathbf{r}')\, \mathrm{d}V',$$
where the integration kernel $K(\mathbf{r}, \mathbf{r}')$ is again the fundamental solution of Laplace's equation, but in $d$-dimensional space:
$$K(\mathbf{r}, \mathbf{r}') = \begin{cases} \dfrac{1}{2\pi} \ln|\mathbf{r} - \mathbf{r}'| & d = 2, \\[6pt] \dfrac{1}{d(2-d)V_d}\, |\mathbf{r} - \mathbf{r}'|^{2-d} & \text{otherwise}, \end{cases}$$
with $V_d = \dfrac{\pi^{d/2}}{\Gamma\!\left(\tfrac{d}{2} + 1\right)}$ the volume of the $d$-dimensional unit ball and $\Gamma$ the gamma function.
For $d = 3$, $V_d$ is just equal to $\tfrac{4\pi}{3}$, yielding the same prefactor $\tfrac{1}{4\pi}$ as above.
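Concretely, with the kernel as written above, the $d = 3$ case works out as $V_3 = \frac{\pi^{3/2}}{\Gamma(5/2)} = \frac{4\pi}{3}$, hence $d(2-d)V_d = 3 \cdot (-1) \cdot \frac{4\pi}{3} = -4\pi$ and $K(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi |\mathbf{r} - \mathbf{r}'|}$, which reproduces the prefactor $\frac{1}{4\pi}$ appearing in the three-dimensional formulas.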
The rotational potential $\mathbf{A}(\mathbf{r})$ is an antisymmetric matrix with the elements:
$$A_{ij}(\mathbf{r}) = \int_V K(\mathbf{r}, \mathbf{r}') \left( \frac{\partial F_i}{\partial x_j'}(\mathbf{r}') - \frac{\partial F_j}{\partial x_i'}(\mathbf{r}') \right) \mathrm{d}V'.$$
Above the diagonal are $\binom{d}{2} = \tfrac{d(d-1)}{2}$ entries, which occur again mirrored at the diagonal, but with a negative sign.
In the three-dimensional case, the matrix elements just correspond to the components of the vector potential $\mathbf{A} = [A_1, A_2, A_3] = [A_{23}, A_{31}, A_{12}]$.
However, such a matrix potential can be written as a vector only in the three-dimensional case, because $\binom{d}{2} = d$ is valid only for $d = 3$.
As in the three-dimensional case, the gradient field is defined as
$$\mathbf{G}(\mathbf{r}) = -\nabla \Phi(\mathbf{r}).$$
The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:
$$\mathbf{R}(\mathbf{r}) = \nabla \cdot \mathbf{A}(\mathbf{r}), \qquad R_i(\mathbf{r}) = \sum_{j=1}^{d} \frac{\partial A_{ij}}{\partial x_j}(\mathbf{r}).$$
In three-dimensional space, this is equivalent to the rotation of the vector potential.[8][24]
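To make this correspondence explicit, write $\mathbf{a} = (A_{23}, A_{31}, A_{12})$ for the associated vector potential (component convention as above); the antisymmetric matrix then reads
$$\mathbf{A} = \begin{pmatrix} 0 & a_3 & -a_2 \\ -a_3 & 0 & a_1 \\ a_2 & -a_1 & 0 \end{pmatrix},$$
and its row divergence is
$$(\nabla \cdot \mathbf{A})_1 = \partial_2 a_3 - \partial_3 a_2 = (\nabla \times \mathbf{a})_1,$$
and similarly for the other components, so the matrix formulation reduces to the usual curl of the vector potential.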
Following the same steps as above, we can write
$$F_i(\mathbf{r}) = \delta_{ij} \int_V F_j(\mathbf{r}')\, \delta^d(\mathbf{r} - \mathbf{r}')\, \mathrm{d}V' = \delta_{ij}\, \nabla^2 \int_V K(\mathbf{r}, \mathbf{r}')\, F_j(\mathbf{r}')\, \mathrm{d}V',$$
where $\delta_{ij}$ is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol $\varepsilon$,
$$\varepsilon_{ij\alpha}\, \varepsilon_{kl\alpha} = (d-2)!\, \big( \delta_{ik} \delta_{jl} - \delta_{il} \delta_{jk} \big),$$
which is valid in $d \ge 2$ dimensions, where $\alpha$ is a $(d-2)$-component multi-index. This gives
$$F_i(\mathbf{r}) = \partial_i \partial_k \int_V K(\mathbf{r}, \mathbf{r}')\, F_k(\mathbf{r}')\, \mathrm{d}V' + \frac{1}{(d-2)!}\, \varepsilon_{ij\alpha}\, \varepsilon_{kl\alpha}\, \partial_j \partial_l \int_V K(\mathbf{r}, \mathbf{r}')\, F_k(\mathbf{r}')\, \mathrm{d}V'.$$
We can therefore write
$$\mathbf{F}(\mathbf{r}) = -\nabla \Phi(\mathbf{r}) + \mathbf{R}(\mathbf{r}),$$
where
$$\Phi(\mathbf{r}) = -\partial_k \int_V K(\mathbf{r}, \mathbf{r}')\, F_k(\mathbf{r}')\, \mathrm{d}V', \qquad
R_i(\mathbf{r}) = \frac{1}{(d-2)!}\, \varepsilon_{ij\alpha}\, \partial_j A_\alpha(\mathbf{r}), \qquad
A_\alpha(\mathbf{r}) = \varepsilon_{kl\alpha}\, \partial_l \int_V K(\mathbf{r}, \mathbf{r}')\, F_k(\mathbf{r}')\, \mathrm{d}V'.$$
Note that the vector potential is replaced by a rank-$(d-2)$ tensor $A_\alpha$ in $d$ dimensions.
Because $K(\mathbf{r}, \mathbf{r}')$ is a function of only $\mathbf{r} - \mathbf{r}'$, one can replace $\nabla \to -\nabla'$, giving
$$\Phi(\mathbf{r}) = \int_V \big(\partial_k' K(\mathbf{r}, \mathbf{r}')\big)\, F_k(\mathbf{r}')\, \mathrm{d}V', \qquad
A_\alpha(\mathbf{r}) = -\varepsilon_{kl\alpha} \int_V \big(\partial_l' K(\mathbf{r}, \mathbf{r}')\big)\, F_k(\mathbf{r}')\, \mathrm{d}V'.$$
Integration by parts can then be used to give
$$\Phi(\mathbf{r}) = -\int_V K(\mathbf{r}, \mathbf{r}')\, \nabla' \cdot \mathbf{F}(\mathbf{r}')\, \mathrm{d}V' + \oint_{\partial V} K(\mathbf{r}, \mathbf{r}')\, \mathbf{F}(\mathbf{r}') \cdot \hat{\mathbf{n}}'\, \mathrm{d}S',$$
$$A_\alpha(\mathbf{r}) = \varepsilon_{kl\alpha} \int_V K(\mathbf{r}, \mathbf{r}')\, \partial_l' F_k(\mathbf{r}')\, \mathrm{d}V' - \varepsilon_{kl\alpha} \oint_{\partial V} K(\mathbf{r}, \mathbf{r}')\, F_k(\mathbf{r}')\, n_l'\, \mathrm{d}S',$$
where $\partial V$ is the boundary of $V$ and $\hat{\mathbf{n}}'$ the outward unit normal. These expressions are analogous to those given above for three-dimensional space.
The Hodge decomposition is closely related to the Helmholtz decomposition,[25] generalizing from vector fields on $\mathbb{R}^3$ to differential forms on a Riemannian manifold $M$. Most formulations of the Hodge decomposition require $M$ to be compact.[26] Since this is not true of $\mathbb{R}^3$, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem.
Most textbooks only deal with vector fields decaying faster than $|\mathbf{r}|^{-\delta}$ with $\delta > 1$ at infinity.[16][13][27] However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than $|\mathbf{r}|^{-\delta}$ with $\delta > 0$, which is substantially less strict.
To achieve this, the kernel $K(\mathbf{r}, \mathbf{r}')$ in the convolution integrals has to be replaced by $K'(\mathbf{r}, \mathbf{r}') = K(\mathbf{r}, \mathbf{r}') - K(0, \mathbf{r}')$.[28]
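The reason this weakens the decay requirement can be seen for the three-dimensional kernel: subtracting the value at $\mathbf{r} = 0$ improves the large-$|\mathbf{r}'|$ behaviour by one power,
$$K'(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi} \left( \frac{1}{|\mathbf{r} - \mathbf{r}'|} - \frac{1}{|\mathbf{r}'|} \right) = \mathcal{O}\!\left( \frac{|\mathbf{r}|}{|\mathbf{r}'|^{2}} \right) \quad \text{for } |\mathbf{r}'| \to \infty,$$
so the convolution integrals converge for more slowly decaying fields.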
With even more complex integration kernels, solutions can be found even for divergent functions, provided they grow no faster than polynomially.[12][13][24][29]
In general, the Helmholtz decomposition is not uniquely defined.
A harmonic function $H(\mathbf{r})$ is a function that satisfies $\Delta H(\mathbf{r}) = 0$.
By adding $H(\mathbf{r})$ to the scalar potential $\Phi(\mathbf{r})$, a different Helmholtz decomposition can be obtained:
$$\mathbf{G}'(\mathbf{r}) = \mathbf{G}(\mathbf{r}) - \nabla H(\mathbf{r}) = -\nabla\big(\Phi(\mathbf{r}) + H(\mathbf{r})\big), \qquad \mathbf{R}'(\mathbf{r}) = \mathbf{R}(\mathbf{r}) + \nabla H(\mathbf{r}),$$
since $\nabla \cdot \nabla H = \Delta H = 0$, so $\mathbf{R}'$ is still divergence-free.
For vector fields $\mathbf{F}$ decaying at infinity, it is a plausible choice to require that the scalar and rotation potentials also decay at infinity.
Because $H(\mathbf{r}) = 0$ is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields.[31]
This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant under gauge transformations, and the choice of appropriate potentials, known as gauge fixing, is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition.
In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier-Stokes equations. If the Helmholtz projection $P$ is applied to the linearized incompressible Navier-Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, but no longer on the static pressure, allowing the equation to be reduced to one unknown. However, both equations, the Stokes and the linearized equations, are equivalent. The operator $P\Delta$ is called the Stokes operator.[32]
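A sketch of why the pressure drops out (with $\mathbf{u}$ the divergence-free velocity, $p$ the pressure, $\nu$ the viscosity, and $P$ the Helmholtz projection onto divergence-free fields): applying $P$ to the incompressible momentum equation and using $P \nabla p = 0$ and $P \mathbf{u} = \mathbf{u}$ gives
$$\partial_t \mathbf{u} + P\big((\mathbf{u} \cdot \nabla)\mathbf{u}\big) = \nu\, P \Delta \mathbf{u},$$
and dropping the nonlinear term (linearization) leaves the Stokes equation $\partial_t \mathbf{u} = \nu\, P \Delta \mathbf{u}$, which involves only the velocity.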
The quadratic scalar potential provides motion in the direction of the coordinate origin, which is responsible for the stable fixed point for some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect.[8][37]
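For concreteness, assuming the model in question is the Lorenz system (an illustrative example; the cited works treat this case in detail), its right-hand side
$$\mathbf{F}(x, y, z) = \big( \sigma(y - x),\; x(\rho - z) - y,\; xy - \beta z \big)$$
admits the Helmholtz decomposition
$$\mathbf{F} = -\nabla \Phi + \mathbf{R}, \qquad \Phi(x, y, z) = \tfrac{1}{2}\big( \sigma x^2 + y^2 + \beta z^2 \big), \qquad \mathbf{R}(x, y, z) = \big( \sigma y,\; x(\rho - z),\; xy \big),$$
where $\Phi$ is a quadratic scalar potential and $\mathbf{R}$ is divergence-free.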
In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement field into its shear component (divergence-free) and its compression component (curl-free).[38] In this way, the complex shear modulus can be calculated without contributions from compression waves.
The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics and image reconstruction, but also computer animation, where the decomposition is used for realistic visualization of fluids or vector fields.[15][39]
William Woolsey Johnson: An Elementary Treatise on the Integral Calculus: Founded on the Method of Rates Or Fluxions. John Wiley & Sons, 1881. See also: Method of Fluxions.
James Byrnie Shaw: Vector Calculus: With Applications to Physics. D. Van Nostrand, 1922, p. 205. See also: Green's Theorem.
Joseph Edwards: A Treatise on the Integral Calculus. Volume 2. Chelsea Publishing Company, 1922.
R. Dautray and J.-L. Lions: Spectral Theory and Applications, volume 3 of Mathematical Analysis and Numerical Methods for Science and Technology. Springer-Verlag, 1990.
V. Girault, P. A. Raviart: Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms. Springer Series in Computational Mathematics. Springer-Verlag, 1986.
A. M. Stewart: Longitudinal and transverse components of a vector field. In: Sri Lankan Journal of Physics 12, pp. 33–42, 2011, doi:10.4038/sljp.v12i0.3504, arXiv:0801.0335.
Erhard Glötzl, Oliver Richters: Helmholtz Decomposition and Rotation Potentials in n-dimensional Cartesian Coordinates. 2020, arXiv:2012.13157.
Frank W. Warner: The Hodge Theorem. In: Foundations of Differentiable Manifolds and Lie Groups (= Graduate Texts in Mathematics 94). Springer, New York 1983, doi:10.1007/978-1-4757-1799-0_6.
Cantarella, Jason; DeTurck, Dennis; Gluck, Herman (2002). "Vector Calculus and the Topology of Domains in 3-Space". The American Mathematical Monthly. 109 (5): 409–442. doi:10.2307/2695643. JSTOR 2695643.
Sheldon Axler, Paul Bourdon, Wade Ramey: Bounded Harmonic Functions. In: Harmonic Function Theory (= Graduate Texts in Mathematics 137). Springer, New York 1992, pp. 31–44, doi:10.1007/0-387-21527-1_2.
Alexandre J. Chorin, Jerrold E. Marsden: A Mathematical Introduction to Fluid Mechanics (= Texts in Applied Mathematics 4). Springer US, New York 1990, doi:10.1007/978-1-4684-0364-0.
Tomoharu Suda: Construction of Lyapunov functions using Helmholtz–Hodge decomposition. In: Discrete & Continuous Dynamical Systems – A 39.5, 2019, pp. 2437–2454, doi:10.3934/dcds.2019103.
Tomoharu Suda: Application of Helmholtz–Hodge decomposition to the study of certain vector fields. In: Journal of Physics A: Mathematical and Theoretical 53.37, 2020, p. 375703, doi:10.1088/1751-8121/aba657.
Heinz-Otto Peitgen, Hartmut Jürgens, Dietmar Saupe: Strange Attractors: The Locus of Chaos. In: Chaos and Fractals. Springer, New York, pp. 655–768, doi:10.1007/978-1-4757-4740-9_13.
George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists, 4th edition, Academic Press: San Diego (1995) pp. 92–93
George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists – International Edition, 6th edition, Academic Press: San Diego (2005) pp. 95–101
Rutherford Aris, Vectors, tensors, and the basic equations of fluid mechanics, Prentice-Hall (1962), OCLC 299650765, pp. 70–72