## Tensor Analysis

### Vector Algebra

The basic algebraic operations in vector calculus, i.e., the non-differential ones, are referred to as vector algebra. These operations are defined for a vector space and applied pointwise to a vector field associated with the space. The basic algebraic operations consist of vector addition, scalar multiplication, the dot product, and the cross product.

### Vector Calculus

Vector calculus is the mathematical study of the change of vectors. Like the calculus of scalars, vector calculus includes differential calculus and integral calculus. Changes of a vector field, both differential and integral, are formulated through several vector operators, which are defined in terms of del. Del is a vector differential operator, usually represented by the nabla symbol $\nabla $. In a three-dimensional Cartesian coordinate system with coordinates $(x_{1} ,x_{2} ,x_{3} )$ and a standard basis $(e_{1} ,e_{2} ,e_{3} )$, del is written as \[\nabla =\left(\frac{\partial }{\partial x_{1} } ,\frac{\partial }{\partial x_{2} } ,\frac{\partial }{\partial x_{3} } \right)=\frac{\partial }{\partial x_{1} } e_{1} +\frac{\partial }{\partial x_{2} } e_{2} +\frac{\partial }{\partial x_{3} } e_{3} =\frac{\partial }{\partial x_{i} } e_{i} ,\] where $e_{i} $ is the unit vector along the $i$th coordinate axis and $i=1$, 2, or 3.

Other common vector operators include the gradient, divergence, and curl, which are defined below in terms of del.

The gradient generalizes the derivative of a function of one variable to functions of several variables. The operator maps a scalar field to a vector field; applied to a vector field, it yields a second-order tensor field. \[{\rm grad}=\nabla \] \[\begin{array}{l} {\nabla u=\frac{\partial u}{\partial x_{1} } e_{1} +\frac{\partial u}{\partial x_{2} } e_{2} +\frac{\partial u}{\partial x_{3} } e_{3} =\frac{\partial u}{\partial x_{i} } e_{i} } \\ {\nabla u=\frac{\partial u}{\partial x_{1} } e_{1} +\frac{\partial u}{\partial x_{2} } e_{2} +\frac{\partial u}{\partial x_{3} } e_{3} =\frac{\partial \left(u_{j} e_{j} \right)}{\partial x_{i} } e_{i} =\frac{\partial u_{j} }{\partial x_{i} } e_{i} e_{j} } \end{array}\]
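As a quick numerical check of the definition, the gradient can be approximated by central differences. The scalar field `u` below is a hypothetical example chosen only for illustration.

```python
import numpy as np

# Hypothetical scalar field u(x) = x1^2 + x2*x3, chosen for illustration only.
def u(x):
    return x[0]**2 + x[1] * x[2]

def grad(f, x, h=1e-6):
    """Central-difference approximation of grad f = (df/dx_i) e_i."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# The analytic gradient is (2*x1, x3, x2); at (1, 2, 3) that is (2, 3, 2).
print(grad(u, [1.0, 2.0, 3.0]))
```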

Divergence is a vector operator that measures the magnitude of the source or sink in a vector field at any given point, in terms of a signed scalar. The operator maps a vector field to a scalar field. \[{\rm div}=\nabla \cdot \] \[\nabla \cdot u =\frac{\partial \left(u_{j} e_{j} \right)}{\partial x_{i} }\cdot e_{i}=\frac{\partial u_{j} }{\partial x_{i} } e_{i} \cdot e_{j}=\frac{\partial u_{j} }{\partial x_{i} }\delta _{ij}=\frac{\partial u_{i} }{\partial x_{i} }\]
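The index expression $\partial u_{i} /\partial x_{i} $ translates directly into a sum of central differences. The vector field `F` below is a hypothetical example for illustration only.

```python
import numpy as np

# Hypothetical vector field F(x) = (x1^2, x2*x3, x3), for illustration only.
def F(x):
    return np.array([x[0]**2, x[1] * x[2], x[2]])

def div(F, x, h=1e-6):
    """Central-difference approximation of div F = dF_i/dx_i (summed over i)."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        total += (F(x + e)[i] - F(x - e)[i]) / (2.0 * h)
    return total

# The analytic divergence is 2*x1 + x3 + 1; at (1, 2, 3) that is 6.
print(div(F, [1.0, 2.0, 3.0]))
```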

Curl is a vector operator that describes the infinitesimal rotation of a three-dimensional vector field. The operator maps a vector field to another vector field. \[\begin{array}{l} {{\rm curl}=\nabla \times } \\ {\nabla \times u=\left|\begin{array}{ccc} {e_{1} } & {e_{2} } & {e_{3} } \\ {\frac{\partial }{\partial x_{1} } } & {\frac{\partial }{\partial x_{2} } } & {\frac{\partial }{\partial x_{3} } } \\ {u_{1} } & {u_{2} } & {u_{3} } \end{array}\right|=\varepsilon _{ijk} \frac{\partial u_{j} }{\partial x_{i} } e_{k} } \end{array}\]
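The permutation-symbol form of the curl can be checked numerically by contracting the Levi-Civita symbol with a finite-difference Jacobian. The field `u` below is a hypothetical example for illustration only.

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k].
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Hypothetical vector field u(x) = (0, 0, x1*x2), for illustration only.
def u(x):
    return np.array([0.0, 0.0, x[0] * x[1]])

def jacobian(F, x, h=1e-6):
    """J[a, b] = dF_a/dx_b by central differences."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((3, 3))
    for b in range(3):
        e = np.zeros(3)
        e[b] = h
        J[:, b] = (F(x + e) - F(x - e)) / (2.0 * h)
    return J

def curl(F, x):
    J = jacobian(F, x)
    # (curl F)_k = eps_ijk * dF_j/dx_i = eps_ijk * J[j, i]
    return np.einsum('ijk,ji->k', eps, J)

# The analytic curl is (x1, -x2, 0); at (1, 2, 3) that is (1, -2, 0).
print(curl(u, [1.0, 2.0, 3.0]))
```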

The Laplacian measures the difference between the value of a scalar/vector field and its average over infinitesimal balls. The operator maps a scalar field to a scalar field and a vector field to a vector field. \[\nabla ^{2} ={\rm div\; grad}=\nabla \cdot \nabla =\Delta \] \[\Delta u=\frac{\partial }{\partial x_{i} } \left(\frac{\partial u}{\partial x_{j} } e_{j} \right)\cdot e_{i} =\frac{\partial ^{2} u}{\partial x_{i} \partial x_{i} } \] These operators can thus change the order of the tensor they act on: the gradient raises the order by one, the divergence lowers it by one, and the curl and Laplacian preserve it.

### Theorems

Three common theorems in calculus, i.e., Green's theorem, Stokes' theorem, and the Gauss (divergence) theorem, are frequently encountered in the derivation of the equations of multiphysics. These theorems for the calculus of scalars can be generalized to higher-order tensors.

Green's theorem relates a double integral over a region in the plane to a line integral over the closed curve bounding the region, oriented counter-clockwise. In its flux (divergence) form, the theorem states that the integral of the divergence of a planar vector field over the region equals the outward flux of the field through the bounding curve: \[\int _{S}\nabla \cdot F\, dS=\oint _{\partial S}F\cdot n\, dL.\]
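The identity can be verified numerically on the unit square with the midpoint rule. The planar field below is a hypothetical example chosen so that both sides are easy to evaluate.

```python
import numpy as np

# Hypothetical planar field F = (x^2, x*y); div F = 3x. Illustration only.
Fx = lambda x, y: x**2 + 0.0 * y   # the 0*y keeps the array shape when x is scalar
Fy = lambda x, y: x * y

n = 1000
t = (np.arange(n) + 0.5) / n       # midpoints of n subintervals of [0, 1]

# Left-hand side: area integral of div F = 3x over the unit square.
X, Y = np.meshgrid(t, t)
area_integral = np.sum(3.0 * X) / n**2

# Right-hand side: outward flux through the four edges of the square.
flux = (np.sum(Fx(1.0, t)) - np.sum(Fx(0.0, t))        # right/left edges
        + np.sum(Fy(t, 1.0)) - np.sum(Fy(t, 0.0))) / n  # top/bottom edges

print(area_integral, flux)   # both approach 3/2
```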

In Stokes' theorem, the integral of the curl of a vector field over a surface $\Sigma $ in $R^{3} $ equals the line integral of the vector field over the closed curve bounding the surface. The corresponding mathematical formulation is \[\mathop{\int\!\!\!\!\int}\nolimits _{\Sigma \subset R^{3} }\left(\nabla \times F\right)\cdot d\sigma =\oint _{\partial \Sigma }F\cdot dr . \]

The divergence theorem says that the integral of the divergence of a vector field over a volume equals the integral of the flux through the closed surface bounding the volume, which is formulated as \[\mathop{\int\!\!\!\!\int\!\!\!\!\int}\nolimits _{V\subset R^{3} }\left(\nabla \cdot F\right)dV =\oint _{\partial V}F\cdot dS . \]
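The same kind of midpoint-rule check works in three dimensions on the unit cube. The field below is a hypothetical example whose flux is concentrated on three faces.

```python
import numpy as np

# Hypothetical field F = (x^2, y^2, z^2); div F = 2(x + y + z). Illustration only.
n = 100
t = (np.arange(n) + 0.5) / n       # midpoints of n subintervals of [0, 1]

# Volume integral of div F over the unit cube (midpoint rule).
X, Y, Z = np.meshgrid(t, t, t, indexing='ij')
vol_integral = np.sum(2.0 * (X + Y + Z)) / n**3

# Flux through the six faces: F·n vanishes on the x=0, y=0, z=0 faces;
# on x=1 the integrand is x^2 = 1 over a unit face (likewise y=1, z=1).
U, V = np.meshgrid(t, t, indexing='ij')
face_flux = np.sum(np.ones_like(U) * 1.0**2) / n**2
flux = 3.0 * face_flux

print(vol_integral, flux)   # both approach 3
```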

It is worthwhile to point out that the above three theorems can be written in a general form: \[\int _{\Omega }\left(\nabla *f\right)\, d\Omega =\int _{\partial \Omega }\left(n*f\right)\, d\Gamma , \] where $*$ represents any tensor product, i.e., the inner, outer, or cross product, $n$ is the outward unit normal, and $d\Gamma $ is the surface measure on the boundary $\partial \Omega $.

### Tensor Product

The tensor product is a type of operation between tensors. The operation also applies to vectors, though it was skipped in the section on vectors. In linear algebra, the term outer product typically refers to the tensor product of two vectors. In the dyadic context, dyadic product, outer product, and tensor product share the same definition and meaning and are thus used synonymously; however, the tensor product is the most general and abstract term among them. There are several equivalent terms and notations for this product:

1. The dyadic product of two vectors $u$ and $v$ is denoted by their juxtaposition, $uv$.
2. The outer product of two column vectors $u$ and $v$ is denoted and defined as $u\otimes v$ or $uv^{{\rm T}} $, where T denotes the transpose.
3. The tensor product of two vectors $u$ and $v$ is denoted by $u\otimes v$.

These usages can be shown to be equivalent. Consider a three-dimensional Euclidean space with the following two vectors: \[u=u_{1} e_{1} +u_{2} e_{2} +u_{3} e_{3} \] \[v=v_{1} e_{1} +v_{2} e_{2} +v_{3} e_{3} \] where $e_{1} $, $e_{2} $, $e_{3} $ are the standard basis vectors in this vector space.
Then the dyadic product of $u$ and $v$ can be represented as the sum \[uv=u_{1} v_{1} e_{1} e_{1} +u_{1} v_{2} e_{1} e_{2} +u_{1} v_{3} e_{1} e_{3} \] \[+u_{2} v_{1} e_{2} e_{1} +u_{2} v_{2} e_{2} e_{2} +u_{2} v_{3} e_{2} e_{3} \] \[+u_{3} v_{1} e_{3} e_{1} +u_{3} v_{2} e_{3} e_{2} +u_{3} v_{3} e_{3} e_{3} .\] Using row and column vectors, the outer or tensor product of $u$ and $v$ is the 3$\times$3 matrix \[uv=u\otimes v=uv^{{\rm T}} =\left(\begin{array}{c} {u_{1} } \\ {u_{2} } \\ {u_{3} } \end{array}\right)\left(\begin{array}{ccc} {v_{1} } & {v_{2} } & {v_{3} } \end{array}\right)=\left(\begin{array}{ccc} {u_{1} v_{1} } & {u_{1} v_{2} } & {u_{1} v_{3} } \\ {u_{2} v_{1} } & {u_{2} v_{2} } & {u_{2} v_{3} } \\ {u_{3} v_{1} } & {u_{3} v_{2} } & {u_{3} v_{3} } \end{array}\right). \] These two operations are thus essentially equivalent. The tensor product is associative and distributive but not commutative. By the associative law, \[\left(uv\right)w=u\left(vw\right)=uvw. \] The product is also compatible with scalar multiplication: for any scalar $\alpha $, \[\left(\alpha u\right)v=\alpha \left(uv\right)=u\left(\alpha v\right). \] The inner product is used interchangeably with the dot product on many occasions. The difference is that the inner product generalizes the dot product to abstract vector spaces over a field of scalars, either the real or the complex numbers; an inner product defined in this way is usually written as $\left\langle a,b\right\rangle $. From the notation, the major difference between the outer product and the inner product is the symbol "$\cdot $", which denotes contraction (tensor contraction). The contraction can be defined as \[e_{i} \cdot e_{j} =\delta _{ij} .\] In simple terms, the contraction operation relates the two vectors on the two sides of the operator through the same Cartesian coordinate system.
Therefore, the two basis vectors adjacent to the dot degenerate into the Kronecker delta, so the operation reduces the order of a tensor by two. Besides the dot product, two types of double-dot products are also useful in manipulating dyads: the vertical and horizontal double-dot products [Taber, 2004], \[\sigma :\tau =\left(\sigma _{ij} e_{i} e_{j} \right):\left(\tau _{kl} e_{k} e_{l} \right)=\sigma _{ij} \tau _{kl} \left(e_{i} \cdot e_{k} \right)\left(e_{j} \cdot e_{l} \right)=\sigma _{ij} \tau _{kl} \delta _{ik} \delta _{jl} =\sigma _{il} \tau _{il} \] \[\begin{array}{l} {\sigma \cdot \cdot \tau =\left(\sigma _{ij} e_{i} e_{j} \right)\cdot \cdot \left(\tau _{kl} e_{k} e_{l} \right)} \\ {{\rm \; \; \; \; \; \; }=\sigma _{ij} \tau _{kl} \left(e_{i} \cdot e_{l} \right)\left(e_{j} \cdot e_{k} \right)=\sigma _{ij} \tau _{kl} \delta _{il} \delta _{jk} =\sigma _{ik} \tau _{ki} } \end{array} \] These two products give different results in general; they coincide when $\sigma $ or $\tau $ is symmetric.
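A minimal NumPy sketch of these products, using arbitrary illustrative vectors: the dyadic product as a 3×3 matrix, contraction reducing the order by two, and the two double-dot products written with the same index patterns as above.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Dyadic (outer, tensor) product: a 3x3 matrix with entries u_i v_j.
M = np.outer(u, v)                  # same as u[:, None] * v[None, :]

# Contraction (dot product) reduces the order by two: u_i v_i is a scalar.
s = np.dot(u, v)

# Compatibility with scalar multiplication: (a u) v = a (u v) = u (a v).
assert np.allclose(np.outer(2.0 * u, v), 2.0 * M)
assert np.allclose(np.outer(u, 2.0 * v), 2.0 * M)

# Vertical and horizontal double-dot products of two dyads; the einsum
# subscripts mirror the index expressions sigma_il tau_il and sigma_ik tau_ki.
sigma = np.outer(u, v)
tau = np.outer(v, u)
vertical = np.einsum('il,il->', sigma, tau)     # equals (u . v)^2 here
horizontal = np.einsum('ik,ki->', sigma, tau)   # equals |u|^2 |v|^2 here

# The two double-dot products coincide when either tensor is symmetric:
tau_sym = 0.5 * (tau + tau.T)
assert np.isclose(np.einsum('il,il->', sigma, tau_sym),
                  np.einsum('ik,ki->', sigma, tau_sym))

print(M)
print(s)                            # u_i v_i = 32
print(vertical, horizontal)         # differ for these non-symmetric dyads
```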