
Partial derivatives

Let u:\mathbb{R}^n\times\mathbb{R}\to\mathbb{R},(x,t)\mapsto u(x,t) be a smooth map, and write u_{ij}:=\frac{\partial^2 u}{\partial x_i\partial x_j} for the second partial derivatives, 1\le i,j\le n. Then \det(u_{ij}):\mathbb{R}^n\times\mathbb{R}\to\mathbb{R} is also a smooth map. The derivative with respect to t is

\frac{d}{dt}\det(u_{ij})=\frac{d}{dt}(\sum_{k_i}\delta^{k_1\cdots k_n}_{\ 1\cdots\ n}u_{k_1 1}\cdots u_{k_n n})=\sum_{k_i}\delta^{k_1\cdots k_n}_{\ 1\cdots\ n}\frac{d}{dt}(u_{k_1 1}\cdots u_{k_n n})

=\sum_{k_i}\delta^{k_1\cdots k_n}_{\ 1\cdots\ n}\sum_{j}u_{k_1 1}\cdots u_{k_{j-1} j-1}\cdot\frac{d}{dt}u_{k_j j}\cdot u_{k_{j+1} j+1}\cdots u_{k_n n}

=\sum_{j}\sum_{k_i}\delta^{k_1\cdots k_n}_{\ 1\cdots\ n}u_{k_1 1}\cdots u_{k_{j-1} j-1}\cdot\frac{d}{dt}u_{k_j j}\cdot u_{k_{j+1} j+1}\cdots u_{k_n n}

=\sum_{j}\det(u_{i1},\cdots, u_{i, j-1},\frac{d}{dt}u_{i j},u_{i, j+1},\cdots,u_{in})=\sum_{1\le i,j\le n}\frac{d}{dt}u_{i j}\cdot A_{ij},

where A_{ij} denotes the (i,j) cofactor of the matrix (u_{ij}). In particular,

\frac{d}{dt}\log\det(u_{ij})=\frac{1}{\det(u_{ij})}\sum_{1\le i,j\le n}\frac{d}{dt}u_{i j}\cdot A_{ij}=\sum_{1\le i,j\le n}\frac{d}{dt}u_{i j}\cdot u^{ij},

where (u^{ij}):=(u_{ij})^{-1} denotes the inverse matrix, so that u^{ij}=A_{ij}/\det(u_{kl}) (note that (u_{ij}) is symmetric since mixed partials commute).
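As a sanity check, the identity \frac{d}{dt}\log\det(u_{ij})=\sum\frac{d}{dt}u_{ij}\cdot u^{ij}, i.e. \mathrm{tr}\big((u_{ij})^{-1}\frac{d}{dt}(u_{ij})\big), can be compared against a finite difference. The curve U(t) below is a hypothetical stand-in for the Hessian (u_{ij})(t), not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical smooth curve of symmetric matrices standing in for (u_{ij})(t):
# U(t) = U0 + t V + t^2 W, with U0 chosen positive definite.
G = rng.standard_normal((n, n))
U0 = G @ G.T + 10 * np.eye(n)
V = rng.standard_normal((n, n)); V = V + V.T
W = rng.standard_normal((n, n)); W = W + W.T

U = lambda t: U0 + t * V + t**2 * W
dU = lambda t: V + 2 * t * W

t = 0.3
# sum_{ij} (d/dt u_{ij}) u^{ij} = tr(U^{-1} dU/dt)
formula = np.trace(np.linalg.solve(U(t), dU(t)))

# Central finite difference of log det U(t).
h = 1e-6
logdet = lambda s: np.linalg.slogdet(U(s))[1]
fd = (logdet(t + h) - logdet(t - h)) / (2 * h)

assert abs(formula - fd) < 1e-5
```

The agreement is to finite-difference accuracy; any smooth positive-definite curve U(t) would do here.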

Now let’s consider the second derivative in the special case where u is affine in t, so that \frac{d}{dt}u_{ij} is independent of t:

\frac{d^2}{dt^2}\log\det(u_{ij})=(\frac{d}{dt}\frac{1}{\det(u_{ij})})\cdot\sum_{1\le i,j\le n}\frac{d}{dt}u_{i j}\cdot A_{ij}+\frac{1}{\det(u_{ij})}\cdot\sum_{1\le i,j\le n}\frac{d}{dt}u_{i j}\cdot (\frac{d}{dt}A_{ij})=I+I\!I,

where

I=-(\sum_{1\le i,j\le n}\frac{d}{dt}u_{i j}\cdot u^{ij})^2\le0, and

I\!I\cdot\det(u_{ij})=\sum_{i,j}\frac{d}{dt}u_{i j}\cdot (\sum_{k\neq i,l\neq j}\frac{d}{dt}u_{kl}\cdot B_{ij,kl})=\sum_{i,j}\sum_{k\neq i,l\neq j}\frac{d}{dt}u_{i j}\cdot\frac{d}{dt}u_{kl}\cdot B_{ij,kl},

where B_{ij,kl}:=\partial A_{ij}/\partial u_{kl} is the second-order cofactor, i.e. the cofactor of u_{kl} within the minor defining A_{ij}.

I do not know whether I\!I vanishes (if it does, then -\log\det(u_{ij}) would be convex in t).
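The question can at least be probed numerically. The sketch below (a hypothetical NumPy check, not part of the original argument) uses the concrete affine family u(x,t)=(1+t)|x|^2/2, so that (u_{ij})=(1+t)I and \frac{d}{dt}u_{ij}=\delta_{ij}, and compares I against the full second derivative:

```python
import numpy as np

n = 3
# Concrete affine family: u(x,t) = (1+t)|x|^2/2, so (u_{ij}) = (1+t) I
# and d/dt u_{ij} = delta_{ij}.
M = lambda t: (1.0 + t) * np.eye(n)
dM = np.eye(n)

t = 0.0
Minv = np.linalg.inv(M(t))

# I = -(sum_{ij} (d/dt u_{ij}) u^{ij})^2 = -(tr(M^{-1} dM))^2 = -n^2 = -9
I_term = -np.trace(Minv @ dM) ** 2

# Total: d^2/dt^2 log det M(t) = d^2/dt^2 [n log(1+t)] = -n/(1+t)^2 = -3 at t = 0,
# approximated here by a central second difference.
h = 1e-4
logdet = lambda s: np.log(np.linalg.det(M(s)))
total = (logdet(t + h) - 2 * logdet(t) + logdet(t - h)) / h**2

II_term = total - I_term  # = -3 - (-9) = 6, nonzero for this family
assert abs(I_term - (-9.0)) < 1e-9
assert abs(II_term - 6.0) < 1e-3
```

So for this family I\!I=n(n-1)\neq0; note, though, that the total I+I\!I=-n is still negative here, so convexity of -\log\det(u_{ij}) along this particular line is not contradicted.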


In linear algebra, a symmetric matrix A\in \mathcal{M}_n(\mathbb{R}) is said to be positive definite if v^\tau A v>0 for every non-zero vector v\in\mathbb{R}^n\backslash\{{\bf 0}\}.

Proposition. A symmetric A and a symmetric positive-definite matrix B can be simultaneously diagonalized: that is, there exists an invertible X such that X^\tau AX=\mathrm{diag}(a_1,\cdots,a_n) and X^\tau BX=I.
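One standard construction behind this proposition (Cholesky-factor B, then orthogonally diagonalize) can be sketched numerically; the matrices below are hypothetical test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hypothetical test data: symmetric A, symmetric positive-definite B.
A = rng.standard_normal((n, n)); A = A + A.T
G = rng.standard_normal((n, n)); B = G @ G.T + n * np.eye(n)

# B = L L^T (Cholesky); C = L^{-1} A L^{-T} is symmetric, so C = Q diag(a) Q^T
# with Q orthogonal; then X = L^{-T} Q does the job.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
a, Q = np.linalg.eigh(C)
X = Linv.T @ Q

assert np.allclose(X.T @ B @ X, np.eye(n))   # X^T B X = I
assert np.allclose(X.T @ A @ X, np.diag(a))  # X^T A X = diag(a_1,...,a_n)
```

Indeed X^\tau BX=Q^\tau L^{-1}(LL^\tau)L^{-\tau}Q=Q^\tau Q=I and X^\tau AX=Q^\tau CQ=\mathrm{diag}(a_1,\cdots,a_n).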

Proposition. Suppose A,B are symmetric matrices with A>0 and AB>0 (meaning v^\tau ABv>0 for all v\neq{\bf 0}). Then B>0.
Proof. Since B is symmetric, there exists an orthogonal matrix O with O^\tau BO=\mathrm{diag}(b_1,\cdots,b_n). Clearly C:=O^\tau AO>0 and O^\tau AO\,O^\tau BO=O^\tau ABO>0. Looking at the (i,i)-entries gives c_{ii}b_i>0, and c_{ii}>0 since C>0. Therefore b_i>0 for every i, and B>0.

Proposition. Suppose A,B>0. If A^2-B^2>0, then A-B>0.
Proof. Note that 2(A^2-B^2)=(A+B)(A-B)+(A-B)(A+B)>0. For any v, the scalars v^\tau(A-B)(A+B)v and v^\tau(A+B)(A-B)v are transposes of each other, hence equal; so v^\tau(A-B)(A+B)v=\frac{1}{2}v^\tau\cdot 2(A^2-B^2)\cdot v>0, i.e. (A-B)(A+B)>0. Combining this with A+B>0 and the previous proposition, we get A-B>0.
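Both the algebraic identity in the proof and the conclusion can be spot-checked numerically. In the hypothetical sketch below, B is scaled small relative to A precisely so that the hypothesis A^2-B^2>0 holds:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Hypothetical A, B > 0, with B scaled small enough that A^2 - B^2 > 0.
GA = rng.standard_normal((n, n)); A = GA @ GA.T + n * np.eye(n)
GB = rng.standard_normal((n, n)); B = 0.1 * (GB @ GB.T) + 0.1 * np.eye(n)

# The algebraic identity used in the proof: 2(A^2 - B^2) = (A+B)(A-B) + (A-B)(A+B).
lhs = 2 * (A @ A - B @ B)
rhs = (A + B) @ (A - B) + (A - B) @ (A + B)
assert np.allclose(lhs, rhs)

# With these choices the hypothesis A^2 - B^2 > 0 holds,
# and the conclusion A - B > 0 holds as well.
assert np.linalg.eigvalsh(A @ A - B @ B).min() > 0
assert np.linalg.eigvalsh(A - B).min() > 0
```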