Some notations

7. Let f be an Anosov diffeomorphism and let g\in\mathcal{U}(f) be close enough to f. By structural stability there is a Hölder continuous conjugacy h_g:M\to M with g\circ h_g=h_g\circ f. Ruelle found an explicit formula for the derivative of the map g\mapsto h_g.

Let f,g:M\to M be two homeomorphisms, d(f,g)=\sup_M d(fx,gx), and \mathcal{U}(f,\epsilon)=\{g \text{ homeomorphism}: d(f,g)<\epsilon\}. For g\in \mathcal{U}(f,\epsilon), the map X_g:x\in M \mapsto \exp^{-1}_{fx}(gx)\in T_{fx}M is a vector field along f (a "shifted" vector field), and g\mapsto X_g gives a bijection \mathcal{U}(f,\epsilon)\to \mathcal{X}(0_f,\epsilon).
Let f be a C^r diffeomorphism. Then the map \mathcal{U}^r(f,\epsilon)\to \mathcal{X}^r(0_f,\epsilon), g\mapsto X_g, serves as a local chart; these charts give \mathrm{Diff}^r(M) the structure of a Banach manifold.

Let X_g\circ f^{-1}=X_g^s+X_g^u be the decomposition of the correction X_g\circ f^{-1} with respect to the hyperbolic splitting TM= E_g^s\oplus E_g^u. Then the derivative of g\mapsto h_g in the direction X_g is given by the vector field \displaystyle \sum_{n\ge 0}Dg^n\,(X^s_g\circ g^{-n})-\sum_{n\ge1}Dg^{-n}\,(X^u_g\circ g^{n}).
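For the linear cat map on \mathbb{T}^2 the splitting and the derivative are constant, so the two series can be summed directly. Here is a minimal numerical sketch at the unperturbed map itself, with a hypothetical correction field Y standing in for X_g\circ f^{-1}; the field, the base point and the truncation length are illustrative, and the terms decay geometrically.

```python
import numpy as np

# Cat map f(x) = A x (mod 1) on the 2-torus; A is symmetric with eigenvalues
# lambda_s < 1 < lambda_u, whose eigendirections give the hyperbolic splitting.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
A_inv = np.linalg.inv(A)
eigvals, eigvecs = np.linalg.eigh(A)          # ascending: (lambda_s, lambda_u)
e_s, e_u = eigvecs[:, 0], eigvecs[:, 1]
P_inv = np.linalg.inv(np.column_stack([e_s, e_u]))

def split(v):
    """Stable and unstable components of a tangent vector v."""
    c_s, c_u = P_inv @ v
    return c_s * e_s, c_u * e_u

def f(p):     return (A @ p) % 1.0
def f_inv(p): return (A_inv @ p) % 1.0

def Y(p):
    """Hypothetical correction field (playing the role of X_g o f^{-1})."""
    return 0.01 * np.array([np.sin(2 * np.pi * p[0]), np.cos(2 * np.pi * p[1])])

def conjugacy_derivative(p, N=40):
    """Truncation of  sum_{n>=0} Df^n Y^s(f^{-n}p) - sum_{n>=1} Df^{-n} Y^u(f^n p)."""
    total, q = np.zeros(2), p.copy()
    for n in range(N):                        # stable series, along the backward orbit
        Ys, _ = split(Y(q))
        total += np.linalg.matrix_power(A, n) @ Ys
        q = f_inv(q)
    q = p.copy()
    for n in range(1, N):                     # unstable series, along the forward orbit
        q = f(q)
        _, Yu = split(Y(q))
        total -= np.linalg.matrix_power(A_inv, n) @ Yu
    return total

print(conjugacy_derivative(np.array([0.2, 0.7])))
```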

6. Let M be a compact orientable surface of genus g\ge1, let s\ge1, and let \Sigma=\{p_1,\cdots,p_s\} be a subset of M. Let \kappa= (\kappa_1,\cdots,\kappa_s) be an s-tuple of positive integers with \sum (\kappa_i-1) =2g-2.

A translation structure on (M,\Sigma) of type \kappa is an atlas on M\backslash\Sigma
for which the coordinate changes are translations, and such that each singularity p_i
has a neighborhood which is isomorphic to the \kappa_i-fold covering of a neighborhood
of 0 in \mathbb{R}^2\backslash\{0\}.

The Teichmüller space Q_{g,\kappa}= Q(M,\Sigma,\kappa) is the set of such structures modulo isotopy relative to \Sigma. It carries a canonical manifold structure.

5. Dynamical Borel–Cantelli lemmas. Chernov and Kleinbock established the SBC property for certain families of cylinders in the setting of topological Markov chains, and for certain classes of dynamically defined rectangles in the setting of Anosov diffeomorphisms preserving Gibbs measures. Dolgopyat obtained related BC results for sequences of balls in uniformly partially hyperbolic systems preserving a measure equivalent to Lebesgue which
have exponential decay of correlations with respect to Hölder observables.

A sequence of real numbers p(m) is said to be a decay rate of correlations for a dynamical system (X,\mu,T) if |E(f\cdot g\circ T^m)-E(f)\cdot E(g)|\le p(m)\cdot\|f\|_{BV}\cdot\|g\|_{L^1} for all g\in L^1(\mu) and all f of bounded variation.
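For a concrete instance, here is a quick numerical estimate of these correlations for the doubling map T(x)=2x mod 1 with Lebesgue measure; the observables f(x)=g(x)=x are only an illustrative choice (f has bounded variation, g is integrable).

```python
import numpy as np

# Doubling map T(x) = 2x (mod 1) on [0,1) with Lebesgue measure.
# Estimate |E(f * g o T^m) - E(f) E(g)| for f(x) = g(x) = x by a midpoint Riemann sum.
K = 2**20
x = (np.arange(K) + 0.5) / K
Ef = x.mean()

for m in range(13):
    g_of_Tm = (2.0**m * x) % 1.0                      # g(T^m x)
    corr = abs(np.mean(x * g_of_Tm) - Ef * g_of_Tm.mean())
    print(f"m = {m:2d}   corr = {corr:.3e}   corr * 2^m = {corr * 2**m:.4f}")
```

The rescaled column stabilizes, showing that for these observables the correlations decay at the geometric rate 2^{-m}.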

D. Kim; C. Gupta, M. Nicol and W. Ott (summable decay of correlations implies the SBC property): if the decay rate p(m) satisfies \sum_m p(m)<\infty, then the strong Borel–Cantelli property holds for any sequence of intervals A_k with \sum_k\mu(A_k)=\infty.
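As an illustration in the simplest setting, the doubling map has a summable decay rate (on the order of 2^{-m}), and the strong ratio can be checked numerically for shrinking intervals; the interval sizes and the orbit length below are arbitrary choices, and the orbit is generated through the binary shift to avoid floating-point collapse.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# For the doubling map, T^k x is the k-fold binary shift of x, so a Lebesgue-random
# orbit can be built from i.i.d. random bits (iterating x -> 2x mod 1 in floating
# point would collapse to 0 after ~53 steps).
rng = np.random.default_rng(1)
N, depth = 100_000, 53
bits = rng.integers(0, 2, size=N + depth)
weights = 0.5 ** np.arange(1, depth + 1)
orbit = sliding_window_view(bits, depth)[:N] @ weights        # orbit[k] ~ T^k x

# Intervals A_k = [0, k^{-1/2}):  mu(A_k) = k^{-1/2}, so sum_k mu(A_k) = infinity.
k = np.arange(1, N + 1)
mu = k ** -0.5
ratio = np.cumsum(orbit < mu) / np.cumsum(mu)                 # S_n(x) / E_n
for n in (10**3, 10**4, 10**5):
    print(f"n = {n:6d}   S_n / E_n = {ratio[n - 1]:.3f}")     # drifts towards 1
```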

Haydn, Nicol, Persson and Vaienti: (I) under certain assumptions on the measure, a sufficiently high polynomial rate of decay of correlations for Lipschitz observables implies Borel–Cantelli for all sequences of balls B_i with \mu(B_i)\ge i^{-\gamma} for some \gamma\in(0,1); (II) exponential decay
of correlations implies Borel–Cantelli for all sequences of balls with \mu(B_i)\ge i^{-1}.

4. Borel–Cantelli Lemma. Let (\Omega,P) be a probability space and \{A_n\} a sequence of events. If \sum_n P(A_n)<\infty, then P(A_n i.o.)=P(\limsup_{n\to\infty}A_n)=0.

Proof. Let X=\sum_n I_{A_n} be the number of events that occur. Then \mathbb{E}X=\sum_n P(A_n)<\infty and hence X<\infty a.s.

Second Borel–Cantelli Lemma. Let (\Omega,P) be a probability space and \{A_n\} a sequence of independent events. If \sum_n P(A_n)=\infty, then P(A_n i.o.)=P(\limsup_{n\to\infty}A_n)=1.

Proof. Fix m\ge1 and let n>m. By independence and the inequality 1-x\le e^{-x}, P(\bigcup_{k\ge m}A_k)\ge 1-P(\bigcap_{k=m}^{n}A^c_k)=1-\prod_{k=m}^{n}(1-P(A_k))
\ge 1-\prod_{k=m}^{n}e^{-P(A_k)}=1-e^{-\sum_{k=m}^{n}P(A_k)}\to 1 as n\to\infty. So P(\bigcup_{k\ge m}A_k)=1 for all m\ge1 and hence P(A_n i.o.)=P(\limsup_{n\to\infty}A_n)=1.

In fact, in this setting the so-called strong Borel–Cantelli property holds: \displaystyle \frac{\sum_{1\le k\le n}I_{A_k}(x)}{\sum_{1\le k\le n}P(A_k)}\to 1 for almost every point x.
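A tiny simulation with independent events illustrates both the second lemma and the strong ratio; the choice P(A_n)=n^{-1/2} is just an example with a divergent sum.

```python
import numpy as np

# Independent events A_n with P(A_n) = n^{-1/2}: the sum of probabilities diverges,
# so the events occur infinitely often and the strong Borel-Cantelli ratio tends to 1.
rng = np.random.default_rng(2)
N = 10**6
p = np.arange(1, N + 1) ** -0.5
ratio = np.cumsum(rng.random(N) < p) / np.cumsum(p)
for n in (10**3, 10**5, 10**6):
    print(f"n = {n:7d}   ratio = {ratio[n - 1]:.3f}")          # approaches 1
```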

Kolmogorov’s 0-1 Law. Let X_1,\cdots,X_n,\cdots be independent and let \mathcal{T}=\bigcap_{n\ge1}\sigma(X_k:k\ge n) be the tail \sigma-field (events of the remote future). Then P(A)\in\{0,1\} for every tail event A\in\mathcal{T}.

Kolmogorov’s maximal inequality. Suppose X_1,\cdots,X_n,\cdots are independent with \mathbb{E}(X_i)=0 and \text{Var}(X_i)<\infty, and let S_n=X_1+\cdots+X_n. Then P(\max_{1\le k \le n}|S_k|\ge r)\le r^{-2}\text{Var}(S_n).

Compare with Chebyshev's inequality: P(|S_n|\ge r)\le r^{-2}\text{Var}(S_n).
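A short Monte Carlo comparison of the two events for ±1 steps; the sample size, n and r below are arbitrary.

```python
import numpy as np

# Both P(max_{k<=n} |S_k| >= r) and P(|S_n| >= r) sit below the common bound
# Var(S_n)/r^2; the maximal probability is the larger of the two.
rng = np.random.default_rng(3)
trials, n, r = 50_000, 100, 25
steps = rng.choice([-1.0, 1.0], size=(trials, n))     # E X_i = 0, Var X_i = 1
S = np.cumsum(steps, axis=1)
p_max = np.mean(np.abs(S).max(axis=1) >= r)           # Kolmogorov's event
p_end = np.mean(np.abs(S[:, -1]) >= r)                # Chebyshev's event
print(f"P(max|S_k| >= r) ~ {p_max:.4f}   P(|S_n| >= r) ~ {p_end:.4f}   bound n/r^2 = {n / r**2:.3f}")
```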

—————-

3. Let f:M\to M be an orientation-preserving diffeomorphism and m the volume measure induced by some volume form \omega. Let D_xf:T_xM\to T_{fx}M be the tangent map between tangent spaces and J(f,x)=\det(D_xf) the Jacobian. We want to compute the Radon–Nikodym derivative of f^\ast m with respect to m. To this end let A\subset M be a measurable subset. Then

f^\ast m(A)=m(f^{-1}A)=\int I_{f^{-1}A}(x)dm(x)
=\int_M I_{A}(fx)dm(x)=\int_M I_A(y)\cdot J(f^{-1},y)dm(y).
So \displaystyle\phi(x)=\frac{df^\ast m}{dm}(x)=J(f^{-1},x)=\frac{1}{J(f,f^{-1}x)}.

More generally, start with d\mu=\phi\cdot dm. Then f^\ast\mu(A)=\mu(f^{-1}A)=\int I_{A}(fx)\phi(x)dm(x) =\int I_A(y)\phi(f^{-1}y)\cdot J(f^{-1},y)dm(y). So \frac{df^\ast\mu}{dm}(x)=\phi(f^{-1}x)\cdot J(f^{-1},x) and \frac{df^\ast\mu}{d\mu}(x)=\frac{\phi(f^{-1}x)}{\phi(x)}\cdot J(f^{-1},x).
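A quick one-dimensional sanity check of the first formula; the increasing diffeomorphism of [0,1] and the test interval below are arbitrary choices.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

# f is an increasing diffeomorphism of [0,1], m is Lebesgue measure, and the claim is
# d(f^* m)/dm (y) = 1 / J(f, f^{-1} y).
f  = lambda x: (x + x**2) / 2.0                     # f(0) = 0, f(1) = 1, f'(x) = (1 + 2x)/2 > 0
df = lambda x: (1.0 + 2.0 * x) / 2.0
f_inv = lambda y: brentq(lambda x: f(x) - y, 0.0, 1.0)

a, b = 0.2, 0.6                                      # test set A = [a, b]
pullback = f_inv(b) - f_inv(a)                       # f^* m(A) = m(f^{-1} A)
density_integral, _ = quad(lambda y: 1.0 / df(f_inv(y)), a, b)
print(pullback, density_integral)                    # the two numbers agree
```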

———–

2. Let’s consider the ODE \dot{x}=F(x). Suppose \phi(t,x) solves the ODE with initial condition \phi(0,x)=x, so \dot{\phi}(t,x)=F(\phi(t,x)). Differentiating with respect to x gives the variational (matrix) equation D_x\dot{\phi}(t,x)=DF(\phi(t,x))\cdot D_x\phi(t,x). Then one checks that \dot{J}(t,x)=\frac{d}{d t}(\det D_x\phi(t,x))=\mathrm{tr}(DF(\phi(t,x)))\cdot \det D_x\phi(t,x) =\mathrm{div}F(\phi(t,x))\cdot J(t,x).
So the Jacobian is given by J(t,x)=e^{\int_0^t \mathrm{div}F(\phi(s,x))\,ds}. In particular \dot{J}(0,x)=\mathrm{div}F(x).
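A numerical check of this formula for a hypothetical planar field (a damped pendulum, whose divergence is the constant -c): integrate the variational equation alongside the flow and compare the determinant with the exponential of the integrated divergence.

```python
import numpy as np
from scipy.integrate import solve_ivp

# F(q, p) = (p, -sin(q) - c p) has div F = -c, so the formula above predicts
# J(t, x) = det D_x phi(t, x) = exp(-c t).
c = 0.1

def rhs(t, y):
    q, p = y[:2]
    M = y[2:].reshape(2, 2)                            # M(t) = D_x phi(t, x)
    dF = np.array([[0.0, 1.0], [-np.cos(q), -c]])      # DF at phi(t, x)
    return np.concatenate(([p, -np.sin(q) - c * p], (dF @ M).ravel()))

x0 = np.array([1.2, 0.3])
y0 = np.concatenate((x0, np.eye(2).ravel()))
t_final = 5.0
sol = solve_ivp(rhs, (0.0, t_final), y0, rtol=1e-10, atol=1e-12)
J = np.linalg.det(sol.y[2:, -1].reshape(2, 2))
print(J, np.exp(-c * t_final))                          # the two values agree
```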

For a vector field X\in\mathfrak{X}(M), its divergence can be defined with respect to a given volume form \omega. That is, \mathrm{div}X\cdot\omega=d(\imath_X\omega)=\mathfrak{L}_X\omega=\frac{d}{dt}|_{t=0}\phi^\ast_t\omega.
So the vector field X is divergence-free, \mathrm{div}X=0, if and only if the induced flow is volume-preserving.

Let (M,g) be a Riemannian manifold and \omega=\sqrt{\det g}\,dx^1\wedge\cdots\wedge dx^n. Then for X=\sum X^i\partial_i, \mathrm{div}X\cdot\omega=d(\imath_X\omega)=\big(\sum\partial_i (X^i\sqrt{\det g})\big)dx^1\wedge\cdots\wedge dx^n=\frac{\sum\partial_i (X^i\sqrt{\det g})}{\sqrt{\det g}}\omega. So \mathrm{div}X=\frac{\sum\partial_i (X^i\sqrt{\det g})}{\sqrt{\det g}}. Let f:M\to\mathbb{R} be a smooth function and \nabla\!f its gradient vector field. Then the induced gradient flow is volume-preserving iff \Delta\!f=\mathrm{div}(\nabla\!f)=0, that is, iff f is harmonic. By the maximum principle, either M is noncompact or f is constant.
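A small symbolic check of the coordinate formula, in polar coordinates on the plane with the harmonic function x^2-y^2 written as r^2\cos 2\theta (the coordinates and the test function are illustrative).

```python
from sympy import symbols, cos, diff, sqrt, simplify, Matrix

# Polar coordinates on the plane: metric g = diag(1, r^2), sqrt(det g) = r.
r, th = symbols('r theta', positive=True)
g = Matrix([[1, 0], [0, r**2]])
vol = sqrt(g.det())                                   # = r

f = r**2 * cos(2 * th)                                # the harmonic function x^2 - y^2
grad_f = g.inv() * Matrix([diff(f, r), diff(f, th)])  # components X^i = g^{ij} d_j f

div = (diff(vol * grad_f[0], r) + diff(vol * grad_f[1], th)) / vol
print(simplify(div))                                  # 0: the gradient flow preserves area
```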

Let (M,\omega) be a symplectic manifold, H:M\to\mathbb{R} a smooth function and X_H the symplectic gradient vector field. Then \mathrm{div}(X_H)\,\omega^n=d(\imath_{X_H}\omega^n)=d(n\,dH\wedge\omega^{n-1})=0. That is, the (time-independent) Hamiltonian flow is always volume-preserving (the time-dependent version is also true, and the flow even preserves the symplectic form).
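As a quick symbolic check in the simplest case n=1 (the plane with \omega=dq\wedge dp), where \mathrm{div}(X_H) is a difference of mixed partials:

```python
from sympy import Function, symbols, diff

# In canonical coordinates the Hamiltonian field is X_H = (dH/dp, -dH/dq), so its
# divergence is d/dq(dH/dp) - d/dp(dH/dq), which vanishes by equality of mixed partials.
q, p = symbols('q p')
H = Function('H')(q, p)
print(diff(diff(H, p), q) + diff(-diff(H, q), p))     # prints 0
```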

The curl of the gradient is always the zero vector: \nabla\times(\nabla\phi)=\mathbf{0}.

The divergence of the curl of any vector field \mathbf{A} is always zero: \nabla\cdot(\nabla\times\mathbf{A})=0.
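Both identities can be verified symbolically; the scalar field and the vector field below are arbitrary smooth examples.

```python
from sympy import sin, exp, simplify
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

phi = x**2 * sin(y) + y * exp(z)                          # an arbitrary scalar field
A = (y * z) * N.i + (x**2 - z) * N.j + sin(x * y) * N.k   # an arbitrary vector field

print(curl(gradient(phi)))             # 0: the curl of a gradient vanishes
print(simplify(divergence(curl(A))))   # 0: the divergence of a curl vanishes
```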

————-

1. Let f\in\mathrm{PH}(M) be a partially hyperbolic diffeomorphism and let A_f(x) be the accessibility class containing the point x. There are several different levels of accessibility of f:

  • topologically accessible: \overline{A_f(x)}=M for some point x\in M,
  • measure-theoretically accessible: m(A)=0 or 1 for every measurable su-saturated set A,
  • essentially accessible: m(A_f(x))=1 for some point x\in M (and then for a.e. x),
  • accessible: A_f(x)=M for some point x\in M (and then for every x).

These are formal definitions; examples are needed to distinguish them.
