Martingale and its application to dynamical systems

In the last week of May I attended two lectures given by Professor Matthew Nicol.

Let (\Omega,\mu) be a probability space with a \sigma-algebra \mathcal{B}. Let \mathcal{F}\subset \mathcal{B} be a sub-\sigma-algebra.

Example. Let f(x)=2x \ (\text{mod } 1) on \mathbb{T}, and let \mathcal{B} be the Borel \sigma-algebra. Let \mathcal{F}=f^{-1}\mathcal{B}. Note that (0.2,0.3)\notin\mathcal{F}.
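To see why, note that f^{-1}A=\tfrac{1}{2}A\cup\big(\tfrac{1}{2}A+\tfrac{1}{2}\big) for every Borel set A\subset[0,1), so every set in \mathcal{F} is invariant under the half-rotation x\mapsto x+\tfrac{1}{2}\ (\text{mod } 1). The interval (0.2,0.3) is carried to (0.7,0.8) by this rotation, so it cannot belong to \mathcal{F}.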

Let Y be a \mathcal{B}-measurable r.v. with Y\in L^1(\mu). The conditional expectation E(Y|\mathcal{F}) is the (\mu-a.e. unique) \mathcal{F}-measurable r.v. Z, that is Z^{-1}(a,b)\in \mathcal{F} for all intervals (a,b), satisfying \int_A Z\, d\mu=\int_A Y\, d\mu for all A\in \mathcal{F}.

Note that E(Y|\mathcal{F})=Y if and only if Y is \mathcal{F}-measurable; and E(Y|\mathcal{F})=E(Y) if Y is independent of \mathcal{F}.
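In the doubling-map example above, taking \mu to be Lebesgue measure (which is f-invariant), the conditional expectation with respect to \mathcal{F}=f^{-1}\mathcal{B} can be written explicitly: \displaystyle E(Y|f^{-1}\mathcal{B})(x)=\frac{1}{2}\Big(Y(x)+Y\big(x+\tfrac{1}{2}\ \text{mod } 1\big)\Big). This function is invariant under the half-rotation, hence f^{-1}\mathcal{B}-measurable, and since the half-rotation preserves both Lebesgue measure and every set f^{-1}A, it has the same integral as Y over every set in \mathcal{F}.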

Let (X_n)_{n\ge 0} be a stationary ergodic process with stationary initial distribution \mu. A basic problem is to find sufficient conditions on (X_n)_{n\ge 0} and on functions \phi\in L^2_0(\mu) such that \displaystyle S_n(\phi)=\sum_{k=1}^n \phi(X_k) satisfies the central limit theorem (CLT) \displaystyle \frac{1}{\sqrt{n}}S_n(\phi) \to N(0,\sigma^2) in distribution, where the limit variance is given by \displaystyle \sigma^2(\phi)=\lim_{n\to\infty}\frac{1}{n}E(S^2_n(\phi)).
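When the autocovariances are absolutely summable, expanding E(S_n^2(\phi)) using stationarity gives the usual series formula for the limit variance: \displaystyle \sigma^2(\phi)=E\big(\phi(X_1)^2\big)+2\sum_{k\ge 1}E\big(\phi(X_1)\phi(X_{1+k})\big), since the off-diagonal terms contribute 2\sum_{k=1}^{n-1}(n-k)E(\phi(X_1)\phi(X_{1+k})) and the Cesàro averages converge.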

Let f be a conservative diffeomorphism on (M,m), that is, f preserves the measure m. There are two operators: the Koopman operator \phi\mapsto U\phi=\phi\circ f, and the transfer operator \phi\mapsto P\phi defined by \int P\phi\cdot \psi\, dm=\int \phi\cdot \psi\circ f\, dm for all test functions \psi.
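For example, for the doubling map f(x)=2x \ (\text{mod } 1) with m Lebesgue measure, a change of variables on each of the two branches gives \displaystyle P\phi(x)=\frac{1}{2}\Big(\phi\big(\tfrac{x}{2}\big)+\phi\big(\tfrac{x+1}{2}\big)\Big).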

Property. PU(\phi)=\phi (since f preserves m), and UP(\phi)=E(\phi|f^{-1}\mathcal{B}).
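Both identities follow directly from the definitions. For every test function \psi, \displaystyle \int PU\phi\cdot \psi\, dm=\int U\phi\cdot \psi\circ f\, dm=\int (\phi\cdot\psi)\circ f\, dm=\int \phi\cdot\psi\, dm, so PU\phi=\phi. Similarly, UP\phi=(P\phi)\circ f is f^{-1}\mathcal{B}-measurable, and for every A\in\mathcal{B}, \displaystyle \int_{f^{-1}A} UP\phi\, dm=\int (P\phi\cdot 1_A)\circ f\, dm=\int P\phi\cdot 1_A\, dm=\int \phi\cdot 1_A\circ f\, dm=\int_{f^{-1}A}\phi\, dm, which is exactly the defining property of E(\phi|f^{-1}\mathcal{B}).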

Let \mathcal{F}_n be an increasing sequence of \sigma-algebras. A sequence of integrable r.v. S_n is called a martingale w.r.t. \mathcal{F}_n if each S_n is \mathcal{F}_n-measurable and E(S_{n+1}|\mathcal{F}_n)=S_n.

Let \mathcal{F}_n be a decreasing sequence of \sigma-algebras. A sequence of integrable r.v. S_n is called a reverse martingale w.r.t. \mathcal{F}_n if each S_n is \mathcal{F}_n-measurable and E(S_{n}|\mathcal{F}_m)=S_m for all n\le m.

Theorem. Let \{X_n:n\ge 1\} be a stationary ergodic sequence of (reverse) martingale differences w.r.t. \{\mathcal{F}_n\}, i.e. each X_n is \mathcal{F}_n-measurable and E(X_{n+1}|\mathcal{F}_n)=0 (resp. E(X_n|\mathcal{F}_{n+1})=0). Suppose E(X_n)=0 and \sigma^2=\text{Var}(X_1)>0. Then \displaystyle \frac{1}{\sigma\sqrt{n}}\sum_{i=1}^n X_i \to N(0,1) in distribution.
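One reason the normalization \sigma\sqrt{n} is the right one: martingale differences are uncorrelated. In the forward case, for i < j the variable X_i is \mathcal{F}_{j-1}-measurable, so \displaystyle E(X_iX_j)=E\big(X_i\, E(X_j|\mathcal{F}_{j-1})\big)=0, and hence \displaystyle E\Big(\big(\sum_{i=1}^n X_i\big)^2\Big)=n\sigma^2. The reverse case is analogous, conditioning on \mathcal{F}_{i+1} and pulling out X_j instead.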

Gordin: Suppose (f,m) is ergodic. Consider the Birkhoff sum \displaystyle \sum_{i=1}^n \phi\circ f^i for some \phi with \int \phi=0. The time series \phi\circ f^i can be approximated by martingale differences provided the correlations decay quickly enough.

Suppose there exists p(n) with \sum_n p(n) < \infty such that \|P^n\phi\|\le C\cdot p(n)\|\phi\|, so that the series \displaystyle g=\sum_{n\ge 1}P^n\phi converges. Define X=\phi+g-g\circ f.
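This is the martingale-coboundary decomposition: since \phi=X-g+g\circ f, the Birkhoff sums telescope, \displaystyle \sum_{i=1}^n \phi\circ f^i=\sum_{i=1}^n X\circ f^i+g\circ f^{n+1}-g\circ f. The coboundary term g\circ f^{n+1}-g\circ f has norm at most 2\|g\| for every n (by the invariance of m), so after dividing by \sqrt{n} it is negligible, and the CLT for the Birkhoff sums of \phi reduces to the CLT for the martingale differences X\circ f^i.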

Property. Let f:M\to M be such that f^{-n}\mathcal{B} is a decreasing sequence of \sigma-algebras. Then \displaystyle S_n=\sum_{i=1}^n X\circ f^i is a sum of reverse martingale differences with respect to the decreasing filtration f^{-n}\mathcal{B}: each X\circ f^n is f^{-n}\mathcal{B}-measurable, and E(X\circ f^k|f^{-n}\mathcal{B})=0 whenever k < n.

Proof. Note that PX=P\phi+Pg-PUg=g-g=0, since P\phi+Pg=\sum_{n\ge 1}P^n\phi=g and PUg=g. Then E(X|f^{-1}\mathcal{B})=UP(X)=U0=0.
Let k < n. It remains to show E(X\circ f^k|f^{-n}\mathcal{B})=0. To this end, we pick an element A\in f^{-n}\mathcal{B} and write it as A=f^{-k-1}C for some C\in f^{k+1-n}\mathcal{B}. Then, using the f-invariance of m, \displaystyle \int_A X\circ f^k dm=\int_{f^{-1}C}X dm =\int_{f^{-1}C} E(X|f^{-1}\mathcal{B}) dm=\int_{f^{-1}C}0\, dm=0. This completes the proof.

Three theorems of Gordin. Let (\Omega,\mu,T) be an invertible \mu-preserving ergodic system, let X\in L^1(\mu), and let X_k(x)=X(T^kx), so that (X_k) is a strictly stationary ergodic sequence; write \displaystyle S_n=\sum_{k=1}^{n}X_k.

(*) \displaystyle \limsup_{n\to\infty}\frac{1}{\sqrt{n}}E|S_n| < \infty

Theorem 1. Suppose there exists a filtration (\mathcal{F}_k)_{k\in\mathbb{Z}} with \mathcal{F}_k\subset T^{-1}\mathcal{F}_k=\mathcal{F}_{k+1} such that \displaystyle \sum_{k\ge 0} E|E(X_0|\mathcal{F}_{-k})|<\infty and \displaystyle \sum_{k\ge 0} E|X_0-E(X_0|\mathcal{F}_{k})| < \infty. Then (*) implies that \displaystyle \lambda:=\lim_{n\to\infty}\frac{1}{\sqrt{n}}E|S_n| exists, and \displaystyle \frac{1}{\sqrt{n}}S_n\to N(0,\lambda^2\pi/2) in distribution (degenerate if \lambda=0).
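The factor \pi/2 simply converts the L^1 normalization into a variance: if Z\sim N(0,\sigma^2), then E|Z|=\sigma\sqrt{2/\pi}, so \lambda=\sigma\sqrt{2/\pi} corresponds to \sigma^2=\lambda^2\pi/2.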

Mixing condition (\alpha-mixing). Let \displaystyle \alpha(n):=\sup\{|P(A\cap B)-P(A)P(B)|:A\in\mathcal{F}^0_{-\infty}, B\in\mathcal{F}^{\infty}_n\}, where \mathcal{F}^0_{-\infty}=\sigma(X_k:k\le 0) and \mathcal{F}^{\infty}_n=\sigma(X_k:k\ge n).

Theorem 2. Suppose X\in L^p(\mu) for some p > 1 and \displaystyle \sum_{n\ge 1}\alpha(n)^{1/q} < \infty, where 1/p+1/q=1. Then (*) implies the conclusion of Theorem 1.

Uniform mixing condition (\phi-mixing). Let \displaystyle \phi(n):=\sup\{|P(B|A)-P(B)|:A\in\mathcal{F}^0_{-\infty}, P(A) > 0, B\in\mathcal{F}^{\infty}_n\}.

Theorem 3. Suppose X\in L^1(\mu) and \displaystyle \sum_{n\ge 1}\phi(n) < \infty. Then (*) implies the conclusion of Theorem 1.

Cuny–Merlevède: not only the CLT, but also the ASIP (almost sure invariance principle) holds under the above conditions.

Note that we started with an invariant measure m. The operators U and P can also be defined when f does not preserve m (the non-conservative case). To emphasize the difference, we write \hat P for the transfer operator with respect to m. Suppose \hat P h=h for some density h\in L^1(m) with h\ge 0 and \int h\, dm=1. Then \mu=hm is an absolutely continuous invariant probability measure:

\displaystyle \int \phi\circ f d\mu=\int \phi\circ f h dm=\int \phi\cdot \hat P h dm=\int \phi hdm=\int\phi d\mu.

Then we can rewrite \displaystyle P\phi=\frac{1}{h}\hat P(h\phi), in the sense that \displaystyle \int P(\phi)\cdot \psi\, d\mu=\int \phi\cdot \psi\circ f\, d\mu=\int \phi h\cdot \psi\circ f\, dm=\int\hat P(\phi h)\cdot \psi\, dm=\int \frac{1}{h}\hat P(\phi h)\cdot \psi\, h\, dm=\int \frac{1}{h}\hat P(\phi h)\cdot \psi\, d\mu.
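For concreteness, if f is a piecewise monotone, piecewise C^1 map of the interval with |f'| > 0 on each branch and m is Lebesgue measure, a change of variables over the branches gives the familiar formula \displaystyle \hat P\phi(x)=\sum_{y:\,f(y)=x}\frac{\phi(y)}{|f'(y)|}. For the doubling map this is \hat P\phi(x)=\tfrac{1}{2}\big(\phi(x/2)+\phi((x+1)/2)\big); Lebesgue measure is already invariant there, so h\equiv 1 and \hat P=P.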
