
Some simple dynamical systems

Dynamical formulation of Prisoner’s dilemma
Recall the classical setting: there are two players, each with a set of strategies, say \mathcal{A}=\{a_{i}\} and \mathcal{B}=\{b_{j}\}. The pay-off P_k=P_k(a_{i},b_{j}) for player k depends on the choices of both players.

Now consider two dynamical systems (M_i,f_i). The set of strategies of player i consists of the invariant probability measures \mathcal{M}(f_i), and the pay-off functions can be

\phi_k(\mu_1,\mu_2)=\int \Phi_k(x,y)\,d\mu_1(x)\,d\mu_2(y), where \mu_i\in\mathcal{M}(f_i);

\psi_k(\mu_1,\mu_2)=\int \Phi_k(x,y)\,d\mu_1(x)\,d\mu_2(y)-h(f_k,\mu_k).

The first one is related to ergodic optimization. The second one sounds better, since one may want to avoid a complicated strategy (as measured by its entropy) that has the same \phi pay-off.
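As a toy illustration, here is a minimal numerical sketch. It assumes (beyond the original setup) that both systems are full shifts, both strategies are Bernoulli measures, and the local pay-off \Phi_k depends only on the zeroth coordinates, so the integral reduces to a finite sum and h(f_k,\mu_k) to the entropy of the probability vector.

```python
import numpy as np

# A toy instance of the pay-off functions above.  Assumptions (not from the
# original): both systems are full shifts, both strategies are Bernoulli
# measures, and the local pay-off Phi_k depends only on (x_0, y_0); the
# matrix Phi1 and the vectors p, q below are purely illustrative.

def entropy(p):
    """Entropy of the Bernoulli shift with marginal p."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p[p > 0] * np.log(p[p > 0])))

def phi(Phi, p, q):
    """phi_k(mu_1, mu_2) = sum_{a,b} Phi(a,b) p(a) q(b)."""
    return float(np.asarray(p) @ np.asarray(Phi) @ np.asarray(q))

def psi(Phi, p, q, player):
    """psi_k = phi_k - h(f_k, mu_k), penalizing player k's own entropy."""
    return phi(Phi, p, q) - entropy(p if player == 1 else q)

Phi1 = np.array([[3.0, 0.0],     # hypothetical local pay-off for player 1
                 [5.0, 1.0]])
p = [0.5, 0.5]                   # strategy of player 1
q = [0.9, 0.1]                   # strategy of player 2
print(phi(Phi1, p, q), psi(Phi1, p, q, player=1))
```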

Gambler’s Ruin Problem
A gambler starts with an initial fortune of $i,
and then either wins $1 (with probability p) or loses $1 (with probability q=1-p) on each successive gamble (independently of the past). Let S_n denote the total fortune after the n-th gamble. Given N>i, the gambler stops either when S_n=0 (broke), or S_n=N (win), whichever happens first.

Let \tau be the stopping time and P_i(N)=P(S_\tau=N) be the probability that the gambler wins. It is easy to see that P_0(N)=0 and P_N(N)=1. We need to figure out P_i(N) for all i=1,\cdots,N-1.

Let S_0=i, and S_n=S_{n-1}+X_n. There are two cases according to X_1:

X_1=1 (prob p): win eventually with prob P_{i+1}(N);

X_1=-1 (prob q): win eventually with prob P_{i-1}(N).

So P_i(N)=p\cdot P_{i+1}(N)+q\cdot P_{i-1}(N), or equivalently,
p\cdot (P_{i+1}(N)-P_i(N))=q\cdot (P_i(N)-P_{i-1}(N)) (since p+q=1), i=1,\cdots,N-1.

Recall that P_0(N)=0 and P_N(N)=1. Therefore P_{i+1}(N)-P_i(N)=\frac{q^i}{p^i}(P_1(N)-P_{0}(N))=\frac{q^i}{p^i}P_1(N), i=1,\cdots,N-1. Summing over i, we get 1-P_1(N)=P_1(N)\cdot\sum_{1}^{N-1}\frac{q^i}{p^i}, hence P_1(N)=\frac{1}{\sum_{0}^{N-1}\frac{q^i}{p^i}}=\frac{1-q/p}{1-q^N/p^N} (if p\neq .5) and P_1(N)=\frac{1}{N} (if p=.5). In general, P_i(N)=P_1(N)\cdot\sum_{0}^{i-1}\frac{q^j}{p^j}=\frac{1-q^i/p^i}{1-q^N/p^N} (if p\neq .5) and P_i(N)=\frac{i}{N} (if p=.5).
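As a sanity check, here is a short Python sketch (the values i=3, N=10, p=0.6 are arbitrary) comparing the closed form with a direct Monte Carlo simulation of the walk.

```python
import numpy as np

# Sanity check of the closed form for P_i(N), assuming the setup above:
# win $1 with probability p, lose $1 with probability q = 1 - p, stop at 0 or N.

def win_prob_closed_form(i, N, p):
    """P_i(N): probability of reaching N before 0, starting from i."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)

def win_prob_simulated(i, N, p, trials=100_000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        s = i
        while 0 < s < N:
            s += 1 if rng.random() < p else -1
        wins += (s == N)
    return wins / trials

print(win_prob_closed_form(3, 10, 0.6))   # ~ 0.716
print(win_prob_simulated(3, 10, 0.6))     # should agree to ~2 decimal places
```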

Observe that for fixed i, the limit P_i(\infty)=1-q^i/p^i>0 only when p>.5, and P_i(\infty)=0 whenever p\le .5.

Finite Blaschke products
Let f be an analytic function on the unit disc \mathbb{D}=\{z\in\mathbb{C}: |z|<1 \} that extends continuously to \overline{\mathbb{D}} and satisfies f(S^1)\subset S^1. Then f is of the form

\displaystyle f(z)=\zeta\cdot\prod_{i=1}^n\left({{z-a_i}\over {1-\bar{a_i}z}}\right)^{m_i},

where \zeta\in S^1, and m_i is the multiplicity of the zero a_i\in \mathbb{D} of f. Such f is called a finite Blaschke product.

Proposition. Let f be a finite Blaschke product. Then the restriction f:S^1\to S^1 is measure-preserving if and only if f(0)=0. That is, a_i=0 for some i.

Proof. Let d\theta denote the normalized Lebesgue measure on S^1 and let \phi be analytic on \mathbb{D} and continuous on \overline{\mathbb{D}}. By the mean value property, \int_{S^1}\phi\, d\theta=\phi(0) and \int_{S^1}\phi\circ f\, d\theta=(\phi\circ f)(0)=\phi(f(0)). Testing against \phi(z)=z^k, k\ge 1 (and their conjugates), we see that f preserves d\theta if and only if f(0)^k=0 for all k\ge 1, that is, f(0)=0.
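As a numerical sanity check, here is a short sketch (with arbitrarily chosen zeros) comparing the Fourier coefficients of the pushforward of Lebesgue measure under a Blaschke product with f(0)=0 and one with f(0)\neq 0.

```python
import numpy as np

# Numerical illustration of the proposition: push the uniform measure on S^1
# forward under a finite Blaschke product and look at the Fourier coefficients
# of the image distribution.  The zeros chosen below are arbitrary.

def blaschke(z, zeros):
    w = np.ones_like(z)
    for a in zeros:
        w *= (z - a) / (1 - np.conj(a) * z)
    return w

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200_000)     # samples from normalized Lebesgue
z = np.exp(1j * theta)

for zeros in ([0.0, 0.6, -0.5j], [0.6, -0.5j]):   # f(0) = 0  vs  f(0) != 0
    w = blaschke(z, zeros)
    coeffs = [abs(np.mean(w**k)) for k in (1, 2, 3)]   # k-th Fourier coefficient
    print(zeros, np.round(coeffs, 3))
# with the zero at 0 all coefficients are ~0 (Lebesgue measure is preserved);
# without it they equal |f(0)|^k = 0.3^k, visibly nonzero
```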

Significance: there are a lot of measure-preserving covering maps on S^1.

Kalikow’s Random Walk Random Scenery
Let X=\{1,-1\}^{\mathbb{Z}}, and let \sigma:X\to X be the shift \sigma((x_n))=(x_{n+1}). For the scenery, let A be a finite alphabet, p a probability vector on A, Y=A^{\mathbb{Z}} and \nu=p^{\times\mathbb{Z}}. Consider the skew-product T:X\times Y\to X\times Y, (x,y)\mapsto (\sigma x, \sigma^{x_0}y). It is clear that T preserves any \mu\times \nu, where \mu is \sigma-invariant.

Proposition. Let \mu=(.5,.5)^{\times\mathbb{Z}}. Then h(T,\mu\times \nu)=h(\sigma,\mu)=\log 2 for all (A,p).

Proof. Note that T^n(x,y)=(\sigma^n x, \sigma^{x_0+\cdots+x_{n-1}}y). The CLT tells us that \mu(x:|x_0+\cdots+x_{n-1}|\ge\kappa\cdot \sqrt{n})< \delta(\kappa) for all large n, where \delta(\kappa)\to 0 as \kappa\to\infty. So, up to a set of measure \delta(\kappa), an n-string of the system is determined by the n-string (x_0,\dots,x_{n-1}) together with the scenery on a window of length about 2\kappa\sqrt{n}: there are at most 2^n\cdot|A|^{2\kappa\sqrt{n}} such strings, whose exponential growth rate is \log 2. Hence h(T,\mu\times\nu)\le\log 2, and the reverse inequality holds since \sigma is a factor of T.
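A quick simulation (a minimal sketch with an arbitrary seed) illustrates the heart of the counting: the range of the walk, hence the amount of scenery observed, grows only like \sqrt{n}.

```python
import numpy as np

# A quick simulation behind the counting argument: in n steps the walk
# S_k = x_0 + ... + x_{k-1} typically visits only O(sqrt(n)) scenery sites,
# so the scenery contributes no extra entropy.

rng = np.random.default_rng(2)
for n in (10**3, 10**4, 10**5, 10**6):
    x = rng.choice([1, -1], size=n)
    S = np.concatenate(([0], np.cumsum(x)))   # walk positions S_0, ..., S_n
    visited = S.max() - S.min() + 1           # number of scenery sites seen
    print(n, visited, round(visited / np.sqrt(n), 2))
# the last column stays of order 1 as n grows
```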

Significance: this gives a natural family of examples that are K, but not isomorphic to Bernoulli.

Creation of one sink

1D case. Consider the family f_t:x\mapsto x^2+2-t, where 0\le t\le 2. Let t_\ast be the first parameter such that the graph of f_{t_\ast} is tangent to the diagonal at x_\ast=f_{t_\ast}(x_\ast). Note that x_\ast is parabolic. Then for t\in(t_\ast,t_\ast+\epsilon), f_t(x)=x has two solutions x_1(t)<x_2(t), where x_1(t) is a sink and x_2(t) is a source.
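A short computation makes this concrete (elementary algebra; only the sample parameter values are arbitrary): the tangency occurs at t_\ast=7/4, x_\ast=1/2, and just past it the two fixed points have multipliers on opposite sides of 1.

```python
import numpy as np

# For f_t(x) = x^2 + 2 - t, the fixed-point equation x^2 - x + (2 - t) = 0
# has discriminant 4t - 7, so the tangency occurs at t_* = 7/4, x_* = 1/2,
# with multiplier f'(x_*) = 2 x_* = 1 (parabolic).  Just past t_* the two
# fixed points split into a sink and a source:

t_star = 7 / 4
for t in (t_star + 0.01, t_star + 0.05):
    d = np.sqrt(4 * t - 7)
    x1, x2 = (1 - d) / 2, (1 + d) / 2          # the two fixed points
    print(t, (x1, 2 * x1), (x2, 2 * x2))       # (fixed point, multiplier)
# |2 x1| < 1 (sink) while 2 x2 > 1 (source) for t slightly above t_*
```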

2D case. Let B=[-1,1]\times[-\epsilon,\epsilon] be a rectangle and let f be a diffeomorphism such that f(B) is a V-shaped horseshoe lying above B. Moreover, we assume |\det Df|<1. Let f_t(x,y)=f(x,y)-(0,t), so that f_1(B) crosses B in the standard horseshoe fashion. Clearly there exists a fixed point p_1 of f_1 in B; denote by \lambda_1(1),\lambda_2(1) the eigenvalues of Df_1 at p_1 and assume \lambda_1(1)<-1<\lambda_2(1)<0. Then Robinson proved that for some t\in(0,1), f_t admits a fixed point in B which is a sink.

First note that for any t, any fixed point of f_t (if one exists) is not on the boundary of B. Since p_1 is a nondegenerate fixed point of f_1, the fixed point continues to exist for some open interval (t_1,1) (assume it is maximal, and denote the fixed point by p_t). Clearly t_1>0. Note that p_{t_1} is also a fixed point of f_{t_1}, since being a fixed point is a closed condition. If at some moment \lambda_1(t)=\lambda_2(t) for the fixed point p_t of f_t, then p_t is already a sink, since |\lambda_1(t)|^2=|\det Df|<1. So in the following we consider the case \lambda_1(t)\neq\lambda_2(t) for all t\in[t_1,1]. Then continuous dependence on the parameter implies that both eigenvalues are continuous functions of t. The fixed point p_{t_1} must be degenerate, since the fixed point ceases to exist beyond t_1; that is, \lambda_i(t_1)=1 for some i\in\{1,2\}.

Case 1. \lambda_1(t_1)=1. Note that \lambda_1(1)<-1. So by continuity \text{Re}\,\lambda_1(t_\ast)=0 for some t_\ast\in(t_1,1), which implies that \lambda_1(t_\ast)=ai for some a\neq 0 (it cannot vanish, since \det Df\neq 0). In particular, \lambda_2(t_\ast)=-ai, and a^2=\det Df=|\det Df|<1. So p_{t_\ast} is a (complex) sink.

Case 2. \lambda_2(t_1)=1. Note that \lambda_2(1)<0. Similarly \text{Re}\,\lambda_2(t_\ast)=0 for some t_\ast\in(t_1,1), and p_{t_\ast} is again a (complex) sink.

So in the orientation-preserving case there always exists a complex sink. In the orientation-reversing case (\lambda_2(1)\in(0,1)), we need to modify the argument for Case 2:

Case 2′. \lambda_2(t_1)=1. Note that \lambda_2(1)\in(0,1). So, by the intermediate value theorem, we may pick t_\ast\in(t_1,1) close to t_1 with |\det Df|<\lambda_2(t_\ast)<1, which implies |\lambda_1(t_\ast)|=|\det Df|/|\lambda_2(t_\ast)|<1, too. So p_{t_\ast} is also a sink.
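To make the case analysis concrete, here is a minimal sketch; the matrices are hypothetical stand-ins for Df_t at p_t (not computed from any specific f), chosen to match the three eigenvalue configurations above.

```python
import numpy as np

# A minimal helper making the eigenvalue bookkeeping concrete: given a 2x2
# Jacobian with |det| < 1, decide whether the fixed point is a sink.  The
# matrices are hypothetical stand-ins for Df_t at p_t, not computed from any
# specific diffeomorphism f.

def classify(J):
    lam = np.linalg.eigvals(J)
    return lam, ("sink" if np.all(np.abs(lam) < 1) else "not a sink")

examples = {
    "complex pair +/- ai, a^2 = |det| < 1 (Case 1)":   np.array([[0.0, -0.6], [0.6, 0.0]]),
    "real pair with |det| < lambda_2 < 1 (Case 2')":   np.array([[0.8, 0.0], [0.0, -0.3]]),
    "saddle, as at t = 1":                             np.array([[-1.5, 0.0], [0.0, -0.4]]),
}
for name, J in examples.items():
    lam, kind = classify(J)
    print(name, np.round(lam, 2), kind)
```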
