Week 2
Definitions
State: possible value a random variable can take
For a Markov chain that tracks whether it rains or not, the state space is {0 (rain), 1 (sunny)}.
Absorbing state: State where the transition probability to itself is 1
All other transition probabilities for such a state are 0, so it ALWAYS transitions to itself.
Process: The entire evolving random system, not just one observation at one time.
Markov chain: the random variable at index n+1 depends only on the random variable at index n, i.e., the previous "state", and only on that previous state:
P(X_{n+1} = j | X_n = i, X_{n-1}, …, X_0) = P(X_{n+1} = j | X_n = i)
State n+1 is dependent on n, and only n.
Transition probability (P_{ij}): probability of transitioning from state i to state j in one time step, P_{ij} = P(X_{n+1} = j | X_n = i)
Transition matrix P: full set of all P_{ij} in matrix form.
Chapman-Kolmogorov: transition probability to go from state i to j in n + m steps:
P^(n+m)_{ij} = Σ_k P^(n)_{ik} P^(m)_{kj}
We say n + m steps to mathematically express that any multi-step transition can be decomposed into two parts.
Example:
- 5-step transition from rain to rain: P^(5)_{00} = P^(2)_{00} P^(3)_{00} + P^(2)_{01} P^(3)_{10}, i.e.
- (2 steps to rain, then 3 steps rain→rain) +
- (2 steps to sunny, then 3 steps sunny→rain)
- 7-step transition:
- 1-step + 6-step
- 2-step + 5-step
- 3-step + 4-step
- Whatever decomposition is convenient
The equation just formalizes: “to get somewhere in multiple steps, you must pass through some intermediate state.”
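A minimal plain-Python sketch of this decomposition (the 0.7/0.3/0.4/0.6 entries are illustrative two-state rain/sunny values, chosen here for the demo): the 5-step matrix computed directly agrees with any split into two shorter segments.

```python
# Sketch (plain Python, no libraries) of the Chapman-Kolmogorov decomposition:
# P^(5) equals P^(2) * P^(3), whichever split of the 5 steps we choose.
# The 0.7/0.3/0.4/0.6 entries are illustrative two-state (rain/sunny) values.

def mat_mul(a, b):
    """Multiply two square matrices stored as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, n):
    """n-step transition matrix: P multiplied by itself n times (n >= 1)."""
    result = p
    for _ in range(n - 1):
        result = mat_mul(result, p)
    return result

P = [[0.7, 0.3],   # rain -> rain, rain -> sunny
     [0.4, 0.6]]   # sunny -> rain, sunny -> sunny

five_step = mat_pow(P, 5)
split_2_3 = mat_mul(mat_pow(P, 2), mat_pow(P, 3))  # 2 steps, then 3 steps
split_1_4 = mat_mul(mat_pow(P, 1), mat_pow(P, 4))  # 1 step, then 4 steps

# every decomposition gives the same 5-step matrix (up to rounding)
assert all(abs(five_step[i][j] - split_2_3[i][j]) < 1e-12
           and abs(five_step[i][j] - split_1_4[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```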
n-step Transition Matrix (P^(n)): matrix whose (i, j) entry P^(n)_{ij} is the probability that state i goes to j in n steps
By the Chapman-Kolmogorov equation: P^(n+m) = P^(n) P^(m)
Also P^(n) = P^n (the one-step matrix P multiplied by itself n times)
State vs sample
A sample is a realized observation from a distribution - a concrete outcome you observe.
A state is a possible value the random variable can take - it exists whether or not you’ve observed it.
Example: Transition matrix
Consider the state space describing whether it rains today (state 0) or it is sunny (state 1).
Let the transition probabilities be:
- P_{00}: probability it rains (0) today, then it stays raining (0) tomorrow
- P_{10}: probability it's sunny (1) today, then it changes to rain (0) tomorrow
The transition matrix is
P = | P_{00}  P_{01} |
    | P_{10}  P_{11} |
Where
- P_{01}: probability it rains (0) today, then it changes to sunny (1) tomorrow
- P_{11}: probability it's sunny (1) today, then it stays sunny (1) tomorrow
Example: n-step transition matrix
Following from the transition-matrix example, let
P = | 0.7  0.3 |
    | 0.4  0.6 |
Using the Chapman-Kolmogorov equation, we can compute the probability that if it rains today, then 4 days later it will also rain (P^(4)_{00}), via P^(4) = P^(2) P^(2):
P^(2) = P · P = | 0.61  0.39 |
                | 0.52  0.48 |
Thus,
P^(4) = P^(2) · P^(2) = | 0.5749  0.4251 |
                        | 0.5668  0.4332 |
And,
- P^(4)_{00} = 0.5749: if it rains today, probability it rains 4 days later is 57.49%
- P^(4)_{01} = 0.4251: if it rains today, probability it's sunny 4 days later is 42.51%
- P^(4)_{10} = 0.5668: if it's sunny today, probability it rains 4 days later is 56.68%
- P^(4)_{11} = 0.4332: if it's sunny today, probability it's sunny 4 days later is 43.32%
Note: Each row sums to 1 (total probability from any starting state).
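This 4-step computation can be sketched in plain Python, assuming one-step entries 0.7/0.3/0.4/0.6, which reproduce the four percentages quoted above; squaring twice gives P^(4).

```python
# Sketch: reproduce the 4-step weather matrix by squaring twice,
# P^(2) = P * P and then P^(4) = P^(2) * P^(2), in plain Python.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.7, 0.3],   # 0 = rain, 1 = sunny
     [0.4, 0.6]]

P2 = mat_mul(P, P)    # [[0.61, 0.39], [0.52, 0.48]]
P4 = mat_mul(P2, P2)  # [[0.5749, 0.4251], [0.5668, 0.4332]]

assert abs(P4[0][0] - 0.5749) < 1e-9   # rain -> rain in 4 steps
assert abs(P4[1][1] - 0.4332) < 1e-9   # sunny -> sunny in 4 steps
```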
Examples: More complex n-step transition matrix
Urn Ball Replacement - Step-by-Step Illustration
Setup:
- Urn always has exactly 2 balls (red or blue)
- At each step: pick a ball, replace it with probability 0.8 same color, 0.2 opposite color
- Initial: both balls red (state 2)
- Find: P(5th ball selected is red)
State Space:
- State 0: 0 red balls (both blue)
- State 1: 1 red ball, 1 blue ball
- State 2: 2 red balls (both red)
Understanding Transitions from State 2 (both red):
You pick a ball. Since both are red, you definitely pick red.
- With prob 0.8: replace with red → stay at 2 red balls (P_{22} = 0.8)
- With prob 0.2: replace with blue → now have 1 red, 1 blue (P_{21} = 0.2)
- P_{20} = 0 (impossible to go from 2 red to 0 red in one step)
Understanding Transitions from State 1 (1 red, 1 blue):
You pick a ball. Equal chance of picking red or blue (each prob 0.5).
Case 1: Pick red (prob 0.5)
- Replace with red (prob 0.8) → still 1R, 1B (state 1)
- Replace with blue (prob 0.2) → now 0R, 2B (state 0)
Case 2: Pick blue (prob 0.5)
- Replace with blue (prob 0.8) → still 1R, 1B (state 1)
- Replace with red (prob 0.2) → now 2R, 0B (state 2)
So:
- P_{10} = 0.5 × 0.2 = 0.1
- P_{11} = 0.5 × 0.8 + 0.5 × 0.8 = 0.8
- P_{12} = 0.5 × 0.2 = 0.1
Understanding Transitions from State 0 (both blue):
Symmetric to state 2:
- P_{00} = 0.8 (pick blue, replace with blue)
- P_{01} = 0.2 (pick blue, replace with red)
- P_{02} = 0
Transition Matrix:
P = | 0.8  0.2  0.0 |
    | 0.1  0.8  0.1 |
    | 0.0  0.2  0.8 |
Finding P(5th ball is red):
The 5th ball is red if at the moment of the 5th selection, we pick a red ball.
Starting at state 2 (both red initially), after 4 replacements we need the distribution over states, then calculate probability of picking red from each state.
Compute P^(4) starting from state 2, then:
- From state 0: prob of picking red = 0
- From state 1: prob of picking red = 0.5
- From state 2: prob of picking red = 1
P(5th ball red) = P^(4)_{20} × 0 + P^(4)_{21} × 0.5 + P^(4)_{22} × 1
Computing P^(4) using Chapman-Kolmogorov
We’ll compute P^(4) = P^(2) × P^(2), breaking the 4 steps into two 2-step segments.
Step 1: Compute P^(2) = P × P
P^(2) = | 0.66  0.32  0.02 |
        | 0.16  0.68  0.16 |
        | 0.02  0.32  0.66 |
Step 2: Compute P^(4) = P^(2) × P^(2). We only need row 2 (the starting state):
P^(4)_{20} = 0.0776,  P^(4)_{21} = 0.4352,  P^(4)_{22} = 0.4872
Final Answer:
P(5th ball red) = 0.0776 × 0 + 0.4352 × 0.5 + 0.4872 × 1 = 0.2176 + 0.4872 = 0.7048
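The whole urn calculation can be checked with a short plain-Python sketch: build P from the transition reasoning above, square twice for P^(4), and weight row 2 (start: both red) by the chance of drawing red in each state.

```python
# Sketch of the urn computation: P^(4) via two squarings, then
# P(5th ball red) as a weighted sum over the state after 4 replacements.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.8, 0.2, 0.0],   # state 0: both blue
     [0.1, 0.8, 0.1],   # state 1: one red, one blue
     [0.0, 0.2, 0.8]]   # state 2: both red

P2 = mat_mul(P, P)
P4 = mat_mul(P2, P2)

prob_red = [0.0, 0.5, 1.0]   # chance of drawing red from each state
p_fifth_red = sum(P4[2][k] * prob_red[k] for k in range(3))

assert abs(p_fifth_red - 0.7048) < 1e-9
```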
Examples: Random walk model
A Markov chain whose state space is given by the integers i = 0, ±1, ±2, … is said to be a random walk if, for some number 0 < p < 1,
P_{i,i+1} = p = 1 − P_{i,i-1},  i = 0, ±1, ±2, …
The preceding Markov chain is called a random walk for we may think of it as being a model for an individual walking on a straight line who at each point of time either takes one step to the right with probability p or one step to the left with probability 1 − p.
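A quick simulation sketch of such a walk; p = 0.6, the seed, and the helper name `random_walk` are choices made here for illustration, not part of the notes.

```python
# Illustrative simulation of a random walk: at each step move right with
# probability p, left with probability 1 - p.

import random

def random_walk(steps, p, seed=0):
    """Return the path of a walker starting at position 0."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(steps):
        position += 1 if rng.random() < p else -1  # right w.p. p, left w.p. 1-p
        path.append(position)
    return path

path = random_walk(20, p=0.6)
# every move changes the position by exactly one step
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))
```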
Examples: Gambling model
Consider a gambler who, at each play of the game, either wins $1 with probability p or loses $1 with probability 1 − p. If we suppose that our gambler quits playing either when he goes broke or he attains a fortune of $N, then the gambler's fortune is a Markov chain having transition probabilities
P_{i,i+1} = p = 1 − P_{i,i-1},  i = 1, 2, …, N − 1
P_{00} = P_{NN} = 1
States 0 and N are called absorbing states since once entered they are never left. Note that the preceding is a finite state random walk with absorbing barriers (states 0 and N).
Gambler’s Ruin Simulation
Parameters: p = 0.6 (win probability), N = 10 (target), starting fortune = $5
Simulation 1:
- Start: $5
- Game 1: P₅,₆ = 0.6 → Win → $6
- Game 2: P₆,₅ = 0.4 → Lose → $5
- Game 3: P₅,₆ = 0.6 → Win → $6
- Game 4: P₆,₇ = 0.6 → Win → $7
- Game 5: P₇,₆ = 0.4 → Lose → $6
- Game 6: P₆,₇ = 0.6 → Win → $7
- Game 7: P₇,₈ = 0.6 → Win → $8
- Game 8: P₈,₉ = 0.6 → Win → $9
- Game 9: P₉,₁₀ = 0.6 → Win → $10
- Game 10: P₁₀,₁₀ = 1.0 → Stay → $10 (absorbed)
- Result: Reached target in 9 games
Simulation 2:
- Start: $5
- Game 1: P₅,₄ = 0.4 → Lose → $4
- Game 2: P₄,₃ = 0.4 → Lose → $3
- Game 3: P₃,₂ = 0.4 → Lose → $2
- Game 4: P₂,₃ = 0.6 → Win → $3
- Game 5: P₃,₂ = 0.4 → Lose → $2
- Game 6: P₂,₁ = 0.4 → Lose → $1
- Game 7: P₁,₀ = 0.4 → Lose → $0
- Game 8: P₀,₀ = 1.0 → Stay → $0 (absorbed)
- Result: Went broke in 7 games
Transition probabilities:
- P_{i,i+1} = p = 0.6 for i ∈ {1,2,…,9}
- P_{i,i-1} = 1 − p = 0.4 for i ∈ {1,2,…,9}
- P₀,₀ = 1 (absorbing)
- P₁₀,₁₀ = 1 (absorbing)
The transition matrix P is (N+1) × (N+1). For N = 10 it is tridiagonal: row 0 and row 10 each have a single 1 on the diagonal, and every interior row i has 0.4 in column i − 1 and 0.6 in column i + 1.
Where:
- Row i represents current state i (fortune = $i)
- Column j represents next state j
- States: 0, 1, 2, …, 10
- P₀,₀ = 1 (row 1: stay broke)
- P₁₀,₁₀ = 1 (row 11: stay at $10)
- For i ∈ {1,…,9}: P_{i,i-1} = 0.4, P_{i,i+1} = 0.6
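The structure above can be sketched in a few lines: build the (N+1) × (N+1) matrix for N = 10, p = 0.6 and check that every row is a probability distribution.

```python
# Sketch: construct the gambler's-ruin transition matrix
# (absorbing barriers at 0 and N, tridiagonal interior rows).

N, p = 10, 0.6

P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][0] = 1.0              # state 0: broke, absorbing
P[N][N] = 1.0              # state N: target reached, absorbing
for i in range(1, N):
    P[i][i + 1] = p        # win $1
    P[i][i - 1] = 1 - p    # lose $1

# every row is a probability distribution over the next state
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```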
Proof: Chapman-Kolmogorov Equation
Condition on the state reached after the first n steps, then use the Markov property:
P^(n+m)_{ij} = P(X_{n+m} = j | X_0 = i)
             = Σ_k P(X_{n+m} = j, X_n = k | X_0 = i)
             = Σ_k P(X_{n+m} = j | X_n = k, X_0 = i) · P(X_n = k | X_0 = i)
             = Σ_k P(X_{n+m} = j | X_n = k) · P(X_n = k | X_0 = i)   (Markov property)
             = Σ_k P^(n)_{ik} P^(m)_{kj}
Week 3
Classification of States
Accessible State
State j is said to be accessible from state i if P^(n)_{ij} > 0 for some n ≥ 0.
This means that starting from state i, it is possible to eventually enter state j.
If j is not accessible from i, then: P(ever enter j | X_0 = i) = 0.
Communication
Two states i and j communicate if they are mutually accessible:
- i → j (j accessible from i)
- j → i (i accessible from j)
Notation: i ↔ j
Properties (equivalence relation):
- Reflexive: i ↔ i (since P^(0)_{ii} = 1)
- Symmetric: if i ↔ j, then j ↔ i
- Transitive: if i ↔ j and j ↔ k, then i ↔ k
Proof of transitivity: if P^(n)_{ij} > 0 and P^(m)_{jk} > 0, then by Chapman-Kolmogorov:
P^(n+m)_{ik} = Σ_r P^(n)_{ir} P^(m)_{rk} ≥ P^(n)_{ij} P^(m)_{jk} > 0
Communicating Classes
States that communicate form a class. Classes are either:
- Identical (same class), or
- Disjoint (no overlap)
The Markov chain is irreducible if there is only one class (all states communicate).
Recurrent vs Transient
Let f_i = probability that, starting from state i, the process will ever return to state i.
- Recurrent: f_i = 1 (will return with probability 1)
- Transient: f_i < 1 (positive probability of never returning)
Key properties:
- If i is recurrent: the process returns to i infinitely many times
- If i is transient: the number of visits to i has a geometric distribution with finite mean 1/(1 − f_i)
- State i is recurrent iff the expected number of time periods spent in state i is infinite, i.e., Σ_{n=1}^∞ P^(n)_{ii} = ∞
Periodic vs Aperiodic
State i has period d if P^(n)_{ii} = 0 unless n is divisible by d, and d is the greatest such integer.
- Periodic: period d > 1
- Aperiodic: period d = 1 (can return at irregular times)
Limiting Probability
Regular Transition Probability Matrix
A TPM P is regular if P^k has all positive entries for some k ≥ 1.
Properties:
- If P^n has no zero elements for some n, then P^m will also have no zero elements for all m ≥ n
- A regular chain is necessarily irreducible and aperiodic
Limiting Distribution Theorem
For a regular TPM on a finite state space, there exists a unique probability vector π such that:
lim_{n→∞} P^(n)_{ij} = π_j for every starting state i
The limiting distribution satisfies:
π = πP,  Σ_j π_j = 1
Example: Social Class Problem
Given TPM:
Solving π = πP:
With the normalization π_0 + π_1 + π_2 = 1, solving yields:
- π_0 ≈ 0.0769 (lower class)
- π_1 ≈ 0.6250 (middle class)
- π_2 ≈ 0.2981 (upper class)
Long-run fractions: 7.69% lower, 62.50% middle, 29.81% upper.
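Convergence of P^(n) can be illustrated numerically. Since the social-class TPM is not reproduced in these notes, this sketch reuses the two-state weather matrix (assumed entries 0.7/0.3/0.4/0.6); both rows of a high power converge to the same vector π = (4/7, 3/7).

```python
# Sketch: for a regular TPM, the rows of P^n all converge to the limiting
# vector pi, so the starting state no longer matters.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.7, 0.3],
     [0.4, 0.6]]

Pn = P
for _ in range(50):   # P^51, far past convergence for this chain
    Pn = mat_mul(Pn, P)

# both rows agree with pi = (4/7, 3/7)
assert abs(Pn[0][0] - Pn[1][0]) < 1e-9
assert abs(Pn[0][0] - 4 / 7) < 1e-9
```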
First Step Analysis
Hitting Probability
For a Markov chain, define T = min{n ≥ 0 : X_n is a boundary (absorbing) state} (first time hitting the boundary).
The hitting probability u_i = P(the chain is absorbed at the target boundary state | X_0 = i) can be computed using first-step analysis:
u_i = Σ_k P_{ik} u_k  for non-boundary states i
With boundary conditions: u = 1 at the target boundary state, u = 0 at every other boundary state.
Expected Time to Absorption
Let v_i be the expected time to reach an absorbing state, starting from state i.
Using first-step analysis:
v_i = 1 + Σ_k P_{ik} v_k  for non-absorbing states i
with boundary conditions v_i = 0 for every absorbing state i.
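A sketch of first-step analysis applied to the gambler's ruin chain (N = 10, p = 0.6, the values from the earlier simulations): iterate the first-step equations until the hitting probabilities stabilise, then compare against the known closed form for p ≠ 1/2.

```python
# Sketch: solve u_i = p*u_{i+1} + (1-p)*u_{i-1} for the probability of
# reaching N before 0, by repeated in-place sweeps (fixed-point iteration).

N, p = 10, 0.6

u = [0.0] * (N + 1)   # u_i = P(reach N before 0 | start at i)
u[N] = 1.0            # boundary conditions: u_0 = 0, u_N = 1
for _ in range(20000):            # plenty of sweeps to converge here
    for i in range(1, N):
        u[i] = p * u[i + 1] + (1 - p) * u[i - 1]

# closed form for p != 1/2: u_i = (1 - (q/p)^i) / (1 - (q/p)^N), q = 1 - p
r = (1 - p) / p
closed_form = (1 - r ** 5) / (1 - r ** N)
assert abs(u[5] - closed_form) < 1e-9
```

With p = 0.6 and a starting fortune of $5, the chance of reaching $10 before going broke is about 0.8836, which matches the intuition from the two simulated trajectories above.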
Summary
| Property | Definition |
|---|---|
| Accessible (i → j) | P^(n)_{ij} > 0 for some n ≥ 0 |
| Communication (i ↔ j) | i → j and j → i |
| Recurrent | f_i = 1 (returns infinitely often) |
| Transient | f_i < 1 (finite expected visits) |
| Periodic | Returns at regular intervals (period d > 1) |
| Aperiodic | Returns at irregular times (period d = 1) |
| Irreducible | All states communicate |
| Regular | P^k has all positive entries for some k |