Lecture material is stored in the directory “2 Personal/files/College/Semester 6/Model Stokastik 1”.
Week 1
Review: Probability and Random Variables
- Random Variable: A variable that takes on its values by chance.
- Discrete RV: Takes on a finite or countable set of values. Described by a Probability Mass Function (PMF) $p(x) = P(X = x)$.
- Continuous RV: Takes on a continuum of values. Described by a Probability Density Function (PDF) $f(x)$ where $P(a \le X \le b) = \int_a^b f(x)\,dx$.
- Distribution Function (CDF): $F(x) = P(X \le x)$.
- Mean (Expectation): $E[X] = \sum_x x\,p(x)$ or $E[X] = \int_{-\infty}^{\infty} x f(x)\,dx$.
- Variance: $\text{Var}(X) = E[(X - E[X])^2] = E[X^2] - (E[X])^2$.
Review: Conditional Probability and Expectation
- Conditional Probability: $P(A \mid B) = \dfrac{P(A \cap B)}{P(B)}$ for $P(B) > 0$.
- Law of Total Probability: $P(A) = \sum_i P(A \mid B_i)\,P(B_i)$ for a partition $\{B_i\}$.
- Conditional Expectation: Discrete: $E[X \mid Y = y] = \sum_x x\,P(X = x \mid Y = y)$. Continuous: $E[X \mid Y = y] = \int_{-\infty}^{\infty} x\, f_{X \mid Y}(x \mid y)\,dx$.
- Law of Iterated Expectations: $E[X] = E\big[E[X \mid Y]\big]$.
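The law of iterated expectations can be checked numerically on a small joint pmf. The distribution below is illustrative (my own, not from the lecture): $Y \in \{0,1\}$ with $P(Y=0)=0.4$, and $X \mid Y$ uniform on a small set.

```python
# Numeric check of E[X] = E[E[X|Y]] on a small, made-up joint pmf:
# P(Y=0) = 0.4 with X|Y=0 uniform on {1, 2}
# P(Y=1) = 0.6 with X|Y=1 uniform on {2, 4}
p_Y = {0: 0.4, 1: 0.6}
E_X_given_Y = {0: (1 + 2) / 2, 1: (2 + 4) / 2}  # 1.5 and 3.0

# Outer expectation over Y (tower property)
E_tower = sum(p_Y[y] * E_X_given_Y[y] for y in p_Y)

# Direct expectation over the joint pmf
joint = {(1, 0): 0.2, (2, 0): 0.2, (2, 1): 0.3, (4, 1): 0.3}
E_direct = sum(x * p for (x, y), p in joint.items())

assert abs(E_tower - E_direct) < 1e-12  # both equal 2.4
```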
Definition: Classification of Stochastic Processes
A stochastic process is classified by its state space ($S$) and its index set ($T$):
- Discrete-time, discrete state space: $T = \{0, 1, 2, \dots\}$, $S$ countable. Example: Markov chain.
- Continuous-time, discrete state space: $T = [0, \infty)$, $S$ countable. Example: Poisson process.
- Discrete-time, continuous state space: $T = \{0, 1, 2, \dots\}$, $S \subseteq \mathbb{R}$.
- Continuous-time, continuous state space: $T = [0, \infty)$, $S \subseteq \mathbb{R}$. Example: Brownian motion.
Week 2
Definition: Stochastic process
A stochastic process $\{X(t),\ t \in T\}$ is a collection of random variables indexed by $t \in T$.
- Process: Depends on time
- Stochastic: Probabilistic
- Stochastic Process: Sequence (implies ordering) of random variables indexed by time
Definition: State
The values taken by a stochastic process are referred to as the states of the process.
About State vs Sample
- State: A possible value the random variable can take - it exists whether or not you’ve observed it.
- Sample: A realized observation from a distribution - a concrete outcome you observe.
In the case where we are measuring a Markov chain of whether it rains or not, the state space is $S = \{0\ (\text{rain}),\ 1\ (\text{no rain})\}$.
Definition: Markov property
Let $\{X_n,\ n \ge 0\}$ be a stochastic process.
The Markov property is the property that, given the value of $X_n$, the values of $X_m$ for $m > n$ are not influenced by the values of $X_0, \dots, X_{n-1}$.
That is, the probability of any particular future behavior of the process, when its current state is known exactly, is not altered by additional knowledge concerning its past behavior.
Definition: Markov process
A Markov process is a stochastic process with the Markov property.
Definition: Absorbing state
A state whose transition probability to itself is 1.
All other transition probabilities for such a state are 0, so it will ALWAYS transition to itself.
Definition: Process
The entire evolving random system, not just one observation at one time.
Definition: Discrete-time Markov chain
The random variable at index n+1 depends only on the random variable at index n.
A state depends on the previous state, and only on that previous state.
Tip
State n+1 is dependent on n, and only on n.
Definition: Transition probability
Suppose that the (conditional) probability that $X_{n+1}$ is in state $j$ given that $X_n$ is in state $i$ is constant over time, i.e., $P(X_{n+1} = j \mid X_n = i) = P_{ij}$. Then $P_{ij}$ is called the one-step transition probability.
Definition: Transition probability matrix (TPM)
Full set of all one-step transition probabilities $P_{ij}$ in matrix form:
$$P = \begin{pmatrix} P_{11} & P_{12} & \cdots \\ P_{21} & P_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}$$
$P_{ij}$ is the transition probability from state $i$ to state $j$.
Often abbreviated to TPM.
Interpretation:
- $P_{12}$ is the probability of transitioning from state 1 to state 2 in one step (transition probability)
- $P_{34}$: transition probability from state 3 to 4
- $P_{55}$: transition probability from state 5 to 5
Procedure: Proving a Valid TPM
- All entries $P_{ij} \ge 0$
- Each row sums to 1: $\sum_j P_{ij} = 1$ for all $i$
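The two conditions can be checked mechanically; a minimal sketch (the function name is my own):

```python
# Validate a transition probability matrix: nonnegative entries, rows sum to 1.
def is_valid_tpm(P, tol=1e-9):
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )

print(is_valid_tpm([[0.7, 0.3], [0.4, 0.6]]))  # True
print(is_valid_tpm([[0.7, 0.4], [0.4, 0.6]]))  # False (first row sums to 1.1)
```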
Definition: Chapman-Kolmogorov
The transition probability to go from state $i$ to $j$ in $n + m$ steps:
$$P_{ij}^{(n+m)} = \sum_{k=1}^{M} P_{ik}^{(n)} P_{kj}^{(m)}$$
where $M$ is the number of all states
See proof
Writing $n + m$ steps expresses mathematically that we can decompose any multi-step transition into two parts.
We can then “recurse” by breaking the $n$ steps into $k + l$ steps, and so forth.
Example:
- 5-step transition from rain to rain: $P_{rr}^{(5)} = P_{rr}^{(2)} P_{rr}^{(3)} + P_{rs}^{(2)} P_{sr}^{(3)}$, i.e.
- (2 steps to rain, then 3 steps rain→rain) +
- (2 steps to sunny, then 3 steps sunny→rain)
- 7-step transition:
- 1-step + 6-step
- 2-step + 5-step
- 3-step + 4-step
- Whatever decomposition is convenient
The equation just formalizes: “to get somewhere in multiple steps, you must pass through some intermediate state.”
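The identity can be checked numerically with plain list-of-lists matrices (helper names are my own; the 2-state chain is a hypothetical example):

```python
# Chapman-Kolmogorov check: P^(n+m) = P^(n) * P^(m), here with n = 2, m = 3.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, n):
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

P = [[0.7, 0.3], [0.4, 0.6]]
lhs = mat_pow(P, 5)                           # 5-step transition matrix
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))   # decomposed as 2 + 3 steps
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```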
Definition: n-step Transition Matrix
$P_{ij}^{(n)}$: probability that state $i$ goes to $j$ in $n$ steps.
By the Chapman-Kolmogorov equation: $P^{(n+m)} = P^{(n)} P^{(m)}$ (as a matrix product).
Also $P^{(n)} = P^n$ (see proof).
Proof: n-step Transition Matrix Identity
Theorem: For any $n \ge 1$, the n-step transition matrix is the $n$-th power of the transition matrix $P$, i.e., $P^{(n)} = P^n$.
Proof by Induction:
- Base Case ($n = 1$): By definition of the transition probability, $P_{ij}^{(1)}$ is the probability of transitioning from state $i$ to $j$ in one step, which is $P_{ij}$. Thus, $P^{(1)} = P$.
- Inductive Hypothesis: Assume $P^{(n)} = P^n$ holds for some $n \ge 1$.
- Inductive Step: Consider $P^{(n+1)}$. By the Chapman-Kolmogorov equation with $m = 1$:
  $$P_{ij}^{(n+1)} = \sum_k P_{ik}^{(n)} P_{kj}^{(1)}$$
  Substituting $P_{kj}^{(1)} = P_{kj}$ and using the inductive hypothesis $P_{ik}^{(n)} = (P^n)_{ik}$:
  $$P_{ij}^{(n+1)} = \sum_k (P^n)_{ik} P_{kj}$$
  The right-hand side is the entry in row $i$ and column $j$ of the matrix product $P^n \cdot P = P^{n+1}$. Therefore, $P^{(n+1)} = P^{n+1}$.
By induction, the identity holds for all $n \ge 1$.
Example: Transition matrix
Consider the state space for whether it rains today (state 0) or is sunny (state 1).
Let the transition probabilities be:
- $\alpha = P_{00}$: probability it rains (0) today and still rains (0) tomorrow
- $\beta = P_{10}$: probability it’s sunny (1) today and changes to rain (0) tomorrow
The transition matrix is
$$P = \begin{pmatrix} \alpha & 1 - \alpha \\ \beta & 1 - \beta \end{pmatrix}$$
Where
- $1 - \alpha = P_{01}$: probability it rains (0) today and changes to sunny (1) tomorrow
- $1 - \beta = P_{11}$: probability it’s sunny (1) today and stays sunny tomorrow
Example: n-step transition matrix
Following from the transition matrix example, let $\alpha = 0.7$ and $\beta = 0.4$:
$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$
Using the Chapman-Kolmogorov equation, we can compute the probability that if it rains today, it will also rain 4 days later ($P_{00}^{(4)}$) via $P^{(4)} = P^4 = (P^2)^2$.
Thus,
$$P^{(4)} = \begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix}$$
And,
- $P_{00}^{(4)} = 0.5749$: If it rains today, the probability it rains 4 days later is 57.49%
- $P_{01}^{(4)} = 0.4251$: If it rains today, the probability it’s sunny 4 days later is 42.51%
- $P_{10}^{(4)} = 0.5668$: If it’s sunny today, the probability it rains 4 days later is 56.68%
- $P_{11}^{(4)} = 0.4332$: If it’s sunny today, the probability it’s sunny 4 days later is 43.32%
Note: Each row sums to 1 (total probability from any starting state).
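The quoted percentages can be recomputed directly; with $P_{00} = 0.7$ and $P_{10} = 0.4$ (the values that reproduce the 57.49% / 56.68% figures), squaring $P$ twice gives $P^{(4)}$:

```python
# Recompute the 4-step weather probabilities from the one-step matrix.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.7, 0.3], [0.4, 0.6]]
P2 = mat_mul(P, P)     # 2-step matrix
P4 = mat_mul(P2, P2)   # 4-step matrix

print([[round(x, 4) for x in row] for row in P4])
# [[0.5749, 0.4251], [0.5668, 0.4332]]
```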
Examples: More complex n-step transition matrix
Urn Ball Replacement - Step-by-Step Illustration
Setup:
- Urn always has exactly 2 balls (red or blue)
- At each step: pick a ball uniformly at random, then put back a ball of the same color with probability 0.8 or of the opposite color with probability 0.2
- Initial: both balls red (state 2)
- Find: P(5th ball selected is red)
Understanding Transitions from State 2 (both red):
You pick a ball. Since both are red, you definitely pick red.
- With prob 0.8: replace with red → stay at 2 red balls (P_{22} = 0.8)
- With prob 0.2: replace with blue → now have 1 red, 1 blue (P_{21} = 0.2)
- P_{20} = 0 (impossible to go from 2 red to 0 red in one step)
Understanding Transitions from State 1 (1 red, 1 blue):
You pick a ball. Equal chance of picking red or blue (each prob 0.5).
Case 1: Pick red (prob 0.5)
- Replace with red (prob 0.8) → still 1R, 1B (state 1)
- Replace with blue (prob 0.2) → now 0R, 2B (state 0)
Case 2: Pick blue (prob 0.5)
- Replace with blue (prob 0.8) → still 1R, 1B (state 1)
- Replace with red (prob 0.2) → now 2R, 0B (state 2)
So:
- P_{10} = 0.5 × 0.2 = 0.1
- P_{11} = 0.5 × 0.8 + 0.5 × 0.8 = 0.8
- P_{12} = 0.5 × 0.2 = 0.1
Understanding Transitions from State 0 (both blue):
Symmetric to state 2:
- P_{00} = 0.8 (pick blue, replace with blue)
- P_{01} = 0.2 (pick blue, replace with red)
- P_{02} = 0
Finding P(5th ball is red):
The 5th ball is red if at the moment of the 5th selection, we pick a red ball.
Starting at state 2 (both red initially), after 4 replacements we need the distribution over states, then calculate probability of picking red from each state.
Compute P^(4) starting from state 2, then:
- From state 0: prob of picking red = 0
- From state 1: prob of picking red = 0.5
- From state 2: prob of picking red = 1
P(5th ball red) = P^{(4)}_{20} × 0 + P^{(4)}_{21} × 0.5 + P^{(4)}_{22} × 1
Computing P^(4) using Chapman-Kolmogorov
We’ll compute $P^{(4)} = P^{(2)} \times P^{(2)}$, breaking the 4 steps into two 2-step segments.
Step 1: Compute $P^{(2)} = P \times P$:
$$P = \begin{pmatrix} 0.8 & 0.2 & 0 \\ 0.1 & 0.8 & 0.1 \\ 0 & 0.2 & 0.8 \end{pmatrix}, \qquad P^{(2)} = \begin{pmatrix} 0.66 & 0.32 & 0.02 \\ 0.16 & 0.68 & 0.16 \\ 0.02 & 0.32 & 0.66 \end{pmatrix}$$
Step 2: Compute $P^{(4)} = P^{(2)} \times P^{(2)}$. The row for state 2 is
$$\left(P^{(4)}_{20},\ P^{(4)}_{21},\ P^{(4)}_{22}\right) = (0.0776,\ 0.4352,\ 0.4872)$$
Final Answer:
$$P(\text{5th ball red}) = 0.0776 \times 0 + 0.4352 \times 0.5 + 0.4872 \times 1 = 0.7048$$
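The whole computation fits in a few lines of code, using the transition probabilities derived step by step above (state = number of red balls):

```python
# Urn example: from state 2 (both red), compute P^(4), then
# P(5th ball red) = sum_k P4[2][k] * P(pick red | state k).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.8, 0.2, 0.0],   # state 0: both blue
     [0.1, 0.8, 0.1],   # state 1: one red, one blue
     [0.0, 0.2, 0.8]]   # state 2: both red
P4 = mat_mul(mat_mul(P, P), mat_mul(P, P))

pick_red = [0.0, 0.5, 1.0]   # chance the drawn ball is red in each state
answer = sum(P4[2][k] * pick_red[k] for k in range(3))
print(round(answer, 4))  # 0.7048
```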
Definition: Random Walk Markov Chain
A Random Walk is a Markov chain whose state space is given by the integers, where at each point in time the process moves either:
- One step to the right with probability $p$
- One step to the left with probability $1 - p$
where $0 < p < 1$.
Examples: Random walk model
A Markov chain whose state space is given by the integers $i = 0, \pm 1, \pm 2, \dots$ is said to be a random walk if, for some number $0 < p < 1$,
$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 0, \pm 1, \pm 2, \dots$$
The preceding Markov chain is called a random walk, for we may think of it as a model for an individual walking on a straight line who at each point in time either takes one step to the right with probability $p$ or one step to the left with probability $1 - p$.
Examples: Gambling model
Consider a gambler who, at each play of the game, either wins \$1 with probability $p$ or loses \$1 with probability $1 - p$. If we suppose that our gambler quits playing either when he goes broke or he attains a fortune of $\$N$, then the gambler’s fortune is a Markov chain having transition probabilities
$$P_{i,i+1} = p = 1 - P_{i,i-1}, \quad i = 1, \dots, N - 1, \qquad P_{00} = P_{NN} = 1$$
States 0 and $N$ are called absorbing states since once entered they are never left. Note that the preceding is a finite-state random walk with absorbing barriers (states 0 and $N$).
Gambler’s Ruin Simulation
Parameters: p = 0.6 (win probability), N = 10 (target), starting fortune = $5
Simulation 1:
- Start: $5
- Game 1: P₅,₆ = 0.6 → Win → $6
- Game 2: P₆,₅ = 0.4 → Lose → $5
- Game 3: P₅,₆ = 0.6 → Win → $6
- Game 4: P₆,₇ = 0.6 → Win → $7
- Game 5: P₇,₆ = 0.4 → Lose → $6
- Game 6: P₆,₇ = 0.6 → Win → $7
- Game 7: P₇,₈ = 0.6 → Win → $8
- Game 8: P₈,₉ = 0.6 → Win → $9
- Game 9: P₉,₁₀ = 0.6 → Win → $10
- Game 10: P₁₀,₁₀ = 1.0 → Stay → $10 (absorbed)
- Result: Reached target in 9 games
Simulation 2:
- Start: $5
- Game 1: P₅,₄ = 0.4 → Lose → $4
- Game 2: P₄,₃ = 0.4 → Lose → $3
- Game 3: P₃,₂ = 0.4 → Lose → $2
- Game 4: P₂,₃ = 0.6 → Win → $3
- Game 5: P₃,₂ = 0.4 → Lose → $2
- Game 6: P₂,₁ = 0.4 → Lose → $1
- Game 7: P₁,₀ = 0.4 → Lose → $0
- Game 8: P₀,₀ = 1.0 → Stay → $0 (absorbed)
- Result: Went broke in 7 games
- $P_{i,i+1}$ = p = 0.6 for i ∈ {1,2,…,9}
- $P_{i,i-1}$ = 1-p = 0.4 for i ∈ {1,2,…,9}
- P₀,₀ = 1 (absorbing)
- P₁₀,₁₀ = 1 (absorbing)
The Transition matrix P is (N+1) × (N+1). For N=10, it has a 1 in the top-left entry (state 0) and the bottom-right entry (state 10), while each interior row $i$ has 0.4 in column $i-1$, 0.6 in column $i+1$, and zeros elsewhere.
Where:
- Row i represents current state i (fortune = $i)
- Column j represents next state j
- States: 0, 1, 2, …, 10
- P₀,₀ = 1 (row 1: stay broke)
- P₁₀,₁₀ = 1 (row 11: stay at $10)
- For i ∈ {1,…,9}: P_{i,i-1} = 0.4, P_{i,i+1} = 0.6
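The boundary and interior rows above can be generated programmatically; a small sketch with the stated parameters:

```python
# Build the (N+1) x (N+1) gambler's ruin transition matrix for N = 10, p = 0.6.
N, p = 10, 0.6
P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][0] = 1.0   # state 0 (broke) is absorbing
P[N][N] = 1.0   # state N (target) is absorbing
for i in range(1, N):
    P[i][i + 1] = p        # win $1
    P[i][i - 1] = 1 - p    # lose $1

# every row must sum to 1 (valid TPM)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```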
Proof: Chapman-Kolmogorov Equation
Conditioning on the state at time $n$ and using the Markov property:
$$P_{ij}^{(n+m)} = P(X_{n+m} = j \mid X_0 = i) = \sum_k P(X_{n+m} = j \mid X_n = k, X_0 = i)\, P(X_n = k \mid X_0 = i)$$
$$= \sum_k P(X_{n+m} = j \mid X_n = k)\, P(X_n = k \mid X_0 = i) = \sum_k P_{kj}^{(m)} P_{ik}^{(n)}$$
Week 3
Definition: Accessible State
State $j$ is accessible from state $i$ if $P_{ij}^{(n)} > 0$ for some $n \ge 0$.
This means that starting from state $i$, it is possible to eventually enter state $j$.
If $j$ is not accessible from $i$, then $P_{ij}^{(n)} = 0$ for all $n \ge 0$.
Definition: Communication
Two states $i$ and $j$ communicate if they are mutually accessible:
- $i \to j$ ($j$ accessible from $i$)
- $j \to i$ ($i$ accessible from $j$)
Notation: $i \leftrightarrow j$
Properties (equivalence relation):
- Reflexive: $i \leftrightarrow i$ (since $P_{ii}^{(0)} = 1$)
- Symmetric: If $i \leftrightarrow j$, then $j \leftrightarrow i$
- Transitive: If $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$
Proof of transitivity: If $i \to j$ and $j \to k$, there exist $n, m$ with $P_{ij}^{(n)} > 0$ and $P_{jk}^{(m)} > 0$; then by Chapman-Kolmogorov:
$$P_{ik}^{(n+m)} = \sum_r P_{ir}^{(n)} P_{rk}^{(m)} \ge P_{ij}^{(n)} P_{jk}^{(m)} > 0$$
Definition: Communicating Classes
States that communicate form a class. Classes are either:
- Identical (same class), or
- Disjoint (no overlap)
Definition: Irreducible
The Markov chain is irreducible if there is only one communicating class (all states communicate with one another).
Definition: Recurrent vs Transient
Transient: Will eventually leave and never come back
Recurrent: Will always come back, no matter how many times it leaves
Notice that unlike the transition probability, this concerns whether or not a state will ever return to itself in the future.
Let $f_i$ = probability that, starting from state $i$, the process will ever re-enter state $i$.
- Recurrent: $f_i = 1$ (will return with probability 1)
- Transient: $f_i < 1$ (positive probability of never returning)
Key properties:
- If $i$ is recurrent: the process returns to $i$ infinitely many times
- If $i$ is transient: the number of visits to $i$ has a geometric distribution with finite mean $1/(1 - f_i)$
- State $i$ is recurrent iff the expected number of time periods spent in state $i$ is infinite: $\sum_{n=1}^{\infty} P_{ii}^{(n)} = \infty$
Definition: Period, Periodic, Aperiodic
The period of state $i$ is defined as
$$d(i) = \gcd\{n \ge 1 : P_{ii}^{(n)} > 0\},$$
the GCD of all step counts at which it is possible to return to state $i$.
- Periodic: period $d > 1$ — returns are confined to multiples of $d$
- Aperiodic: period $d = 1$ — no such rhythm, can return at irregular times
About aperiodic vs periodic state
Everyday illustrations of Periodic and Aperiodic states:
Periodic State
A state has period $d$ if you can only return to that state in a number of steps $n$ that is a multiple of $d$ (i.e., $n$ is divisible by $d$).
Everyday Example: A Pendulum or a Swing
Imagine a child on a swing. Let’s define the State as “The swing is at the highest point on the right side.”
- If the swing is at the highest point now (Step 0), it must swing down to the middle, then to the left, then back to the middle, before returning to the right.
- It takes exactly 2 full “ticks” of the movement to return to that specific spot.
- You can only be in that state at Step 2, Step 4, Step 6, etc.
- Because you can never be in that state at Step 1, 3, or 5, the state is Periodic with a period of $d = 2$.
Aperiodic State
A state is aperiodic if its period $d = 1$. This means you can return to the state at irregular times, and there is no fixed “rhythm” or multiple you must follow.
Everyday Example: Checking Your Phone for Notifications
Imagine the State is “You are looking at your phone screen.”
- You look at your phone now (Step 0).
- You might look at it again 1 minute later ($n = 1$).
- Or you might wait and look at it 5 minutes later ($n = 5$).
- Or 12 minutes later ($n = 12$).
- Because it is possible to return to the state in just 1 step ($n = 1$), the greatest common divisor of all possible return times is 1.
- There is no “rule” saying you can only check every 3 minutes; you can return to this state whenever you like. Therefore, it is Aperiodic.
Cheatsheet: State Classification
| Property | Definition |
|---|---|
| Accessible ($i \to j$) | $P_{ij}^{(n)} > 0$ for some $n \ge 0$ |
| Communication ($i \leftrightarrow j$) | $i \to j$ and $j \to i$ |
| Recurrent | $f_i = 1$ (returns infinitely often) |
| Transient | $f_i < 1$ (finite expected visits) |
| Periodic | Returns at regular intervals ($d > 1$) |
| Aperiodic | Returns at irregular times ($d = 1$) |
| Irreducible | All states communicate |
| Regular | $P^n$ has all positive entries for some $n$ |
Example: Transient state
When the chain contains an absorbing state, it is eventually absorbed there forever and never visits the other states again, which makes those other states “transient”.
Note that the definition of transience only concerns the probability of a state returning to itself, without mentioning “absorbing”; absorbing states are just one scenario that produces transience.
Another example: if $A$ and $B$ are two disjoint communicating classes, neither is necessarily transient just from that alone.
Transience depends on whether you can escape the class. If from class $A$ there is a positive probability of transitioning into $B$ but not vice versa, then $A$ is transient, because once you leave, you never return.
Definition: Limiting Probability
For a Markov chain $\{X_n\}$, the limiting probability $\pi_j$ of state $j$ represents the long-run probability that the process will be in state $j$, independent of the initial state $i$:
$$\pi_j = \lim_{n \to \infty} P_{ij}^{(n)}$$
where $P_{ij}^{(n)}$ is the n-step transition probability from state $i$ to state $j$.
The limiting probability is related to the mean recurrence time $m_j$ (expected time to return to state $j$ starting from $j$) by:
$$\pi_j = \frac{1}{m_j}$$
Definition: Regular Transition Probability Matrix
A TPM $P$ is regular if:
- For every pair $i, j$, there is a path (some $n$) for which $P_{ij}^{(n)} > 0$;
- There is at least one state $i$ for which $P_{ii} > 0$.
Equivalently, $P^n$ has all positive entries for some $n \ge 1$.
Properties:
- If $P^n$ has no zero elements for some $n$, then $P^m$ will also have no zero elements for all $m \ge n$
- A regular chain is necessarily irreducible and aperiodic (see proof)
Condition 2 ensures the chain is aperiodic.
Without it, a chain could be irreducible (condition 1 satisfied) but still periodic — meaning it cycles through states in a fixed pattern and never settles into a stationary distribution. A self-loop forces the chain to be able to “stay” at a state, breaking any fixed cycle and guaranteeing period 1.
Together, irreducible + aperiodic = regular = guaranteed convergence to a unique stationary distribution regardless of initial state.
Theorem: Limiting Distribution Theorem
Let $P$ be a regular transition probability matrix (TPM) on a finite state space.
Then $\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j$ exists and is independent of the initial state $i$.
The limiting distribution satisfies:
$$\pi = \pi P, \qquad \sum_j \pi_j = 1$$
Definition: Stationary Distribution
A probability vector $\pi$ is a stationary distribution if it satisfies $\pi = \pi P$ (i.e., $\pi_j = \sum_i \pi_i P_{ij}$) with $\sum_j \pi_j = 1$. If the initial state is chosen according to $\pi$, then the probability of being in state $j$ at any time $n$ is also $\pi_j$ (i.e., $P(X_n = j) = \pi_j$ for all $n$).
Definition: Ergodic State
A state is ergodic if it is both positive recurrent (expected return time is finite) and aperiodic (period $d = 1$).
- In a finite irreducible Markov chain, all states are positive recurrent.
- If the chain is also aperiodic, all states are ergodic.
- For an irreducible ergodic Markov chain, the limiting distribution always exists and is equal to the unique stationary distribution.
- The limiting distribution, when it exists, is always a stationary distribution. However, a stationary distribution may exist without a limiting distribution (e.g., a periodic chain can have a stationary distribution but no limiting distribution).
About positive recurrent state
Positive recurrent
I imagine a scenario of a “closed” communicating class.
So in the Markov chain 0→1→2→0, $m_0 = 3$, because starting from 0 we expect to return to 0 in exactly 3 steps.
The example above is “deterministic”. When each state can move to several possible states, the chain is “probabilistic”, and we deal with the notion of the expectation of the time taken to return to 0.
Null recurrent
A symmetric random walk on $\mathbb{Z}$ (with $p = 1/2$) has $m_i = \infty$. The way I see it: at time $n$ we could be anywhere in a wide range of states, so the walk is still certain to “return” to state $i$ eventually ($f_i = 1$), but the expected return time is infinite, since we could have wandered arbitrarily far away.
Example: Social Class Problem
Given the TPM $P$ for the three social classes (states 0, 1, 2).
Solving $\pi = \pi P$ with $\pi_0 + \pi_1 + \pi_2 = 1$:
Matching components gives one equation per state, $\pi_j = \sum_i \pi_i P_{ij}$:
- From equation 1: express $\pi_0$ in terms of $\pi_1$ and $\pi_2$
- From equation 3: express $\pi_2$ in terms of $\pi_1$ (after eliminating $\pi_0$)
- Substituting into $\pi_0 + \pi_1 + \pi_2 = 1$ yields $\pi_1$
- Substituting back into the expressions for $\pi_0$ and $\pi_2$, we obtain the limiting distribution
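The same fixed point can be found numerically by repeated multiplication (power method). The 3×3 matrix below is illustrative (my own assumption, shaped like a social-mobility TPM, not the lecture's values):

```python
# Power-method solve of pi = pi * P for a 3-state chain (matrix is illustrative).
P = [[0.45, 0.48, 0.07],
     [0.05, 0.70, 0.25],
     [0.01, 0.50, 0.49]]

pi = [1 / 3] * 3                 # start from the uniform distribution
for _ in range(500):             # pi <- pi * P until convergence
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

print([round(x, 3) for x in pi])  # approx [0.062, 0.623, 0.314]
```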
Comparison: Limiting vs. Hitting Probability
Although both are often written with similar notation in lecture slides, they represent fundamentally different concepts:
| Feature | Limiting Probability | Hitting Probability |
|---|---|---|
| Question | "In the long run, what proportion of time is spent in state $j$?" | "Starting at state $i$, what is the chance I ever reach state $j$?" |
| System State | Describes steady state (long-run). | Describes the fate of a single path. |
| Dependence | Independent of starting state. | Heavily depends on starting state $i$. |
| Matrix Math | Left-multiplication: $\pi = \pi P$ | Right-multiplication: $h = P h$ |
| Indices | Summing down a column: $\pi_j = \sum_i \pi_i P_{ij}$ | Summing across a row: $h_i = \sum_j P_{ij} h_j$ |
| Boundaries | No boundaries; requires $\sum_j \pi_j = 1$. | Requires boundaries (target = 1, other absorbing = 0). |
TIP
- Use Limiting Probability when asked for “Long-run proportion” or “Steady state”.
- Use Hitting Probability (via First Step Analysis) when asked for “Probability of reaching/absorption” or “Chance of winning”.
Procedure: First Step Analysis
First Step Analysis (FSA) allows you to calculate specific properties of a Markov Chain, such as the probability of reaching a target state or the time until absorption, by analyzing the very first transition.
Prerequisites
- A defined Transition Probability Matrix (TPM).
- Clear identification of target states (hitting boundaries) and absorbing states.
Calculating hitting probabilities
Use this procedure to determine the probability that a process starting in state $i$ will eventually hit a specific target state $j$.
- Identify boundary conditions. Set $h_j = 1$ for the target state and $h_k = 0$ for every other absorbing state $k$.
- Formulate equations for transient states. For each state $i$ that is not a boundary or absorbing state, express the hitting probability as a weighted sum of the probabilities from its possible next states: $h_i = \sum_k P_{ik} h_k$
- Solve the linear system. Solve the resulting set of equations for all unknown values $h_i$.
Outcome: You will obtain the exact probability of the process ever entering the target state $j$.
Calculating expected time to absorption
Use this procedure to find the average number of steps required to reach any absorbing state from a starting state $i$.
- Set boundary conditions. For all absorbing states $k$, the expected time to absorption is 0 ($\mu_k = 0$).
- Formulate equations for transient states. For each transient state $i$, add 1 (representing the current step) to the expected time remaining from all possible next states $j$: $\mu_i = 1 + \sum_j P_{ij} \mu_j$
- Solve the linear system. Solve the resulting equations to find the expected time $\mu_i$ for each state.
Outcome: You will determine the mean number of transitions until the process terminates in an absorbing state.
Example: First Step Analysis - Gambler’s Ruin
Using the scenario from the gambling model example, consider a gambler with target $N = 10$ and win probability $p = 0.6$ (lose probability $q = 0.4$).
We want to find the probability of reaching the target ($N = 10$) starting from state $i = 5$.
Hitting probability
Let $h_i$ be the probability of hitting state $10$ starting from state $i$.
- Boundary Conditions: $h_{10} = 1$ (target reached), $h_0 = 0$ (broke).
- Equations for Transient States: $h_i = 0.6\, h_{i+1} + 0.4\, h_{i-1}$ for $i = 1, \dots, 9$.
- Solution: $h_i = \dfrac{1 - (q/p)^i}{1 - (q/p)^N}$, so $h_5 = \dfrac{1 - (2/3)^5}{1 - (2/3)^{10}} \approx 0.8836$.
Expected time to absorption
Let $\mu_i$ be the expected number of steps until reaching an absorbing state (0 or 10).
- Boundary Conditions: $\mu_0 = \mu_{10} = 0$.
- Equations: $\mu_i = 1 + 0.6\, \mu_{i+1} + 0.4\, \mu_{i-1}$ for $i = 1, \dots, 9$.
- Solution: $\mu_i = \dfrac{i}{q - p} - \dfrac{N}{q - p} \cdot \dfrac{1 - (q/p)^i}{1 - (q/p)^N}$, so $\mu_5 \approx 19.18$ steps.
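Both linear systems can also be solved numerically without the closed forms, by simply sweeping the First Step Analysis equations until they stabilize (a minimal sketch, assuming $N = 10$, $p = 0.6$, start at $i = 5$):

```python
# First Step Analysis for gambler's ruin: iterate
# h_i = p*h_{i+1} + q*h_{i-1} and mu_i = 1 + p*mu_{i+1} + q*mu_{i-1}
# until convergence (sweep iteration converges for absorbing chains).
N, p, q = 10, 0.6, 0.4

h = [0.0] * (N + 1)    # h[i] = P(hit N before 0 | start at i); h[0] = 0
h[N] = 1.0             # boundary: target reached
mu = [0.0] * (N + 1)   # mu[i] = expected steps to absorption; mu[0] = mu[N] = 0
for _ in range(20000):
    for i in range(1, N):
        h[i] = p * h[i + 1] + q * h[i - 1]
        mu[i] = 1 + p * mu[i + 1] + q * mu[i - 1]

print(round(h[5], 4), round(mu[5], 2))  # 0.8836 19.18
```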
Proof: Regular chain is necessarily irreducible and aperiodic
A TPM is regular if $P^n$ has all positive entries for some $n \ge 1$.
This means that after $n$ steps, there is a non-zero probability of moving from any state $i$ to any state $j$.
1. Proof of Irreducibility
Definition: A chain is irreducible if all states communicate ($i \leftrightarrow j$ for all $i, j$). This means for any two states $i$ and $j$, it is possible to go from $i$ to $j$ and from $j$ to $i$.
Proof:
- By the definition of a regular chain, there exists some $n$ such that $P_{ij}^{(n)} > 0$ for all $i, j$.
- Since $P_{ij}^{(n)} > 0$, state $j$ is accessible from state $i$ ($i \to j$).
- Similarly, since $P_{ji}^{(n)} > 0$, state $i$ is accessible from state $j$ ($j \to i$).
- Because every state can reach every other state, they all belong to the same communicating class.
- Conclusion: The chain is irreducible.
2. Proof of Aperiodicity
Definition: A state $i$ is aperiodic if its period $d(i) = 1$. The period is the Greatest Common Divisor (GCD) of all $n \ge 1$ such that $P_{ii}^{(n)} > 0$.
Proof:
- In a regular chain, we know $P_{ii}^{(n)} > 0$ for all states $i$. This means $n$ is one possible time at which the process can return to state $i$.
- As noted earlier: “If $P^n$ has no zero elements, then $P^m$ will also have no zero elements for all $m \ge n$.”
- Why? Because $P^{n+1} = P \cdot P^n$. To get from $i$ to $j$ in $n + 1$ steps, you just need to take one step to some intermediate state $k$, and then take $n$ steps to $j$. Since $P^n$ is all positive, and you must be able to go somewhere in one step, a path always exists.
- This implies that if $P_{ii}^{(n)} > 0$, then $P_{ii}^{(n+1)} > 0$ as well.
- The period must be a common divisor of $n$ and $n + 1$.
- The only positive integer that divides two consecutive integers ($n$ and $n + 1$) is 1.
- Conclusion: $d(i) = 1$ for all states, so the chain is aperiodic.
Summary
- Irreducible because “all positive entries” means you can get anywhere from anywhere.
- Aperiodic because “all positive entries” means you can return to a state at both time $n$ and time $n + 1$, breaking any fixed “rhythm” or period.
Week 4
Definition: Mean Time Spent in Transient States
For a finite Markov chain, let $T$ be the set of transient states. The transition matrix restricted to transient states is $P_T$ (the submatrix of $P$ with rows and columns in $T$).
The matrix $S$ contains the expected number of time periods spent in transient state $j$ given starting in transient state $i$:
$$S = (I - P_T)^{-1} = I + P_T + P_T^2 + \cdots$$
Properties:
- $s_{ij}$ (entry of $S$): Expected number of visits to state $j$ starting from $i$ (counting the initial visit when $i = j$).
- $I$: Identity matrix of size $|T|$.
About relation of matrix mu and absorption time
Relating the mean time matrix $S$ (Definition Mean Time Spent in Transient States) to the expected time to absorption (Expected time to absorption): $\mu_i = \sum_{j \in T} s_{ij}$, the $i$-th row sum of $S$.
Essentially,
- $\mu_i$ is a single value (or vector element) representing total time until any absorption.
- $S$ is a matrix containing detailed information about visits to each transient state before absorption.
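The series form $S = I + P_T + P_T^2 + \cdots$ makes $S$ easy to approximate numerically. A tiny sketch with an illustrative transient block of my own (two transient states; the leftover row probability leaks to an absorbing state):

```python
# Expected visits to transient states: S = (I - P_T)^(-1) = I + P_T + P_T^2 + ...
PT = [[0.4, 0.3],
      [0.2, 0.5]]   # illustrative transient block (substochastic rows)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at the identity I
term = [[1.0, 0.0], [0.0, 1.0]]   # current power of P_T
for _ in range(200):              # geometric series converges fast here
    term = mat_mul(term, PT)
    S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]

mu0 = S[0][0] + S[0][1]  # mean time to absorption from state 0 = row sum
print(round(S[0][0], 4), round(mu0, 4))  # 2.0833 3.3333
```

The exact inverse here is $(I - P_T)^{-1} = \frac{1}{0.24}\begin{pmatrix}0.5 & 0.3\\ 0.2 & 0.6\end{pmatrix}$, matching the series sum.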
Definition: Hitting Probability (Transient)
The probability that the Markov chain ever enters transient state $j$, given that it starts in transient state $i$, is $f_{ij}$:
$$f_{ij} = \frac{s_{ij} - \delta_{ij}}{s_{jj}}$$
where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise.
Example: Gambler’s Ruin (Matrix Analysis)
For a gambler’s ruin chain with win probability $p$ (lose probability $q = 1 - p$), starting with 3 units:
- $s_{3,5}$: Expected time spent at 5 units
- $s_{3,2}$: Expected time spent at 2 units
- Probability of ever hitting 1: $f_{3,1} = \dfrac{s_{3,1}}{s_{1,1}}$
Week 5
Definition: Branching Processes
A population model where each individual independently produces $j$ offspring with probability $P_j$, $j \ge 0$.
$X_n$ is the size of the $n$-th generation (with $X_0 = 1$).
| Moment | Formula |
|---|---|
| Mean offspring | $\mu = \sum_{j=0}^{\infty} j P_j$ |
| Expected size at gen $n$ | $E[X_n] = \mu^n$ |
| Variance at gen $n$ | $\text{Var}(X_n) = \sigma^2 \mu^{n-1} \dfrac{\mu^n - 1}{\mu - 1}$ for $\mu \ne 1$; $\text{Var}(X_n) = n\sigma^2$ for $\mu = 1$ |
Definition: Extinction Probability ($\pi_0$)
Let $\mu = \sum_j j P_j$ — the expected number of offspring per individual, i.e. the mean of the offspring distribution $\{P_j\}$.
- If $\mu \le 1$: $\pi_0 = 1$ (certain extinction, unless $P_1 = 1$).
- If $\mu > 1$: $\pi_0 < 1$ is the smallest positive solution to:
$$\pi_0 = \sum_{j=0}^{\infty} \pi_0^{\,j} P_j$$
Where:
- $\pi_0$ (left side) — the extinction probability we’re solving for
- $j$ — number of offspring a single individual produces
- $P_j$ — probability of producing exactly $j$ offspring
- $\pi_0^{\,j}$ (right side) — probability that all offspring lines eventually go extinct (each offspring independently has extinction probability $\pi_0$, and there are $j$ of them)
The right side sums over all possible offspring counts: the probability of having $j$ offspring times the probability that all $j$ lineages die out. This is a self-consistency equation — $\pi_0$ appears on both sides because extinction of the whole population requires extinction of every sub-lineage.
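Iterating the self-consistency equation from 0 converges to the smallest fixed point, which is exactly the extinction probability. A sketch with an illustrative offspring distribution of my own ($P_0 = 1/4$, $P_1 = 1/4$, $P_2 = 1/2$, so $\mu = 1.25 > 1$ and extinction is not certain):

```python
# Extinction probability as the smallest positive root of pi0 = sum_j P_j * pi0^j.
P = [0.25, 0.25, 0.5]                        # illustrative offspring distribution
mu = sum(j * pj for j, pj in enumerate(P))   # mean offspring = 1.25 > 1

pi0 = 0.0  # iterating from 0 converges to the SMALLEST fixed point
for _ in range(1000):
    pi0 = sum(pj * pi0 ** j for j, pj in enumerate(P))

print(mu, round(pi0, 6))  # 1.25 0.5
```

For this distribution the fixed-point equation $\pi_0 = \tfrac14 + \tfrac14\pi_0 + \tfrac12\pi_0^2$ factors as $(2\pi_0 - 1)(\pi_0 - 1) = 0$, so the smallest positive root is $1/2$.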
Definition: Time Reversible Markov Chains
A stationary Markov chain is time reversible if it satisfies the detailed balance equations
$$\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all } i, j$$
Tip
$\pi_i$ is the long-run/“overall” proportion/probability of time spent in state $i$. $P_{ij}$ is the transition probability from $i$ to $j$ in one time step.
$P_{ij}$ alone = probability of going $i \to j$ given you’re already in $i$.
$\pi_i P_{ij}$ = probability of going $i \to j$ unconditionally, accounting for how often you’re even in $i$ to begin with.
Detailed balance says this frequency is equal in both directions.
Essentially, the rate at which the process goes from $i$ to $j$ is equal to the rate from $j$ to $i$.
Theorem: Kolmogorov’s Criterion
A chain is time reversible iff for any cycle of states $i_1, i_2, \dots, i_k, i_1$, the product of transition probabilities is the same in both directions:
$$P_{i_1 i_2} P_{i_2 i_3} \cdots P_{i_k i_1} = P_{i_1 i_k} P_{i_k i_{k-1}} \cdots P_{i_2 i_1}$$
Examples
- Random Walk on Graphs (with symmetric weights $w_{ij} = w_{ji}$): time reversible with $\pi_i \propto \sum_j w_{ij}$.
- Ehrenfest Urn: The state $i$ (number of balls in Urn 1, out of $M$) is time reversible with $\pi_i = \binom{M}{i}\left(\tfrac{1}{2}\right)^M$.
- One-closer rule: A list-reordering rule that is time reversible, unlike move-to-front.
Definition: Markov Chain Monte Carlo (MCMC)
A method to sample from a distribution $\pi$ by constructing a time reversible Markov chain whose stationary distribution is $\pi$.
Metropolis-Hastings Algorithm:
- Propose a jump from $i$ to $j$ with probability $q(i, j)$.
- Accept with probability $\alpha(i, j) = \min\!\left(1,\ \dfrac{\pi_j\, q(j, i)}{\pi_i\, q(i, j)}\right)$; otherwise stay at $i$.
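A minimal sketch of the algorithm on a 5-point state space, with a symmetric ±1 proposal (so the acceptance ratio reduces to $\min(1, w_j / w_i)$). The target weights below are illustrative; note that only ratios of $\pi$ are needed, so the target never has to be normalized:

```python
# Metropolis sampler on {0,...,4}: symmetric +/-1 proposal, accept i -> j
# with probability min(1, w[j]/w[i]). Stationary distribution = w / sum(w).
import random
random.seed(0)

w = [1, 2, 3, 2, 1]    # unnormalized target weights (illustrative)
x = 2
counts = [0] * 5
steps = 200_000
for _ in range(steps):
    j = x + random.choice([-1, 1])        # symmetric proposal
    if 0 <= j < 5 and random.random() < min(1.0, w[j] / w[x]):
        x = j                             # accept the move
    counts[x] += 1                        # a rejected move still counts as a visit

freq = [c / steps for c in counts]
# long-run frequencies approach [1/9, 2/9, 3/9, 2/9, 1/9]
```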
Week 6
Definition: Counting Process
A stochastic process $\{N(t),\ t \ge 0\}$ is a counting process if $N(t)$ represents the total number of “events” that occur by time $t$.
Properties:
- $N(t) \ge 0$ and $N(t)$ is integer valued
- If $s < t$, then $N(s) \le N(t)$
- For $s < t$, $N(t) - N(s)$ equals the number of events in $(s, t]$
About Independent and Stationary Increments
- Independent increments: Numbers of events in disjoint time intervals are independent
- Stationary increments: Distribution of number of events in any interval depends only on the length of the interval (not the starting point)
Not all counting processes possess both properties.
Definition: Poisson Distribution
A random variable $X$ has a Poisson distribution with parameter $\lambda > 0$ if:
$$P(X = n) = e^{-\lambda} \frac{\lambda^n}{n!}, \qquad n = 0, 1, 2, \dots$$
MGF: $\phi(t) = E[e^{tX}] = e^{\lambda(e^t - 1)}$
Theorem: Sum of Poisson Random Variables
If $X \sim \text{Poisson}(\lambda_1)$ and $Y \sim \text{Poisson}(\lambda_2)$ are independent, then $X + Y \sim \text{Poisson}(\lambda_1 + \lambda_2)$.
Theorem: Poisson-Binomial Distribution
Let $N \sim \text{Poisson}(\lambda)$ and, conditional on $N$, let $X \sim \text{Binomial}(N, p)$. Then $X$ (unconditional on $N$) is Poisson distributed with parameter $\lambda p$.
About Axioms of Poisson Distribution
For a process occurring with rate $\lambda$, let $N(h)$ be the number of occurrences in an interval of length $h$. For infinitesimal $h$:
- $P(N(h) = 1) = \lambda h + o(h)$
- $P(N(h) \ge 2) = o(h)$
Where $f = o(h)$ means $\lim_{h \to 0} f(h)/h = 0$.
Definition: Poisson Process
A counting process $\{N(t),\ t \ge 0\}$ is a Poisson process of rate $\lambda > 0$ if:
- $N(0) = 0$
- Independent increments
- Stationary increments with $P(N(h) = 1) = \lambda h + o(h)$ and $P(N(h) \ge 2) = o(h)$
Equivalently: $P\big(N(t + s) - N(s) = n\big) = e^{-\lambda t} \dfrac{(\lambda t)^n}{n!}$ for all $s, t \ge 0$ and $n = 0, 1, 2, \dots$
About Properties of Poisson Process
- Time of first event: $T_1 \sim \text{Exponential}(\lambda)$
- Inter-arrival times $T_n$ (time between the $(n-1)$-th and $n$-th event) are i.i.d. $\text{Exponential}(\lambda)$
- Waiting time (time until the $n$-th event): $S_n = \sum_{i=1}^{n} T_i \sim \text{Gamma}(n, \lambda)$
Definition: Inter-arrival Times
The sequence $\{T_n,\ n \ge 1\}$ where $T_n$ is the elapsed time between the $(n-1)$-th and $n$-th event. For a Poisson process: $T_n$ are i.i.d. $\text{Exponential}(\lambda)$.
This follows from the memoryless property — the process “restarts” probabilistically at each event.
Definition: Waiting Times
$S_n$ is the time of occurrence of the $n$-th event. For a Poisson process: $S_n \sim \text{Gamma}(n, \lambda)$ with PDF:
$$f_{S_n}(t) = \lambda e^{-\lambda t} \frac{(\lambda t)^{n-1}}{(n-1)!}, \qquad t \ge 0$$
Relationship: $S_n = \sum_{i=1}^{n} T_i$.
Theorem: Conditional Distribution of Arrival Times
Given $N(t) = n$, the arrival times $S_1, \dots, S_n$ have the same distribution as the order statistics of $n$ i.i.d. $\text{Uniform}(0, t)$ random variables.
Joint density: $f(s_1, \dots, s_n) = \dfrac{n!}{t^n}$ for $0 < s_1 < \cdots < s_n < t$.
Paraphrased: given that $n$ events occurred in $[0, t]$, the event times are distributed independently and uniformly over $(0, t)$ (as unordered variables).
Definition: Type I and Type II Events (Thinning)
Given a Poisson process of rate $\lambda$. Each event is classified as type I with probability $p$ or type II with probability $1 - p$, independently. Let $N_1(t)$ and $N_2(t)$ count the type I and type II events.
Theorem: Thinning
Thinning a Poisson process of rate $\lambda$ with probability $p$ yields a Poisson process with rate $\lambda p$.
Theorem: Independence of Thinned Processes
$\{N_1(t)\}$ and $\{N_2(t)\}$ are independent Poisson processes (with rates $\lambda p$ and $\lambda(1 - p)$).
Generalizes to $k$ types: if events are classified into types $1, \dots, k$ with probabilities $p_1, \dots, p_k$, then the resulting processes are independent Poisson processes with rates $\lambda p_1, \dots, \lambda p_k$.
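The thinning theorem can be sanity-checked by simulation: generate a rate-$\lambda$ Poisson process via exponential inter-arrival times, flip a $p$-coin per event, and compare the mean type-I count with $\lambda p t$ (the parameters below are illustrative):

```python
# Thinning check by simulation: type-I count over [0, t] should behave like
# Poisson(lam * p * t), so its mean should be close to lam * p * t.
import random
random.seed(2)

lam, p, t, trials = 3.0, 0.4, 2.0, 50_000
total1 = 0
for _ in range(trials):
    clock = random.expovariate(lam)   # first arrival time
    while clock <= t:
        if random.random() < p:       # classify this event as type I
            total1 += 1
        clock += random.expovariate(lam)

mean1 = total1 / trials
# mean1 should be close to lam * p * t = 2.4
```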
Definition: Exponential Distribution
A continuous random variable $X$ has an exponential distribution with parameter $\lambda > 0$ if its PDF is $f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$.
- Mean: $E[X] = 1/\lambda$
- Variance: $\text{Var}(X) = 1/\lambda^2$
- MGF: $\phi(t) = \dfrac{\lambda}{\lambda - t}$ for $t < \lambda$
About Memoryless Property
$P(X > s + t \mid X > t) = P(X > s)$ for all $s, t \ge 0$.
The exponential distribution is the only continuous distribution with this property. An item that has survived hours is “as good as new” — the remaining lifetime has the same distribution as the original.
About Failure Rate Function
$r(t) = \dfrac{f(t)}{1 - F(t)}$ represents the conditional probability density that a $t$-year-old item will fail.
For the exponential: $r(t) = \lambda$ (constant). The failure rate function uniquely determines the distribution.
Definition: Hyperexponential Random Variable
A mixture of exponentials: $X \sim \text{Exponential}(\lambda_i)$ with probability $p_i$, where $\sum_i p_i = 1$ and the $\lambda_i > 0$ are distinct.
Tail: $P(X > t) = \sum_i p_i e^{-\lambda_i t}$
Failure rate: $r(t) = \dfrac{\sum_i p_i \lambda_i e^{-\lambda_i t}}{\sum_i p_i e^{-\lambda_i t}}$
As $t \to \infty$, $r(t) \to \min_i \lambda_i$ (converges to the slowest-failing type).
Definition: Hypoexponential Random Variable
Sum of independent exponentials with distinct rates: $X = \sum_{i=1}^{n} X_i$ where $X_i \sim \text{Exponential}(\lambda_i)$, with $\lambda_i \ne \lambda_j$ for $i \ne j$.
PDF: $f_X(t) = \sum_{i=1}^{n} C_{i,n}\, \lambda_i e^{-\lambda_i t}$ where $C_{i,n} = \prod_{j \ne i} \dfrac{\lambda_j}{\lambda_j - \lambda_i}$
Tail: $P(X > t) = \sum_{i=1}^{n} C_{i,n}\, e^{-\lambda_i t}$
As $t \to \infty$, the remaining lifetime ≈ exponential with rate $\min_i \lambda_i$.
Definition: Coxian Random Variable
$X = \sum_{i=1}^{N} X_i$ where $N$ is random and $X_i \sim \text{Exponential}(\lambda_i)$. Arises when an item goes through stages in sequence and may quit after stage $i$ with probability $1 - p_i$ (continuing to stage $i + 1$ with probability $p_i$).
Definition: Nonhomogeneous Poisson Process
A counting process $\{N(t),\ t \ge 0\}$ with intensity function $\lambda(t)$ where:
- $N(0) = 0$
- Independent increments
- $P\big(N(t + h) - N(t) = 1\big) = \lambda(t) h + o(h)$ and $P\big(N(t + h) - N(t) \ge 2\big) = o(h)$
Then $N(t + s) - N(t) \sim \text{Poisson}\big(m(t + s) - m(t)\big)$ where $m(t) = \int_0^t \lambda(s)\,ds$ (mean value function).
Definition: Compound Poisson Process
$X(t) = \sum_{i=1}^{N(t)} Y_i$ where $\{N(t)\}$ is a Poisson process with rate $\lambda$ and $\{Y_i\}$ are i.i.d. random variables independent of $\{N(t)\}$.
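A quick simulation illustrates the mean identity $E[X(t)] = \lambda t\, E[Y]$, which follows from conditioning on $N(t)$. The rates and jump distribution below are illustrative (uniform jumps, so $E[Y] = 0.5$):

```python
# Compound Poisson simulation: X(t) = sum_{i=1}^{N(t)} Y_i, Y_i ~ Uniform(0, 1).
# Check that the sample mean of X(t) is close to lam * t * E[Y].
import random
random.seed(1)

lam, t, trials = 2.0, 3.0, 50_000

def sample_X():
    # generate N(t) via exponential inter-arrival times, summing the jumps
    total, clock = 0.0, random.expovariate(lam)
    while clock <= t:
        total += random.random()          # one jump Y_i ~ Uniform(0, 1)
        clock += random.expovariate(lam)
    return total

mean_X = sum(sample_X() for _ in range(trials)) / trials
# mean_X should be close to lam * t * E[Y] = 2 * 3 * 0.5 = 3.0
```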
Cheatsheet: Poisson Process Summary
| Concept | Formula |
|---|---|
| $N(t)$ distribution | $P(N(t) = n) = e^{-\lambda t} (\lambda t)^n / n!$ |
| Inter-arrival times | $T_n$ i.i.d. $\text{Exponential}(\lambda)$ |
| Waiting time | $S_n = \sum_{i=1}^{n} T_i \sim \text{Gamma}(n, \lambda)$ |
| Given $N(t) = n$ | Arrival times = order statistics of $n$ i.i.d. $\text{Uniform}(0, t)$ |
| Thinning (prob $p$) | Independent Poisson processes with rates $\lambda p$, $\lambda(1-p)$ |
| Nonhomogeneous | $N(t) \sim \text{Poisson}(m(t))$, $m(t) = \int_0^t \lambda(s)\,ds$ |
Example: Exponential Random Variables and Expected Discounted Returns
Example: Time Spent in a Bank
Example: Automobile Accident Damage
Example: Commodity Ordering
Example: Analyzing Greedy Algorithms for the Assignment Problem
Example: Cells in the Body
Example: Customers in Line
Example: Function o(h)
Example: Immigrants into a Territory
Example: Nonnegative Offers
Example: System with Classified Individuals
Example: The Coupon Collecting Problem
Example: An Infinite Server Queue
Example: Minimizing the Number of Encounters
Example: Tracking the Number of HIV Infections
Example: Insurance Claims
Example: An Optimization Example
Week 7
Definition: Mixed Poisson Process
Example: Nonhomogeneous Poisson Process — Demand of a Facility
Example: Bugs in Operating Time
Example: Siegbert Hot Dog Stand
Example: The Output Process of an Infinite Server Poisson Queue
Example: Families Migrate to an Area
Example: Busy Periods in Single-Server Poisson Arrival Queues
Lemma 4.1
Lemma 4.2
Lemma 4.3
Week 8
Definition: Conditional Poisson Process
Example: Gamma Density
Example: Insurance Company Policyholders
Summary
| Property | Definition |
|---|---|
| Ergodic State | Positive recurrent + aperiodic |
| Stationary Distribution | $\pi = \pi P$ and $\sum_j \pi_j = 1$ |
| Expected time spent in transient states | $S = (I - P_T)^{-1}$ |
| Probability of ever hitting state $j$ from $i$ | $f_{ij} = (s_{ij} - \delta_{ij}) / s_{jj}$ |
| Extinction probability for branching processes | Smallest positive solution of $\pi_0 = \sum_j \pi_0^{\,j} P_j$ |
| Detailed Balance (Time Reversible) | $\pi_i P_{ij} = \pi_j P_{ji}$ |
| Poisson Process | $P(N(t) = n) = e^{-\lambda t} (\lambda t)^n / n!$ |
| Inter-arrival times | $T_n \sim \text{Exponential}(\lambda)$ i.i.d. |
| Waiting times | $S_n \sim \text{Gamma}(n, \lambda)$ |
| $N_1(t)$, $N_2(t)$ independent | Thinning with probabilities $p$, $1 - p$ |