Poisson Processes

Figure 1: A Poisson process sample path

A Poisson process is a continuous-time stochastic process which counts the arrival of randomly occurring events. Commonly cited examples which can be modeled by a Poisson process include the radioactive decay of atoms and telephone calls arriving at an exchange, in which the numbers of events occurring in disjoint time intervals are assumed to be independent. Being piecewise constant, Poisson processes have very simple pathwise properties. However, they are very important to the study of stochastic calculus and, together with Brownian motion, form one of the building blocks for the much more general class of Lévy processes. I will describe some of their properties in this post.

A random variable N has the Poisson distribution with parameter {\lambda}, denoted by {N\sim{\rm Po}(\lambda)}, if it takes values in the set of nonnegative integers and

\displaystyle  {\mathbb P}(N=n)=\frac{\lambda^n}{n!}e^{-\lambda} (1)

for each {n\in{\mathbb Z}_+}. The mean and variance of N are both equal to {\lambda}, and the moment generating function can be calculated,

\displaystyle  {\mathbb E}\left[e^{aN}\right] = \exp\left(\lambda(e^a-1)\right),

which is valid for all {a\in{\mathbb C}}. From this, it can be seen that the sum of independent Poisson random variables with parameters {\lambda} and {\mu} is again Poisson with parameter {\lambda+\mu}. The Poisson distribution occurs as a limit of binomial distributions. The binomial distribution with success probability p and m trials, denoted by {{\rm Bin}(m,p)}, is the sum of m independent {\{0,1\}}-valued random variables each with probability p of being 1. Explicitly, if {N\sim{\rm Bin}(m,p)} then

\displaystyle  {\mathbb P}(N=n)=\frac{m!}{n!(m-n)!}p^n(1-p)^{m-n}.

In the limit as {m\rightarrow\infty} and {p\rightarrow 0} such that {mp\rightarrow\lambda}, it can be verified that this tends to the Poisson distribution (1) with parameter {\lambda}.
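To verify this, rewrite the binomial probabilities as

\displaystyle  {\mathbb P}(N=n)=\frac{(mp)^n}{n!}\cdot\frac{m(m-1)\cdots(m-n+1)}{m^n}\cdot(1-p)^{m-n}.

For fixed n, the middle factor tends to 1 as {m\rightarrow\infty}, while {(mp)^n\rightarrow\lambda^n} and {(1-p)^{m-n}\rightarrow e^{-\lambda}}, recovering (1).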

Poisson processes are then defined as processes with independent increments and Poisson distributed marginals, as follows.

Definition 1 A Poisson process X of rate {\lambda\ge0} is a cadlag process with {X_0=0} and {X_t-X_s\sim{\rm Po}(\lambda(t-s))} independently of {\{X_u\colon u\le s\}} for all {s\le t}.

An immediate consequence of this definition is that, if X and Y are independent Poisson processes of rates {\lambda} and {\mu} respectively, then their sum {X+Y} is also Poisson with rate {\lambda+\mu}.

Note that this definition does give consistent distributions for the increments {X_t-X_s} across different time intervals. That is, if {t_1\le t_2\le\cdots\le t_n} is an increasing sequence of times and the increments {X_{t_{k+1}}-X_{t_k}\sim{\rm Po}(\lambda(t_{k+1}-t_k))} are independent, then the sum {X_{t_n}-X_{t_1}=\sum_{k=1}^{n-1}(X_{t_{k+1}}-X_{t_k})} will have the {{\rm Po}(\lambda(t_n-t_1))} distribution. The processes given by Definition 1 are also called homogeneous Poisson processes, to distinguish them from the inhomogeneous case where the rate {\lambda} is time-dependent. I will also consider the inhomogeneous case below.

As the Poisson distribution is supported on the nonnegative integers, it follows from the definition that Poisson processes are almost surely increasing and integer valued. In fact, an alternative definition can be given as processes which count a sequence of random times occurring at exponentially distributed intervals. A random variable T has the exponential distribution with parameter (or rate) {\lambda\ge0}, denoted by {{\rm Exp}(\lambda)}, if it takes values in the nonnegative real numbers and

\displaystyle  {\mathbb P}(T\ge t)=e^{-\lambda t}

for all {t\ge 0}. The exponential distribution satisfies the memoryless property,

\displaystyle  {\mathbb P}(T\ge t+s\mid T\ge t)={\mathbb P}(T\ge s),

for all {s,t\ge 0}.
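This is immediate from the form of the distribution function: for {s,t\ge 0},

\displaystyle  {\mathbb P}(T\ge t+s\mid T\ge t)=\frac{{\mathbb P}(T\ge t+s)}{{\mathbb P}(T\ge t)}=\frac{e^{-\lambda(t+s)}}{e^{-\lambda t}}=e^{-\lambda s}={\mathbb P}(T\ge s).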

I will say that a stochastic process X is a counting process if there exists an increasing sequence of random variables {0<T_1\le T_2\le\cdots} taking values in {{\mathbb R}_+\cup\{\infty\}} such that {T_n<T_{n+1}} whenever {T_n} is finite and

\displaystyle  X_t\equiv\sum_{n=1}^\infty 1_{\{T_n\le t\}} (2)

for all {t\ge 0}. That is, {X_t} counts the number of times in {\{T_1,T_2,\ldots\}} which are no greater than t. This is equivalent to X being right-continuous, piecewise constant, starting at zero, and having jump size {\Delta X=1} at each discontinuity. Then, Poisson processes count the arrival of times at exponentially distributed intervals.

Lemma 2 If {S_1,S_2,\ldots} are independent random variables with the {{\rm Exp}(\lambda)} distribution, and {T_n=\sum_{k=1}^nS_k}, then X defined by (2) is a Poisson process of rate {\lambda}.

Proof: Let {\mathcal{F}_t=\sigma(X_s\colon s\le t)} be the natural filtration of X. We start by showing that, for each time s, the random variable

\displaystyle  \tau_s\equiv\inf\left\{t\ge 0\colon X_{s+t}>X_s\right\}

is exponential of rate {\lambda} independently of {\mathcal{F}_s}. In fact, restricting to the event

\displaystyle  \left\{X_s=n\right\}=\left\{T_n\le s, T_n+S_{n+1}> s\right\}\in\mathcal{F}_s

the memoryless property of the exponential distribution gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb P}(\tau_s\ge t\mid\mathcal{F}_s)&\displaystyle={\mathbb P}(S_{n+1}>(s-T_n)+t\mid\mathcal{F}_s)\smallskip\\ &\displaystyle={\mathbb P}(S_{n+1}> t)=e^{-\lambda t}. \end{array}

As this holds on {\bigcup_n\{X_s=n\}}, which has probability 1, {\tau_s} is indeed exponential of rate {\lambda}.

This shows that, for {s<t}, the {\{0,1\}}-valued random variable {\min(X_t-X_s,1)} is independent of {\mathcal{F}_s} and equals 0 with probability {e^{-\lambda(t-s)}}. For any {n>0} setting {t_k=s+k(t-s)/n} it follows that

\displaystyle  \sum_{k=1}^n\min(X_{t_k}-X_{t_{k-1}},1) (3)

has the binomial distribution, {{\rm Bin}(n,p)}, with success probability {p=1-e^{-\lambda(t-s)/n}}. Letting n increase to infinity, {np} tends to {\lambda(t-s)}. So, taking the limit, (3) tends to the {{\rm Po}(\lambda(t-s))} distribution independently of {\mathcal{F}_s}. Furthermore, as X has only finitely many jumps in the interval {[s,t]}, for all large enough n each subinterval {[t_{k-1},t_k]} contains at most one jump, so (3) equals {X_t-X_s}, giving {X_t-X_s\sim{\rm Po}(\lambda(t-s))}. ⬜
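The construction in Lemma 2 is also straightforward to simulate. Below is a minimal Python sketch (my own illustration; the function names are arbitrary) which samples the jump times {T_n} as partial sums of independent exponentials and checks that {X_1} is approximately {{\rm Po}(\lambda)} distributed. Note that numpy parametrizes the exponential distribution by its mean {1/\lambda}.

```python
import numpy as np

def poisson_jump_times(rate, horizon, rng):
    """Jump times T_n = S_1 + ... + S_n of a rate `rate` Poisson process on
    [0, horizon], with the S_k independent Exp(rate), as in Lemma 2."""
    jumps = []
    t = rng.exponential(1.0 / rate)  # numpy's scale parameter is the mean 1/rate
    while t <= horizon:
        jumps.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(jumps)

def count(jumps, t):
    """X_t: the number of jump times no greater than t, as in equation (2)."""
    return int(np.searchsorted(jumps, t, side="right"))

# Sanity check: X_1 should have the Po(5) distribution,
# so its sample mean and variance should both be close to 5.
rng = np.random.default_rng(1)
samples = [count(poisson_jump_times(5.0, 1.0, rng), 1.0) for _ in range(10_000)]
print(np.mean(samples), np.var(samples))
```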

As it is such a fundamental process, it is not surprising that there are many ways in which Poisson processes can be characterized. In fact, they are the only counting processes with stationary independent increments.

Theorem 3 Any counting process with stationary independent increments is a homogeneous Poisson process.

Compare this with the characterization of Brownian motion as the only continuous process with stationary independent increments (up to a scaling factor and drift term). I give a proof of Theorem 3 below in the more general context of non-stationary increments and inhomogeneous Poisson processes (Theorem 9 and Corollary 10).

Now, suppose that we have a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. Then, a Poisson process, X, with respect to this space is defined to be an adapted process satisfying Definition 1 and such that {X_t-X_s} is independent of {\mathcal{F}_s} for all {s\le t}. This is also referred to as an {\{\mathcal{F}_t\}}-Poisson process. Note, in particular, if X satisfies Definition 1 then it is automatically a Poisson process with respect to its natural filtration {\mathcal{F}_t=\sigma(X_s\colon s\le t)}. There are several useful characterizations of {\{\mathcal{F}_t\}}-Poisson processes, which I list below.

Theorem 4 Let X be a cadlag adapted process with {X_0=0}. Then, for any {\lambda\ge 0}, the following are equivalent.

  1. X is an {\{\mathcal{F}_t\}}-Poisson process of rate {\lambda}.
  2. At each time t,
    \displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb P}(X_{t+\delta t}=X_t+1\mid\mathcal{F}_t)=\lambda\delta t+o(\delta t),\smallskip\\ &\displaystyle{\mathbb P}(X_{t+\delta t}=X_t\mid\mathcal{F}_t)=1-\lambda\delta t+o(\delta t), \end{array} (4)

    for {\delta t>0}.

  3. X is a counting process and, for each time t, the random variable

    \displaystyle  \tau_t\equiv\inf\{s\ge 0\colon X_{t+s}>X_t\}

    has the {{\rm Exp}(\lambda)} distribution independently of {\mathcal{F}_t}.

  4. X is a counting process such that {X_t-\lambda t} is a martingale.
  5. X is a counting process such that {X_t-\lambda t} is a local martingale.
  6. For all bounded measurable functions {f\colon{\mathbb R}\rightarrow{\mathbb R}}, the process

    \displaystyle  f(X_t)-\lambda\int_0^t \left( f(X_s+1)-f(X_s)\right)\,ds

    is a martingale.

  7. For each {a\in{\mathbb R}},

    \displaystyle  \exp\left(iaX_t-\lambda t(e^{ia}-1)\right)

    is a martingale.

The proof of these statements is given below in the more general, but no more difficult, context of inhomogeneous Poisson processes. For now, I briefly go through each of these statements. The second condition expresses the intuitive property that the probability of an event occurring in each interval of size {\delta t} is equal to {\lambda\delta t} to leading order. The {o(\delta t)} terms in (4) just denote something which is equal to {\delta t} multiplied by a term which tends to zero (in probability) as {\delta t\rightarrow 0}.

The third statement of Theorem 4 says that, at any time, the time to wait until the next event is always distributed as {{\rm Exp}(\lambda)}. This was the idea used above in the proof of Lemma 2.

Figure 2: A compensated Poisson process sample path

The martingale {M_t=X_t-\lambda t} given by statement 4 above is known as a compensated Poisson process. Note also that, being an FV process, M has quadratic variation {[M]_t=\sum_{s\le t}\Delta M_s^2=X_t}. Therefore, {M^2-X} is also a martingale. In fact,

\displaystyle  M^2_t-\lambda t = M_t^2-X_t +M_t

is a martingale. Taking {W=\lambda^{-1/2}M} provides a counterexample showing that, in Lévy’s characterization of Brownian motion as a continuous local martingale W such that {W^2_t-t} is a local martingale, the continuity of W is indeed a necessary condition. In fact, in the limit {\lambda\rightarrow\infty}, it can be shown that W does converge in distribution to a Brownian motion.
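This convergence is easy to see numerically. The following Python sketch (again, just an illustration of the claim, not a proof) samples the rescaled value {W_1=\lambda^{-1/2}(X_1-\lambda)} for a large rate {\lambda}; its sample mean and standard deviation should be close to 0 and 1, as for a standard normal.

```python
import numpy as np

# W_1 = (X_1 - rate)/sqrt(rate) for a Poisson process X of large rate,
# using X_1 ~ Po(rate).
rng = np.random.default_rng(2)
rate = 1000.0
x1 = rng.poisson(rate, size=100_000)
w1 = (x1 - rate) / np.sqrt(rate)
print(w1.mean(), w1.std())  # approximately 0 and 1 for large rate
```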

A consequence of the compensated process {M_t=X_t-\lambda t} being a martingale is that the jump times of X are totally inaccessible. That is, {{\mathbb P}(\Delta X_\tau\not=0)=0} for each predictable stopping time {\tau}.

Lemma 5 The jump times of a Poisson process are totally inaccessible.

Proof: Let {\tau} be a predictable stopping time and {\tau_n\le\tau} be stopping times announcing {\tau}. Replacing {\tau} by {\tau\wedge t} if necessary, we can suppose that {\tau} is bounded. Then, by optional sampling,

\displaystyle  {\mathbb E}[\Delta X_\tau]={\mathbb E}[\Delta M_\tau]=\lim_{n\rightarrow\infty}{\mathbb E}[M_\tau-M_{\tau_n}]=0.

So, as {\Delta X_\tau} is a nonnegative random variable with zero expectation, {{\mathbb P}(\Delta X_\tau\not= 0)=0}. ⬜

In fact, as I will show later in these notes, the jump times of an adapted counting process X are totally inaccessible if and only if it has a continuous compensator V. That is, V is a continuous process making {X-V} a local martingale.

Condition 6 of Theorem 4 says that the infinitesimal generator of the Poisson process of rate {\lambda} is {Gf(x)=\lambda(f(x+1)-f(x))}. In fact, the following is an immediate consequence of this statement,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[f(X_{t+s})\mid\mathcal{F}_t\right] &\displaystyle=f(X_t)+sGf(X_t)+\int_0^s{\mathbb E}\left[Gf(X_{t+u})-Gf(X_t)\mid\mathcal{F}_t\right]\,du\smallskip\\ &\displaystyle=f(X_t)+sGf(X_t)+o(s) \end{array}

where {o(s)/s\rightarrow 0} (in probability) as {s\rightarrow 0}.
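As a quick numerical check of the generator (an illustrative Python sketch, computing {{\mathbb E}[f(X_s)]} directly from the Poisson probabilities), the difference quotient {({\mathbb E}[f(X_s)]-f(0))/s} should be close to {Gf(0)=\lambda(f(1)-f(0))} for small s.

```python
import math

def expected_f(rate, s, f, terms=50):
    """E[f(X_s)] for X_s ~ Po(rate*s), by truncating the Poisson series."""
    mu = rate * s
    return sum(f(n) * math.exp(-mu) * mu**n / math.factorial(n)
               for n in range(terms))

rate, s = 2.0, 1e-4
f = math.sin  # any bounded function will do
lhs = (expected_f(rate, s, f) - f(0)) / s
rhs = rate * (f(1) - f(0))  # Gf(0) = rate*(f(1) - f(0))
print(lhs, rhs)  # agree to within O(s)
```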

The final condition of Theorem 4 is equivalent to the statement that {X_t-X_s} is independent of {\{X_u\colon u\le s\}} with characteristic function {\exp(\lambda(t-s)(e^{ia}-1))}. It is, however, often more useful to express it in this martingale form. For example, below I use it to show that a finite collection of Poisson processes is independent if and only if none of their jump times coincide (almost surely).

Inhomogeneous Poisson Processes

The definition of Poisson processes given above is easily extended to rates which are time-dependent,

\displaystyle  \lambda\colon{\mathbb R}_+\rightarrow{\mathbb R}_+,\ t\mapsto\lambda_t.

As long as {\lambda_t} is locally integrable, we instead require that {X_t-X_s} has the Poisson distribution with parameter {\int_s^t\lambda_u\,du}. Equivalently, letting {\Lambda_t=\int_0^t\lambda_s\,ds} be the cumulative rate (or cumulative intensity), we can write this as {X_t-X_s\sim{\rm Po}(\Lambda_t-\Lambda_s)}. One advantage of writing things in terms of {\Lambda} instead of the 'instantaneous rate' {\lambda} is that we can generalize to the situation where {\Lambda} is not differentiable. In that case, {\lambda_t=d\Lambda_t/dt} need not be well defined (except in the sense of distributions).

Then, an inhomogeneous (or, non-homogeneous) Poisson process is defined as follows.

Definition 6 Let {\Lambda\colon{\mathbb R}_+\rightarrow{\mathbb R}} be a continuous increasing function. Then, X is a Poisson process of cumulative rate {\Lambda} if {X_0=0} and {X_t-X_s\sim{\rm Po}(\Lambda_t-\Lambda_s)} independently of {\{X_u\colon u\le s\}} for all {s\le t}.

It follows from this definition that if X is a Poisson process of cumulative rate {\Lambda} and {\theta\colon{\mathbb R}_+\rightarrow{\mathbb R}} is a continuous increasing function with {\theta(0)=0}, then the time-changed process {Y_t=X_{\theta(t)}} will be Poisson with cumulative rate {\Lambda\circ\theta}. This gives an easy way of constructing inhomogeneous Poisson processes. First, use Lemma 2 to construct a homogeneous Poisson process X of rate 1 and set {Y_t=X_{\Lambda_t}} to transform it into a Poisson process with cumulative rate {\Lambda}.
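This recipe is simple to implement. Below is a Python sketch of the construction (my own illustration, using the hypothetical cumulative rate {\Lambda_t=t^2}): the jump times of {Y_t=X_{\Lambda_t}} are just the jump times of the rate 1 process X mapped back through {\Lambda^{-1}}.

```python
import numpy as np

def inhomogeneous_poisson_jumps(cum_rate, cum_rate_inv, horizon, rng):
    """Jump times on [0, horizon] of a Poisson process with the given continuous
    increasing cumulative rate, built by time-changing a rate 1 process."""
    total = cum_rate(horizon)
    # Jump times of a homogeneous rate 1 process on [0, total], as in Lemma 2.
    jumps, t = [], rng.exponential(1.0)
    while t <= total:
        jumps.append(t)
        t += rng.exponential(1.0)
    # Y_t = X_{cum_rate(t)} jumps at the times cum_rate_inv(T_n).
    return np.array([cum_rate_inv(u) for u in jumps])

# Example: Lambda_t = t^2, so Lambda^{-1}(u) = sqrt(u).
rng = np.random.default_rng(3)
jumps = inhomogeneous_poisson_jumps(lambda t: t**2, np.sqrt, 10.0, rng)
print(len(jumps))  # E[Y_10] = Lambda_10 = 100
```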

I now state and prove the equivalence of each of the statements of Theorem 4 in the more general context of inhomogeneous Poisson processes. In equation (5) below, {o(\delta\Lambda_t)} denotes terms which are a product of {\delta\Lambda_t} with a term which goes to zero in probability as {\delta t\rightarrow 0}.

Theorem 7 Let X be a cadlag adapted process with {X_0=0}. Then, for any continuous increasing function {\Lambda\colon{\mathbb R}_+\rightarrow{\mathbb R}}, the following are equivalent.

  1. X is an {\{\mathcal{F}_t\}}-Poisson process of cumulative rate {\Lambda}.
  2. At each time t,
    \displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb P}(X_{t+\delta t}=X_t+1\mid\mathcal{F}_t)=\delta\Lambda_t+o(\delta\Lambda_t),\smallskip\\ &\displaystyle{\mathbb P}(X_{t+\delta t}=X_t\mid\mathcal{F}_t)=1-\delta\Lambda_t+o(\delta\Lambda_t), \end{array} (5)

    for {\delta t>0}, setting {\delta\Lambda_t=\Lambda_{t+\delta t}-\Lambda_t}.

  3. X is a counting process and, for each time t, the random variable

    \displaystyle  \tau_t\equiv\inf\{s\ge 0\colon X_{t+s}>X_t\}

    is independent of {\mathcal{F}_t} and satisfies {{\mathbb P}(\tau_t\ge s)=\exp(\Lambda_t-\Lambda_{t+s})}.

  4. X is a counting process such that {X_t-\Lambda_t} is a martingale.
  5. X is a counting process such that {X_t-\Lambda_t} is a local martingale.
  6. For all bounded measurable functions {f\colon{\mathbb R}\rightarrow{\mathbb R}}, the process

    \displaystyle  f(X_t)-\int_0^t \left( f(X_s+1)-f(X_s)\right)\,d\Lambda_s

    is a martingale.

  7. For each {a\in{\mathbb R}},

    \displaystyle  \exp\left(iaX_t-(e^{ia}-1)\Lambda_t\right)

    is a martingale.

Proof: I start by proving equivalence of 1,4,5,6,7, and will then prove equivalence of the second and third statements.

(1) implies (4): The definition of a Poisson process with cumulative rate {\Lambda} immediately implies that {X_t-X_s} has mean {\Lambda_t-\Lambda_s} independently of {\mathcal{F}_s}, for {s\le t}. So, setting {M_t=X_t-\Lambda_t},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[M_t\mid\mathcal{F}_s\right] &\displaystyle={\mathbb E}\left[X_t-X_s\mid\mathcal{F}_s\right]+X_s-\Lambda_t\smallskip\\ &\displaystyle=\Lambda_t-\Lambda_s+X_s-\Lambda_t=M_s, \end{array}

and M is a martingale as required.

(4) implies (5): This is trivial, as all cadlag martingales are also local martingales.

(5) implies (6): By assumption, {M_t=X_t-\Lambda_t} is a local martingale. As X is a counting process, {f(X)} will be piecewise constant and,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle f(X_t)&\displaystyle=f(X_0)+\sum_{s\le t}1_{\{\Delta X_s=1\}}(f(X_{s-}+1)-f(X_{s-}))\smallskip\\ &\displaystyle=f(X_0)+\int_0^t (f(X_{s-}+1)-f(X_{s-}))\,dX_s. \end{array}

So, the process {M^f=f(X)-\int(f(X+1)-f(X))\,d\Lambda} can be written as a stochastic integral with respect to the local martingale M

\displaystyle  M^f_t=f(X_0)+\int_0^t(f(X_{s-}+1)-f(X_{s-}))\,dM_s.

By preservation of the local martingale property, {M^f} is a local martingale. As f is assumed bounded, {M^f} is also uniformly bounded over each finite time interval and is therefore a proper martingale.

(6) implies (7): Property (6) generalizes to complex-valued functions f, simply by applying it to the real and imaginary parts separately. Then, using {f(x)=e^{iax}} shows that

\displaystyle  M_t\equiv e^{iaX_t}-\int_0^t(e^{ia}-1)e^{iaX_s}\,d\Lambda_s

is a martingale. Setting {U_t=e^{iaX_t}} and {V_t=\exp(-(e^{ia}-1)\Lambda_t)} gives {dV=-V(e^{ia}-1)\,d\Lambda}. Using integration by parts,

\displaystyle  d(UV)=V\,dU+U\,dV=V(dU-U(e^{ia}-1)\,d\Lambda)=V\,dM.

By preservation of the local martingale property, this shows that {UV=\exp(iaX-(e^{ia}-1)\Lambda)} is a local martingale. Furthermore, as it is uniformly bounded over each finite time interval, UV is a proper martingale.

(7) implies (1): Letting M be the martingale {\exp(iaX_t-(e^{ia}-1)\Lambda_t)} gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[e^{ia(X_t-X_s)}\;\middle\vert\;\mathcal{F}_s\right]&\displaystyle={\mathbb E}\left[M_t\mid\mathcal{F}_s\right]\exp(-iaX_s+(e^{ia}-1)\Lambda_t)\smallskip\\ &\displaystyle=M_s\exp(-iaX_s+(e^{ia}-1)\Lambda_t)\smallskip\\ &\displaystyle=\exp((e^{ia}-1)(\Lambda_t-\Lambda_s)) \end{array}

for all {s\le t}. Comparing this with the characteristic function of the Poisson distribution shows that {X_t-X_s} has the {{\rm Po}(\Lambda_t-\Lambda_s)} distribution independently of {\mathcal{F}_s}.

This completes the proof of equivalence of 1,4,5,6 and 7. We now move on to the equivalence of these properties with 2 and 3.

(1) implies (2): If X is Poisson with cumulative rate {\Lambda} then

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb P}(X_{t+\delta t}=X_t+1\mid\mathcal{F}_t)= \delta\Lambda_t e^{-\delta\Lambda_t}=\delta\Lambda_t+o(\delta\Lambda_t),\smallskip\\ &\displaystyle{\mathbb P}(X_{t+\delta t}=X_t\mid\mathcal{F}_t)=e^{-\delta\Lambda_t}=1-\delta\Lambda_t+o(\delta\Lambda_t) \end{array}

as required.

(2) implies (7): Fixing any {a\in{\mathbb R}}, we need to show that

\displaystyle  M_t\equiv\exp(iaX_t-(e^{ia}-1)\Lambda_t)

is a martingale. Applying (5) to the conditional expectation of {e^{iaX_{t+\delta t}}},

\displaystyle  {\mathbb E}[e^{ia X_{t+\delta t}}\mid\mathcal{F}_t]=e^{ia(X_t+1)}\delta\Lambda_t+e^{iaX_t}(1-\delta\Lambda_t)+o(\delta\Lambda_t).

Rearranging this a bit gives the following,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[M_{t+\delta t}\mid\mathcal{F}_t]&\displaystyle=M_t\exp(-(e^{ia}-1)\delta\Lambda_t)\left(e^{ia}\delta\Lambda_t+1-\delta\Lambda_t\right)+o(\delta\Lambda_t)\smallskip\\ &\displaystyle=M_t+o(\delta\Lambda_t). \end{array}

Fixing a time {s<t} and a bounded {\mathcal{F}_s}-measurable random variable Z, taking expectations gives

\displaystyle  {\mathbb E}[ZM_{t+\delta t}]={\mathbb E}[ZM_t]+o(\delta\Lambda_t).

Here, bounded convergence ensures that expectations of the {o(\delta\Lambda_t)} terms are themselves {o(\delta\Lambda_t)}. So, the derivative {d{\mathbb E}[ZM_t]/d\Lambda_t} is zero over {t\ge s}, and {{\mathbb E}[ZM_t]} is constant over this range, giving {{\mathbb E}[ZM_t]={\mathbb E}[ZM_s]}. Therefore, M is a martingale.

(1) implies (3): If X is a Poisson process with cumulative rate {\Lambda} then,

\displaystyle  {\mathbb P}(\tau_t>s\mid\mathcal{F}_t)={\mathbb P}(X_{t+s}-X_t=0\mid\mathcal{F}_t)=e^{\Lambda_t-\Lambda_{t+s}}

as required.

(3) implies (2): This follows along the same lines as the proof of Lemma 2 given above. Condition (3) says that {{\mathbb P}(X_t-X_s=0\mid\mathcal{F}_s)=e^{\Lambda_s-\Lambda_t}} for {s\le t}. In particular, {X_t=X_s} almost surely, whenever {\Lambda_s=\Lambda_t}. Consequently, X is constant over all intervals for which {\Lambda} is constant.

Fix times {s<t} and, for each n, choose times {s=t_0\le t_1\le\cdots\le t_n=t} such that {\Lambda_{t_k}=\Lambda_s+k(\Lambda_t-\Lambda_s)/n}. Then,

\displaystyle  \sum_{k=1}^n\min(X_{t_k}-X_{t_{k-1}},1) (6)

has the {{\rm Bin}(n,p)} distribution independently of {\mathcal{F}_s}, with success probability {p=1-e^{(\Lambda_s-\Lambda_t)/n}}. Letting n increase to infinity, np tends to {\Lambda_t-\Lambda_s}. So, (6) tends to the Poisson distribution with parameter {\Lambda_t-\Lambda_s}.

Finally, as X is constant on periods of constancy of {\Lambda}, for large enough n, X will have at most one discontinuity in each of the intervals {[t_{k-1},t_k]}, in which case (6) is equal to {X_t-X_s}. So, {X_t-X_s} has the Poisson distribution with parameter {\Lambda_t-\Lambda_s} independently of {\mathcal{F}_s}. ⬜

An immediate consequence of Definition 6 is that the sum of a finite set of independent Poisson processes is itself Poisson, and their cumulative rates add. There is also a very simple condition for such processes to be independent — their jump times must be almost surely distinct. The necessity of this condition follows from the fact that any pair of independent and continuously distributed random variables are almost surely distinct. We now prove that, for Poisson processes, having disjoint sets of jump times is also a sufficient condition.

Lemma 8 Let {X^1,\ldots,X^n} be Poisson processes with cumulative rates {\Lambda^1,\ldots,\Lambda^n} respectively, and defined on the same filtered probability space.

Then, they are independent if and only if their jump times are almost surely disjoint. In that case, {X^1+\cdots+X^n} is a Poisson process with cumulative rate {\Lambda^1+\cdots+\Lambda^n}.

Proof: As mentioned above, if they are independent, then the jump times will be disjoint. We just need to show the converse. I make use of property 7 of Theorem 7 stating that, for real numbers {a_1,a_2,\ldots,a_n}, the processes

\displaystyle  U^k_t\equiv\exp\left(ia_kX^k_t-(e^{ia_k}-1)\Lambda^k_t\right)

are martingales, for k=1,2,…,n. Next, I make use of the result that, if M, N are local martingales then {MN-[M,N]} is a local martingale. In particular, if they are FV processes whose jump times are almost surely disjoint, the quadratic covariation {[M,N]_t=\sum_{s\le t}\Delta M_s\Delta N_s} is zero, so MN is a local martingale. Inductively, this shows that the product of any finite number of FV local martingales with pairwise disjoint sets of jump times is itself a local martingale. In our case,

\displaystyle  U_t\equiv\prod_kU^k_t=\prod_k\exp\left(ia_kX^k_t-(e^{ia_k}-1)\Lambda^k_t\right)

is a local martingale and, as it is uniformly bounded over each finite time interval, this is a proper martingale. Taking conditional expectations {{\mathbb E}[U_t\mid\mathcal{F}_s]=U_s}, for {s\le t}, gives the following

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\exp\left(\sum_kia_k(X^k_t-X^k_s)\right)\;\middle\vert\;\mathcal{F}_s\right] &\displaystyle=\prod_k\exp\left((e^{ia_k}-1)(\Lambda^k_t-\Lambda^k_s)\right)\smallskip\\ &\displaystyle=\prod_k{\mathbb E}\left[\exp\left(ia_k(X^k_t-X^k_s)\right)\right]. \end{array} (7)

This computes the joint characteristic function of the increments {(X^1_t-X^1_s,\ldots,X^n_t-X^n_s)} as the product of the individual characteristic functions, showing that they are independent. A simple induction extends this to a sequence of times {t_0<t_1<\cdots<t_m},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\exp\left(\sum_{j,k}ia_{j,k}(X^k_{t_j}-X^k_{t_{j-1}})\right)\;\middle\vert\;\mathcal{F}_{t_0}\right] &\displaystyle=\prod_{j,k}\exp\left((e^{ia_{j,k}}-1)(\Lambda^k_{t_j}-\Lambda^k_{t_{j-1}})\right)\smallskip\\ &\displaystyle=\prod_{j,k}{\mathbb E}\left[\exp\left(ia_{j,k}(X^k_{t_j}-X^k_{t_{j-1}})\right)\right]\smallskip\\ &\displaystyle=\prod_k{\mathbb E}\left[\exp\left(\sum_jia_{j,k}(X^k_{t_j}-X^k_{t_{j-1}})\right)\right]. \end{array}

This is just the result of taking the expectation conditional on {\mathcal{F}_{t_{j-1}}} and applying (7) across the interval {[t_{j-1},t_j]}, successively for {j=m,m-1,...}. This shows that the characteristic function of the joint process {(X^1,\ldots,X^n)} at finite sets of times is the product of the characteristic functions of the individual processes. So, they are independent. ⬜

Finally, for this post, we characterize Poisson processes as the counting processes with the independent increments property. Compare with the previous post characterizing continuous processes with independent increments.

Theorem 9 Any counting process with the independent increments property, and which is continuous in probability, is an (inhomogeneous) Poisson process.

Proof: Suppose that X is a counting process with the independent increments property, and set

\displaystyle  \phi(t)={\mathbb P}(X_t=0)

for {t\ge 0}. This is a nonnegative decreasing function, and the aim is to show that {\Lambda=-\log\phi} is the cumulative rate for X. By continuity in probability, {X_{t_n}\rightarrow X_t} in probability for any sequence {t_n\rightarrow t}, so {\phi(t_n)\rightarrow\phi(t)}, and {\phi} is continuous. Next, suppose that {\phi(t)=0} for some t. We may choose t minimal such that this is the case. So, by the independent increments property,

\displaystyle  \phi(t)={\mathbb P}(X_t=X_s=0)=\phi(s){\mathbb P}(X_t=X_s)

for any {s<t}. By the choice of t, {\phi(s)\not=0}. Also, by continuity in probability, {{\mathbb P}(X_t=X_s)} is nonzero for s close to t. Therefore, {\phi(t)\not=0}, contradicting the assumption.

We have shown that {\phi} is continuous, decreasing and nonzero. Let {\Lambda_t=-\log\phi(t)}, which is continuous and increasing. Finally, by the independent increments property,

\displaystyle  {\mathbb P}(X_t=X_s\mid\mathcal{F}_s)={\mathbb P}(X_t=X_s)=\phi(t)/\phi(s)=\exp(\Lambda_s-\Lambda_t).

That X is a Poisson process with cumulative rate {\Lambda} is now given by condition 3 of Theorem 7. ⬜

In particular, if X has stationary independent increments, then it is a homogeneous Poisson process. This was stated above in Theorem 3.

Corollary 10 Any counting process with stationary independent increments is a homogeneous Poisson process.

Proof: In order to apply Theorem 9 we must show that the counting process X with stationary independent increments is continuous in probability. For a sequence of times {t_n\rightarrow t}, stationarity of the increments implies that {X_{t_n}-X_t} is distributed as {X_{t_n-t}} whenever {t_n\ge t} and as {-X_{t-t_n}} when {t_n<t}. In either case, {X_{t_n}-X_t\sim\pm X_{\vert t_n-t\vert}} which, by right-continuity of X, tends to zero in probability as {n\rightarrow\infty}. So, X is continuous in probability.

By Theorem 9, X is a Poisson process with some cumulative rate {\Lambda}. By stationarity of the increments,

\displaystyle  e^{\Lambda_t-\Lambda_{t+s}}=\phi(t+s)/\phi(t)={\mathbb P}(X_{t+s}-X_t=0)={\mathbb P}(X_s=0)=e^{-\Lambda_s}.

So, {\Lambda_{t+s}=\Lambda_t+\Lambda_s}, giving {\Lambda_t=\lambda t} for some constant {\lambda}. Then, X is a Poisson process of rate {\lambda}. ⬜

8 thoughts on “Poisson Processes”

  1. Dear Mr. George Lowther,
    This is one of the best explanations I have come across so far; it is self-explanatory. I would like to receive articles like this, and look forward to seeing them in my mail box.

    Keep up the great work.

    All the best.

  2. Hi,

    I was wondering why you stopped your generalization at inhomogeneous Poisson processes, i.e. those for which \Lambda is a deterministic increasing function, and did not include the case of increasing processes?

    To put it another way, which equivalences still hold in Theorem 7 if we only require \Lambda to be an adapted increasing process (with respect to some large enough filtration), and how general are those processes in the area of counting processes?

    Best regards

    1. Hi. The case where \Lambda is a continuous deterministic increasing process is relatively simple to deal with, and is the only case where you are assured that the counting process has the Poisson distribution.

      The more general case where \Lambda is just assumed to be continuous, adapted and increasing involves more advanced ideas. Actually, in this case \Lambda will be the compensator of X, which is something I mentioned in my recent post on special semimartingales. I am planning on doing a post on compensators (the next post, I think) and I could include a generalization of Theorem 7. The case where \Lambda is continuous corresponds to X being quasi-left-continuous. More generally, you can take \Lambda to be a right-continuous and increasing predictable process with the constraint ΔΛ ≤ 1 (assuming that X can only have jumps of size 1).

  3. Hello,

    Thank you for the great explanation. I wondered if you could let me know a textbook or paper where I could find a proof for what you say just before Lemma 5, namely: “A consequence of the compensated process {M_t=X_t-\lambda t} being a martingale is that the jump times of X are totally inaccessible”.

    Thanks for your help in advance.

    All the best.

    1. This is quite quick to prove. Let T be the time of the n’th jump of X. Also let S be a predictable stopping time. As M is a martingale, {\mathbb E}[\Delta M_S]=0 (predictable times are fair). Then, as the jumps of X and M are all of size 1,

      \setlength\arraycolsep{2pt}\begin{array}{rl} \displaystyle\mathbb{P}(S=T)&\displaystyle\le\mathbb{P}(\Delta X_S\not=0)\smallskip\\ &\displaystyle=\mathbb{E}[\Delta M_S]\smallskip\\ &\displaystyle=0. \end{array}

      So T satisfies the definition of a totally inaccessible stopping time.

      Alternatively, use the result that a cadlag increasing integrable process is quasi-left-continuous if and only if its compensator is continuous (quasi-left-continuous = jump times are totally inaccessible). Any textbook which covers similar material to these notes should also include these results.

      (apologies for the very slow response here, I’ve been busy lately and not had much time to update the blog).

  4. Hi, I think there is a typo on the first line of the second paragraph. {X\sim{\rm Po}(\lambda)} should be {N\sim{\rm Po}(\lambda)}.
