Rao’s Quasimartingale Decomposition

In this post I’ll give a proof of Rao’s decomposition for quasimartingales. That is, every quasimartingale decomposes as the sum of a submartingale and a supermartingale. Equivalently, every quasimartingale is a difference of two submartingales, or alternatively, of two supermartingales. This was originally proven by Rao (Quasi-martingales, 1969), and is an important result in the general theory of continuous-time stochastic processes.

As always, we work with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}. It is not required that the filtration satisfies either of the usual conditions: it need not be complete or right-continuous. The methods used in this post are elementary, requiring only basic measure theory along with the definitions and first properties of martingales, submartingales and supermartingales. Other than referring to the definitions of quasimartingales and mean variation given in the previous post, there is no dependency on the general theory of semimartingales, nor on stochastic integration except for elementary integrands.

Recall that, for an adapted integrable process X, the mean variation on an interval {[0,t]} is

\displaystyle  {\rm Var}_t(X)=\sup{\mathbb E}\left[\int_0^t\xi\,dX\right],

where the supremum is taken over all elementary processes {\xi} with {\vert\xi\vert\le1}. Then, X is a quasimartingale if and only if {{\rm Var}_t(X)} is finite for all positive reals t. It was shown that all supermartingales are quasimartingales with mean variation given by

\displaystyle  {\rm Var}_t(X)={\mathbb E}\left[X_0-X_t\right]. (1)
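As a quick numerical sanity check of (1), the following Python sketch (my own hypothetical illustration, not from the post) takes a discrete-time supermartingale given by a random walk with drift {-0.1}, and compares a Monte Carlo estimate of {{\mathbb E}[X_0-X_t]} with the mean variation accumulated by the conditional drift.

```python
import numpy as np

# Hypothetical illustration of (1): X_n = sum_{k<=n} (B_k - 0.1), with
# B_k = +/-1 fair coin flips, is a supermartingale, since
# E[X_n - X_{n-1} | F_{n-1}] = -0.1.  Its mean variation over n steps is
# 0.1*n which, by (1), should equal E[X_0 - X_n].
rng = np.random.default_rng(0)
n_paths, n_steps, drift = 100_000, 50, 0.1
increments = rng.choice([-1.0, 1.0], size=(n_paths, n_steps)) - drift
X_n = increments.sum(axis=1)          # X_0 = 0

mean_variation = drift * n_steps      # 0.1 * 50 = 5
estimate = (0.0 - X_n).mean()         # Monte Carlo estimate of E[X_0 - X_n]
print(mean_variation, estimate)       # both ~5
```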

Rao’s decomposition can be stated in several different ways, depending on the conditions required of the quasimartingale X. As the definition of quasimartingales differs between texts, there are different versions of Rao’s theorem in the literature although, up to martingale terms, they are equivalent. In this post, I’ll give three different statements with increasingly strong conditions on X. First, the following statement applies to all quasimartingales as defined in these notes. Theorem 1 can be compared to the Jordan decomposition, which says that any function {f\colon{\mathbb R}_+\rightarrow{\mathbb R}} with finite variation on bounded intervals can be decomposed as the difference of increasing functions or, equivalently, of decreasing functions. Replacing finite variation functions by quasimartingales and decreasing functions by supermartingales gives the following.

Theorem 1 (Rao) A process X is a quasimartingale if and only if it decomposes as

\displaystyle  X=Y-Z (2)

for supermartingales Y and Z. Furthermore,

  • this decomposition can be done in a minimal sense, so that if {X=Y^\prime-Z^\prime} is any other such decomposition then {Y^\prime-Y=Z^\prime-Z} is a supermartingale.
  • the inequality
    \displaystyle  {\rm Var}_t(X)\le{\mathbb E}[Y_0-Y_t]+{\mathbb E}[Z_0-Z_t], (3)

    holds, with equality for all {t\ge0} if and only if the decomposition is minimal.

  • the minimal decomposition is unique up to a martingale. That is, if {X=Y-Z=Y^\prime-Z^\prime} are two such minimal decompositions, then {Y^\prime-Y=Z^\prime-Z} is a martingale.

Continue reading “Rao’s Quasimartingale Decomposition”


Quasimartingales are a natural generalization of martingales, submartingales and supermartingales. They were first introduced by Fisk in order to extend the Doob-Meyer decomposition to a larger class of processes, showing that continuous quasimartingales can be decomposed into martingale and finite variation terms (Quasi-martingales, 1965). This was later extended to right-continuous processes by Orey (F-Processes, 1967). The way in which quasimartingales relate to sub- and super-martingales is very similar to how functions of finite variation relate to increasing and decreasing functions. In particular, by the Jordan decomposition, any finite variation function on an interval decomposes as the sum of an increasing and a decreasing function. Similarly, a stochastic process is a quasimartingale if and only if it can be written as the sum of a submartingale and a supermartingale. This important result was first shown by Rao (Quasi-martingales, 1969), and means that much of the theory of submartingales can be extended without much work to also cover quasimartingales.

Often, given a process, it is important to show that it is a semimartingale so that the techniques of stochastic calculus can be applied. If there is no obvious decomposition into local martingale and finite variation terms, then one way of doing this is to show that it is a quasimartingale. All right-continuous quasimartingales are semimartingales. This result is also important in the general theory of semimartingales with, for example, many proofs of the Bichteler-Dellacherie theorem involving quasimartingales.

In this post, I will mainly be concerned with the definition and very basic properties of quasimartingales, and look at the more advanced theory in the following post. We work with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}. It is not necessary to assume that either of the usual conditions, of right-continuity or completeness, hold. First, the mean variation of a process is defined as follows.

Definition 1 The mean variation of an integrable stochastic process X on an interval {[0,t]} is

\displaystyle  {\rm Var}_t(X)=\sup{\mathbb E}\left[\sum_{k=1}^n\left\vert{\mathbb E}\left[X_{t_k}-X_{t_{k-1}}\;\vert\mathcal{F}_{t_{k-1}}\right]\right\vert\right]. (1)

Here, the supremum is taken over all finite sequences of times,

\displaystyle  0=t_0\le t_1\le\cdots\le t_n=t.

A quasimartingale, then, is a process with finite mean variation on each bounded interval.

Definition 2 A quasimartingale, X, is an integrable adapted process such that {{\rm Var}_t(X)} is finite for each time {t\in{\mathbb R}_+}.
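Definition 1 is easy to explore numerically in discrete time, where the supremum is attained on the finest grid (refining a partition can only increase the sum, by Jensen’s inequality for conditional expectations). The following Python sketch is a hypothetical illustration, not from the post: an AR(1)-type process whose conditional drift changes sign, so it is neither a submartingale nor a supermartingale, yet its mean variation is finite, making it a quasimartingale.

```python
import numpy as np

# Hypothetical AR(1)-type process: X_k = 0.9*X_{k-1} + eps_k with standard
# normal eps.  The conditional drift E[X_k - X_{k-1} | F_{k-1}] = -0.1*X_{k-1}
# changes sign with X, so X is neither a sub- nor a supermartingale.  In
# discrete time the supremum in (1) is attained on the finest grid, giving
# mean variation 0.1 * sum_k E|X_k| over {0,...,n}, which is finite.
rng = np.random.default_rng(1)
n_paths, n_steps, rho = 100_000, 50, 0.9
X = np.empty((n_paths, n_steps + 1))
X[:, 0] = 1.0
for k in range(1, n_steps + 1):
    X[:, k] = rho * X[:, k - 1] + rng.standard_normal(n_paths)

mean_var = (1 - rho) * np.abs(X[:, :-1]).mean(axis=0).sum()
print(mean_var)    # finite mean variation: X is a quasimartingale
```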

Continue reading “Quasimartingales”

The Doob-Meyer Decomposition

The Doob-Meyer decomposition was a very important result, historically, in the development of stochastic calculus. This theorem states that every cadlag submartingale uniquely decomposes as the sum of a local martingale and an increasing predictable process. For one thing, if X is a square-integrable martingale then Jensen’s inequality implies that {X^2} is a submartingale, so the Doob-Meyer decomposition guarantees the existence of an increasing predictable process {\langle X\rangle} such that {X^2-\langle X\rangle} is a local martingale. The term {\langle X\rangle} is called the predictable quadratic variation of X and, by using a version of the Ito isometry, can be used to define stochastic integration with respect to square-integrable martingales. For another, semimartingales were historically defined as sums of local martingales and finite variation processes, so the Doob-Meyer decomposition ensures that all local submartingales are also semimartingales. Going further, the Doob-Meyer decomposition is used as an important ingredient in many proofs of the Bichteler-Dellacherie theorem.

The approach taken in these notes is somewhat different from the historical development, however. We introduced stochastic integration and semimartingales early on, without requiring much prior knowledge of the general theory of stochastic processes. We have also developed the theory of semimartingales, such as proving the Bichteler-Dellacherie theorem, using a stochastic integration based method. So, the Doob-Meyer decomposition does not play such a pivotal role in these notes as in some other approaches to stochastic calculus. In fact, the special semimartingale decomposition already states a form of the Doob-Meyer decomposition in a more general setting. So, the main part of the proof given in this post will be to show that all local submartingales are semimartingales, allowing the decomposition for special semimartingales to be applied.

The Doob-Meyer decomposition is especially easy to understand in discrete time, where it reduces to the much simpler Doob decomposition. If {\{X_n\}_{n=0,1,2,\ldots}} is an integrable discrete-time process adapted to a filtration {\{\mathcal{F}_n\}_{n=0,1,2,\ldots}}, then the Doob decomposition expresses X as

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle X_n&\displaystyle=M_n+A_n,\smallskip\\ \displaystyle A_n&\displaystyle=\sum_{k=1}^n{\mathbb E}\left[X_k-X_{k-1}\;\vert\mathcal{F}_{k-1}\right]. \end{array} (1)

As previously discussed, M is then a martingale and A is an integrable process which is also predictable, in the sense that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n > 0}. Furthermore, X is a submartingale if and only if {{\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}]\ge0} or, equivalently, if A is almost surely increasing.
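The Doob decomposition (1) can be computed explicitly in simple cases. As a hypothetical illustration (not from the post), take {X_n=S_n^2} for a simple symmetric random walk S, so that {{\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}]=1}, giving {A_n=n} and the familiar martingale {M_n=S_n^2-n}.

```python
import numpy as np

# Doob decomposition (1) for the discrete submartingale X_n = S_n^2, with
# S a simple symmetric random walk: E[X_n - X_{n-1} | F_{n-1}] = 1, so
# A_n = n (predictable and increasing) and M_n = S_n^2 - n is a martingale.
rng = np.random.default_rng(2)
n_paths, n_steps = 100_000, 30
S = rng.choice([-1.0, 1.0], size=(n_paths, n_steps)).cumsum(axis=1)
A = np.arange(1, n_steps + 1, dtype=float)   # predictable increasing part
M = S**2 - A                                 # martingale part

print(M[:, -1].mean())    # ~ M_0 = 0, consistent with the martingale property
```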

Moving to continuous time, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})} with time index t ranging over the nonnegative real numbers. Then, the continuous-time version of (1) takes A to be a right-continuous and increasing process which is predictable, in the sense that it is measurable with respect to the σ-algebra generated by the class of left-continuous and adapted processes. Often, the Doob-Meyer decomposition is stated under additional assumptions, such as X being of class (D) or satisfying some similar uniform integrability property. To be as general as possible, the statement I give here only requires X to be a local submartingale, and furthermore states how the decomposition is affected by various stronger hypotheses that X may satisfy.

Theorem 1 (Doob-Meyer) Any local submartingale X has a unique decomposition

\displaystyle  X=M+A, (2)

where M is a local martingale and A is a predictable increasing process starting from zero.


  1. If X is a proper submartingale, then A is integrable and satisfies
    \displaystyle  {\mathbb E}[A_\tau]\le{\mathbb E}[X_\tau-X_0] (3)

    for all uniformly bounded stopping times {\tau}.

  2. X is of class (DL) if and only if M is a proper martingale and A is integrable, in which case
    \displaystyle  {\mathbb E}[A_\tau]={\mathbb E}[X_\tau-X_0] (4)

    for all uniformly bounded stopping times {\tau}.

  3. X is of class (D) if and only if M is a uniformly integrable martingale and {A_\infty} is integrable. Then, {X_\infty=\lim_{t\rightarrow\infty}X_t} and {M_\infty=\lim_{t\rightarrow\infty}M_t} exist almost surely, and (4) holds for all (not necessarily finite) stopping times {\tau}.

Continue reading “The Doob-Meyer Decomposition”

Predictable Stopping Times

Although this post is under the heading of `the general theory of semimartingales’ it is not, strictly speaking, about semimartingales at all. Instead, I will be concerned with a characterization of predictable stopping times. The reason for including this now is twofold. First, the results are too advanced to have been proven in the earlier post on predictable stopping times, and reasonably efficient self-contained proofs can only be given now that we have already built up a certain amount of stochastic calculus theory. Secondly, the results stated here are indispensable to the further study of semimartingales. In particular, standard semimartingale decompositions require some knowledge of predictable processes and predictable stopping times.

Recall that a stopping time {\tau} is said to be predictable if there exists a sequence of stopping times {\tau_n\le\tau} increasing to {\tau} and such that {\tau_n < \tau} whenever {\tau > 0}. Also, the predictable sigma-algebra {\mathcal{P}} is defined as the sigma-algebra generated by the left-continuous and adapted processes. Stated like this, these two concepts can appear quite different. However, as was previously shown, stochastic intervals of the form {[\tau,\infty)} for predictable times {\tau} are all in {\mathcal{P}} and, in fact, generate the predictable sigma-algebra.

The main result (Theorem 1) of this post shows that a converse statement holds, so that {[\tau,\infty)} is in {\mathcal{P}} if and only if the stopping time {\tau} is predictable. This rather simple-sounding result has many far-reaching consequences. We can use it to show that all cadlag predictable processes are locally bounded, that local martingales are predictable if and only if they are continuous, and also to give a characterization of cadlag predictable processes in terms of their jumps. Some very strong statements about stopping times also follow without much difficulty for certain special stochastic processes. For example, if the underlying filtration is generated by a Brownian motion then every stopping time is predictable. In fact, this is true whenever the filtration is generated by a continuous Feller process. It is also possible to give a surprisingly simple characterization of stopping times for filtrations generated by arbitrary non-continuous Feller processes. Precisely, a stopping time {\tau} is predictable if the underlying Feller process is almost surely continuous at time {\tau}, and is totally inaccessible if the process is almost surely discontinuous at {\tau}.

As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}. I now give a statement and proof of the main result of this post. Note that the equivalence of the four conditions below means that any of them can be used as alternative definitions of predictable stopping times. Often, the first condition below is used instead. Stopping times satisfying the definition used in these notes are sometimes called announceable, with the sequence {\tau_n\uparrow\tau} said to announce {\tau} (this terminology is used by, e.g., Rogers & Williams). Stopping times satisfying property 3 below, which is easily seen to be equivalent to 2, are sometimes called fair. Then, the following theorem says that the sets of predictable, fair and announceable stopping times all coincide.

Theorem 1 Let {\tau} be a stopping time. Then, the following are equivalent.

  1. {[\tau]\in\mathcal{P}}.
  2. {\Delta M_\tau1_{[\tau,\infty)}} is a local martingale for all local martingales M.
  3. {{\mathbb E}[1_{\{\tau < \infty\}}\Delta M_\tau]=0} for all cadlag bounded martingales M.
  4. {\tau} is predictable.

Continue reading “Predictable Stopping Times”

The Bichteler-Dellacherie Theorem

In this post, I will give a statement and proof of the Bichteler-Dellacherie theorem describing the space of semimartingales. A semimartingale, as defined in these notes, is a cadlag adapted stochastic process X such that the stochastic integral {\int\xi\,dX} is well-defined for all bounded predictable integrands {\xi}. More precisely, an integral should exist which agrees with the explicit expression for elementary integrands, and satisfies bounded convergence in the following sense. If {\{\xi^n\}_{n=1,2,\ldots}} is a uniformly bounded sequence of predictable processes tending to a limit {\xi}, then {\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX} in probability as n goes to infinity. If such an integral exists, then it is uniquely defined up to zero probability sets.

An immediate consequence of bounded convergence is that the set of integrals {\int_0^t\xi\,dX} for a fixed time t and bounded elementary integrands {\vert\xi\vert\le1} is bounded in probability. That is,

\displaystyle  \left\{\int_0^t\xi\,dX\colon\xi{\rm\ is\ elementary},\ \vert\xi\vert\le1\right\} (1)

is bounded in probability, for each {t\ge0}. For cadlag adapted processes, it was shown in a previous post that this is both a necessary and sufficient condition to be a semimartingale. Some authors use the property that (1) is bounded in probability as the definition of semimartingales (e.g., Protter, Stochastic Integration and Differential Equations). The existence of the stochastic integral for arbitrary predictable integrands does not follow particularly easily from this definition, at least, not without using results on extensions of vector-valued measures. On the other hand, if you are content to restrict to integrands which are left-continuous with right limits, the integral can be constructed very efficiently and, furthermore, such integrands are sufficient for many uses (integration by parts, Ito’s formula, a large class of stochastic differential equations, etc).
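To make the elementary integral concrete, here is a hypothetical discrete sketch (my own, not from the post): an elementary integrand is constant on each grid interval, with its value fixed at the left endpoint, so the integral against X is just a finite sum. Taking X to be a discretized Brownian motion, which is a martingale, and a bounded integrand of the form appearing in (1), the resulting integral has mean zero.

```python
import numpy as np

# Hypothetical discrete sketch of an elementary stochastic integral: xi is
# constant on each grid interval, with value fixed at the left endpoint
# (so it is adapted), and int_0^1 xi dX is a finite sum of xi * increments.
# Here X is a discretized Brownian motion and |xi| <= 1, so the integral
# should have mean zero.
rng = np.random.default_rng(3)
n_paths, n_steps = 20_000, 100
dt = 1.0 / n_steps
dX = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
X = np.cumsum(dX, axis=1)
# bounded integrand: the sign of X at the left endpoint of each interval
xi = np.sign(np.concatenate([np.zeros((n_paths, 1)), X[:, :-1]], axis=1))
integral = (xi * dX).sum(axis=1)     # int_0^1 xi dX, path by path

print(integral.mean())   # ~0
```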

It was previously shown in these notes that, if X can be decomposed as {X=M+V} for a local martingale M and FV process V then it is possible to construct the stochastic integral, so X is a semimartingale. The importance of the Bichteler-Dellacherie theorem is that it tells us that a process is a semimartingale if and only if it is the sum of a local martingale and an FV process. In fact, this was the historical definition of semimartingales, and it is still probably the most common one.

Throughout, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}, and all processes are real-valued.

Theorem 1 (Bichteler-Dellacherie) For a cadlag adapted process X, the following are equivalent.

  1. X is a semimartingale.
  2. For each {t\ge0}, the set given by (1) is bounded in probability.
  3. X is the sum of a local martingale and an FV process.

Furthermore, the local martingale term in 3 can be taken to be locally bounded.

Continue reading “The Bichteler-Dellacherie Theorem”

The General Theory of Semimartingales

Having completed the series of posts applying the methods of stochastic calculus to various special types of processes, I now return to the development of the theory. The next few posts of these notes will be grouped under the heading `The General Theory of Semimartingales’. Subjects which will be covered include the classification of predictable stopping times, integration with respect to continuous and predictable FV processes, decompositions of special semimartingales, the Bichteler-Dellacherie theorem, the Doob-Meyer decomposition and the theory of quasimartingales.

One of the main results is the Bichteler-Dellacherie theorem describing the class of semimartingales which, in these notes, were defined to be cadlag adapted processes with respect to which the stochastic integral can be defined (that is, they are good integrators). It was shown that these include the sums of local martingales and FV processes. The Bichteler-Dellacherie theorem says that this is the full class of semimartingales. Classically, semimartingales were defined as a sum of a local martingale and an FV process so, an alternative statement of the theorem is that the classical definition agrees with the one used in these notes. Further results, such as the Doob-Meyer decomposition for submartingales and Rao’s decomposition for quasimartingales, will follow quickly from this.

Logically, the structure of these notes will be almost directly opposite to the historical development of the results. Originally, much of the development of the stochastic integral was based on the Doob-Meyer decomposition which, in turn, relied on some advanced ideas such as the predictable and dual predictable projection theorems. However, here, we have already introduced stochastic integration without recourse to such general theory, and can instead make use of this in the theory. The reasons I have taken this approach are as follows. First, stochastic integration is a particularly straightforward and useful technique for many applications, so it is desirable to introduce this early on. Second, although it is possible to use the general theory of processes in the construction of the integral, such an approach seems rather distinct from the intuitive understanding of stochastic integration as well as superfluous to many of its properties. So it seemed more natural from the point of view of these notes to define the integral first, guided by the properties of the (non-stochastic) Lebesgue integral, then show how its elementary properties follow from the definitions, and develop the further theory later. Continue reading “The General Theory of Semimartingales”

Zero-Hitting and Failure of the Martingale Property

For nonnegative local martingales, there is an interesting symmetry between the failure of the martingale property and the possibility of hitting zero, which I will describe now. I will also give a necessary and sufficient condition for solutions to a certain class of stochastic differential equations to hit zero in finite time and, using the aforementioned symmetry, infer a necessary and sufficient condition for the processes to be proper martingales. It is often the case that solutions to SDEs are clearly local martingales, but it is hard to tell whether they are proper martingales. So, the martingale condition, given in Theorem 4 below, is a useful result to know. The method described here is relatively new to me, only coming up while preparing the previous post. There, applying a hedging argument, it was noted that the failure of the martingale property for solutions to the SDE {dX=X^c\,dB} for {c>1} is related to the fact that, for {c<1}, the process hits zero. This idea extends to all continuous and nonnegative local martingales. The Girsanov transform method applied here is essentially the same as that used by Carlos A. Sin (Complications with stochastic volatility models, Adv. in Appl. Probab. Volume 30, Number 1, 1998, 256-268) and B. Jourdain (Loss of martingality in asset price models with lognormal stochastic volatility, Preprint CERMICS, 2004-267).

Consider nonnegative solutions to the stochastic differential equation

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=a(X)X\,dB,\smallskip\\ &\displaystyle X_0=x_0, \end{array} (1)

where {a\colon{\mathbb R}_+\rightarrow{\mathbb R}}, B is a Brownian motion and the fixed initial condition {x_0} is strictly positive. The multiplier X in the coefficient of dB ensures that if X ever hits zero then it stays there. By time-change methods, uniqueness in law is guaranteed as long as a is nonzero and {a^{-2}} is locally integrable on {(0,\infty)}. Consider also the following SDE,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dY=\tilde a(Y)Y\,dB,\smallskip\\ &\displaystyle Y_0=y_0,\smallskip\\ &\displaystyle \tilde a(y) = a(y^{-1}),\ y_0=x_0^{-1}. \end{array} (2)

Being integrals with respect to Brownian motion, solutions to (1) and (2) are local martingales. It is possible for them to fail to be proper martingales though, and they may or may not hit zero at some time. These possibilities are related by the following result.

Theorem 1 Suppose that (1) and (2) satisfy uniqueness in law. Then, X is a proper martingale if and only if Y never hits zero. Similarly, Y is a proper martingale if and only if X never hits zero.
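To see how Theorem 1 connects with the previous post, consider the power-law case where (1) reads {dX=X^c\,dB}, so that {a(x)=x^{c-1}}. The dual coefficient in (2) then works out to be

\displaystyle  \tilde a(y)=a(y^{-1})=y^{1-c},\qquad dY=\tilde a(Y)Y\,dB=Y^{2-c}\,dB.

That is, the transformation pairs the exponents c and {2-c}, so Theorem 1 links the martingale property of the solution with exponent c to the zero-hitting of the solution with exponent {2-c}, in agreement with the {c>1} versus {c<1} dichotomy mentioned above.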

Continue reading “Zero-Hitting and Failure of the Martingale Property”

Failure of the Martingale Property

In this post, I give an example of a class of processes which can be expressed as integrals with respect to Brownian motion, but are not themselves martingales. As stochastic integration preserves the local martingale property, such processes are guaranteed to be at least local martingales. However, this is not enough to conclude that they are proper martingales. Whereas constructing examples of local martingales which are not martingales is a relatively straightforward exercise, such examples are often slightly contrived and the martingale property fails for obvious reasons (e.g., double-loss betting strategies). The aim here is to show that the martingale property can fail for very simple stochastic differential equations which are likely to be met in practice, and it is not always obvious when this situation arises.

Consider the following stochastic differential equation

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX = aX^c\,dB +b X dt,\smallskip\\ &\displaystyle X_0=x, \end{array} (1)

for a nonnegative process X. Here, B is a Brownian motion and a,b,c,x are positive constants. This is a common SDE appearing, for example, in the constant elasticity of variance model for option pricing. Now consider the following question: what is the expected value of X at time t?

The obvious answer seems to be that {{\mathbb E}[X_t]=xe^{bt}}, based on the idea that X has growth rate b on average. A more detailed argument is to write out (1) in integral form

\displaystyle  X_t=x+\int_0^t aX_s^c\,dB_s+ \int_0^t bX_s\,ds. (2)

The next step is to note that the first integral is with respect to Brownian motion, so has zero expectation. Therefore,

\displaystyle  {\mathbb E}[X_t]=x+\int_0^tb{\mathbb E}[X_s]\,ds.

This can be differentiated to obtain the ordinary differential equation {d{\mathbb E}[X_t]/dt=b{\mathbb E}[X_t]}, which has the unique solution {{\mathbb E}[X_t]={\mathbb E}[X_0]e^{bt}}.

In fact this argument is false. For {c\le1} there is no problem, and {{\mathbb E}[X_t]=xe^{bt}} as expected. However, for all {c>1} the conclusion is wrong, and the strict inequality {{\mathbb E}[X_t]<xe^{bt}} holds.

The point where the argument above falls apart is the statement that the first integral in (2) has zero expectation. This would indeed follow if it was known that it is a martingale, as is often assumed to be true for stochastic integrals with respect to Brownian motion. However, stochastic integration preserves the local martingale property and not, in general, the martingale property itself. If {c>1} then we have exactly this situation, where only the local martingale property holds. The first integral in (2) is not a proper martingale, and has strictly negative expectation at all positive times. The reason that the martingale property fails here for {c>1} is that the coefficient {aX^c} of dB grows too fast in X.

In this post, I will mainly be concerned with the special case of (1) with a=1 and zero drift.

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=X^c\,dB,\smallskip\\ &\displaystyle X_0=x. \end{array} (3)

The general form (1) can be reduced to this special case, as I describe below. SDEs (1) and (3) do have unique solutions, as I will prove later. Then, as X is a nonnegative local martingale, if it ever hits zero then it must remain there (0 is an absorbing boundary).

The solution X to (3) has the following properties, which will be proven later in this post.

  • If {c\le1} then X is a martingale and, for {c<1}, it eventually hits zero with probability one.
  • If {c>1} then X is a strictly positive local martingale but not a martingale. In fact, the following inequality holds
    \displaystyle  {\mathbb E}[X_t\mid\mathcal{F}_s]<X_s (4)

    (almost surely) for times {s<t}. Furthermore, for any positive constant {p<2c-1}, {{\mathbb E}[X_t^p]} is bounded over {t\ge0} and tends to zero as {t\rightarrow\infty}.
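These properties can be checked numerically in the special case {c=2}, where a solution of (3) has a well-known closed form: the reciprocal of a 3-dimensional Bessel process (a quick Itô computation shows that {1/R} solves (3) with {c=2}, up to the sign of the driving Brownian motion). The sketch below is my own hypothetical illustration, sampling {X_1=1/\vert W_1+e_1\vert} directly and confirming that its mean falls strictly below {X_0=1}, in line with (4).

```python
import numpy as np

# For c = 2 and x = 1, a solution of dX = X^2 dB can be realized in law as
# X_t = 1/|W_t + e1| for a standard 3-dimensional Brownian motion W, i.e.
# the inverse of a 3-dimensional Bessel process.  It is strictly positive,
# but E[X_1] = 2*Phi(1) - 1 ~ 0.68 < 1 = X_0: a strict local martingale.
rng = np.random.default_rng(4)
n = 200_000
W1 = rng.standard_normal((n, 3))                       # W_1 ~ N(0, I_3)
R1 = np.linalg.norm(W1 + np.array([1.0, 0.0, 0.0]), axis=1)
X1 = 1.0 / R1                                          # X_1, with X_0 = 1

print(X1.mean())    # ~0.68, strictly less than E[X_0] = 1
```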

Continue reading “Failure of the Martingale Property”

Poisson Processes

A Poisson process sample path
Figure 1: A Poisson process sample path

A Poisson process is a continuous-time stochastic process which counts the arrival of randomly occurring events. Commonly cited examples which can be modeled by a Poisson process include radioactive decay of atoms and telephone calls arriving at an exchange, in which the numbers of events occurring in disjoint time intervals are assumed to be independent. Being piecewise constant, Poisson processes have very simple pathwise properties. However, they are very important to the study of stochastic calculus and, together with Brownian motion, form one of the building blocks for the much more general class of Lévy processes. I will describe some of their properties in this post.

A random variable N has the Poisson distribution with parameter {\lambda}, denoted by {N\sim{\rm Po}(\lambda)}, if it takes values in the set of nonnegative integers and

\displaystyle  {\mathbb P}(N=n)=\frac{\lambda^n}{n!}e^{-\lambda} (1)

for each {n\in{\mathbb Z}_+}. The mean and variance of N are both equal to {\lambda}, and the moment generating function can be calculated,

\displaystyle  {\mathbb E}\left[e^{aN}\right] = \exp\left(\lambda(e^a-1)\right),

which is valid for all {a\in{\mathbb C}}. From this, it can be seen that the sum of independent Poisson random variables with parameters {\lambda} and {\mu} is again Poisson with parameter {\lambda+\mu}. The Poisson distribution occurs as a limit of binomial distributions. The binomial distribution with success probability p and m trials, denoted by {{\rm Bin}(m,p)}, is the sum of m independent {\{0,1\}}-valued random variables each with probability p of being 1. Explicitly, if {N\sim{\rm Bin}(m,p)} then

\displaystyle  {\mathbb P}(N=n)=\frac{m!}{n!(m-n)!}p^n(1-p)^{m-n}.

In the limit as {m\rightarrow\infty} and {p\rightarrow 0} such that {mp\rightarrow\lambda}, it can be verified that this tends to the Poisson distribution (1) with parameter {\lambda}.
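This limit is easy to check numerically. The following snippet (with hypothetical parameters {m=10000} and {\lambda=3}) compares the binomial and Poisson probability mass functions pointwise.

```python
from math import comb, exp, factorial

# Hypothetical numerical check that Bin(m, lam/m) -> Po(lam): compare the
# two probability mass functions pointwise for large m.
lam, m = 3.0, 10_000
p = lam / m
max_diff = max(
    abs(comb(m, n) * p**n * (1 - p) ** (m - n) - lam**n / factorial(n) * exp(-lam))
    for n in range(20)
)
print(max_diff)    # small: the binomial pmf is already close to the Poisson pmf
```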

Poisson processes are then defined as processes with independent increments and Poisson distributed marginals, as follows.

Definition 1 A Poisson process X of rate {\lambda\ge0} is a cadlag process with {X_0=0} and {X_t-X_s\sim{\rm Po}(\lambda(t-s))} independently of {\{X_u\colon u\le s\}} for all {s\le t}.
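One standard way to sample {X_t} (a hypothetical illustration of mine, using the usual description of Poisson processes via independent exponential interarrival times) is to count how many {{\rm Exp}(\lambda)} gaps fit into {[0,t]}. By Definition 1, the result should have the {{\rm Po}(\lambda t)} distribution, so its mean and variance should both be close to {\lambda t}.

```python
import numpy as np

# Sample X_t for a rate-lam Poisson process by counting how many independent
# Exp(lam) interarrival times fit into [0, t].  X_t should be Po(lam*t),
# with mean and variance both equal to lam*t.  Parameters are hypothetical.
rng = np.random.default_rng(5)
lam, t, n_paths = 2.0, 3.0, 100_000
# 40 potential arrivals per path; P(X_t >= 40) is negligible for lam*t = 6
gaps = rng.exponential(1.0 / lam, size=(n_paths, 40))
arrivals = gaps.cumsum(axis=1)
X_t = (arrivals <= t).sum(axis=1)

print(X_t.mean(), X_t.var())    # both ~ lam*t = 6
```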

An immediate consequence of this definition is that, if X and Y are independent Poisson processes of rates {\lambda} and {\mu} respectively, then their sum {X+Y} is also a Poisson process, of rate {\lambda+\mu}. Continue reading “Poisson Processes”

Continuous Processes with Independent Increments

A stochastic process X is said to have independent increments if {X_t-X_s} is independent of {\{X_u\}_{u\le s}} for all {s\le t}. For example, standard Brownian motion is a continuous process with independent increments. Brownian motion also has stationary increments, meaning that the distribution of {X_{t+s}-X_t} does not depend on t. In fact, as I will show in this post, up to a scaling factor and linear drift term, Brownian motion is the only such process. That is, any continuous real-valued process X with stationary independent increments can be written as

\displaystyle  X_t = X_0 + b t + \sigma B_t (1)

for a Brownian motion B and constants {b,\sigma}. This is not so surprising in light of the central limit theorem. The increment of a process across an interval [s,t] can be viewed as the sum of its increments over a large number of small time intervals partitioning [s,t]. If these terms are independent with relatively small variance, then the central limit theorem does suggest that their sum should be normally distributed. Together with the previous posts on Lévy’s characterization and stochastic time changes, this provides yet more justification for the ubiquitous position of Brownian motion in the theory of continuous-time processes. Consider, for example, stochastic differential equations such as the Langevin equation. The natural requirement for the stochastic driving term in such equations is that it be continuous with stationary independent increments and, therefore, it can be written in terms of Brownian motion.

The definition of standard Brownian motion extends naturally to multidimensional processes and general covariance matrices. A standard d-dimensional Brownian motion {B=(B^1,\ldots,B^d)} is a continuous process with stationary independent increments such that {B_t} has the {N(0,tI)} distribution for all {t\ge 0}. That is, {B_t} is joint normal with zero mean and covariance matrix tI. From this definition, {B_t-B_s} has the {N(0,(t-s)I)} distribution independently of {\{B_u\colon u\le s\}} for all {s\le t}. This definition can be further generalized. Given any {b\in{\mathbb R}^d} and positive semidefinite {\Sigma\in{\mathbb R}^{d^2}}, we can consider a d-dimensional process X with continuous paths and stationary independent increments such that {X_t} has the {N(tb,t\Sigma)} distribution for all {t\ge 0}. Here, {b} is the drift of the process and {\Sigma} is the `instantaneous covariance matrix’. Such processes are sometimes referred to as {(b,\Sigma)}-Brownian motions, and all continuous d-dimensional processes starting from zero and with stationary independent increments are of this form.
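A {(b,\Sigma)}-Brownian motion is straightforward to construct from a standard one: take {X_t=tb+LB_t} where {LL^{\rm T}=\Sigma} is a Cholesky factorization, so that {X_t} has the {N(tb,t\Sigma)} distribution. The following sketch (with hypothetical values of b and {\Sigma}) checks the mean and covariance at {t=1}.

```python
import numpy as np

# Build a (b, Sigma)-Brownian motion sample at time t from a standard
# 2-dimensional one: X_t = t*b + L B_t, where L @ L.T == Sigma (Cholesky),
# so X_t ~ N(t*b, t*Sigma).  The values of b and Sigma are hypothetical.
rng = np.random.default_rng(6)
b = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

t, n = 1.0, 200_000
B_t = rng.standard_normal((n, 2)) * np.sqrt(t)   # B_t ~ N(0, t*I)
X_t = t * b + B_t @ L.T

print(X_t.mean(axis=0), np.cov(X_t.T))   # ~ t*b and ~ t*Sigma
```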

Theorem 1 Let X be a continuous {{\mathbb R}^d}-valued process with stationary independent increments.

Then, there exist unique {b\in{\mathbb R}^d} and {\Sigma\in{\mathbb R}^{d^2}} such that {X_t-X_0} is a {(b,\Sigma)}-Brownian motion.

Continue reading “Continuous Processes with Independent Increments”