# Brownian Bridges

A Brownian bridge can be defined as standard Brownian motion conditioned on hitting zero at a fixed future time T, or as any continuous process with the same distribution as this. Rather than conditioning, a slightly easier approach is to subtract a linear term from the Brownian motion, chosen such that the resulting process hits zero at the time T. This is equivalent, but has the added benefit of being independent of the original Brownian motion at all later times.

Lemma 1 Let X be a standard Brownian motion and ${T > 0}$ be a fixed time. Then, the process

 $\displaystyle B_t = X_t - \frac tTX_T$ (1)

over ${0\le t\le T}$ is independent of ${\{X_t\}_{t\ge T}}$.

Proof: As the processes are joint normal, it is sufficient to show that the covariances between them vanish. So, for times ${s\le T\le t}$, we just need to show that ${{\mathbb E}[B_sX_t]}$ is zero. Using the covariance structure ${{\mathbb E}[X_sX_t]=s\wedge t}$ we obtain,

 $\displaystyle {\mathbb E}[B_sX_t]={\mathbb E}[X_sX_t]-\frac sT{\mathbb E}[X_TX_t]=s-\frac sTT=0$

as required. ⬜
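As a quick sanity check of my own (not part of the proof), the zero-covariance property of Lemma 1 can be verified by Monte Carlo simulation. The times s, T, t and the sample count below are arbitrary choices.

```python
import math
import random

random.seed(0)

s, T, t = 0.5, 1.0, 2.0  # s <= T <= t, chosen arbitrarily
N = 200_000  # number of independent samples

cov = 0.0
for _ in range(N):
    # Build X_s, X_T, X_t from independent Gaussian increments.
    X_s = random.gauss(0.0, math.sqrt(s))
    X_T = X_s + random.gauss(0.0, math.sqrt(T - s))
    X_t = X_T + random.gauss(0.0, math.sqrt(t - T))
    B_s = X_s - (s / T) * X_T  # bridge value at time s
    cov += B_s * X_t
cov /= N  # sample estimate of E[B_s X_t], which should vanish
```

The sample covariance comes out consistent with zero, up to Monte Carlo error of order ${N^{-1/2}}$.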

This leads us to the definition of a Brownian bridge.

Definition 2 A continuous process ${\{B_t\}_{t\in[0,T]}}$ is a Brownian bridge on the interval ${[0,T]}$ if and only if it has the same distribution as ${X_t-\frac tTX_T}$ for a standard Brownian motion X.

In the case that ${T=1}$, B is called a standard Brownian bridge.

There are actually many different ways in which Brownian bridges can be defined, which all lead to the same result.

• As a Brownian motion minus a linear term so that it hits zero at T. This is definition 2.
• As a Brownian motion X scaled as ${tT^{-1/2}X_{T/t-1}}$. See lemma 9 below.
• As a joint normal process with prescribed covariances. See lemma 7 below.
• As a Brownian motion conditioned on hitting zero at T. See lemma 14 below.
• As a Brownian motion restricted to the times before it last hits zero before a fixed positive time T, and rescaled to fit a fixed time interval. See lemma 15 below.
• As a Markov process. See lemma 13 below.
• As a solution to a stochastic differential equation with drift term forcing it to hit zero at T. See lemma 18 below.

There are other constructions beyond these, such as in terms of limits of random walks, although I will not cover those in this post.

# The Riemann Zeta Function and Probability Distributions

The famous Riemann zeta function was first introduced by Riemann in order to describe the distribution of the prime numbers. It is defined by the infinite sum

 \displaystyle \begin{aligned} \zeta(s) &=1+2^{-s}+3^{-s}+4^{-s}+\cdots\\ &=\sum_{n=1}^\infty n^{-s}, \end{aligned} (1)

which is absolutely convergent for all complex s with real part greater than one. One of the first properties of this is that, as shown by Riemann, it extends to an analytic function on the entire complex plane, other than a simple pole at ${s=1}$. By the theory of analytic continuation this extension is necessarily unique, so the importance of the result lies in showing that an extension exists. One way of doing this is to find an alternative expression for the zeta function which is well defined everywhere. For example, it can be expressed as an absolutely convergent integral, as performed by Riemann himself in his original 1859 paper on the subject. This leads to an explicit expression for the zeta function, scaled by an analytic prefactor, as the integral of ${x^s}$ multiplied by a function of x over the range ${ x > 0}$. In fact, this can be done in a way such that the function of x is a probability density function, and hence expresses the Riemann zeta function over the entire complex plane in terms of the generating function ${{\mathbb E}[X^s]}$ of a positive random variable X. The probability distributions involved here are not the standard ones taught to students of probability theory, so may be new to many people. Although these distributions are intimately related to the Riemann zeta function they also, intriguingly, turn up in seemingly unrelated contexts involving Brownian motion.
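For real ${s > 1}$ the Dirichlet series can be summed directly. As an illustrative numerical check (my own addition, with an arbitrary truncation point), a truncated sum plus the integral tail estimate recovers the known value ${\zeta(2)=\pi^2/6}$:

```python
import math

def zeta_partial(s: float, N: int = 100_000) -> float:
    """Approximate zeta(s) for real s > 1 by a truncated Dirichlet
    series plus the integral tail estimate N^(1-s)/(s-1)."""
    total = sum(n ** -s for n in range(1, N + 1))
    return total + N ** (1 - s) / (s - 1)

zeta2 = zeta_partial(2.0)  # should be close to pi^2/6
```

The tail correction reduces the truncation error from order ${1/N}$ to order ${1/N^2}$, so the result agrees with ${\pi^2/6}$ to many decimal places.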

In this post, I derive two probability distributions related to the extension of the Riemann zeta function, and describe some of their properties. I also show how they can be constructed as the sum of a sequence of gamma distributed random variables. For motivation, some examples are given of where they show up in apparently unrelated areas of probability theory, although I do not give proofs of these statements here. For more information, see the 2001 paper Probability laws related to the Jacobi theta and Riemann zeta functions, and Brownian excursions by Biane, Pitman, and Yor.

# Brownian Drawdowns

Here, I apply the theory outlined in the previous post to fully describe the drawdown point process of a standard Brownian motion. In fact, as I will show, the drawdowns can all be constructed from independent copies of a single ‘Brownian excursion’ stochastic process. Recall that we start with a continuous stochastic process X, assumed here to be Brownian motion, and define its running maximum as ${M_t=\sup_{s\le t}X_s}$ and drawdown process ${D_t=M_t-X_t}$. This is as in figure 1 above.

Next, ${D^a}$ was defined to be the drawdown ‘excursion’ over the interval at which the maximum process is equal to the value ${a \ge 0}$. Precisely, if we let ${\tau_a}$ be the first time at which X hits level ${a}$ and ${\tau_{a+}}$ be its right limit ${\tau_{a+}=\lim_{b\downarrow a}\tau_b}$ then,

 $\displaystyle D^a_t=D_{(\tau_a+t)\wedge\tau_{a+}}=a-X_{(\tau_a+t)\wedge\tau_{a+}}.$

Next, a random set S is defined as the collection of all nonzero drawdown excursions indexed by the running maximum,

 $\displaystyle S=\left\{(a,D^a)\colon D^a\not=0\right\}.$

The drawdown excursions corresponding to the sample path from figure 1 are shown in figure 2 below.

As described in the post on semimartingale local times, the joint distribution of the drawdown and running maximum ${(D,M)}$, of a Brownian motion, is identical to the distribution of its absolute value and local time at zero, ${(\lvert X\rvert,L^0)}$. Hence, the point process consisting of the drawdown excursions indexed by the running maximum, and the absolute value of the excursions from zero indexed by the local time, both have the same distribution. So, the theory described in this post applies equally to the excursions away from zero of a Brownian motion.
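One consequence of this identity in distribution is that the drawdown at a fixed time satisfies ${D_1\overset d=\lvert X_1\rvert}$, so ${{\mathbb E}[D_1]=\sqrt{2/\pi}}$. A rough Monte Carlo check of my own (the discretization parameters are arbitrary, and the discrete-time maximum slightly underestimates the continuous one):

```python
import math
import random

random.seed(1)

n_steps, n_paths = 1_000, 1_000
dt = 1.0 / n_steps
sd = math.sqrt(dt)

total = 0.0
for _ in range(n_paths):
    x = 0.0
    m = 0.0  # running maximum M
    for _ in range(n_steps):
        x += random.gauss(0.0, sd)
        if x > m:
            m = x
    total += m - x  # drawdown D_1 = M_1 - X_1

mean_drawdown = total / n_paths  # compare with sqrt(2/pi) ~ 0.798
```

The sample mean lands close to ${\sqrt{2/\pi}\approx0.798}$, up to discretization bias and Monte Carlo noise.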

Before going further, let’s recap some of the technical details. The excursions lie in the space E of continuous paths ${z\colon{\mathbb R}_+\rightarrow{\mathbb R}}$, on which we define a canonical process Z by sampling the path at each time t, ${Z_t(z)=z_t}$. This space is given the topology of uniform convergence over finite time intervals (compact open topology), which makes it into a Polish space, and whose Borel sigma-algebra ${\mathcal E}$ is equal to the sigma-algebra generated by ${\{Z_t\}_{t\ge0}}$. As shown in the previous post, the counting measure ${\xi(A)=\#(S\cap A)}$ is a random point process on ${({\mathbb R}_+\times E,\mathcal B({\mathbb R}_+)\otimes\mathcal E)}$. In fact, it is a Poisson point process, so its distribution is fully determined by its intensity measure ${\mu={\mathbb E}\xi}$.

Theorem 1 If X is a standard Brownian motion, then the drawdown point process ${\xi}$ is Poisson with intensity measure ${\mu=\lambda\otimes\nu}$ where,

• ${\lambda}$ is the standard Lebesgue measure on ${{\mathbb R}_+}$.
• ${\nu}$ is a sigma-finite measure on E given by
 $\displaystyle \nu(f) = \lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[f(Z^{\sigma})]$ (1)

for all bounded continuous maps ${f\colon E\rightarrow{\mathbb R}}$ which vanish on paths of length less than L (some ${L > 0}$). The limit is taken over ${\epsilon > 0}$, ${{\mathbb E}_\epsilon}$ denotes expectation under the measure with respect to which Z is a Brownian motion started at ${\epsilon}$, and ${\sigma}$ is the first time at which Z hits 0. This measure satisfies the following properties,

• ${\nu}$-almost everywhere, there exists a time ${T > 0}$ such that ${Z > 0}$ on ${(0,T)}$ and ${Z=0}$ everywhere else.
• for each ${t > 0}$, the distribution of ${Z_t}$ has density
 $\displaystyle p_t(z)=z\sqrt{\frac 2{\pi t^3}}e^{-\frac{z^2}{2t}}$ (2)

over the range ${z > 0}$.

• over ${t > 0}$, ${Z_t}$ is Markov, with transition function of a Brownian motion stopped at zero.
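Note that the density (2) does not integrate to one: its total mass over ${z > 0}$ is ${\sqrt{2/(\pi t)}}$ which, by the first property above, is the ${\nu}$-measure of excursions lasting beyond time t. This is finite for each ${t > 0}$ but diverges as ${t\rightarrow0}$, consistent with ${\nu}$ being sigma-finite rather than a probability measure. A quick numerical check of this normalization (my own sketch; the grid size and cutoff are arbitrary):

```python
import math

def excursion_mass(t: float, n: int = 200_000) -> float:
    """Numerically integrate p_t(z) = z*sqrt(2/(pi t^3))*exp(-z^2/(2t))
    over z > 0 using a Riemann sum; expected value is sqrt(2/(pi t))."""
    z_max = 10.0 * math.sqrt(t)  # far into the Gaussian tail
    h = z_max / n
    c = math.sqrt(2.0 / (math.pi * t ** 3))
    total = 0.0
    for i in range(1, n):
        z = i * h
        total += c * z * math.exp(-z * z / (2.0 * t))
    return total * h  # endpoint contributions are negligible

mass = excursion_mass(1.0)  # should be close to sqrt(2/pi)
```

The integral can also be done in closed form, ${\int_0^\infty c\,z\,e^{-z^2/(2t)}dz=ct=\sqrt{2/(\pi t)}}$, which the numerics reproduce.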

# Semimartingale Local Times

For a stochastic process X taking values in a state space E, its local time at a point ${x\in E}$ is a measure of the time spent at x. For a continuous time stochastic process, we could try and simply compute the Lebesgue measure of the time at the level,

 $\displaystyle L^x_t=\int_0^t1_{\{X_s=x\}}ds.$ (1)

For processes which hit the level ${x}$ and stick there for some time, this makes some sense. However, if X is a standard Brownian motion, it will always give zero, so is not helpful. Even though X will hit every real value infinitely often, continuity of the normal distribution gives ${{\mathbb P}(X_s=x)=0}$ at each positive time, so that ${L^x_t}$ defined by (1) will have zero expectation.

Rather than the indicator function of ${\{X=x\}}$ as in (1), an alternative is to use the Dirac delta function,

 $\displaystyle L^x_t=\int_0^t\delta(X_s-x)\,ds.$ (2)

Unfortunately, the Dirac delta is not a true function, it is a distribution, so (2) is not a well-defined expression. However, if it can be made rigorous, then it does seem to have some of the properties we would want. For example, the expectation ${{\mathbb E}[\delta(X_s-x)]}$ can be interpreted as the probability density of ${X_s}$ evaluated at ${x}$, which has a positive and finite value, so it should lead to positive and finite local times. Equation (2) still relies on the Lebesgue measure over the time index, so will not behave as we may expect under time changes, and will not make sense for processes without a continuous probability density. A better approach is to integrate with respect to the quadratic variation,

 $\displaystyle L^x_t=\int_0^t\delta(X_s-x)d[X]_s$ (3)

which, for Brownian motion, amounts to the same thing. Although (3) is still not a well-defined expression, since it still involves the Dirac delta, the idea is to come up with a definition which amounts to the same thing in spirit. Important properties that it should satisfy are that it is an adapted, continuous and increasing process with increments supported on the set ${\{X=x\}}$,

 $\displaystyle L^x_t=\int_0^t1_{\{X_s=x\}}dL^x_s.$

Local times are a very useful and interesting part of stochastic calculus, and find important applications in excursion theory, stochastic integration and stochastic differential equations. However, I have not yet covered this subject in my notes, so I do so now. Recalling Ito’s lemma for a function ${f(X)}$ of a semimartingale X, this involves a term of the form ${\int f^{\prime\prime}(X)d[X]}$ and, hence, requires ${f}$ to be twice differentiable. If we were to try to apply the Ito formula to functions which are not twice differentiable, then ${f^{\prime\prime}}$ can be understood in terms of distributions, and delta functions can appear, which brings local times into the picture. In the opposite direction, which I take in this post, we can try to generalise Ito’s formula and invert this to give a meaning to (3).
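To make the delta-function idea concrete before any rigorous treatment, the mollified version ${(2\epsilon)^{-1}\int_0^11_{\{\lvert X_s\rvert<\epsilon\}}ds}$ of (2) can be estimated by simulation. For Brownian motion its expectation is approximately ${(2\epsilon)^{-1}\int_0^1 2\epsilon(2\pi s)^{-1/2}ds=\sqrt{2/\pi}}$, which is the expected local time at zero. A sketch of my own, with all parameters arbitrary choices:

```python
import math
import random

random.seed(2)

n_steps, n_paths = 1_000, 2_000
dt = 1.0 / n_steps
sd = math.sqrt(dt)
eps = 0.05  # width of the mollified delta function

total = 0.0
for _ in range(n_paths):
    x = 0.0
    occupation = 0.0  # Lebesgue time spent in (-eps, eps)
    for _ in range(n_steps):
        x += random.gauss(0.0, sd)
        if abs(x) < eps:
            occupation += dt
    total += occupation / (2.0 * eps)  # mollified local time at 0

mean_local_time = total / n_paths  # compare with sqrt(2/pi) ~ 0.798
```

The sample mean is close to ${\sqrt{2/\pi}\approx0.798}$, illustrating that the mollified occupation time has a positive finite limit even though the raw Lebesgue measure in (1) vanishes.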

# A Process With Hidden Drift

Consider a stochastic process X of the form

 $\displaystyle X_t=W_t+\int_0^t\xi_sds,$ (1)

for a standard Brownian motion W and predictable process ${\xi}$, defined with respect to a filtered probability space ${(\Omega,\mathcal F,\{\mathcal F_t\}_{t\in{\mathbb R}_+},{\mathbb P})}$. For this to make sense, we must assume that ${\int_0^t\lvert\xi_s\rvert ds}$ is almost surely finite at all times, and I will suppose that ${\mathcal F_\cdot}$ is the filtration generated by W.

The question is whether the drift ${\xi}$ can be backed out from knowledge of the process X alone. As I will show with an example, this is not possible. In fact, in our example, X will itself be a standard Brownian motion, even though the drift ${\xi}$ is non-trivial (that is, ${\int\xi dt}$ is not almost surely zero). In this case X has exactly the same distribution as W, so cannot be distinguished from the driftless case with ${\xi=0}$ by looking at the distribution of X alone.

On the face of it, this seems rather counter-intuitive. By standard semimartingale decomposition, it is known that we can always decompose

 $\displaystyle X=M+A$ (2)

for a unique continuous local martingale M starting from zero, and unique continuous FV process A. By uniqueness, ${M=W}$ and ${A=\int\xi dt}$. This allows us to back out the drift ${\xi}$ and, in particular, if the drift is non-trivial then X cannot be a martingale. However, in the semimartingale decomposition, it is required that M is a martingale with respect to the original filtration ${\mathcal F_\cdot}$. If we do not know the filtration ${\mathcal F_\cdot}$, then it might not be possible to construct decomposition (2) from knowledge of X alone. As mentioned above, we will give an example where X is a standard Brownian motion which, in particular, means that it is a martingale under its natural filtration. By the semimartingale decomposition result, it is not possible for X to be an ${\mathcal F_\cdot}$-martingale. A consequence of this is that the natural filtration of X must be strictly smaller than the natural filtration of W.

The inspiration for this post was a comment by Gabe posing the following question: If we take ${\mathbb F}$ to be the filtration generated by a standard Brownian motion W in ${(\Omega,\mathcal F,{\mathbb P})}$, and we define ${\tilde W_t=W_t+\int_0^t\Theta_udu}$, can we find an ${\mathbb F}$-adapted ${\Theta}$ such that the filtration generated by ${\tilde W}$ is smaller than ${\mathbb F}$? Our example gives an affirmative answer.

# Continuous Semimartingales

A stochastic process is a semimartingale if and only if it can be decomposed as the sum of a local martingale and an FV process. This is stated by the Bichteler-Dellacherie theorem or, alternatively, is often taken as the definition of a semimartingale. For continuous semimartingales, which are the subject of this post, things simplify considerably. The terms in the decomposition can be taken to be continuous, in which case they are also unique. As usual, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}$, all processes are real-valued, and two processes are considered to be the same if they are indistinguishable.

Theorem 1 A continuous stochastic process X is a semimartingale if and only if it decomposes as

 $\displaystyle X=M+A$ (1)

for a continuous local martingale M and continuous FV process A. Furthermore, assuming that ${A_0=0}$, decomposition (1) is unique.

Proof: As sums of local martingales and FV processes are semimartingales, X is a semimartingale whenever it satisfies the decomposition (1). Furthermore, if ${X=M+A=M^\prime+A^\prime}$ were two such decompositions with ${A_0=A^\prime_0=0}$ then ${M-M^\prime=A^\prime-A}$ is both a local martingale and a continuous FV process. Therefore, ${A^\prime-A}$ is constant, so ${A=A^\prime}$ and ${M=M^\prime}$.

It just remains to prove the existence of decomposition (1). However, X is continuous and, hence, is locally square integrable. So, Lemmas 4 and 5 of the previous post say that we can decompose ${X=M+A}$ where M is a local martingale, A is an FV process and the quadratic covariation ${[M,A]}$ is a local martingale. As X is continuous we have ${\Delta M=-\Delta A}$ so that, by the properties of covariations,

 $\displaystyle -[M,A]_t=-\sum_{s\le t}\Delta M_s\Delta A_s=\sum_{s\le t}(\Delta A_s)^2.$ (2)

We have shown that ${-[M,A]}$ is a nonnegative local martingale so, in particular, it is a supermartingale. This gives ${\mathbb{E}[-[M,A]_t]\le\mathbb{E}[-[M,A]_0]=0}$. Then (2) implies that ${\Delta A}$ is zero and, hence, A and ${M=X-A}$ are continuous. ⬜

Using decomposition (1), it can be shown that a predictable process ${\xi}$ is X-integrable if and only if it is both M-integrable and A-integrable. Then, the integral with respect to X breaks down into the sum of the integrals with respect to M and A. This greatly simplifies the construction of the stochastic integral for continuous semimartingales. The integral with respect to the continuous FV process A is equivalent to Lebesgue-Stieltjes integration along sample paths, and it is possible to construct the integral with respect to the continuous local martingale M for the full set of M-integrable integrands using the Ito isometry. Many introductions to stochastic calculus focus on integration with respect to continuous semimartingales, which is made much easier because of these results.
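Part of what makes the decomposition identifiable is that the continuous FV part contributes nothing to quadratic variation, so ${[X]=[M]}$. A discretized illustration of my own, taking ${X_t=W_t+t}$ (so ${M=W}$ and ${A_t=t}$; the step count is arbitrary):

```python
import math
import random

random.seed(3)

n = 100_000
dt = 1.0 / n
sd = math.sqrt(dt)

qv = 0.0  # sum of squared increments of X over [0, 1]
for _ in range(n):
    dX = random.gauss(0.0, sd) + dt  # dW plus the drift increment
    qv += dX * dX

# qv approximates [X]_1 = [W]_1 = 1; the drift contributes
# only O(dt), which vanishes as the mesh shrinks.
```

The computed value is close to 1, the quadratic variation of the Brownian term alone.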

Theorem 2 Let ${X=M+A}$ be the decomposition of the continuous semimartingale X into a continuous local martingale M and continuous FV process A. Then, a predictable process ${\xi}$ is X-integrable if and only if

 $\displaystyle \int_0^t\xi^2\,d[M]+\int_0^t\vert\xi\vert\,\vert dA\vert < \infty$ (3)

almost surely, for each time ${t\ge0}$. In that case, ${\xi}$ is both M-integrable and A-integrable and,

 $\displaystyle \int\xi\,dX=\int\xi\,dM+\int\xi\,dA$ (4)

gives the decomposition of ${\int\xi\,dX}$ into its local martingale and FV terms.

# Lévy Processes

Continuous-time stochastic processes with stationary independent increments are known as Lévy processes. In the previous post, it was seen that processes with independent increments are described by three terms — the covariance structure of the Brownian motion component, a drift term, and a measure describing the rate at which jumps occur. Being a special case of independent increments processes, the situation with Lévy processes is similar. However, stationarity of the increments does simplify things a bit. We start with the definition.

Definition 1 (Lévy process) A d-dimensional Lévy process X is a stochastic process taking values in ${{\mathbb R}^d}$ such that

• independent increments: ${X_t-X_s}$ is independent of ${\{X_u\colon u\le s\}}$ for any ${s < t}$.
• stationary increments: ${X_{s+t}-X_s}$ has the same distribution as ${X_t-X_0}$ for any ${s,t>0}$.
• continuity in probability: ${X_s\rightarrow X_t}$ in probability as s tends to t.

More generally, it is possible to define the notion of a Lévy process with respect to a given filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}$. In that case, we also require that X is adapted to the filtration and that ${X_t-X_s}$ is independent of ${\mathcal{F}_s}$ for all ${s < t}$. In particular, if X is a Lévy process according to definition 1 then it is also a Lévy process with respect to its natural filtration ${\mathcal{F}_t=\sigma(X_s\colon s\le t)}$. Note that slightly different definitions are sometimes used by different authors. It is often required that ${X_0}$ is zero and that X has cadlag sample paths. These are minor points and, as will be shown, any process satisfying the definition above will admit a cadlag modification.

The most common example of a Lévy process is Brownian motion, where ${X_t-X_s}$ is normally distributed with zero mean and variance ${t-s}$ independently of ${\mathcal{F}_s}$. Other examples include Poisson processes, compound Poisson processes, the Cauchy process, gamma processes and the variance gamma process.

For example, the symmetric Cauchy distribution on the real numbers with scale parameter ${\gamma > 0}$ has probability density function p and characteristic function ${\phi}$ given by,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle p(x)=\frac{\gamma}{\pi(\gamma^2+x^2)},\smallskip\\ &\displaystyle\phi(a)\equiv{\mathbb E}\left[e^{iaX}\right]=e^{-\gamma\vert a\vert}. \end{array}$ (1)

From the characteristic function it can be seen that if X and Y are independent Cauchy random variables with scale parameters ${\gamma_1}$ and ${\gamma_2}$ respectively then ${X+Y}$ is Cauchy with parameter ${\gamma_1+\gamma_2}$. We can therefore consistently define a stochastic process ${X_t}$ such that ${X_t-X_s}$ has the symmetric Cauchy distribution with parameter ${t-s}$ independent of ${\{X_u\colon u\le s\}}$, for any ${s < t}$. This is called a Cauchy process, which is a purely discontinuous Lévy process. See Figure 1.
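The characteristic function in (1) is easy to test by simulation: a symmetric Cauchy variable can be generated as ${\gamma\tan(\pi(U-\tfrac12))}$ for uniform U, and the empirical value of ${{\mathbb E}[\cos(aX)]}$ (the characteristic function is real by symmetry) should be close to ${e^{-\gamma\vert a\vert}}$. A sketch of my own, with arbitrary parameter choices:

```python
import math
import random

random.seed(4)

gamma, a = 1.0, 1.0
N = 200_000

total = 0.0
for _ in range(N):
    # Inverse-CDF sampling of the symmetric Cauchy distribution.
    x = gamma * math.tan(math.pi * (random.random() - 0.5))
    total += math.cos(a * x)

phi_empirical = total / N  # compare with exp(-gamma * |a|)
```

The empirical characteristic function agrees with ${e^{-\gamma\vert a\vert}}$ up to Monte Carlo error, even though the Cauchy distribution has no finite mean or variance.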

Lévy processes are determined by the triple ${(\Sigma,b,\nu)}$, where ${\Sigma}$ describes the covariance structure of the Brownian motion component, b is the drift component, and ${\nu}$ describes the rate at which jumps occur. The distribution of the process is given by the Lévy-Khintchine formula, equation (3) below.

Theorem 2 (Lévy-Khintchine) Let X be a d-dimensional Lévy process. Then, there is a unique function ${\psi\colon{\mathbb R}^d\rightarrow{\mathbb C}}$ such that

 $\displaystyle {\mathbb E}\left[e^{ia\cdot (X_t-X_0)}\right]=e^{t\psi(a)}$ (2)

for all ${a\in{\mathbb R}^d}$ and ${t\ge0}$. Also, ${\psi(a)}$ can be written as

 $\displaystyle \psi(a)=ia\cdot b-\frac{1}{2}a^{\rm T}\Sigma a+\int _{{\mathbb R}^d}\left(e^{ia\cdot x}-1-\frac{ia\cdot x}{1+\Vert x\Vert}\right)\,d\nu(x)$ (3)

where ${\Sigma}$, b and ${\nu}$ are uniquely determined and satisfy the following,

1. ${\Sigma\in{\mathbb R}^{d^2}}$ is a positive semidefinite matrix.
2. ${b\in{\mathbb R}^d}$.
3. ${\nu}$ is a Borel measure on ${{\mathbb R}^d}$ with ${\nu(\{0\})=0}$ and,
 $\displaystyle \int_{{\mathbb R}^d}\Vert x\Vert^2\wedge 1\,d\nu(x)<\infty.$ (4)

Furthermore, ${(\Sigma,b,\nu)}$ uniquely determine all finite distributions of the process ${X-X_0}$.

Conversely, if ${(\Sigma,b,\nu)}$ is any triple satisfying the three conditions above, then there exists a Lévy process satisfying (2,3).
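For a concrete instance of (2,3), consider a compound Poisson process with rate ${\lambda}$ and standard normal jumps: here ${\Sigma=0}$ and ${\nu}$ is ${\lambda}$ times the N(0,1) law. By symmetry of the jumps the truncation term integrates to zero, so b can be chosen such that ${\psi(a)=\lambda(e^{-a^2/2}-1)}$. A simulation check of my own (rate, test frequency and sample size are arbitrary choices):

```python
import math
import random

random.seed(5)

lam, a = 2.0, 1.0  # jump rate and test frequency
N = 100_000

def poisson(rate: float) -> int:
    """Knuth's method for sampling a Poisson random variable."""
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

total = 0.0
for _ in range(N):
    # X_1 = sum of a Poisson(lam) number of N(0,1) jumps.
    x = sum(random.gauss(0.0, 1.0) for _ in range(poisson(lam)))
    total += math.cos(a * x)

phi_empirical = total / N
# Levy-Khintchine exponent: psi(a) = lam * (exp(-a^2/2) - 1)
phi_exact = math.exp(lam * (math.exp(-a * a / 2.0) - 1.0))
```

The empirical characteristic function of ${X_1}$ matches ${e^{\psi(a)}}$ up to Monte Carlo error.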

# Processes with Independent Increments

In a previous post, it was seen that all continuous processes with independent increments are Gaussian. We move on now to look at a much more general class of independent increments processes which need not have continuous sample paths. Such processes can be completely described by their jump intensities, a Brownian term, and a deterministic drift component. However, this class of processes is large enough to capture the kinds of behaviour that occur for more general jump-diffusion processes. An important subclass is that of Lévy processes, which have independent and stationary increments. Lévy processes will be looked at in more detail in the following post, and include, as special cases, the Cauchy process, gamma processes, the variance gamma process, Poisson processes, compound Poisson processes and Brownian motion.

Recall that a process ${\{X_t\}_{t\ge0}}$ has the independent increments property if ${X_t-X_s}$ is independent of ${\{X_u\colon u\le s\}}$ for all times ${0\le s\le t}$. More generally, we say that X has the independent increments property with respect to an underlying filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}$ if it is adapted and ${X_t-X_s}$ is independent of ${\mathcal{F}_s}$ for all ${s < t}$. In particular, every process with independent increments also satisfies the independent increments property with respect to its natural filtration. Throughout this post, I will assume the existence of such a filtered probability space, and the independent increments property will be understood to be with regard to this space.

The process X is said to be continuous in probability if ${X_s\rightarrow X_t}$ in probability as s tends to t. As we now state, a d-dimensional independent increments process X is uniquely specified by a triple ${(\Sigma,b,\mu)}$ where ${\mu}$ is a measure describing the jumps of X, ${\Sigma}$ determines the covariance structure of the Brownian motion component of X, and b is an additional deterministic drift term.

Theorem 1 Let X be an ${{\mathbb R}^d}$-valued process with independent increments and continuous in probability. Then, there is a unique continuous function ${{\mathbb R}^d\times{\mathbb R}_+\rightarrow{\mathbb C}}$, ${(a,t)\mapsto\psi_t(a)}$ such that ${\psi_0(a)=0}$ and

 $\displaystyle {\mathbb E}\left[e^{ia\cdot (X_t-X_0)}\right]=e^{\psi_t(a)}$ (1)

for all ${a\in{\mathbb R}^d}$ and ${t\ge0}$. Also, ${\psi_t(a)}$ can be written as

 $\displaystyle \psi_t(a)=ia\cdot b_t-\frac{1}{2}a^{\rm T}\Sigma_t a+\int _{{\mathbb R}^d\times[0,t]}\left(e^{ia\cdot x}-1-\frac{ia\cdot x}{1+\Vert x\Vert}\right)\,d\mu(x,s)$ (2)

where ${\Sigma_t}$, ${b_t}$ and ${\mu}$ are uniquely determined and satisfy the following,

1. ${t\mapsto\Sigma_t}$ is a continuous function from ${{\mathbb R}_+}$ to ${{\mathbb R}^{d^2}}$ such that ${\Sigma_0=0}$ and ${\Sigma_t-\Sigma_s}$ is positive semidefinite for all ${t\ge s}$.
2. ${t\mapsto b_t}$ is a continuous function from ${{\mathbb R}_+}$ to ${{\mathbb R}^d}$, with ${b_0=0}$.
3. ${\mu}$ is a Borel measure on ${{\mathbb R}^d\times{\mathbb R}_+}$ with ${\mu(\{0\}\times{\mathbb R}_+)=0}$, ${\mu({\mathbb R}^d\times\{t\})=0}$ for all ${t\ge 0}$ and,
 $\displaystyle \int_{{\mathbb R}^d\times[0,t]}\Vert x\Vert^2\wedge 1\,d\mu(x,s)<\infty.$ (3)

Furthermore, ${(\Sigma,b,\mu)}$ uniquely determine all finite distributions of the process ${X-X_0}$.

Conversely, if ${(\Sigma,b,\mu)}$ is any triple satisfying the three conditions above, then there exists a process with independent increments satisfying (1,2).

# Continuous Processes with Independent Increments

A stochastic process X is said to have independent increments if ${X_t-X_s}$ is independent of ${\{X_u\}_{u\le s}}$ for all ${s\le t}$. For example, standard Brownian motion is a continuous process with independent increments. Brownian motion also has stationary increments, meaning that the distribution of ${X_{t+s}-X_t}$ does not depend on t. In fact, as I will show in this post, up to a scaling factor and linear drift term, Brownian motion is the only such process. That is, any continuous real-valued process X with stationary independent increments can be written as

 $\displaystyle X_t = X_0 + b t + \sigma B_t$ (1)

for a Brownian motion B and constants ${b,\sigma}$. This is not so surprising in light of the central limit theorem. The increment of a process across an interval [s,t] can be viewed as the sum of its increments over a large number of small time intervals partitioning [s,t]. If these terms are independent with relatively small variance, then the central limit theorem does suggest that their sum should be normally distributed. Together with the previous posts on Lévy’s characterization and stochastic time changes, this provides yet more justification for the ubiquitous position of Brownian motion in the theory of continuous-time processes. Consider, for example, stochastic differential equations such as the Langevin equation. The natural requirement for the stochastic driving term in such equations is that it be continuous with stationary independent increments and, therefore, it can be written in terms of Brownian motion.
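The central limit heuristic can be illustrated directly: build ${X_1}$ from n iid increments of size ${\pm1/\sqrt n}$ and check that its fourth moment approaches 3, the value for a standard normal (for Rademacher steps the exact value is ${3-2/n}$). A sketch of my own, with arbitrary sample sizes:

```python
import random

random.seed(6)

n, N = 100, 20_000  # increments per path, number of paths
scale = n ** -0.5

total4 = 0.0
for _ in range(N):
    # X_1 as a sum of n iid scaled Rademacher increments.
    x = sum(scale if random.random() < 0.5 else -scale for _ in range(n))
    total4 += x ** 4

fourth_moment = total4 / N  # exact expectation is 3 - 2/n = 2.98
```

The empirical fourth moment sits near 3, consistent with the increments summing to an approximately normal variable.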

The definition of standard Brownian motion extends naturally to multidimensional processes and general covariance matrices. A standard d-dimensional Brownian motion ${B=(B^1,\ldots,B^d)}$ is a continuous process with stationary independent increments such that ${B_t}$ has the ${N(0,tI)}$ distribution for all ${t\ge 0}$. That is, ${B_t}$ is joint normal with zero mean and covariance matrix tI. From this definition, ${B_t-B_s}$ has the ${N(0,(t-s)I)}$ distribution independently of ${\{B_u\colon u\le s\}}$ for all ${s\le t}$. This definition can be further generalized. Given any ${b\in{\mathbb R}^d}$ and positive semidefinite ${\Sigma\in{\mathbb R}^{d^2}}$, we can consider a d-dimensional process X with continuous paths and stationary independent increments such that ${X_t}$ has the ${N(tb,t\Sigma)}$ distribution for all ${t\ge 0}$. Here, ${b}$ is the drift of the process and ${\Sigma}$ is the ‘instantaneous covariance matrix’. Such processes are sometimes referred to as ${(b,\Sigma)}$-Brownian motions, and all continuous d-dimensional processes starting from zero and with stationary independent increments are of this form.

Theorem 1 Let X be a continuous ${{\mathbb R}^d}$-valued process with stationary independent increments.

Then, there exist unique ${b\in{\mathbb R}^d}$ and ${\Sigma\in{\mathbb R}^{d^2}}$ such that ${X_t-X_0}$ is a ${(b,\Sigma)}$-Brownian motion.

# The Martingale Representation Theorem

The martingale representation theorem states that any martingale adapted with respect to a Brownian motion can be expressed as a stochastic integral with respect to the same Brownian motion.

Theorem 1 Let B be a standard Brownian motion defined on a probability space ${(\Omega,\mathcal{F},{\mathbb P})}$ and ${\{\mathcal{F}_t\}_{t\ge 0}}$ be its natural filtration.

Then, every ${\{\mathcal{F}_t\}}$-local martingale M can be written as

$\displaystyle M = M_0+\int\xi\,dB$

for a predictable, B-integrable, process ${\xi}$.

As stochastic integration preserves the local martingale property for continuous processes, this result characterizes the space of all local martingales starting from 0 defined with respect to the filtration generated by a Brownian motion as being precisely the set of stochastic integrals with respect to that Brownian motion. Equivalently, Brownian motion has the predictable representation property. This result is often used in mathematical finance as the statement that the Black-Scholes model is complete. That is, any contingent claim can be exactly replicated by trading in the underlying stock. This does involve some rather large and somewhat unrealistic assumptions on the behaviour of financial markets and ability to trade continuously without incurring additional costs. However, in this post, I will be concerned only with the mathematical statement and proof of the representation theorem.
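A concrete instance: ${M_t=B_t^2-t}$ is a martingale in the Brownian filtration and, by Ito’s formula, ${M=\int 2B\,dB}$, so the representing integrand is ${\xi=2B}$. A discretized check of my own along a single sample path (the step count is arbitrary):

```python
import math
import random

random.seed(7)

n = 100_000
dt = 1.0 / n
sd = math.sqrt(dt)

b = 0.0         # Brownian path value
integral = 0.0  # discretized stochastic integral of 2B dB
for _ in range(n):
    db = random.gauss(0.0, sd)
    integral += 2.0 * b * db  # left-endpoint (Ito) evaluation
    b += db

martingale_value = b * b - 1.0  # M_1 = B_1^2 - 1
# The two quantities agree up to discretization error.
```

Pathwise, the left-endpoint Riemann sum satisfies ${\sum 2B\Delta B=B_1^2-\sum(\Delta B)^2}$, so the discrepancy is just the deviation of the discrete quadratic variation from 1.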

In more generality, the martingale representation theorem can be stated for a d-dimensional Brownian motion as follows.

Theorem 2 Let ${B=(B^1,\ldots,B^d)}$ be a d-dimensional Brownian motion defined on the filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$, and suppose that ${\{\mathcal{F}_t\}}$ is the natural filtration generated by B and ${\mathcal{F}_0}$.

$\displaystyle \mathcal{F}_t=\sigma\left(\{B_s\colon s\le t\}\cup\mathcal{F}_0\right)$

Then, every ${\{\mathcal{F}_t\}}$-local martingale M can be expressed as

 $\displaystyle M=M_0+\sum_{i=1}^d\int\xi^i\,dB^i$ (1)

for predictable processes ${\xi^i}$ satisfying ${\int_0^t(\xi^i_s)^2\,ds<\infty}$, almost surely, for each ${t\ge0}$.