Failure of Pathwise Integration for FV Processes

Figure 1: A non-pathwise stochastic integral of an FV Process

The motivation for developing a theory of stochastic integration is that many important processes, such as standard Brownian motion, have sample paths which are extraordinarily badly behaved. With probability one, the path of a Brownian motion is nowhere differentiable and has infinite variation over every nonempty time interval. This rules out the application of the techniques of ordinary calculus. In particular, the Stieltjes integral can be applied with respect to integrators of finite variation, but fails to give a well-defined integral with respect to Brownian motion. The Ito stochastic integral was developed to overcome this difficulty, at the cost both of restricting the integrand to be an adapted process and of losing pathwise convergence in the dominated convergence theorem (convergence in probability holds instead).
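To see this bad behaviour numerically, the following rough sketch (a discretized Brownian path with illustrative parameters) computes the total variation and the sum of squared increments of the path over refining partitions of the unit interval; the former grows without bound as the mesh shrinks, while the latter settles down near 1.

    import numpy as np

    # Rough sketch: total variation of a discretized Brownian path blows up
    # under refinement, while the sum of squared increments stays near 1.
    rng = np.random.default_rng(0)
    n_max = 2**16                                   # finest partition of [0, 1]
    dB = rng.normal(0.0, np.sqrt(1.0 / n_max), n_max)
    B = np.concatenate(([0.0], np.cumsum(dB)))      # path sampled on the fine grid

    for k in (2**8, 2**10, 2**12, 2**14, 2**16):
        incr = np.diff(B[:: n_max // k])            # increments over a partition of mesh 1/k
        print(f"mesh 1/{k}: variation ~ {np.abs(incr).sum():8.1f}, "
              f"sum of squares ~ {(incr**2).sum():.3f}")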

However, as I demonstrate in this post, the stochastic integral represents a strict generalization of the pathwise Lebesgue-Stieltjes integral even for processes of finite variation. That is, if V has finite variation, then there can still be predictable integrands {\xi} such that the integral {\int\xi\,dV} is undefined as a Lebesgue-Stieltjes integral on the sample paths, but is well-defined in the Ito sense. Continue reading “Failure of Pathwise Integration for FV Processes”

The Martingale Representation Theorem

The martingale representation theorem states that any martingale adapted with respect to a Brownian motion can be expressed as a stochastic integral with respect to the same Brownian motion.

Theorem 1 Let B be a standard Brownian motion defined on a probability space {(\Omega,\mathcal{F},{\mathbb P})} and {\{\mathcal{F}_t\}_{t\ge 0}} be its natural filtration.

Then, every {\{\mathcal{F}_t\}}-local martingale M can be written as

\displaystyle  M = M_0+\int\xi\,dB

for a predictable, B-integrable, process {\xi}.

As stochastic integration preserves the local martingale property for continuous processes, this result characterizes the space of all local martingales starting from 0, defined with respect to the filtration generated by a Brownian motion, as being precisely the set of stochastic integrals with respect to that Brownian motion. Equivalently, Brownian motion has the predictable representation property. This result is often used in mathematical finance as the statement that the Black-Scholes model is complete. That is, any contingent claim can be exactly replicated by trading in the underlying stock. This does involve some rather large and somewhat unrealistic assumptions on the behaviour of financial markets and the ability to trade continuously without incurring additional costs. However, in this post, I will be concerned only with the mathematical statement and proof of the representation theorem.
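As a toy illustration of Theorem 1: for the claim {B_1^2}, the martingale {M_t={\mathbb E}[B_1^2\vert\mathcal{F}_t]=B_t^2+1-t} is represented by the integrand {\xi_t=2B_t}, since {B_t^2=\int_0^t2B\,dB+t}. The following rough Monte Carlo sketch (Euler discretization, illustrative parameters) checks this identity.

    import numpy as np

    # Rough sketch: check M_1 = M_0 + int_0^1 2B dB by Monte Carlo, where
    # M_t = E[B_1^2 | F_t] = B_t^2 + 1 - t, so M_0 = 1 and M_1 = B_1^2.
    rng = np.random.default_rng(1)
    n_paths, n_steps = 20000, 2000
    dt = 1.0 / n_steps

    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints

    ito_integral = np.sum(2.0 * B_left * dB, axis=1)          # int_0^1 2B dB via Ito sums
    error = B[:, -1]**2 - (1.0 + ito_integral)                # M_1 - (M_0 + int xi dB)
    print("mean absolute error:", np.abs(error).mean())       # small, -> 0 as dt -> 0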

In more generality, the martingale representation theorem can be stated for a d-dimensional Brownian motion as follows.

Theorem 2 Let {B=(B^1,\ldots,B^d)} be a d-dimensional Brownian motion defined on the filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}, and suppose that {\{\mathcal{F}_t\}} is the natural filtration generated by B and {\mathcal{F}_0}.

\displaystyle  \mathcal{F}_t=\sigma\left(\{B_s\colon s\le t\}\cup\mathcal{F}_0\right)

Then, every {\{\mathcal{F}_t\}}-local martingale M can be expressed as

\displaystyle  M=M_0+\sum_{i=1}^d\int\xi^i\,dB^i (1)

for predictable processes {\xi^i} satisfying {\int_0^t(\xi^i_s)^2\,ds<\infty}, almost surely, for each {t\ge0}.

Continue reading “The Martingale Representation Theorem”

Time-Changed Brownian Motion

From the definition of standard Brownian motion B, given any positive constant c, {B_{ct}-B_{cs}} will be normal with mean zero and variance {c(t-s)} for times {t>s\ge 0}. So, scaling the time axis of Brownian motion B to get the new process {B_{ct}} just results in another Brownian motion scaled by the factor {\sqrt{c}}.

This idea is easily generalized. Consider a measurable function {\xi\colon{\mathbb R}_+\rightarrow{\mathbb R}_+} and a Brownian motion B on the filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. So, {\xi} is a deterministic process, not depending on the underlying probability space {\Omega}. If {\theta(t)\equiv\int_0^t\xi^2_s\,ds} is finite for each {t>0} then the stochastic integral {X=\int\xi\,dB} exists. Furthermore, X will be a Gaussian process with independent increments. For piecewise constant integrands, this is a consequence of the fact that linear combinations of jointly normal random variables are themselves normal. The case of arbitrary deterministic integrands follows by taking limits. Also, the Ito isometry says that {X_t-X_s} has variance

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\left(\int_s^t\xi\,dB\right)^2\right]&\displaystyle={\mathbb E}\left[\int_s^t\xi^2_u\,du\right]\smallskip\\ &\displaystyle=\theta(t)-\theta(s)\smallskip\\ &\displaystyle={\mathbb E}\left[(B_{\theta(t)}-B_{\theta(s)})^2\right]. \end{array}

So, {\int\xi\,dB=\int\sqrt{\theta^\prime(t)}\,dB_t} has the same distribution as the time-changed Brownian motion {B_{\theta(t)}}.
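For example, taking the deterministic integrand {\xi_s=s} gives {\theta(t)=t^3/3}, so {\int_0^1\xi\,dB} is centred normal with variance {\theta(1)=1/3}, exactly as for {B_{\theta(1)}}. A rough simulation (Ito sums on a fine grid, with illustrative parameters) is consistent with this.

    import numpy as np

    # Rough sketch: for xi_s = s, theta(t) = t^3/3, so int_0^1 xi dB should
    # be normal with mean 0 and variance theta(1) = 1/3, like B_{theta(1)}.
    rng = np.random.default_rng(2)
    n_paths, n_steps = 50000, 1000
    dt = 1.0 / n_steps
    t_left = np.arange(n_steps) * dt                # left endpoints of the grid

    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    X = (t_left * dB).sum(axis=1)                   # int_0^1 s dB_s via Ito sums

    print("sample mean:", X.mean())                 # ~ 0
    print("sample variance:", X.var())              # ~ 1/3 = theta(1)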

With the help of Lévy’s characterization, these ideas can be extended to more general, non-deterministic, integrands and to stochastic time-changes. In fact, doing this leads to the startling result that all continuous local martingales are just time-changed Brownian motion. Continue reading “Time-Changed Brownian Motion”

Lévy’s Characterization of Brownian Motion

Standard Brownian motion, {\{B_t\}_{t\ge 0}}, is defined to be a real-valued process satisfying the following properties.

  1. {B_0=0}.
  2. {B_t-B_s} is normally distributed with mean 0 and variance {t-s} independently of {\{B_u\colon u\le s\}}, for any {t>s\ge 0}.
  3. B has continuous sample paths.

As always, all that really matters is that these properties hold almost surely. Now, to apply the techniques of stochastic calculus, it is assumed that there is an underlying filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}, which necessitates a further definition: a process B is a Brownian motion on a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})} if, in addition to the above properties, it is also adapted, so that {B_t} is {\mathcal{F}_t}-measurable, and {B_t-B_s} is independent of {\mathcal{F}_s} for each {t>s\ge 0}. Note that the above condition that {B_t-B_s} is independent of {\{B_u\colon u\le s\}} need not be required explicitly, as it follows from the independence from {\mathcal{F}_s}. According to these definitions, a process is a Brownian motion if and only if it is a Brownian motion with respect to its natural filtration.

The property that {B_t-B_s} has zero mean independently of {\mathcal{F}_s} means that Brownian motion is a martingale. Furthermore, we previously calculated its quadratic variation as {[B]_t=t}. An incredibly useful result is that the converse statement holds. That is, Brownian motion is the only local martingale with this quadratic variation. This is known as Lévy’s characterization, and shows that Brownian motion is a particularly general stochastic process, justifying its ubiquitous influence on the study of continuous-time stochastic processes.

Theorem 1 (Lévy’s Characterization of Brownian Motion) Let X be a local martingale with {X_0=0}. Then, the following are equivalent.

  1. X is standard Brownian motion on the underlying filtered probability space.
  2. X is continuous and {X^2_t-t} is a local martingale.
  3. X has quadratic variation {[X]_t=t}.
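A classical application of the theorem: if B is a standard Brownian motion then {X_t=\int_0^t{\rm sgn}(B_s)\,dB_s} is a continuous local martingale with {[X]_t=\int_0^t{\rm sgn}(B_s)^2\,ds=t}, so Theorem 1 implies that X is itself a standard Brownian motion. A rough simulation sketch (Ito sums on a fine grid, illustrative parameters) is consistent with this.

    import numpy as np

    # Rough sketch: X_t = int_0^t sgn(B_s) dB_s has [X]_t = t, so by Levy's
    # characterization X is a standard Brownian motion; check X_1 looks like B_1.
    rng = np.random.default_rng(3)
    n_paths, n_steps = 50000, 1000
    dt = 1.0 / n_steps

    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    B_left = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)[:, :-1]])
    X1 = np.sum(np.sign(B_left) * dB, axis=1)       # X_1 via Ito sums

    print("mean:", X1.mean(), " variance:", X1.var())   # ~ 0 and ~ 1, as for B_1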

Continue reading “Lévy’s Characterization of Brownian Motion”

The Burkholder-Davis-Gundy Inequality

The Burkholder-Davis-Gundy inequality is a remarkable result relating the maximum of a local martingale with its quadratic variation. Recall that [X] denotes the quadratic variation of a process X, and {X^*_t\equiv\sup_{s\le t}\vert X_s\vert} is its maximum process.

Theorem 1 (Burkholder-Davis-Gundy) For any {1\le p<\infty} there exist positive constants {c_p,C_p} such that, for all local martingales X with {X_0=0} and stopping times {\tau}, the following inequality holds.

\displaystyle  c_p{\mathbb E}\left[ [X]^{p/2}_\tau\right]\le{\mathbb E}\left[(X^*_\tau)^p\right]\le C_p{\mathbb E}\left[ [X]^{p/2}_\tau\right]. (1)

Furthermore, for continuous local martingales, this statement holds for all {0<p<\infty}.

A proof of this result is given below. For {p\ge 1}, the theorem can also be stated as follows. The set of all cadlag martingales X starting from zero for which {{\mathbb E}[(X^*_\infty)^p]} is finite is a vector space, and the BDG inequality states that the norms {X\mapsto\Vert X^*_\infty\Vert_p={\mathbb E}[(X^*_\infty)^p]^{1/p}} and {X\mapsto\Vert[X]^{1/2}_\infty\Vert_p} are equivalent.

The special case p=2 is the easiest to handle, and we have previously seen that the BDG inequality does indeed hold in this case with constants {c_2=1}, {C_2=4}. The significance of Theorem 1, then, is that this extends to all {p\ge1}.
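To make the p=2 case concrete, take X=B and {\tau=1}, so that {[B]_1=1} and the inequality asserts that {{\mathbb E}[(B^*_1)^2]} lies between 1 and 4. A rough Monte Carlo sketch (with illustrative parameters) agrees:

    import numpy as np

    # Rough sketch: p = 2 BDG bounds for X = B at time 1, where [B]_1 = 1,
    # so E[(B*_1)^2] should lie between c_2 [B]_1 = 1 and C_2 [B]_1 = 4.
    rng = np.random.default_rng(4)
    n_paths, n_steps = 50000, 1000
    dt = 1.0 / n_steps

    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    max_sq = np.abs(B).max(axis=1)**2               # (B*_1)^2 on each path
    print("E[(B*_1)^2] ~", max_sq.mean())           # lies in the interval [1, 4]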

One reason why the BDG inequality is useful in the theory of stochastic integration is as follows. Whereas the behaviour of the maximum of a stochastic integral is difficult to describe, the quadratic variation satisfies the simple identity {\left[\int\xi\,dX\right]=\int\xi^2\,d[X]}. Recall, also, that stochastic integration preserves the local martingale property but not, in general, the martingale property: integration with respect to a martingale only results in a local martingale, even for bounded integrands. In many cases, however, stochastic integrals are indeed proper martingales. The Ito isometry shows that this is true for square integrable martingales, and the BDG inequality allows us to extend the result to all {L^p}-integrable martingales, for {p> 1}.

Theorem 2 Let X be a cadlag {L^p}-integrable martingale for some {1<p<\infty}, so that {{\mathbb E}[\vert X_t\vert^p]<\infty} for each t. Then, for any bounded predictable process {\xi}, {Y\equiv\int\xi\,dX} is also an {L^p}-integrable martingale.

Continue reading “The Burkholder-Davis-Gundy Inequality”

Continuous Local Martingales

Continuous local martingales are a particularly well behaved subset of the class of all local martingales, and the results of the previous two posts become much simpler in this case. First, the continuous local martingale property is always preserved by stochastic integration.

Theorem 1 If X is a continuous local martingale and {\xi} is X-integrable, then {\int\xi\,dX} is a continuous local martingale.

Proof: As X is continuous, {Y\equiv\int\xi\,dX} will also be continuous and, therefore, locally bounded. Then, by preservation of the local martingale property, Y is a local martingale. ⬜

Next, the quadratic variation of a continuous local martingale X provides us with a necessary and sufficient condition for X-integrability.

Theorem 2 Let X be a continuous local martingale. Then, a predictable process {\xi} is X-integrable if and only if

\displaystyle  \int_0^t\xi^2\,d[X]<\infty

for all {t>0}.
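For a simple illustration of this criterion with deterministic integrands: take X=B, so that {[B]_t=t}, and consider {\xi_s=s^{-1/4}} and {\eta_s=s^{-1/2}}. Then

\displaystyle  \int_0^t\xi_s^2\,d[B]_s=\int_0^ts^{-1/2}\,ds=2\sqrt{t}<\infty, \qquad \int_0^t\eta_s^2\,d[B]_s=\int_0^ts^{-1}\,ds=\infty,

so {\xi} is B-integrable whereas {\eta} is not, even though both are finite away from zero.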

Continue reading “Continuous Local Martingales”

Quadratic Variations and the Ito Isometry

As local martingales are semimartingales, they have a well-defined quadratic variation. These satisfy several useful and well known properties, such as the Ito isometry, which are the subject of this post. First, the covariation [X,Y] allows the product XY of local martingales to be decomposed into local martingale and FV terms. Consider, for example, a standard Brownian motion B. This has quadratic variation {[B]_t=t} and it is easily checked that {B^2_t-t} is a martingale.

Lemma 1 If X and Y are local martingales then XY-[X,Y] is a local martingale.

In particular, {X^2-[X]} is a local martingale for all local martingales X.

Proof: Integration by parts gives

\displaystyle  XY-[X,Y] = X_0Y_0+\int X_-\,dY+\int Y_-\,dX

which, by preservation of the local martingale property, is a local martingale. ⬜
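The simplest nontrivial instance of the Ito isometry mentioned above can also be checked numerically: for {\xi=B}, it gives {{\mathbb E}[(\int_0^1B\,dB)^2]={\mathbb E}[\int_0^1B_t^2\,dt]=1/2}. A rough sketch (Ito sums on a fine grid, illustrative parameters):

    import numpy as np

    # Rough sketch: Ito isometry for xi = B,
    # E[(int_0^1 B dB)^2] = E[int_0^1 B_t^2 dt] = 1/2.
    rng = np.random.default_rng(5)
    n_paths, n_steps = 50000, 1000
    dt = 1.0 / n_steps

    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    B_left = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)[:, :-1]])

    I = np.sum(B_left * dB, axis=1)                 # int_0^1 B dB via Ito sums
    print("E[I^2] ~", (I**2).mean())                # ~ 0.5
    print("E[int B^2 dt] ~", (B_left**2).sum(axis=1).mean() * dt)   # ~ 0.5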

Continue reading “Quadratic Variations and the Ito Isometry”

Preservation of the Local Martingale Property

Now that it has been shown that stochastic integration can be performed with respect to any local martingale, we can move on to the following important result: stochastic integration preserves the local martingale property, at least under very mild hypotheses. That the martingale property is preserved under integration of bounded elementary processes is straightforward. The generalization to predictable integrands can be achieved using a limiting argument. It is necessary, however, to restrict to locally bounded integrands and, for the sake of generality, I start with local sub- and supermartingales.

Theorem 1 Let X be a local submartingale (resp., local supermartingale) and {\xi} be a nonnegative and locally bounded predictable process. Then, {\int\xi\,dX} is a local submartingale (resp., local supermartingale).

Proof: We only need to consider the case where X is a local submartingale, as the result for supermartingales then follows by applying it to -X. By localization, we may suppose that {\xi} is uniformly bounded and that X is a proper submartingale. So, {\vert\xi\vert\le K} for some constant K. Then, as previously shown, there exists a sequence of elementary predictable processes {\vert\xi^n\vert\le K} such that {Y^n\equiv\int\xi^n\,dX} converges to {Y\equiv\int\xi\,dX} in the semimartingale topology and, hence, converges ucp. We may replace {\xi^n} by {\xi^n\vee0} if necessary so that, being nonnegative elementary integrals of a submartingale, {Y^n} will be submartingales. Also, {\vert\Delta Y^n\vert=\vert\xi^n\Delta X\vert\le K\vert\Delta X\vert}. Recall that a cadlag adapted process X is locally integrable if and only if its jump process {\Delta X} is locally integrable, and all local submartingales are locally integrable. So,

\displaystyle  \sup_n\vert\Delta Y^n_t\vert\le K\vert\Delta X_t\vert

is locally integrable. Then, by ucp convergence for local submartingales, Y will satisfy the local submartingale property. ⬜

For local martingales, applying this result to {\pm X} gives,

Theorem 2 Let X be a local martingale and {\xi} be a locally bounded predictable process. Then, {\int\xi\,dX} is a local martingale.

This result can immediately be extended to the class of local {L^p}-integrable martingales, denoted by {\mathcal{M}^p_{\rm loc}}.

Corollary 3 Let {X\in\mathcal{M}^p_{\rm loc}} for some {0< p\le\infty} and {\xi} be a locally bounded predictable process. Then, {\int\xi\,dX\in\mathcal{M}^p_{\rm loc}}.

Continue reading “Preservation of the Local Martingale Property”

Martingales are Integrators

A major foundational result in stochastic calculus is that integration can be performed with respect to any local martingale. In these notes, a semimartingale was defined to be a cadlag adapted process with respect to which a stochastic integral exists satisfying some simple desired properties. Namely, the integral must agree with the explicit formula for elementary integrands and satisfy bounded convergence in probability. Then, the existence of integrals with respect to local martingales can be stated as follows.

Theorem 1 Every local martingale is a semimartingale.

This result can be combined directly with the fact that FV processes are semimartingales.

Corollary 2 Every process of the form X=M+V for a local martingale M and FV process V is a semimartingale.

Working from the classical definition of semimartingales as sums of local martingales and FV processes, the statements of Theorem 1 and Corollary 2 would be tautologies. The aim of this post, then, is to show that stochastic integration is well defined for all classical semimartingales. Put another way, Corollary 2 is equivalent to the statement that classical semimartingales satisfy the semimartingale definition used in these notes. The converse statement will be proven in a later post on the Bichteler-Dellacherie theorem, so the two semimartingale definitions do indeed agree.

Continue reading “Martingales are Integrators”

Local Martingales

Recall from the previous post that a cadlag adapted process {X} is a local martingale if there is a sequence {\tau_n} of stopping times increasing to infinity such that the stopped processes {1_{\{\tau_n>0\}}X^{\tau_n}} are martingales. Local submartingales and local supermartingales are defined similarly.

An example of a local martingale which is not a martingale is given by the ‘double-loss’ gambling strategy. Interestingly, in 18th century France, such strategies were known as martingales, which is the origin of the mathematical term. Suppose that a gambler is betting sums of money, with even odds, on a simple win/lose game. For example, betting that a coin toss comes up heads. He could bet one dollar on the first toss and, if he loses, double his stake to two dollars for the second toss. If he loses again, then he is down three dollars and doubles the stake again to four dollars. If he keeps on doubling the stake after each loss in this way, then he is always gambling one more dollar than the total losses so far. He only needs to continue until the coin eventually does come up heads, at which point he walks away with net winnings of one dollar. This therefore describes a fair game in which, eventually, the gambler is guaranteed to win.

Of course, this is not an effective strategy in practice. The losses grow exponentially and, if he doesn’t win quickly, the gambler will hit his credit limit, in which case he loses everything. All that the strategy achieves is to trade a large probability of winning a dollar against a small chance of losing everything. It does, however, give a simple example of a local martingale which is not a martingale.

The gambler’s winnings can be defined by a stochastic process {\{Z_n\}_{n=1,\ldots}} representing his net gain (or loss) just before the n’th toss. Let {\epsilon_1,\epsilon_2,\ldots} be a sequence of independent random variables with {{\mathbb P}(\epsilon_n=1)={\mathbb P}(\epsilon_n=-1)=1/2}. Here, {\epsilon_n} represents the outcome of the n’th toss, with 1 referring to a head and -1 referring to a tail. Set {Z_1=0} and

\displaystyle  Z_{n}=\begin{cases} 1,&\text{if }Z_{n-1}=1,\\ Z_{n-1}+\epsilon_n(1-Z_{n-1}),&\text{otherwise}. \end{cases}
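A quick simulation of this recursion (a rough sketch with illustrative parameters) shows both sides of the story: the sample average of {Z_n} stays close to zero for each fixed n, even though the vast majority of paths have already banked their one dollar profit, the balance being carried by the rare paths sitting on an exponentially large loss.

    import numpy as np

    # Rough sketch: simulate the doubling strategy Z_n above over 10 tosses.
    # The sample mean stays near 0 (Z is a martingale) even though ~99.9% of
    # paths have won; any still-losing path carries a loss of 2^10 - 1 dollars.
    rng = np.random.default_rng(6)
    n_paths, n_tosses = 200000, 10

    Z = np.zeros(n_paths)
    for _ in range(n_tosses):
        eps = rng.choice([-1.0, 1.0], size=n_paths)   # fair coin tosses
        losing = Z != 1.0                             # paths that have not yet won
        Z[losing] += eps[losing] * (1.0 - Z[losing])  # stake is 1 - Z_{n-1}

    print("sample mean of Z:", Z.mean())              # ~ 0
    print("fraction of winners:", (Z == 1.0).mean())  # ~ 1 - 2**(-10)
    print("worst current loss:", Z.min())             # -(2**10 - 1) if any path is still losing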

The process Z defined in this way is a martingale with respect to its natural filtration, starting at zero and, eventually, ending up equal to one. It can be converted into a local martingale by speeding up the time scale to fit infinitely many tosses into a unit time interval

\displaystyle  X_t=\begin{cases} Z_n,&\text{if }1-1/n\le t<1-1/(n+1),\\ 1,&\text{if }t\ge 1. \end{cases}

This is a martingale with respect to its natural filtration on the time interval {[0,1)}. Letting {\tau_n=\inf\{t\colon\vert X_t\vert\ge n\}}, the optional stopping theorem shows that {X^{\tau_n}_t} is a uniformly bounded martingale on {t<1}, continuous at {t=1}, and constant on {t\ge 1}. It is therefore a martingale, showing that {X} is a local martingale. However, {{\mathbb E}[X_1]=1\not={\mathbb E}[X_0]=0}, so it is not a martingale. Continue reading “Local Martingales”