Failure of Pathwise Integration for FV Processes

Figure 1: A non-pathwise stochastic integral of an FV Process

The motivation for developing a theory of stochastic integration is that many important processes — such as standard Brownian motion — have sample paths which are extraordinarily badly behaved. With probability one, the path of a Brownian motion is nowhere differentiable and has infinite variation over all nonempty time intervals. This rules out the application of the techniques of ordinary calculus. In particular, the Stieltjes integral can be applied with respect to integrators of finite variation, but fails to give a well-defined integral with respect to Brownian motion. The Ito stochastic integral was developed to overcome this difficulty, at the cost both of restricting the integrand to be an adapted process and of losing pathwise convergence in the dominated convergence theorem (convergence in probability holds instead).

However, as I demonstrate in this post, the stochastic integral represents a strict generalization of the pathwise Lebesgue-Stieltjes integral even for processes of finite variation. That is, if V has finite variation, then there can still be predictable integrands {\xi} such that the integral {\int\xi\,dV} is undefined as a Lebesgue-Stieltjes integral on the sample paths, but is well-defined in the Ito sense. Continue reading “Failure of Pathwise Integration for FV Processes”

Stochastic Calculus Examples and Counterexamples

I have been posting my stochastic calculus notes on this blog for some time, and they have now reached a reasonable level of sophistication. The basics of stochastic integration with respect to local martingales and general semimartingales have been introduced from a rigorous mathematical standpoint, and important results such as Ito’s lemma, the Ito isometry, preservation of the local martingale property, and existence of solutions to stochastic differential equations have been covered.

I will now start to also post examples demonstrating results from stochastic calculus, as well as counterexamples showing how the methods can break down when the required conditions are not quite met. As well as knowing precise mathematical statements and understanding how to prove them, I generally feel that it can be just as important to understand the limits of the results and how they can break down. Knowing good counterexamples can help with this. In stochastic calculus, especially, many statements have quite subtle conditions which, if dropped, invalidate the whole result. In particular, measurability and integrability conditions are often required in subtle ways. Knowing some counterexamples can help to understand these issues. Continue reading “Stochastic Calculus Examples and Counterexamples”

Integrating with respect to Brownian motion

In this post I attempt to give a rigorous definition of integration with respect to Brownian motion (as introduced by Itô in 1944), while keeping it as concise as possible. The stochastic integral can also be defined for a much more general class of processes called semimartingales. However, as Brownian motion is such an important special case which can be handled directly, I start with this as the subject of this post. If {\{X_s\}_{s\ge 0}} is a standard Brownian motion defined on a probability space {(\Omega,\mathcal{F},\mathop{\mathbb P})} and {\alpha_s} is a stochastic process, the aim is to define the integral

\displaystyle  \int_0^t\alpha_s\,dX_s.

(1)

In ordinary calculus, this can be approximated by Riemann sums, which converge for continuous integrands whenever the integrator {X} is of finite variation. This leads to the Riemann-Stieltjes integral and, generalizing to measurable integrands, the Lebesgue-Stieltjes integral. Unfortunately, this method does not work for Brownian motion which, as discussed in my previous post, has infinite variation over all nontrivial compact intervals.
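
This failure is easy to see numerically. The following Python sketch (my own illustration, not part of the original post; the grid sizes and random seed are arbitrary choices) simulates a single Brownian path on a fine grid and computes its sampled variation over successively finer partitions of {[0,1]}. Rather than converging, the sums grow roughly like the square root of the number of subintervals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one Brownian path on [0, 1] using 2**18 increments.
n_max = 2**18
increments = rng.normal(0.0, np.sqrt(1.0 / n_max), size=n_max)
X = np.concatenate(([0.0], np.cumsum(increments)))   # X[0] = 0

# Sampled variation, sum of |X_{t_k} - X_{t_{k-1}}|, over coarser and finer partitions.
for k in range(8, 19, 2):
    step = n_max // 2**k              # subsample every `step` grid points
    variation = np.abs(np.diff(X[::step])).sum()
    print(f"{2**k:7d} subintervals: sampled variation ≈ {variation:7.1f}")
```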

The standard approach is to start by writing out the integral explicitly for piecewise constant integrands. If there are times {0=t_0\le t_1\le\cdots\le t_n=t} such that {\alpha_s=\alpha_{t_{k-1}}} for each {s\in(t_{k-1},t_k)} then the integral is given by the summation,

\displaystyle  \int_0^t\alpha\,dX = \sum_{k=1}^n\alpha_{t_{k-1}}(X_{t_k}-X_{t_{k-1}}).

(2)
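
Purely as an illustration (a sketch of my own, not taken from the post), the sum (2) is straightforward to evaluate on a simulated Brownian path. Here the simple integrand is taken to be {1} on {(0,1/2]} and {-2} on {(1/2,1]}, so that (2) reduces to {(X_{1/2}-X_0)-2(X_1-X_{1/2})}.

```python
import numpy as np

def simple_integral(alpha_left, X_partition):
    """Equation (2): sum over k of alpha_{t_{k-1}} * (X_{t_k} - X_{t_{k-1}})."""
    return float(np.sum(alpha_left * np.diff(X_partition)))

# Simulate a Brownian path on a fine grid of [0, 1].
rng = np.random.default_rng(1)
n = 1_000
X = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))))

# A simple integrand: alpha = 1 on (0, 1/2] and alpha = -2 on (1/2, 1],
# so the partition times are t_0 = 0, t_1 = 1/2, t_2 = 1.
alpha_left = np.array([1.0, -2.0])
X_partition = X[[0, n // 2, n]]          # path values at the partition times

value = simple_integral(alpha_left, X_partition)
# For this deterministic integrand, (2) is just (X_{1/2} - X_0) - 2*(X_1 - X_{1/2}).
print(value, (X_partition[1] - X_partition[0]) - 2.0 * (X_partition[2] - X_partition[1]))
```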

We could try to extend to more general integrands by approximating them by piecewise constant processes but, as mentioned above, Brownian motion has infinite variation sample paths, so such approximations will in general fail to converge.
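
The dependence on the sampling point is one concrete way to see this (again a sketch of my own, with arbitrary choices of seed and grid). For a finite variation integrator, Riemann-Stieltjes sums do not depend, in the limit, on where the integrand is sampled in each subinterval. For Brownian motion, taking the integrand {\alpha=X} and sampling it at the left or at the right endpoint of each subinterval gives sums differing by the sum of squared increments, which tends to {t} rather than to zero as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 100_000
dX = rng.normal(0.0, np.sqrt(t / n), n)
X = np.concatenate(([0.0], np.cumsum(dX)))

left_sum = np.sum(X[:-1] * dX)    # integrand X sampled at left endpoints
right_sum = np.sum(X[1:] * dX)    # integrand X sampled at right endpoints

# The two sums differ by the sum of squared increments, which is close to
# t (here 1) and does not vanish as the partition is refined.
print(left_sum, right_sum, right_sum - left_sum, np.sum(dX**2))
```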

Fortunately, when working with random processes, there are a couple of observations which improve the chances of being able to consistently define the integral. They are

  • The integral is not a single real number, but is instead a random variable defined on the probability space. It therefore only has to be defined up to a set of zero probability and not on every possible path of {X}.
  • Rather than requiring limits of integrals to converge for each path of {X} (e.g., dominated convergence), the much weaker convergence in probability can be used.

These observations are still not enough, and the main insight is to only look at integrands which are adapted. That is, the value of {\alpha_t} can only depend on {X} through its values at prior times. This condition is met in most situations where we need to use stochastic calculus, such as with (forward) stochastic differential equations. To make this rigorous, for each time {t\ge 0} let {\mathcal{F}_t} be the sigma-algebra generated by {X_s} for all {s\le t}. This is a filtration ({\mathcal{F}_s\subseteq\mathcal{F}_t} for {s\le t}), and {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},\mathop{\mathbb P})} is referred to as a filtered probability space. Then, {\alpha} is adapted if {\alpha_t} is {\mathcal{F}_t}-measurable for all times {t}. Piecewise constant and left-continuous processes, such as {\alpha} in (2), which are also adapted are commonly referred to as simple processes.

However, as with standard Lebesgue integration, we must further impose a measurability property. A stochastic process {\alpha} can be viewed as a map from the product space {{\mathbb R}_+\times\Omega} to the real numbers, given by {(t,\omega)\mapsto\alpha_t(\omega)}. It is said to be jointly measurable if it is measurable with respect to the product sigma-algebra {\mathcal{B}({\mathbb R}_+)\otimes\mathcal{F}}, where {\mathcal{B}} refers to the Borel sigma-algebra. Finally, it is called progressively measurable, or just progressive, if its restriction to {[0,t]\times\Omega} is {\mathcal{B}([0,t])\otimes\mathcal{F}_t}-measurable for each positive time {t}. It is easily shown that progressively measurable processes are adapted, and the simple processes introduced above are progressive.
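
For completeness, here is the short argument behind the first of these claims. Fix a time {t\ge 0}. The map {\omega\mapsto(t,\omega)} from {(\Omega,\mathcal{F}_t)} to {([0,t]\times\Omega,\mathcal{B}([0,t])\otimes\mathcal{F}_t)} is measurable, since each of its two coordinates is. If {\alpha} is progressive, then {\omega\mapsto\alpha_t(\omega)} is the composition of this map with the restriction of {\alpha} to {[0,t]\times\Omega}, which is {\mathcal{B}([0,t])\otimes\mathcal{F}_t}-measurable by definition. Being a composition of measurable maps, {\alpha_t} is therefore {\mathcal{F}_t}-measurable, so {\alpha} is adapted.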

With these definitions, the stochastic integral of a progressively measurable process {\alpha} with respect to Brownian motion {X} is defined whenever {\int_0^t\alpha_s^2\,ds<\infty} almost surely (that is, with probability one). The integral (1) is a random variable, defined uniquely up to sets of zero probability by the following two properties.

  • The integral agrees with the explicit formula (2) for simple integrands.
  • If {\alpha^n} and {\alpha} are progressive processes such that {\int_0^t(\alpha^n-\alpha)^2\,ds} tends to zero in probability as {n\rightarrow\infty}, then

    \displaystyle  \int_0^t\alpha^n\,dX\rightarrow\int_0^t\alpha\,dX,

    (3)

    where, again, convergence is in probability; a numerical illustration of this is sketched below.
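
As a simple numerical check of these defining properties (again a sketch of my own, not from the post), take {\alpha=X} itself and approximate it by the simple processes {\alpha^n} equal to {X_{t_{k-1}}} on each interval of a partition of {[0,t]} with {n} subintervals. Then {\int_0^t(\alpha^n-\alpha)^2\,ds\rightarrow0} by continuity of the paths, and the sums (2) should converge to the value {\int_0^tX\,dX=(X_t^2-t)/2} given by Ito's lemma.

```python
import numpy as np

rng = np.random.default_rng(3)
t = 1.0
n_fine = 2**16                       # finest grid on which the path is simulated
dX = rng.normal(0.0, np.sqrt(t / n_fine), n_fine)
X = np.concatenate(([0.0], np.cumsum(dX)))

target = (X[-1]**2 - t) / 2          # value of the integral of X dX by Ito's lemma

# Approximate X by the simple processes alpha^n (left-endpoint values on 2**k
# subintervals) and evaluate formula (2); the error shrinks as the mesh is refined.
for k in (4, 8, 12, 16):
    Xk = X[::n_fine // 2**k]         # path sampled on the coarser partition
    approx = np.sum(Xk[:-1] * np.diff(Xk))
    print(f"{2**k:6d} subintervals: integral ≈ {approx:.4f}, error ≈ {approx - target:.1e}")
```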

Continue reading “Integrating with respect to Brownian motion”