Quadratic Variations and Integration by Parts

A major difference between standard integral calculus and stochastic calculus is the existence of quadratic variations and covariations. Such terms show up, for example, in the stochastic version of the integration by parts formula.

For motivation, let us start by considering a standard argument for differentiable processes. The increment of a process {X} over a time step {\delta t>0} can be written as {\delta X_t\equiv X_{t+\delta t}-X_t}. The following identity is easily verified,

\displaystyle  \delta(XY) = X\delta Y + Y\delta X + \delta X\,\delta Y. (1)

Now, divide the time interval {[0,t]} into {n} equal parts. That is, set {t_k=kt/n} for {k=0,1,\ldots,n}. Then, using {\delta t=t/n} and summing equation (1) over these times,

\displaystyle  X_tY_t -X_0Y_0=\sum_{k=0}^{n-1} X_{t_k}\delta Y_{t_k} +\sum_{k=0}^{n-1}Y_{t_k}\delta X_{t_k}+\sum_{k=0}^{n-1}\delta X_{t_k}\delta Y_{t_k}. (2)

If the processes are continuously differentiable, then the final term on the right hand side is a sum of {n} terms, each of order {1/n^2}, and therefore is of order {1/n}. This vanishes in the limit {n\rightarrow\infty}, leading to the integration by parts formula

\displaystyle  X_tY_t-X_0Y_0 = \int_0^t X\,dY + \int_0^t Y\,dX.
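As a quick numerical sanity check (my own illustration, with arbitrarily chosen smooth paths {X_t=t^2} and {Y_t=\sin t}), the cross term in equation (2) is indeed of order {1/n}:

```python
import numpy as np

def cross_term(X, Y, t, n):
    """Sum of dX*dY over the partition t_k = k*t/n of [0, t]."""
    tk = np.linspace(0.0, t, n + 1)
    return float(np.sum(np.diff(X(tk)) * np.diff(Y(tk))))

# Smooth test paths: X_t = t^2 and Y_t = sin(t) on [0, 1].
X = lambda s: s**2
Y = lambda s: np.sin(s)

c100 = cross_term(X, Y, 1.0, 100)
c1000 = cross_term(X, Y, 1.0, 1000)
print(c100, c1000)  # the second is roughly ten times smaller
```

Refining the partition by a factor of ten shrinks the cross term by the same factor, consistent with it being a sum of {n} terms each of order {1/n^2}.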

Now, suppose that {X,Y} are standard Brownian motions. Then, {\delta X,\delta Y} are normal random variables with standard deviation {\sqrt{\delta t}}. It follows that the final term on the right hand side of (2) is a sum of {n} terms each of which is, on average, of order {1/n}. So, even in the limit as {n} goes to infinity, it does not vanish. Consequently, in stochastic calculus, the integration by parts formula requires an additional term, which is called the quadratic covariation (or, just covariation) of {X} and {Y}. Continue reading “Quadratic Variations and Integration by Parts”
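By contrast, the same cross term computed along a simulated Brownian path does not vanish. A minimal simulation (again my own sketch, taking {Y=X} so that the limit is the quadratic variation {[X,X]_t=t}):

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_variation(t, n):
    """Sum of (dX)^2 for one simulated Brownian path on an n-step partition of [0, t]."""
    dX = rng.normal(0.0, np.sqrt(t / n), size=n)
    return float(np.sum(dX**2))

qv = quadratic_variation(1.0, 100_000)
print(qv)  # close to t = 1, not to 0
```

Each squared increment has mean {t/n}, so the {n}-term sum stays near {t} however fine the partition becomes.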

Properties of the Stochastic Integral

In the previous two posts I gave a definition of stochastic integration. This was achieved via an explicit expression for elementary integrands, and extended to all bounded predictable integrands by bounded convergence in probability. The extension to unbounded integrands was done using dominated convergence in probability. Similarly, semimartingales were defined as those cadlag adapted processes for which such an integral exists.

The current post will show how the basic properties of stochastic integration follow from this definition. First, if {V} is a cadlag process whose sample paths are almost surely of finite variation over an interval {[0,t]}, then {\int_0^t\xi\,dV} can be interpreted as a Lebesgue-Stieltjes integral on the sample paths. If the process is also adapted, then it will be a semimartingale and the stochastic integral can be used. Fortunately, these two definitions of integration do agree with each other. The term FV process is used to refer to such cadlag adapted processes which are almost surely of finite variation over all bounded time intervals. The notation {\int_0^t\vert\xi\vert\,\vert dV\vert} represents the Lebesgue-Stieltjes integral of {\vert\xi\vert} with respect to the variation of {V}. Then, the condition for {\xi} to be {V}-integrable in the Lebesgue-Stieltjes sense is precisely that this integral is finite.

Lemma 1 Every FV process {V} is a semimartingale. Furthermore, let {\xi} be a predictable process satisfying

\displaystyle  \int_0^t\vert\xi\vert\,\vert dV\vert<\infty (1)

almost surely, for each {t\ge 0}. Then, {\xi\in L^1(V)} and the stochastic integral {\int\xi\,dV} agrees with the Lebesgue-Stieltjes integral, with probability one.
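To illustrate the agreement asserted in the lemma (a numerical sketch with a deterministic FV path of my choosing, {V_t=t^2}, and integrand {\xi_t=t}), left-endpoint Riemann sums converge to the Lebesgue-Stieltjes value {\int_0^1 s\,d(s^2)=\int_0^1 2s^2\,ds=2/3}:

```python
import numpy as np

def left_riemann_stieltjes(xi, V, t, n):
    """Left-endpoint sum  sum_k xi(t_k) * (V(t_{k+1}) - V(t_k))  over [0, t]."""
    tk = np.linspace(0.0, t, n + 1)
    return float(np.sum(xi(tk[:-1]) * np.diff(V(tk))))

# xi_t = t integrated against the finite variation path V_t = t^2.
approx = left_riemann_stieltjes(lambda s: s, lambda s: s**2, 1.0, 10_000)
print(approx)  # approximately 2/3
```

The left-endpoint sums are exactly the discrete sums appearing in the stochastic construction, so their convergence to the Stieltjes value is the discrete shadow of the lemma.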

Continue reading “Properties of the Stochastic Integral”

Extending the Stochastic Integral

In the previous post, I used the property of bounded convergence in probability to define stochastic integration for bounded predictable integrands. For most applications, this is rather too restrictive, and in this post the integral will be extended to unbounded integrands. As bounded convergence is not much use in this case, the dominated convergence theorem will be used instead.

The first thing to do is to define a class of integrable processes for which the integral with respect to {X} is well-defined. Suppose that {\xi^n} is a sequence of predictable processes dominated by an {X}-integrable process {\alpha}, so that {\vert\xi^n\vert\le\vert\alpha\vert} for each {n}. If this sequence converges to a limit {\xi}, then dominated convergence in probability states that the integrals converge in probability,

\displaystyle  \int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX\ \ \text{(in probability)} (1)

as {n\rightarrow\infty}. Continue reading “Extending the Stochastic Integral”
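A Monte Carlo sketch of this convergence (my own illustration, with hypothetical choices throughout): take {X} to be Brownian motion on {[0,1]}, {\xi_s=s^{-1/4}} an unbounded but {X}-integrable process, and the truncations {\xi^n=\min(\xi,n)} as the approximating sequence. The discretized integrals of {\xi^n} move towards that of {\xi} as the truncation level grows:

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_paths = 10_000, 300
t = np.linspace(0.0, 1.0, n_steps + 1)
xi = t[1:] ** -0.25  # sampled just right of each partition point, avoiding s = 0
dX = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))

# Discretized Ito sums: one value of int_0^1 xi dX per simulated path.
full = dX @ xi
# Mean absolute error of the truncated integrals, for truncation levels 2 and 5.
mean_err = [float(np.mean(np.abs(dX @ np.minimum(xi, c) - full))) for c in (2.0, 5.0)]
print(mean_err)  # decreasing: the truncated integrals approach the full one
```

This is only a discrete caricature of the theorem, but it shows the mechanism: the error comes entirely from the small region where {\xi} exceeds the truncation level, and that region shrinks as the level rises.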

The Stochastic Integral

Having covered the basics of continuous-time processes and filtrations in the previous posts, I now move on to stochastic integration. In standard calculus and ordinary differential equations, a central object of study is the derivative {df/dt} of a function {f(t)}. This does, however, require restricting attention to differentiable functions. By integrating, it is possible to generalize to bounded variation functions. If {f} is such a function and {g} is continuous, then the Riemann-Stieltjes integral {\int_0^tg\,df} is well defined. The Lebesgue-Stieltjes integral further generalizes this to measurable integrands.

However, the kinds of processes studied in stochastic calculus are much less well behaved. For example, with probability one, the sample paths of standard Brownian motion are nowhere differentiable. Furthermore, they have infinite variation over bounded time intervals. Consequently, if {X} is such a process, then the integral {\int_0^t\xi\,dX} is not defined using standard methods.

Stochastic integration with respect to standard Brownian motion was developed by Kiyoshi Ito. This required restricting the class of possible integrands to adapted processes, and the integral can then be constructed using the Ito isometry. This method was later extended to more general square integrable martingales and, then, to the class of semimartingales. It can then be shown that, as with Lebesgue integration, versions of the bounded and dominated convergence theorems are satisfied.

In these notes, a more direct approach is taken. The idea is that we simply define the stochastic integral such that the required elementary properties are satisfied. That is, it should agree with the explicit expressions for certain simple integrands, and should satisfy the bounded and dominated convergence theorems. Much of the theory of stochastic calculus follows directly from these properties, and detailed constructions of the integral are not required for many practical applications. Continue reading “The Stochastic Integral”
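For a concrete picture of the explicit expression on elementary integrands (the function names here are mine): if {\xi=\sum_k Z_k1_{(s_k,s_{k+1}]}} with each {Z_k} known at time {s_k}, the integral is just the sum of {Z_k} times the increments of {X}.

```python
import numpy as np

def elementary_integral(Z, s, X):
    """int xi dX for the elementary integrand xi = sum_k Z[k] on (s[k], s[k+1]]."""
    increments = np.diff(X(np.asarray(s)))
    return float(np.dot(Z, increments))

# Deterministic check with X_t = t: the integral reduces to sum_k Z_k * (s_{k+1} - s_k).
value = elementary_integral([1.0, -2.0, 3.0], [0.0, 0.5, 1.0, 2.0], lambda u: u)
print(value)  # 1*0.5 + (-2)*0.5 + 3*1.0 = 2.5
```

The whole construction then amounts to demanding that this formula, together with bounded and dominated convergence, pins down the integral for all integrands in the admissible class.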