Existence of the Stochastic Integral

The principal reason for introducing the concept of semimartingales in stochastic calculus is that they are precisely those processes with respect to which stochastic integration is well defined. Often, semimartingales are defined in terms of decompositions into martingale and finite variation components. Here, I have taken a different approach, and simply defined semimartingales to be processes with respect to which a stochastic integral exists satisfying some necessary properties. That is, integration must agree with the explicit form for piecewise constant elementary integrands, and must satisfy a bounded convergence condition. If it exists, then such an integral is uniquely defined. Furthermore, whatever method is used to actually construct the integral is unimportant to many applications. Only its elementary properties are required to develop a theory of stochastic calculus, as demonstrated in the previous posts on integration by parts, Ito’s lemma and stochastic differential equations.

The purpose of this post is to give an alternative characterization of semimartingales in terms of a simple and seemingly rather weak condition, stated in Theorem 1 below. The necessity of this condition follows from the requirement of integration to satisfy a bounded convergence property, as was commented on in the original post on stochastic integration. That it is also a sufficient condition is the main focus of this post. The aim is to show that the existence of the stochastic integral follows in a relatively direct way, requiring mainly just standard measure theory and no deep results on stochastic processes.

Recall that throughout these notes, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. To recap, elementary predictable processes are of the form

\displaystyle  \xi_t=Z_01_{\{t=0\}}+\sum_{k=1}^n Z_k1_{\{s_{k}<t\le t_k\}} (1)

for an {\mathcal{F}_0}-measurable random variable {Z_0}, real numbers {s_k,t_k\ge 0} and {\mathcal{F}_{s_k}}-measurable random variables {Z_k}. The integral with respect to any process X up to time t can be written out explicitly as,

\displaystyle  \int_0^t\xi\,dX = \sum_{k=1}^n Z_k(X_{t_k\wedge t}-X_{s_k\wedge t}). (2)
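As an illustration (not part of the development), the finite sum in (2) is straightforward to evaluate on a single sample path. The helper below is hypothetical, with the realized coefficients and the sample path supplied directly:

```python
def elementary_integral(coeffs, starts, ends, X, t):
    """Evaluate the elementary integral (2) on one sample path:
    sum_k Z_k * (X(t_k /\ t) - X(s_k /\ t)), where coeffs, starts, ends
    hold the realized Z_k, s_k, t_k and X gives the path's value at a time."""
    return sum(Z * (X(min(e, t)) - X(min(s, t)))
               for Z, s, e in zip(coeffs, starts, ends))

# Example: integrand equal to 3 on the interval (1, 2], path X_t = t^2
value = elementary_integral([3.0], [1.0], [2.0], lambda u: u ** 2, 2.0)
# 3 * (X_2 - X_1) = 3 * (4 - 1) = 9
```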

The predictable sigma algebra, {\mathcal{P}}, on {{\mathbb R}_+\times\Omega} is generated by the set of left-continuous and adapted processes or, equivalently, by the elementary predictable processes. The idea behind stochastic integration is to extend this to all bounded and predictable integrands {\xi\in{\rm b}\mathcal{P}}. Other than agreeing with (2) for elementary integrands, the only other property required is bounded convergence in probability. That is, if {\xi^n\in{\rm b}\mathcal{P}} is a sequence uniformly bounded by some constant K, so that {\vert\xi^n\vert\le K}, and converging pointwise to a limit {\xi}, then {\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX} in probability. Nothing else is required. Other properties, such as linearity of the integral with respect to the integrand, follow from this, as was previously noted. Note that we are considering two random variables to be the same if they are almost surely equal. Similarly, uniqueness of the stochastic integral means that, for each integrand, the integral is uniquely defined up to probability one.

Using the definition of a semimartingale as a cadlag adapted process with respect to which the stochastic integral is well defined for bounded and predictable integrands, the main result is as follows. To be clear, in this post all stochastic processes are real-valued.

Theorem 1 A cadlag adapted process X is a semimartingale if and only if, for each {t\ge 0}, the set

\displaystyle  \left\{\int_0^t\xi\,dX\colon \xi{\rm\ is\ elementary}, \vert\xi\vert\le 1\right\} (3)

is bounded in probability.

Continue reading “Existence of the Stochastic Integral”

SDEs with Locally Lipschitz Coefficients

In the previous post it was shown how the existence and uniqueness of solutions to stochastic differential equations with Lipschitz continuous coefficients follows from the basic properties of stochastic integration. However, in many applications, it is necessary to weaken this condition a bit. For example, consider the following SDE for a process X

\displaystyle  dX_t =\sigma \vert X_{t-}\vert^{\alpha}\,dZ_t,

where Z is a given semimartingale and {\sigma,\alpha} are fixed real numbers. The function {f(x)\equiv\sigma\vert x\vert^\alpha} has derivative {f^\prime(x)=\sigma\alpha {\rm sgn}(x)|x|^{\alpha-1}} which, for {\alpha>1}, is bounded on bounded subsets of the reals. It follows that f is Lipschitz continuous on such bounded sets. However, the derivative of f diverges to infinity as x goes to infinity, so f is not globally Lipschitz continuous. Similarly, if {\alpha<1} then f is Lipschitz continuous on compact subsets of {{\mathbb R}\setminus\{0\}}, but not globally Lipschitz. To be more widely applicable, the results of the previous post need to be extended to include such locally Lipschitz continuous coefficients.
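To spell out the locally Lipschitz property in the case {\alpha>1}: by the mean value theorem, for {\vert x\vert,\vert y\vert\le K},

\displaystyle  \vert f(x)-f(y)\vert \le \sup_{\vert z\vert\le K}\vert f^\prime(z)\vert\,\vert x-y\vert = \vert\sigma\vert\alpha K^{\alpha-1}\vert x-y\vert,

so f is Lipschitz continuous on {[-K,K]}, but with a Lipschitz constant growing without bound as K increases.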

In fact, uniqueness of solutions to SDEs with locally Lipschitz continuous coefficients follows from the global Lipschitz case. However, solutions need only exist up to a possible explosion time. This is demonstrated by the following simple non-stochastic differential equation

\displaystyle  dX= X^2\,dt.

For initial value {X_0=x>0}, this has the solution {X_t=(x^{-1}-t)^{-1}}, as can be verified by differentiating: {dX_t/dt=(x^{-1}-t)^{-2}=X_t^2}. This solution explodes at time {t=x^{-1}}.

Continue reading “SDEs with Locally Lipschitz Coefficients”

Existence of Solutions to Stochastic Differential Equations

A stochastic differential equation, or SDE for short, is a differential equation driven by one or more stochastic processes. For example, in physics, a Langevin equation describing the motion of a point {X=(X^1,\ldots,X^n)} in n-dimensional phase space is of the form

\displaystyle  \frac{dX^i}{dt} = \sum_{j=1}^m a_{ij}(X)\eta^j(t) + b_i(X). (1)

The dynamics are described by the functions {a_{ij},b_i\colon{\mathbb R}^n\rightarrow{\mathbb R}}, and the problem is to find a solution for X, given its value at an initial time. What distinguishes this from an ordinary differential equation are the random noise terms {\eta^j} and, consequently, solutions to the Langevin equation are stochastic processes. It is difficult to say exactly how {\eta^j} should be defined directly, but we can suppose that their integrals {B^j_t=\int_0^t\eta^j(s)\,ds} are continuous with independent and identically distributed increments. A candidate for such a process is standard Brownian motion and, up to a constant scaling factor and drift term, it can be shown that this is the only possibility. However, Brownian motion is nowhere differentiable, so the original noise terms {\eta^j=dB^j_t/dt} do not have well defined values. Instead, we can rewrite equation (1) in terms of the Brownian motions. This gives the following SDE for an n-dimensional process {X=(X^1,\ldots,X^n)}

\displaystyle  dX^i_t = \sum_{j=1}^m a_{ij}(X_t)\,dB^j_t + b_i(X_t)\,dt (2)

where {B^1,\ldots,B^m} are independent Brownian motions. This is to be understood in terms of the differential notation for stochastic integration. It is known that if the functions {a_{ij}, b_i} are Lipschitz continuous then, given any starting value for X, equation (2) has a unique solution. In this post, I give a proof of this using the basic properties of stochastic integration as introduced over the past few posts.
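Although not needed for the proof, solutions to (2) can be approximated numerically. The following is a minimal sketch of the Euler-Maruyama scheme, specialized to one dimension and a single driving Brownian motion; the coefficient functions, step count and seed are illustrative choices, not anything fixed by the theory.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, steps, rng):
    """Approximate dX = a(X) dB + b(X) dt on [0, T] by the Euler-Maruyama
    scheme (one dimension, one driving Brownian motion)."""
    dt = T / steps
    x = x0
    path = [x0]
    for _ in range(steps):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        x = x + a(x) * dB + b(x) * dt
        path.append(x)
    return np.array(path)

# Illustrative linear coefficients: dX = 0.2 X dB + 0.05 X dt, X_0 = 1
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: 0.2 * x, lambda x: 0.05 * x,
                      1.0, T=1.0, steps=1000, rng=rng)
```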

First, in keeping with these notes, equation (2) can be generalized by replacing the Brownian motions {B^j} and time t by arbitrary semimartingales. As always, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. In integral form, the general SDE for a cadlag adapted process {X=(X^1,\ldots,X^n)} is as follows,

\displaystyle  X^i = N^i + \sum_{j=1}^m\int a_{ij}(X_-)\,dZ^j, (3)

where {Z^1,\ldots,Z^m} are semimartingales, {N=(N^1,\ldots,N^n)} is a given cadlag adapted process, and the left limits {X_-} are used so that the integrands are predictable.

Continue reading “Existence of Solutions to Stochastic Differential Equations”

The Generalized Ito Formula

Recall that Ito’s lemma expresses a twice differentiable function {f} applied to a continuous semimartingale {X} in terms of stochastic integrals, according to the following formula

\displaystyle  f(X) = f(X_0)+\int f^\prime(X)\,dX + \frac{1}{2}\int f^{\prime\prime}(X)\,d[X]. (1)

In this form, the result only applies to continuous processes but, as I will show in this post, it is possible to generalize to arbitrary noncontinuous semimartingales. The result is also referred to as Ito’s lemma or, to distinguish it from the special case for continuous processes, it is known as the generalized Ito formula or generalized Ito’s lemma.

If equation (1) is to be extended to noncontinuous processes, then there are two immediate points to be considered. The first is that if the process {X} is not continuous then it need not be a predictable process, so {f^\prime(X),f^{\prime\prime}(X)} need not be predictable either. So, the integrands in (1) need not be {X}-integrable. To remedy this, we should instead use the left limits {X_{t-}} in the integrands, which are left-continuous and adapted, and therefore predictable. The second point is that the jumps of the left hand side of (1) are equal to {\Delta f(X)} and, on the right, they are {f^\prime(X_-)\Delta X+\frac{1}{2}f^{\prime\prime}(X_-)\Delta X^2}. There is no reason that these should be equal, so (1) cannot possibly hold in general. To fix this, we can simply add the correction to the jump terms on the right hand side,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle f(X_t) =&\displaystyle f(X_0)+\int_0^t f^\prime(X_-)\,dX + \frac{1}{2}\int_0^t f^{\prime\prime}(X_-)\,d[X]\smallskip\\ &\displaystyle +\sum_{s\le t}\left(\Delta f(X_s)-f^\prime(X_{s-})\Delta X_s-\frac{1}{2}f^{\prime\prime}(X_{s-})\Delta X_s^2\right). \end{array} (2)
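As a quick consistency check of (2), consider {f(x)=x^2}. In this case the jump correction vanishes identically,

\displaystyle  \Delta(X^2)-2X_-\Delta X-\Delta X^2 = (X_-+\Delta X)^2-X_-^2-2X_-\Delta X-\Delta X^2 = 0,

and (2) reduces to {X^2_t=X^2_0+2\int_0^t X_-\,dX+[X]_t}, which is just the integration by parts formula applied to {X} with itself.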

Continue reading “The Generalized Ito Formula”

Ito’s Lemma

Ito’s lemma, otherwise known as the Ito formula, expresses functions of stochastic processes in terms of stochastic integrals. In standard calculus, the differential of the composition of functions {f(x), x(t)} satisfies {df(x(t))=f^\prime(x(t))dx(t)}. This is just the chain rule for differentiation or, in integral form, it becomes the change of variables formula.

In stochastic calculus, Ito’s lemma should be used instead. For a twice differentiable function {f} applied to a continuous semimartingale {X}, it states the following,

\displaystyle  df(X) = f^\prime(X)\,dX + \frac{1}{2}f^{\prime\prime}(X)\,dX^2.

This can be understood as a Taylor expansion up to second order in {dX}, where the quadratic term {dX^2\equiv d[X]} is the quadratic variation of the process {X}.
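The survival of the second order term can be seen numerically: on a fine grid, the sum of squared increments of a continuously differentiable sample path is of order {1/n}, while for a Brownian motion it approximates {t}. A minimal sketch (the grid size, test paths and seed are arbitrary choices):

```python
import numpy as np

def sum_squared_increments(path):
    """Sum of squared increments of a discretized path: sum (dX)^2."""
    return float(np.sum(np.diff(path) ** 2))

t, n = 1.0, 100_000
times = np.linspace(0.0, t, n + 1)
smooth = np.sin(times)  # continuously differentiable path: sum is O(1/n)
rng = np.random.default_rng(0)
bm = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(t / n), n))])
qv_smooth = sum_squared_increments(smooth)  # close to 0
qv_bm = sum_squared_increments(bm)          # close to [B]_t = t = 1
```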

A d-dimensional process {X=(X^1,\ldots,X^d)} is said to be a semimartingale if each of its components, {X^i}, is a semimartingale. The first and second order partial derivatives of a function are denoted by {D_if} and {D_{ij}f}, and I make use of the summation convention, where indices {i,j} occurring twice in a single term are summed over. Then, the statement of Ito’s lemma is as follows.

Theorem 1 (Ito’s Lemma) Let {X=(X^1,\ldots,X^d)} be a continuous d-dimensional semimartingale taking values in an open subset {U\subseteq{\mathbb R}^d}. Then, for any twice continuously differentiable function {f\colon U\rightarrow{\mathbb R}}, {f(X)} is a semimartingale and,

\displaystyle  df(X) = D_if(X)\,dX^i + \frac{1}{2}D_{ij}f(X)\,d[X^i,X^j]. (1)
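As a sanity check of (1) in one dimension, take {f(x)=x^2} applied to a Brownian motion {B}, for which it reads {B_t^2=2\int_0^t B\,dB+t}, since {[B]_t=t}. A small simulation (grid size and seed are arbitrary) confirms this up to discretization error:

```python
import numpy as np

t, n = 1.0, 200_000
rng = np.random.default_rng(0)
dB = rng.normal(0.0, np.sqrt(t / n), n)      # Brownian increments
B = np.concatenate([[0.0], np.cumsum(dB)])   # sampled Brownian path
ito_integral = float(np.sum(B[:-1] * dB))    # left-endpoint (Ito) sums
lhs = B[-1] ** 2                 # f(B_t) = B_t^2
rhs = 2.0 * ito_integral + t     # 2 int B dB + [B]_t, with [B]_t = t
# lhs and rhs agree up to the discretization error in [B]_t
```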

Continue reading “Ito’s Lemma”

Properties of Quadratic Variations

Being able to handle quadratic variations and covariations of processes is very important in stochastic calculus. Apart from appearing in the integration by parts formula, they are required for the stochastic change of variables formula, known as Ito’s lemma, which will be the subject of the next post. Quadratic covariations satisfy several simple relations which make them easy to handle, especially in conjunction with the stochastic integral.

Recall from the previous post that the covariation {[X,Y]} is a cadlag adapted process, so that its jumps {\Delta [X,Y]_t\equiv [X,Y]_t-[X,Y]_{t-}} are well defined.

Lemma 1 If {X,Y} are semimartingales then

\displaystyle  \Delta [X,Y]=\Delta X\Delta Y. (1)

In particular, {\Delta [X]=\Delta X^2}.

Proof: Taking the jumps of the integration by parts formula for {XY} gives

\displaystyle  \Delta (XY) = X_{-}\Delta Y + Y_{-}\Delta X + \Delta [X,Y],

and rearranging this gives the result. ⬜

An immediate consequence is that quadratic variations and covariations involving continuous processes are continuous. Another consequence is that the sum of the squares of the jumps of a semimartingale over any bounded interval must be finite.

Corollary 2 Every semimartingale {X} satisfies

\displaystyle  \sum_{s\le t}\Delta X^2_s\le [X]_t<\infty.

Proof: As {[X]} is increasing, the inequality {[X]_t\ge \sum_{s\le t}\Delta [X]_s} holds. Substituting in {\Delta[X]=\Delta X^2} gives the result. ⬜

Next, the following result shows that covariations involving continuous finite variation processes are zero. As Lebesgue-Stieltjes integration is only defined for finite variation processes, this shows why quadratic variations do not play an important role in standard calculus. For noncontinuous finite variation processes, the covariation must have jumps satisfying (1), so will generally be nonzero. In this case, the covariation is just given by the sum over these jumps. Integration with respect to any FV process {V} can be defined as the Lebesgue-Stieltjes integral on the sample paths, which is well defined for locally bounded measurable integrands and, when the integrand is predictable, agrees with the stochastic integral.

Lemma 3 Let {X} be a semimartingale and {V} be an FV process. Their covariation is

\displaystyle  [X,V]_t = \int_0^t \Delta X\,dV = \sum_{s\le t}\Delta X_s\Delta V_s. (2)

In particular, if either of {X} or {V} is continuous then {[X,V]=0}.
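As a rough numerical illustration of the final statement, discretizing the covariation of a Brownian motion sample path with the continuous FV process {V_t=t^2} gives a sum of increment products that is already negligible on a moderate grid (the grid size and seed are arbitrary choices):

```python
import numpy as np

t, n = 1.0, 100_000
rng = np.random.default_rng(0)
dB = rng.normal(0.0, np.sqrt(t / n), n)   # Brownian increments
times = np.linspace(0.0, t, n + 1)
dV = np.diff(times ** 2)                  # increments of V_t = t^2 (FV)
covariation = float(np.sum(dB * dV))      # discretized [B, V]_t, near 0
```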

Continue reading “Properties of Quadratic Variations”

Quadratic Variations and Integration by Parts

A major difference between standard integral calculus and stochastic calculus is the existence of quadratic variations and covariations. Such terms show up, for example, in the stochastic version of the integration by parts formula.

For motivation, let us start by considering a standard argument for differentiable processes. The increment of a process {X} over a time step {\delta t>0} can be written as {\delta X_t\equiv X_{t+\delta t}-X_t}. The following identity is easily verified,

\displaystyle  \delta (XY) = X\delta Y + Y\delta X + \delta X \delta Y. (1)

Now, divide the time interval {[0,t]} into {n} equal parts. That is, set {t_k=kt/n} for {k=0,1,\ldots,n}. Then, using {\delta t=t/n} and summing equation (1) over these times,

\displaystyle  X_tY_t -X_0Y_0=\sum_{k=0}^{n-1} X_{t_k}\delta Y_{t_k} +\sum_{k=0}^{n-1}Y_{t_k}\delta X_{t_k}+\sum_{k=0}^{n-1}\delta X_{t_k}\delta Y_{t_k}. (2)

If the processes are continuously differentiable, then the final term on the right hand side is a sum of {n} terms, each of order {1/n^2}, and therefore is of order {1/n}. This vanishes in the limit {n\rightarrow\infty}, leading to the integration by parts formula

\displaystyle  X_tY_t-X_0Y_0 = \int_0^t X\,dY + \int_0^t Y\,dX.

Now, suppose that {X,Y} are standard Brownian motions. Then, {\delta X,\delta Y} are normal random variables with standard deviation {\sqrt{\delta t}}. It follows that the final term on the right hand side of (2) is a sum of {n} terms each of which is, on average, of order {1/n}. So, even in the limit as {n} goes to infinity, it does not vanish. Consequently, in stochastic calculus, the integration by parts formula requires an additional term, which is called the quadratic covariation (or, just covariation) of {X} and {Y}.

Continue reading “Quadratic Variations and Integration by Parts”

Properties of the Stochastic Integral

In the previous two posts I gave a definition of stochastic integration. This was achieved via an explicit expression for elementary integrands, and extended to all bounded predictable integrands by bounded convergence in probability. The extension to unbounded integrands was done using dominated convergence in probability. Similarly, semimartingales were defined as those cadlag adapted processes for which such an integral exists.

The current post will show how the basic properties of stochastic integration follow from this definition. First, if {V} is a cadlag process whose sample paths are almost surely of finite variation over an interval {[0,t]}, then {\int_0^t\xi\,dV} can be interpreted as a Lebesgue-Stieltjes integral on the sample paths. If the process is also adapted, then it will be a semimartingale and the stochastic integral can be used. Fortunately, these two definitions of integration do agree with each other. The term FV process is used to refer to such cadlag adapted processes which are almost surely of finite variation over all bounded time intervals. The notation {\int_0^t\vert\xi\vert\,\vert dV\vert} represents the Lebesgue-Stieltjes integral of {\vert\xi\vert} with respect to the variation of {V}. Then, the condition for {\xi} to be {V}-integrable in the Lebesgue-Stieltjes sense is precisely that this integral is finite.

Lemma 1 Every FV process {V} is a semimartingale. Furthermore, let {\xi} be a predictable process satisfying

\displaystyle  \int_0^t\vert\xi\vert\,\vert dV\vert<\infty (1)

almost surely, for each {t\ge 0}. Then, {\xi\in L^1(V)} and the stochastic integral {\int\xi\,dV} agrees with the Lebesgue-Stieltjes integral, with probability one.
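To make the pathwise interpretation concrete, here is a rough sketch of a Lebesgue-Stieltjes integral approximated by a left-endpoint sum on a grid, the left endpoints matching the use of predictable integrands; the processes and grid are illustrative choices.

```python
import numpy as np

def stieltjes_sum(xi, V, times):
    """Approximate the pathwise integral int xi dV by a left-endpoint
    Riemann-Stieltjes sum over the given time grid, for an FV path V."""
    return float(np.sum(xi(times[:-1]) * np.diff(V(times))))

# xi_t = t against the FV path V_t = t^2: int_0^1 t d(t^2) = 2/3
times = np.linspace(0.0, 1.0, 100_001)
value = stieltjes_sum(lambda s: s, lambda s: s ** 2, times)
```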

Continue reading “Properties of the Stochastic Integral”

Extending the Stochastic Integral

In the previous post, I used the property of bounded convergence in probability to define stochastic integration for bounded predictable integrands. For most applications, this is rather too restrictive, and in this post the integral will be extended to unbounded integrands. As bounded convergence is not much use in this case, the dominated convergence theorem will be used instead.

The first thing to do is to define a class of integrable processes for which the integral with respect to {X} is well-defined. Suppose that {\xi^n} is a sequence of predictable processes dominated by an {X}-integrable process {\alpha}, so that {\vert\xi^n\vert\le\vert\alpha\vert} for each {n}. If this sequence converges pointwise to a limit {\xi}, then dominated convergence in probability states that the integrals converge in probability,

\displaystyle  \int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX\ \ \text{(in probability)} (1)

as {n\rightarrow\infty}.

Continue reading “Extending the Stochastic Integral”

The Stochastic Integral

Having covered the basics of continuous-time processes and filtrations in the previous posts, I now move on to stochastic integration. In standard calculus and ordinary differential equations, a central object of study is the derivative {df/dt} of a function {f(t)}. This does, however, require restricting attention to differentiable functions. By integrating, it is possible to generalize to bounded variation functions. If {f} is such a function and {g} is continuous, then the Riemann-Stieltjes integral {\int_0^tg\,df} is well defined. The Lebesgue-Stieltjes integral further generalizes this to measurable integrands.

However, the kinds of processes studied in stochastic calculus are much less well behaved. For example, with probability one, the sample paths of standard Brownian motion are nowhere differentiable. Furthermore, they have infinite variation over bounded time intervals. Consequently, if {X} is such a process, then the integral {\int_0^t\xi\,dX} is not defined using standard methods.

Stochastic integration with respect to standard Brownian motion was developed by Kiyoshi Ito. This required restricting the class of possible integrands to be adapted processes, and the integral can then be constructed using the Ito isometry. This method was later extended to more general square integrable martingales and, then, to the class of semimartingales. It can then be shown that, as with Lebesgue integration, versions of the bounded and dominated convergence theorems are satisfied.

In these notes, a more direct approach is taken. The idea is that we simply define the stochastic integral such that the required elementary properties are satisfied. That is, it should agree with the explicit expressions for certain simple integrands, and should satisfy the bounded and dominated convergence theorems. Much of the theory of stochastic calculus follows directly from these properties, and detailed constructions of the integral are not required for many practical applications.

Continue reading “The Stochastic Integral”