# The Stochastic Fubini Theorem

Fubini’s theorem states that, subject to precise conditions, it is possible to switch the order of integration when computing double integrals. In the theory of stochastic calculus, we also encounter double integrals and would like to be able to commute their order. However, since these can involve stochastic integration rather than the usual deterministic case, the classical results are not always applicable. To help with such cases, we could do with a new stochastic version of Fubini’s theorem. Here, I will consider the situation where one integral is of the standard kind with respect to a finite measure, and the other is stochastic. To start, recall the classical Fubini theorem.

Theorem 1 (Fubini) Let ${(E,\mathcal E,\mu)}$ and ${(F,\mathcal F,\nu)}$ be finite measure spaces, and ${f\colon E\times F\rightarrow{\mathbb R}}$ be a bounded ${\mathcal E\otimes\mathcal F}$-measurable function. Then,

 $\displaystyle y\mapsto\int f(x,y)d\mu(x)$

is ${\mathcal F}$-measurable,

 $\displaystyle x\mapsto\int f(x,y)d\nu(y)$

is ${\mathcal E}$-measurable, and,

 $\displaystyle \int\int f(x,y)d\mu(x)d\nu(y)=\int\int f(x,y)d\nu(y)d\mu(x).$ (1)
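As a quick numerical sanity check of (1), the sketch below (my own illustration, not part of the theorem) takes ${f(x,y)=e^{-xy}}$ with both measures being Lebesgue measure on bounded intervals, and approximates the two iterated integrals by midpoint sums. Both orders of integration give the same value up to rounding.

```python
# Numerical illustration of Fubini's theorem: both iterated midpoint sums
# for f(x, y) = exp(-x * y) on [0, 1] x [0, 2] agree. The function and
# intervals are hypothetical choices for the demonstration.
import math

def iterated_integral(f, a, b, c, d, n=400):
    """Midpoint approximation of the iterated integral over [a, b] x [c, d]:
    inner integral over the first argument, outer over the second."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for j in range(n):
        y = c + (j + 0.5) * hy
        inner = sum(f(a + (i + 0.5) * hx, y) for i in range(n)) * hx
        total += inner * hy
    return total

f = lambda x, y: math.exp(-x * y)
I1 = iterated_integral(f, 0.0, 1.0, 0.0, 2.0)                     # dx first, then dy
I2 = iterated_integral(lambda y, x: f(x, y), 0.0, 2.0, 0.0, 1.0)  # dy first, then dx
assert abs(I1 - I2) < 1e-9
```

Since both computations sum the same grid of values in a different order, the agreement here is exact up to floating-point rounding; the theorem is what guarantees the same agreement in the limit.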

# Semimartingale Completeness

A sequence of stochastic processes, ${X^n}$, is said to converge to a process X under the semimartingale topology, as n goes to infinity, if the following conditions are met. First, ${X^n_0}$ should tend to ${X_0}$ in probability. Also, for every sequence ${\xi^n}$ of elementary predictable processes with ${\vert\xi^n\vert\le 1}$,

 $\displaystyle \int_0^t\xi^n\,dX^n-\int_0^t\xi^n\,dX\rightarrow 0$

in probability for all times t. For short, this will be denoted by ${X^n\xrightarrow{\rm sm}X}$.

The semimartingale topology is particularly well suited to the class of semimartingales, and to stochastic integration. Previously, it was shown that the cadlag and adapted processes are complete under semimartingale convergence. In this post, it will be shown that the set of semimartingales is also complete. That is, if a sequence ${X^n}$ of semimartingales converges to a limit X under the semimartingale topology, then X is also a semimartingale.

Theorem 1 The space of semimartingales is complete under the semimartingale topology.

The same is true of the space of stochastic integrals defined with respect to any given semimartingale. In fact, for a semimartingale X, the set of all processes which can be expressed as a stochastic integral ${\int\xi\,dX}$ can be characterized as follows; it is precisely the closure, under the semimartingale topology, of the set of elementary integrals of X. This result was originally due to Memin, using a rather different proof to the one given here. The method used in this post only relies on the elementary properties of stochastic integrals, such as the dominated convergence theorem.

Theorem 2 Let X be a semimartingale. Then, a process Y is of the form ${Y=\int\xi\,dX}$ for some ${\xi\in L^1(X)}$ if and only if there is a sequence ${\xi^n}$ of bounded elementary processes with ${\int\xi^n\,dX\xrightarrow{\rm sm}Y}$.

Writing S for the set of processes of the form ${\int\xi\,dX}$ for bounded elementary ${\xi}$, and ${\bar S}$ for its closure under the semimartingale topology, the statement of the theorem is equivalent to

 $\displaystyle \bar S=\left\{\int\xi\,dX\colon \xi\in L^1(X)\right\}.$ (1)

# Further Properties of the Stochastic Integral

We move on to properties of stochastic integration which, while being fairly elementary, are rather difficult to prove directly from the definitions.

First, recall that for a semimartingale X, the X-integrable processes ${L^1(X)}$ were defined to be predictable processes ${\xi}$ which are ‘good dominators’. That is, if ${\xi^n}$ are bounded predictable processes with ${\vert\xi^n\vert\le\vert\xi\vert}$ and ${\xi^n\rightarrow 0}$ pointwise, then ${\int_0^t\xi^n\,dX}$ tends to zero in probability. This definition is a bit messy. Fortunately, the following result gives a much cleaner characterization of X-integrability.

Theorem 1 Let X be a semimartingale. Then, a predictable process ${\xi}$ is X-integrable if and only if the set

 $\displaystyle \left\{\int_0^t\zeta\,dX\colon\zeta\in{\rm b}\mathcal{P},\vert\zeta\vert\le\vert\xi\vert\right\}$ (1)

is bounded in probability for each ${t\ge 0}$.

# Existence of the Stochastic Integral 2 – Vector Valued Measures

The construction of the stochastic integral given in the previous post made use of a result showing that certain linear maps can be extended to vector valued measures. This result, Theorem 1 below, was separated out from the main argument in the construction of the integral, as it only involves pure measure theory and no stochastic calculus. For completeness of these notes, I provide a proof of this now.

Given a measurable space ${(E,\mathcal{E})}$, ${{\rm b}\mathcal{E}}$ denotes the bounded ${\mathcal{E}}$-measurable functions ${E\rightarrow{\mathbb R}}$. For a topological vector space V, the term V-valued measure refers to linear maps ${\mu\colon{\rm b}\mathcal{E}\rightarrow V}$ satisfying the following bounded convergence property; if a sequence ${\alpha_n\in{\rm b}\mathcal{E}}$ (n=1,2,…) is uniformly bounded, so that ${\vert\alpha_n\vert\le K}$ for a constant K, and converges pointwise to a limit ${\alpha}$, then ${\mu(\alpha_n)\rightarrow\mu(\alpha)}$ in V.

This differs slightly from the definition of V-valued measures as set functions ${\mu\colon\mathcal{E}\rightarrow V}$ satisfying countable additivity. However, any such set function also defines an integral ${\mu(\alpha)\equiv\int\alpha\,d\mu}$ satisfying bounded convergence and, conversely, any linear map ${\mu\colon{\rm b}\mathcal{E}\rightarrow V}$ satisfying bounded convergence defines a countably additive set function ${\mu(A)\equiv \mu(1_A)}$. So, these definitions are essentially the same, but for the purposes of these notes it is more useful to represent V-valued measures in terms of their integrals rather than the values on measurable sets.

In the following, a subalgebra of ${{\rm b}\mathcal{E}}$ is a subset closed under linear combinations and pointwise multiplication, and containing the constant functions.

Theorem 1 Let ${(E,\mathcal{E})}$ be a measurable space, ${\mathcal{A}}$ be a subalgebra of ${{\rm b}\mathcal{E}}$ generating ${\mathcal{E}}$, and V be a complete vector space. Then, a linear map ${\mu\colon\mathcal{A}\rightarrow V}$ extends to a V-valued measure on ${(E,\mathcal{E})}$ if and only if it satisfies the following properties for sequences ${\alpha_n\in\mathcal{A}}$.

1. If ${\alpha_n\downarrow 0}$ then ${\mu(\alpha_n)\rightarrow 0}$.
2. If ${\sum_n\vert\alpha_n\vert\le 1}$, then ${\mu(\alpha_n)\rightarrow 0}$.

# Existence of the Stochastic Integral

The principal reason for introducing the concept of semimartingales in stochastic calculus is that they are precisely those processes with respect to which stochastic integration is well defined. Often, semimartingales are defined in terms of decompositions into martingale and finite variation components. Here, I have taken a different approach, and simply defined semimartingales to be processes with respect to which a stochastic integral exists satisfying some necessary properties. That is, integration must agree with the explicit form for piecewise constant elementary integrands, and must satisfy a bounded convergence condition. If it exists, then such an integral is uniquely defined. Furthermore, whatever method is used to actually construct the integral is unimportant to many applications. Only its elementary properties are required to develop a theory of stochastic calculus, as demonstrated in the previous posts on integration by parts, Ito’s lemma and stochastic differential equations.

The purpose of this post is to give an alternative characterization of semimartingales in terms of a simple and seemingly rather weak condition, stated in Theorem 1 below. The necessity of this condition follows from the requirement of integration to satisfy a bounded convergence property, as was commented on in the original post on stochastic integration. That it is also a sufficient condition is the main focus of this post. The aim is to show that the existence of the stochastic integral follows in a relatively direct way, requiring mainly just standard measure theory and no deep results on stochastic processes.

Recall that throughout these notes, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$. To recap, elementary predictable processes are of the form

 $\displaystyle \xi_t=Z_01_{\{t=0\}}+\sum_{k=1}^n Z_k1_{\{s_k<t\le t_k\}}$ (1)

for an ${\mathcal{F}_0}$-measurable random variable ${Z_0}$, real numbers ${s_k,t_k\ge 0}$ and ${\mathcal{F}_{s_k}}$-measurable random variables ${Z_k}$. The integral with respect to any other process X up to time t can be written out explicitly as,

 $\displaystyle \int_0^t\xi\,dX = \sum_{k=1}^n Z_k(X_{t_k\wedge t}-X_{s_k\wedge t}).$ (2)
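Formula (2) is explicit enough to transcribe directly into code. The following sketch (my own illustration; the path and the particular intervals are hypothetical) evaluates the elementary integral for a path given as a function of time, and checks it by hand on the deterministic path ${X_t=t}$. The ${Z_0 1_{\{t=0\}}}$ term of (1) contributes nothing to the integral and so does not appear.

```python
# Direct implementation of formula (2): the integral of an elementary
# process sum_k Z_k 1_{(s_k, u_k]} against a path X, up to time t.
def elementary_integral(X, Z, s, u, t):
    """Return sum_k Z_k * (X(min(u_k, t)) - X(min(s_k, t))), as in (2).
    X is a function of time; Z, s, u are equal-length lists."""
    return sum(Zk * (X(min(uk, t)) - X(min(sk, t)))
               for Zk, sk, uk in zip(Z, s, u))

# Sanity check with X_t = t: integrating 1 on (0, 1] plus 2 on (1, 3]
# up to time t = 2 gives 1 * (1 - 0) + 2 * (2 - 1) = 3.
X = lambda t: t
value = elementary_integral(X, Z=[1.0, 2.0], s=[0.0, 1.0], u=[1.0, 3.0], t=2.0)
assert value == 3.0
```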

The predictable sigma algebra, ${\mathcal{P}}$, on ${{\mathbb R}_+\times\Omega}$ is generated by the set of left-continuous and adapted processes or, equivalently, by the elementary predictable processes. The idea behind stochastic integration is to extend this to all bounded and predictable integrands ${\xi\in{\rm b}\mathcal{P}}$. Other than agreeing with (2) for elementary integrands, the only other property required is bounded convergence in probability. That is, if ${\xi^n\in{\rm b}\mathcal{P}}$ is a sequence uniformly bounded by some constant K, so that ${\vert\xi^n\vert\le K}$, and converging pointwise to a limit ${\xi}$ then, ${\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX}$ in probability. Nothing else is required. Other properties, such as linearity of the integral with respect to the integrand, follow from this, as was previously noted. Note that we are considering two random variables to be the same if they are almost surely equal. Similarly, uniqueness of the stochastic integral means that, for each integrand, the integral is uniquely defined up to probability one.

Using the definition of a semimartingale as a cadlag adapted process with respect to which the stochastic integral is well defined for bounded and predictable integrands, the main result is as follows. To be clear, in this post all stochastic processes are real-valued.

Theorem 1 A cadlag adapted process X is a semimartingale if and only if, for each ${t\ge 0}$, the set

 $\displaystyle \left\{\int_0^t\xi\,dX\colon \xi{\rm\ is\ elementary}, \vert\xi\vert\le 1\right\}$ (3)

is bounded in probability.

# SDEs with Locally Lipschitz Coefficients

In the previous post it was shown how the existence and uniqueness of solutions to stochastic differential equations with Lipschitz continuous coefficients follows from the basic properties of stochastic integration. However, in many applications, it is necessary to weaken this condition a bit. For example, consider the following SDE for a process X

 $\displaystyle dX_t =\sigma \vert X_{t-}\vert^{\alpha}\,dZ_t,$

where Z is a given semimartingale and ${\sigma,\alpha}$ are fixed real numbers. The function ${f(x)\equiv\sigma\vert x\vert^\alpha}$ has derivative ${f^\prime(x)=\sigma\alpha {\rm sgn}(x)|x|^{\alpha-1}}$ which, for ${\alpha>1}$, is bounded on bounded subsets of the reals. It follows that f is Lipschitz continuous on such bounded sets. However, the derivative of f diverges to infinity as x goes to infinity, so f is not globally Lipschitz continuous. Similarly, if ${\alpha<1}$ then f is Lipschitz continuous on compact subsets of ${{\mathbb R}\setminus\{0\}}$, but not globally Lipschitz. To be more widely applicable, the results of the previous post need to be extended to include such locally Lipschitz continuous coefficients.

In fact, uniqueness of solutions to SDEs with locally Lipschitz continuous coefficients follows from the global Lipschitz case. However, solutions need only exist up to a possible explosion time. This is demonstrated by the following simple non-stochastic differential equation

 $\displaystyle dX= X^2\,dt.$

For initial value ${X_0=x>0}$, this has the solution ${X_t=(x^{-1}-t)^{-1}}$, which explodes at time ${t=x^{-1}}$.
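The explosion is easy to see numerically. The sketch below (my own illustration, not part of the post) applies the explicit Euler scheme to ${dX=X^2\,dt}$ with ${X_0=2}$, so the exact solution ${X_t=(1/2-t)^{-1}}$ blows up at ${t=1/2}$: the scheme tracks the exact solution well away from the explosion time, and grows without bound as t approaches it.

```python
# Explicit Euler scheme for dX = X^2 dt, illustrating explosion in finite
# time. With X_0 = 2 the exact solution is X_t = 1 / (1/2 - t), which
# explodes at t = 1/2.
def euler_x_squared(x0, t_end, n):
    """Explicit Euler for dX = X^2 dt on [0, t_end] with n steps."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        x += x * x * dt
    return x

x0 = 2.0
exact = lambda t: 1.0 / (1.0 / x0 - t)
# Well before the explosion time 0.5, Euler is close to the exact solution...
assert abs(euler_x_squared(x0, 0.25, 200000) - exact(0.25)) < 0.01
# ...but the numerical solution grows very large as t approaches 0.5.
assert euler_x_squared(x0, 0.49, 200000) > 90.0
```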

# Existence of Solutions to Stochastic Differential Equations

A stochastic differential equation, or SDE for short, is a differential equation driven by one or more stochastic processes. For example, in physics, a Langevin equation describing the motion of a point ${X=(X^1,\ldots,X^n)}$ in n-dimensional phase space is of the form

 $\displaystyle \frac{dX^i}{dt} = \sum_{j=1}^m a_{ij}(X)\eta^j(t) + b_i(X).$ (1)

The dynamics are described by the functions ${a_{ij},b_i\colon{\mathbb R}^n\rightarrow{\mathbb R}}$, and the problem is to find a solution for X, given its value at an initial time. What distinguishes this from an ordinary differential equation are the random noise terms ${\eta^j}$ and, consequently, solutions to the Langevin equation are stochastic processes. It is difficult to say exactly how ${\eta^j}$ should be defined directly, but we can suppose that their integrals ${B^j_t=\int_0^t\eta^j(s)\,ds}$ are continuous with independent and identically distributed increments. A candidate for such a process is standard Brownian motion and, up to a constant scaling factor and drift term, it can be shown that this is the only possibility. However, Brownian motion is nowhere differentiable, so the original noise terms ${\eta^j=dB^j_t/dt}$ do not have well defined values. Instead, we can rewrite equation (1) in terms of the Brownian motions. This gives the following SDE for an n-dimensional process ${X=(X^1,\ldots,X^n)}$

 $\displaystyle dX^i_t = \sum_{j=1}^m a_{ij}(X_t)\,dB^j_t + b_i(X_t)\,dt$ (2)

where ${B^1,\ldots,B^m}$ are independent Brownian motions. This is to be understood in terms of the differential notation for stochastic integration. It is known that if the functions ${a_{ij}, b_i}$ are Lipschitz continuous then, given any starting value for X, equation (2) has a unique solution. In this post, I give a proof of this using the basic properties of stochastic integration as introduced over the past few posts.
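The proof in this post is abstract, but a concrete feel for equation (2) can be had from the standard Euler-Maruyama discretization. The sketch below (my own addition, not the construction used in these notes) simulates the one-dimensional case ${dX = a(X)\,dB + b(X)\,dt}$ with Lipschitz coefficients, and checks the Monte Carlo mean against the known expectation for geometric Brownian motion.

```python
# Euler-Maruyama sketch for the one-dimensional case of equation (2),
# dX = a(X) dB + b(X) dt. The coefficients and parameters below are
# hypothetical choices for illustration.
import math
import random

def euler_maruyama(a, b, x0, t_end, n, rng):
    """Simulate one path of dX = a(X) dB + b(X) dt on [0, t_end] with n steps."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x += a(x) * dB + b(x) * dt
    return x

# Geometric Brownian motion: dX = 0.2 X dB + 0.05 X dt, X_0 = 1, for which
# E[X_1] = exp(0.05).
rng = random.Random(0)
paths = [euler_maruyama(lambda x: 0.2 * x, lambda x: 0.05 * x, 1.0, 1.0, 1000, rng)
         for _ in range(2000)]
mean = sum(paths) / len(paths)
assert abs(mean - math.exp(0.05)) < 0.05
```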

First, in keeping with these notes, equation (2) can be generalized by replacing the Brownian motions ${B^j}$ and time t by arbitrary semimartingales ${Z^1,\ldots,Z^m}$, and the initial value by a given cadlag adapted process ${N=(N^1,\ldots,N^n)}$. As always, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$. In integral form, the general SDE for a cadlag adapted process ${X=(X^1,\ldots,X^n)}$ is as follows,

 $\displaystyle X^i = N^i + \sum_{j=1}^m\int a_{ij}(X)\,dZ^j.$ (3)

# The Generalized Ito Formula

Recall that Ito’s lemma expresses a twice continuously differentiable function ${f}$ applied to a continuous semimartingale ${X}$ in terms of stochastic integrals, according to the following formula

 $\displaystyle f(X) = f(X_0)+\int f^\prime(X)\,dX + \frac{1}{2}\int f^{\prime\prime}(X)\,d[X].$ (1)

In this form, the result only applies to continuous processes but, as I will show in this post, it is possible to generalize to arbitrary noncontinuous semimartingales. The result is also referred to as Ito’s lemma or, to distinguish it from the special case for continuous processes, it is known as the generalized Ito formula or generalized Ito’s lemma.

If equation (1) is to be extended to noncontinuous processes then there are two immediate points to be considered. The first is that if the process ${X}$ is not continuous then it need not be a predictable process, so ${f^\prime(X),f^{\prime\prime}(X)}$ need not be predictable either. So, the integrands in (1) need not be ${X}$-integrable. To remedy this, we should instead use the process of left limits ${X_{t-}}$ in the integrands, which is left-continuous and adapted, and therefore predictable. The second point is that the jumps of the left hand side of (1) are equal to ${\Delta f(X)}$ and, on the right, they are ${f^\prime(X_-)\Delta X+\frac{1}{2}f^{\prime\prime}(X_-)\Delta X^2}$. There is no reason that these should be equal, so (1) cannot possibly hold in general. To fix this, we can simply add on the correction to the jump terms on the right hand side,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle f(X_t) =&\displaystyle f(X_0)+\int_0^t f^\prime(X_-)\,dX + \frac{1}{2}\int_0^t f^{\prime\prime}(X_-)\,d[X]\smallskip\\ &\displaystyle +\sum_{s\le t}\left(\Delta f(X_s)-f^\prime(X_{s-})\Delta X_s-\frac{1}{2}f^{\prime\prime}(X_{s-})\Delta X_s^2\right). \end{array}$ (2)
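For a piecewise-constant FV path, every term of (2) reduces to a sum over the jump times, and the identity telescopes exactly. The following sketch (my own illustration; the function and jump sizes are hypothetical) verifies this pathwise for ${f(x)=x^3}$.

```python
# Pathwise check of the generalized Ito formula (2) on a pure-jump,
# piecewise-constant path: the integral terms are sums over jumps, and
# adding the jump correction makes the right-hand side telescope to f(X_t).
def generalized_ito_rhs(f, df, d2f, jumps, x0):
    """Right-hand side of (2) for a piecewise-constant path with given jumps."""
    x = x0
    rhs = f(x0)
    for dx in jumps:
        x_new = x + dx
        # int f'(X_-) dX and (1/2) int f''(X_-) d[X], as sums over jumps:
        rhs += df(x) * dx + 0.5 * d2f(x) * dx * dx
        # jump correction term of (2):
        rhs += (f(x_new) - f(x)) - df(x) * dx - 0.5 * d2f(x) * dx * dx
        x = x_new
    return rhs

f, df, d2f = (lambda x: x ** 3), (lambda x: 3 * x ** 2), (lambda x: 6 * x)
jumps = [1.0, -0.5, 2.0, 0.25]
x_t = 1.0 + sum(jumps)
assert abs(generalized_ito_rhs(f, df, d2f, jumps, 1.0) - f(x_t)) < 1e-9
```

Note that without the correction line, the check fails for any nonlinear f, which is precisely the point of the generalized formula.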

# Ito’s Lemma

Ito’s lemma, otherwise known as the Ito formula, expresses functions of stochastic processes in terms of stochastic integrals. In standard calculus, the differential of the composition of functions ${f(x), x(t)}$ satisfies ${df(x(t))=f^\prime(x(t))dx(t)}$. This is just the chain rule for differentiation or, in integral form, it becomes the change of variables formula.

In stochastic calculus, Ito’s lemma should be used instead. For a twice differentiable function ${f}$ applied to a continuous semimartingale ${X}$, it states the following,

 $\displaystyle df(X) = f^\prime(X)\,dX + \frac{1}{2}f^{\prime\prime}(X)\,dX^2.$

This can be understood as a Taylor expansion up to second order in ${dX}$, where the quadratic term ${dX^2\equiv d[X]}$ is the quadratic variation of the process ${X}$.

A d-dimensional process ${X=(X^1,\ldots,X^d)}$ is said to be a semimartingale if each of its components, ${X^i}$, are semimartingales. The first and second order partial derivatives of a function are denoted by ${D_if}$ and ${D_{ij}f}$, and I make use of the summation convention where indices ${i,j}$ which occur twice in a single term are summed over. Then, the statement of Ito’s lemma is as follows.

Theorem 1 (Ito’s Lemma) Let ${X=(X^1,\ldots,X^d)}$ be a continuous d-dimensional semimartingale taking values in an open subset ${U\subseteq{\mathbb R}^d}$. Then, for any twice continuously differentiable function ${f\colon U\rightarrow{\mathbb R}}$, ${f(X)}$ is a semimartingale and,

 $\displaystyle df(X) = D_if(X)\,dX^i + \frac{1}{2}D_{ij}f(X)\,d[X^i,X^j].$ (1)
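As a concrete illustration (my addition, not part of the post), take ${f(x)=x^2}$ applied to standard Brownian motion B, for which (1) reads ${B_t^2 = 2\int_0^t B\,dB + [B]_t}$ with ${[B]_t=t}$. In discrete time, with left-point Riemann sums, the corresponding identity holds exactly, and the sum of squared increments approximates t.

```python
# Discrete check of Ito's lemma for f(x) = x^2 on a simulated Brownian path:
# B_t^2 = 2 * (left-point sum approximating int B dB) + (sum of squared
# increments approximating [B]_t = t). The first identity is exact algebra;
# the second holds in the limit of fine partitions.
import math
import random

rng = random.Random(1)
n, t = 200000, 1.0
dt = t / n
B = [0.0]
for _ in range(n):
    B.append(B[-1] + rng.gauss(0.0, math.sqrt(dt)))

ito_integral = sum(B[i] * (B[i + 1] - B[i]) for i in range(n))  # int B dB
quad_var = sum((B[i + 1] - B[i]) ** 2 for i in range(n))        # [B]_t approx
assert abs(B[-1] ** 2 - (2 * ito_integral + quad_var)) < 1e-8   # exact identity
assert abs(quad_var - t) < 0.05                                 # [B]_1 close to 1
```

The first assertion is just the telescoping identity ${b^2-a^2 = 2a(b-a)+(b-a)^2}$ summed over the partition; Ito's lemma is what survives of it in the limit.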

# Properties of Quadratic Variations
Being able to handle quadratic variations and covariations of processes is very important in stochastic calculus. Apart from appearing in the integration by parts formula, they are required for the stochastic change of variables formula, known as Ito’s lemma, which will be the subject of the next post. Quadratic covariations satisfy several simple relations which make them easy to handle, especially in conjunction with the stochastic integral.

Recall from the previous post that the covariation ${[X,Y]}$ is a cadlag adapted process, so that its jumps ${\Delta [X,Y]_t\equiv [X,Y]_t-[X,Y]_{t-}}$ are well defined.

Lemma 1 If ${X,Y}$ are semimartingales then

 $\displaystyle \Delta [X,Y]=\Delta X\Delta Y.$ (1)

In particular, ${\Delta [X]=\Delta X^2}$.

Proof: Taking the jumps of the integration by parts formula for ${XY}$ gives

 $\displaystyle \Delta (XY) = X_{-}\Delta Y + Y_{-}\Delta X + \Delta [X,Y],$

and rearranging this gives the result. ⬜

An immediate consequence is that quadratic variations and covariations involving continuous processes are continuous. Another consequence is that the sum of the squares of the jumps of a semimartingale over any bounded interval must be finite.

Corollary 2 Every semimartingale ${X}$ satisfies

 $\displaystyle \sum_{s\le t}\Delta X^2_s\le [X]_t<\infty.$

Proof: As ${[X]}$ is increasing, the inequality ${[X]_t\ge \sum_{s\le t}\Delta [X]_s}$ holds. Substituting in ${\Delta[X]=\Delta X^2}$ gives the result. ⬜

Next, the following result shows that covariations involving continuous finite variation processes are zero. As Lebesgue-Stieltjes integration is only defined for finite variation processes, this shows why quadratic variations do not play an important role in standard calculus. For noncontinuous finite variation processes, the covariation must have jumps satisfying (1), so will generally be nonzero. In this case, the covariation is just given by the sum over these jumps. Integration with respect to any FV process ${V}$ can be defined as the Lebesgue-Stieltjes integral on the sample paths, which is well defined for locally bounded measurable integrands and, when the integrand is predictable, agrees with the stochastic integral.

Lemma 3 Let ${X}$ be a semimartingale and ${V}$ be an FV process. Their covariation is

 $\displaystyle [X,V]_t = \int_0^t \Delta X\,dV = \sum_{s\le t}\Delta X_s\Delta V_s.$ (2)

In particular, if either of ${X}$ or ${V}$ is continuous then ${[X,V]=0}$.
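The final claim of Lemma 3 can be seen on a discretized path (my own sketch, not part of the post): approximating ${[X,V]}$ by sums of products of increments over a fine grid, only the grid cells containing jumps of V contribute, and when X is continuous each such contribution is a single small increment of X, so the approximation is close to zero.

```python
# Grid approximation of [X, V] for a simulated Brownian path X and a
# pure-jump FV process V with two jumps. Only the two cells containing the
# jump times of V contribute, each by (jump size) * (one small increment
# of X), so the approximate covariation is close to [X, V] = 0.
import math
import random

rng = random.Random(2)
n, t = 100000, 1.0
dt = t / n
X = [0.0]
for _ in range(n):
    X.append(X[-1] + rng.gauss(0.0, math.sqrt(dt)))

# V jumps by 1 at time 0.3 and by 2 at time 0.7 (hypothetical example).
V = lambda s: (1.0 if s >= 0.3 else 0.0) + (2.0 if s >= 0.7 else 0.0)
cov = sum((X[i + 1] - X[i]) * (V((i + 1) * dt) - V(i * dt)) for i in range(n))
assert abs(cov) < 0.1
```

Replacing X by a process with a jump at 0.3 or 0.7 would instead produce the product of jumps appearing in (2).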