# Semimartingale Completeness

A sequence of stochastic processes, ${X^n}$, is said to converge to a process X under the semimartingale topology, as n goes to infinity, if the following conditions are met. First, ${X^n_0}$ must tend to ${X_0}$ in probability. Second, for every sequence ${\xi^n}$ of elementary predictable processes with ${\vert\xi^n\vert\le 1}$,

 $\displaystyle \int_0^t\xi^n\,dX^n-\int_0^t\xi^n\,dX\rightarrow 0$

in probability for all times t. For short, this will be denoted by ${X^n\xrightarrow{\rm sm}X}$.

The semimartingale topology is particularly well suited to the class of semimartingales, and to stochastic integration. Previously, it was shown that the space of cadlag adapted processes is complete under semimartingale convergence. In this post, it will be shown that the set of semimartingales is also complete. That is, if a sequence ${X^n}$ of semimartingales converges to a limit X under the semimartingale topology, then X is also a semimartingale.

Theorem 1 The space of semimartingales is complete under the semimartingale topology.

The same is true of the space of stochastic integrals defined with respect to any given semimartingale. In fact, for a semimartingale X, the set of all processes which can be expressed as a stochastic integral ${\int\xi\,dX}$ can be characterized as follows: it is precisely the closure, under the semimartingale topology, of the set of elementary integrals of X. This result was originally due to Memin, using a rather different proof from the one given here. The method used in this post relies only on the elementary properties of stochastic integrals, such as the dominated convergence theorem.

Theorem 2 Let X be a semimartingale. Then, a process Y is of the form ${Y=\int\xi\,dX}$ for some ${\xi\in L^1(X)}$ if and only if there is a sequence ${\xi^n}$ of bounded elementary processes with ${\int\xi^n\,dX\xrightarrow{\rm sm}Y}$.

Writing S for the set of processes of the form ${\int\xi\,dX}$ for bounded elementary ${\xi}$, and ${\bar S}$ for its closure under the semimartingale topology, the statement of the theorem is equivalent to

 $\displaystyle \bar S=\left\{\int\xi\,dX\colon \xi\in L^1(X)\right\}.$ (1)

# Further Properties of the Stochastic Integral

We move on to properties of stochastic integration which, while being fairly elementary, are rather difficult to prove directly from the definitions.

First, recall that for a semimartingale X, the X-integrable processes ${L^1(X)}$ were defined to be predictable processes ${\xi}$ which are ‘good dominators’. That is, if ${\xi^n}$ are bounded predictable processes with ${\vert\xi^n\vert\le\vert\xi\vert}$ and ${\xi^n\rightarrow 0}$ pointwise, then ${\int_0^t\xi^n\,dX}$ tends to zero in probability. This definition is a bit messy. Fortunately, the following result gives a much cleaner characterization of X-integrability.

Theorem 1 Let X be a semimartingale. Then, a predictable process ${\xi}$ is X-integrable if and only if the set

 $\displaystyle \left\{\int_0^t\zeta\,dX\colon\zeta\in{\rm b}\mathcal{P},\vert\zeta\vert\le\vert\xi\vert\right\}$ (1)

is bounded in probability for each ${t\ge 0}$.

# Existence of the Stochastic Integral 2 – Vector Valued Measures

The construction of the stochastic integral given in the previous post made use of a result showing that certain linear maps can be extended to vector valued measures. This result, Theorem 1 below, was separated out from the main argument in the construction of the integral, as it only involves pure measure theory and no stochastic calculus. For completeness of these notes, I provide a proof of this now.

Given a measurable space ${(E,\mathcal{E})}$, ${{\rm b}\mathcal{E}}$ denotes the bounded ${\mathcal{E}}$-measurable functions ${E\rightarrow{\mathbb R}}$. For a topological vector space V, the term V-valued measure refers to linear maps ${\mu\colon{\rm b}\mathcal{E}\rightarrow V}$ satisfying the following bounded convergence property: if a sequence ${\alpha_n\in{\rm b}\mathcal{E}}$ (n=1,2,…) is uniformly bounded, so that ${\vert\alpha_n\vert\le K}$ for a constant K, and converges pointwise to a limit ${\alpha}$, then ${\mu(\alpha_n)\rightarrow\mu(\alpha)}$ in V.

This differs slightly from the definition of V-valued measures as set functions ${\mu\colon\mathcal{E}\rightarrow V}$ satisfying countable additivity. However, any such set function also defines an integral ${\mu(\alpha)\equiv\int\alpha\,d\mu}$ satisfying bounded convergence and, conversely, any linear map ${\mu\colon{\rm b}\mathcal{E}\rightarrow V}$ satisfying bounded convergence defines a countably additive set function ${\mu(A)\equiv \mu(1_A)}$. So, these definitions are essentially the same, but for the purposes of these notes it is more useful to represent V-valued measures in terms of their integrals rather than the values on measurable sets.
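For example, filling in the one-line argument for the converse direction: if ${A_1,A_2,\ldots\in\mathcal{E}}$ are pairwise disjoint with union A, then the partial sums ${\sum_{k=1}^n 1_{A_k}}$ are bounded by 1 and converge pointwise to ${1_A}$, so linearity and bounded convergence give

 $\displaystyle \mu(A)=\mu(1_A)=\lim_{n\rightarrow\infty}\sum_{k=1}^n\mu(1_{A_k})=\sum_{k=1}^\infty\mu(A_k),$

which is countable additivity of the set function ${A\mapsto\mu(1_A)}$.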

In the following, a subalgebra of ${{\rm b}\mathcal{E}}$ is a subset closed under linear combinations and pointwise multiplication, and containing the constant functions.

Theorem 1 Let ${(E,\mathcal{E})}$ be a measurable space, ${\mathcal{A}}$ be a subalgebra of ${{\rm b}\mathcal{E}}$ generating ${\mathcal{E}}$, and V be a complete vector space. Then, a linear map ${\mu\colon\mathcal{A}\rightarrow V}$ extends to a V-valued measure on ${(E,\mathcal{E})}$ if and only if it satisfies the following properties for sequences ${\alpha_n\in\mathcal{A}}$.

1. If ${\alpha_n\downarrow 0}$ then ${\mu(\alpha_n)\rightarrow 0}$.
2. If ${\sum_n\vert\alpha_n\vert\le 1}$, then ${\mu(\alpha_n)\rightarrow 0}$.

# Existence of the Stochastic Integral

The principal reason for introducing the concept of semimartingales in stochastic calculus is that they are precisely those processes with respect to which stochastic integration is well defined. Often, semimartingales are defined in terms of decompositions into martingale and finite variation components. Here, I have taken a different approach, and simply defined semimartingales to be processes with respect to which a stochastic integral exists satisfying some necessary properties. That is, integration must agree with the explicit form for piecewise constant elementary integrands, and must satisfy a bounded convergence condition. If it exists, then such an integral is uniquely defined. Furthermore, whatever method is used to actually construct the integral is unimportant to many applications. Only its elementary properties are required to develop a theory of stochastic calculus, as demonstrated in the previous posts on integration by parts, Ito’s lemma and stochastic differential equations.

The purpose of this post is to give an alternative characterization of semimartingales in terms of a simple and seemingly rather weak condition, stated in Theorem 1 below. The necessity of this condition follows from the requirement of integration to satisfy a bounded convergence property, as was commented on in the original post on stochastic integration. That it is also a sufficient condition is the main focus of this post. The aim is to show that the existence of the stochastic integral follows in a relatively direct way, requiring mainly just standard measure theory and no deep results on stochastic processes.

Recall that throughout these notes, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$. To recap, elementary predictable processes are of the form

 $\displaystyle \xi_t=Z_01_{\{t=0\}}+\sum_{k=1}^n Z_k1_{\{s_k<t\le t_k\}}$ (1)

for an ${\mathcal{F}_0}$-measurable random variable ${Z_0}$, real numbers ${s_k,t_k\ge 0}$ and ${\mathcal{F}_{s_k}}$-measurable random variables ${Z_k}$. The integral of ${\xi}$ with respect to any process X up to time t can be written out explicitly as,

 $\displaystyle \int_0^t\xi\,dX = \sum_{k=1}^n Z_k(X_{t_k\wedge t}-X_{s_k\wedge t}).$ (2)

The predictable sigma algebra, ${\mathcal{P}}$, on ${{\mathbb R}_+\times\Omega}$ is generated by the set of left-continuous and adapted processes or, equivalently, by the elementary predictable processes. The idea behind stochastic integration is to extend this to all bounded and predictable integrands ${\xi\in{\rm b}\mathcal{P}}$. Other than agreeing with (2) for elementary integrands, the only other property required is bounded convergence in probability. That is, if ${\xi^n\in{\rm b}\mathcal{P}}$ is a sequence uniformly bounded by some constant K, so that ${\vert\xi^n\vert\le K}$, and converging pointwise to a limit ${\xi}$, then ${\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX}$ in probability. Nothing else is required. Other properties, such as linearity of the integral with respect to the integrand, follow from this, as was previously noted. Note that we are considering two random variables to be the same if they are almost surely equal. Similarly, uniqueness of the stochastic integral means that, for each integrand, the integral is uniquely defined up to probability one.
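Since the elementary integral (2) is just a finite sum, it can be evaluated directly. The following sketch (my own illustration; the path and integrand are hypothetical, not from the post) computes ${\int_0^t\xi\,dX}$ from a sampled path:

```python
# Evaluate the elementary integral (2): sum_k Z_k * (X_{t_k ^ t} - X_{s_k ^ t}),
# given the values of the path X at the times appearing in the integrand.
def elementary_integral(X, terms, t):
    """X: dict mapping times to path values,
    terms: list of (Z_k, s_k, t_k) triples, t: upper integration limit."""
    return sum(Z * (X[min(tk, t)] - X[min(sk, t)]) for Z, sk, tk in terms)

# A deterministic path X_u = u^2, sampled at the times needed below.
X = {u: u ** 2 for u in (0.0, 0.5, 1.0, 1.5, 2.0)}
# The integrand xi = 2 on (0, 1] and -1 on (1, 2] (the Z_0 term at t = 0
# never contributes to the integral).
terms = [(2.0, 0.0, 1.0), (-1.0, 1.0, 2.0)]
print(elementary_integral(X, terms, 1.5))  # 2*(1 - 0) - 1*(1.5^2 - 1) = 0.75
```

Note that truncating both ${s_k}$ and ${t_k}$ at t, exactly as in (2), automatically discards terms whose interval lies entirely beyond the integration limit.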

Using the definition of a semimartingale as a cadlag adapted process with respect to which the stochastic integral is well defined for bounded and predictable integrands, the main result is as follows. To be clear, in this post all stochastic processes are real-valued.

Theorem 1 A cadlag adapted process X is a semimartingale if and only if, for each ${t\ge 0}$, the set

 $\displaystyle \left\{\int_0^t\xi\,dX\colon \xi{\rm\ is\ elementary}, \vert\xi\vert\le 1\right\}$ (3)

is bounded in probability.
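As a quick illustration (a standard argument, not spelled out in this excerpt), standard Brownian motion W satisfies this condition. For elementary ${\vert\xi\vert\le 1}$, the Ito isometry gives

 $\displaystyle {\mathbb E}\left[\left(\int_0^t\xi\,dW\right)^2\right]={\mathbb E}\left[\int_0^t\xi_s^2\,ds\right]\le t,$

so Chebyshev’s inequality bounds ${{\mathbb P}(\vert\int_0^t\xi\,dW\vert\ge K)}$ by ${t/K^2}$, uniformly over all such ${\xi}$. Taking K large shows that the set (3) is bounded in probability, so Brownian motion is a semimartingale.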

# Properties of the Stochastic Integral

In the previous two posts I gave a definition of stochastic integration. This was achieved via an explicit expression for elementary integrands, and extended to all bounded predictable integrands by bounded convergence in probability. The extension to unbounded integrands was done using dominated convergence in probability. Similarly, semimartingales were defined as those cadlag adapted processes for which such an integral exists.

The current post will show how the basic properties of stochastic integration follow from this definition. First, if ${V}$ is a cadlag process whose sample paths are almost surely of finite variation over an interval ${[0,t]}$, then ${\int_0^t\xi\,dV}$ can be interpreted as a Lebesgue-Stieltjes integral on the sample paths. If the process is also adapted, then it will be a semimartingale and the stochastic integral can be used. Fortunately, these two definitions of integration do agree with each other. The term FV process is used to refer to such cadlag adapted processes which are almost surely of finite variation over all bounded time intervals. The notation ${\int_0^t\vert\xi\vert\,\vert dV\vert}$ represents the Lebesgue-Stieltjes integral of ${\vert\xi\vert}$ with respect to the variation of ${V}$. Then, the condition for ${\xi}$ to be ${V}$-integrable in the Lebesgue-Stieltjes sense is precisely that this integral is finite.

Lemma 1 Every FV process ${V}$ is a semimartingale. Furthermore, let ${\xi}$ be a predictable process satisfying

 $\displaystyle \int_0^t\vert\xi\vert\,\vert dV\vert<\infty$ (1)

almost surely, for each ${t\ge 0}$. Then, ${\xi\in L^1(V)}$ and the stochastic integral ${\int\xi\,dV}$ agrees with the Lebesgue-Stieltjes integral, with probability one.
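To make the Lebesgue-Stieltjes picture concrete, here is a small sketch (names and data are my own, not from the post) for a pure-jump FV path, where the integral reduces to a sum over jumps:

```python
# For a pure-jump FV path V with jumps dv_i at times u_i, the
# Lebesgue-Stieltjes integral over (0, t] is the finite sum
#   int_0^t xi dV = sum_{0 < u_i <= t} xi(u_i) * dv_i.
def stieltjes_integral(xi, jumps, t):
    """xi: integrand function of time; jumps: list of (time, jump size)."""
    return sum(xi(u) * dv for u, dv in jumps if 0 < u <= t)

jumps = [(0.5, 2.0), (1.0, -1.0), (1.5, 3.0)]   # jump times and sizes of V
xi = lambda u: u                                # integrand xi_u = u
print(stieltjes_integral(xi, jumps, 1.2))       # 0.5*2 - 1.0*1 = 0.0
```

The integrability condition (1) here is automatic, since the variation ${\int_0^t\vert\xi\vert\,\vert dV\vert=\sum_{u_i\le t}\vert\xi(u_i)\vert\,\vert dv_i\vert}$ is a finite sum.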

# Extending the Stochastic Integral

In the previous post, I used the property of bounded convergence in probability to define stochastic integration for bounded predictable integrands. For most applications, this is rather too restrictive, and in this post the integral will be extended to unbounded integrands. As bounded convergence is not much use in this case, the dominated convergence theorem will be used instead.

The first thing to do is to define the class of ${X}$-integrable processes for which the integral is well-defined. Suppose that ${\xi^n}$ is a sequence of predictable processes dominated by an ${X}$-integrable process ${\alpha}$, so that ${\vert\xi^n\vert\le\vert\alpha\vert}$ for each ${n}$. If this sequence converges pointwise to a limit ${\xi}$, then dominated convergence in probability states that the integrals converge in probability,

 $\displaystyle \int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX\ \ \text{(in probability)}$ (1)

as ${n\rightarrow\infty}$.

# The Stochastic Integral

Having covered the basics of continuous-time processes and filtrations in the previous posts, I now move on to stochastic integration. In standard calculus and ordinary differential equations, a central object of study is the derivative ${df/dt}$ of a function ${f(t)}$. This does, however, require restricting attention to differentiable functions. By integrating, it is possible to generalize to bounded variation functions. If ${f}$ is such a function and ${g}$ is continuous, then the Riemann-Stieltjes integral ${\int_0^tg\,df}$ is well defined. The Lebesgue-Stieltjes integral further generalizes this to measurable integrands.

However, the kinds of processes studied in stochastic calculus are much less well behaved. For example, with probability one, the sample paths of standard Brownian motion are nowhere differentiable. Furthermore, they have infinite variation over bounded time intervals. Consequently, if ${X}$ is such a process, then the integral ${\int_0^t\xi\,dX}$ is not defined using standard methods.
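The infinite variation of Brownian sample paths can be seen numerically. The following sketch (my own illustration; it samples an independent discretized path at each resolution rather than refining a single path) shows the total variation of the increments growing without bound, while the sum of squared increments stays near t = 1:

```python
import random

# Over [0,1], the total variation of a sampled Brownian path grows roughly
# like sqrt(2n/pi) as the partition is refined, while the sum of squared
# increments (the quadratic variation) concentrates around 1.
random.seed(0)
results = []
for n in (100, 400, 1600):
    dt = 1.0 / n
    increments = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]
    total_variation = sum(abs(d) for d in increments)
    quadratic_variation = sum(d * d for d in increments)
    results.append((n, total_variation, quadratic_variation))
for n, tv, qv in results:
    print(n, round(tv, 2), round(qv, 2))
```

Each doubling of the resolution multiplies the total variation by roughly ${\sqrt 2}$, which is why no Stieltjes-type construction of the integral can work here.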

Stochastic integration with respect to standard Brownian motion was developed by Kiyoshi Ito. This required restricting the class of possible integrands to adapted processes, and the integral can then be constructed using the Ito isometry. This method was later extended to more general square integrable martingales and, then, to the class of semimartingales. It can then be shown that, as with Lebesgue integration, versions of the bounded and dominated convergence theorems are satisfied.
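The Ito isometry itself is easy to check numerically. This sketch (my own, with hypothetical parameter choices) estimates both sides of ${{\mathbb E}[(\int_0^1 H\,dW)^2]={\mathbb E}[\int_0^1 H_t^2\,dt]}$ for the adapted integrand ${H_t=W_t}$, using left-endpoint (Ito) sums; a discretization and Monte Carlo error of a few percent is expected:

```python
import random

# Monte Carlo check of the Ito isometry for H_t = W_t on [0, 1].
# Both averages below should be close to E[int_0^1 W_t^2 dt] = 1/2.
random.seed(1)
n_paths, n_steps = 10000, 100
dt = 1.0 / n_steps
mean_sq, mean_h2 = 0.0, 0.0
for _ in range(n_paths):
    W, integral, h2 = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        dW = random.gauss(0.0, dt ** 0.5)
        integral += W * dW      # evaluate H at the left endpoint (adapted)
        h2 += W * W * dt
        W += dW
    mean_sq += integral ** 2 / n_paths
    mean_h2 += h2 / n_paths
print(round(mean_sq, 2), round(mean_h2, 2))
```

Evaluating the integrand at the left endpoint of each interval is essential: it is exactly the adaptedness restriction mentioned above, and other evaluation points (as in the Stratonovich integral) break the isometry.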

In these notes, a more direct approach is taken. The idea is that we simply define the stochastic integral such that the required elementary properties are satisfied. That is, it should agree with the explicit expressions for certain simple integrands, and should satisfy the bounded and dominated convergence theorems. Much of the theory of stochastic calculus follows directly from these properties, and detailed constructions of the integral are not required for many practical applications.

# Predictable Stopping Times

The concept of a stopping time was introduced a couple of posts back. Roughly speaking, these are times for which it is possible to observe when they occur. Often, however, it is useful to distinguish between different types of stopping times. A random time for which it is possible to predict when it is about to occur is called a predictable stopping time. As always, we work with respect to a filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$.

Definition 1 A map ${\tau\colon\Omega\rightarrow\bar{\mathbb R}_+}$ is a predictable stopping time if there exists a sequence of stopping times ${\tau_n\uparrow\tau}$ satisfying ${\tau_n<\tau}$ whenever ${\tau\not=0}$.

Predictable stopping times are alternatively referred to as previsible. The sequence of times ${\tau_n}$ in this definition is said to announce ${\tau}$. Note that, in this definition, the random time was not explicitly required to be a stopping time. However, this is automatically the case, as the following identity shows.

$\displaystyle \left\{\tau\le t\right\}=\bigcap_n\left\{\tau_n\le t\right\}\in\mathcal{F}_t.$

One way in which predictable stopping times occur is as hitting times of a continuous adapted process. It is easy to predict when such a process is about to hit any level, because it must continuously approach that value.

Theorem 2 Let ${X}$ be a continuous adapted process and ${K}$ be a real number. Then

$\displaystyle \tau=\inf\left\{t\in{\mathbb R}_+\colon X_t\ge K\right\}$

is a predictable stopping time.

Proof: Let ${\tau_n}$ be the first time at which ${X_t\ge K-1/n}$ which, by the debut theorem, is a stopping time. This gives an increasing sequence bounded above by ${\tau}$. Also, ${X_{\tau_n}\ge K-1/n}$ whenever ${\tau_n<\infty}$ and, by left-continuity, setting ${\sigma=\lim_n\tau_n}$ gives ${X_\sigma\ge K}$ whenever ${\sigma<\infty}$. So, ${\sigma\ge\tau}$, showing that the sequence ${\tau_n}$ increases to ${\tau}$. If ${0<\tau_n\le\tau<\infty}$ then, by continuity, ${X_{\tau_n}=K-1/n\not=K=X_{\tau}}$. So, ${\tau_n<\tau}$ whenever ${0<\tau<\infty}$ and the sequence ${n\wedge\tau_n}$ announces ${\tau}$. ⬜
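The announcing sequence in this proof can be sketched numerically on a time grid (my own illustration, with a hypothetical deterministic path; grid times stand in for the exact hitting times):

```python
# Sketch of the announcing sequence from the proof: for a continuous path,
# tau_n = inf{t : X_t >= K - 1/n} increases to tau = inf{t : X_t >= K},
# with tau_n < tau before the hit. Illustrated for the path X_t = t^2.
def hitting_time(times, path, level):
    """First grid time at which the sampled path reaches the level."""
    for t, x in zip(times, path):
        if x >= level:
            return t
    return float("inf")

N = 100000
times = [i / N for i in range(N + 1)]   # fine grid on [0, 1]
path = [t * t for t in times]           # X_t = t^2 hits K = 0.25 at tau = 0.5
taus = [hitting_time(times, path, 0.25 - 1.0 / n) for n in (10, 100, 1000)]
tau = hitting_time(times, path, 0.25)
print(taus, tau)                        # strictly increasing, all below tau
```

The printed times increase towards, but stay strictly below, ${\tau=0.5}$, mirroring the sequence ${\tau_n\uparrow\tau}$ constructed in the proof.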

In fact, predictable stopping times are always hitting times of continuous processes, as stated by the following result. Furthermore, by the second condition below, it is enough to prove the much weaker condition that a random time can be ‘announced in probability’ to conclude that it is a predictable stopping time.

Lemma 3 Suppose that the filtration is complete and ${\tau\colon\Omega\rightarrow\bar{\mathbb R}_+}$ is a random time. The following are equivalent.

1. ${\tau}$ is a predictable stopping time.
2. For any ${\epsilon,\delta,K>0}$ there is a stopping time ${\sigma}$ satisfying
 $\displaystyle {\mathbb P}\left(K\wedge\tau-\epsilon<\sigma<\tau{\rm\ or\ }\sigma=\tau=0\right)>1-\delta.$ (1)
3. ${\tau=\inf\{t\ge 0\colon X_t=0\}}$ for some continuous adapted process ${X}$.

# Filtrations and Adapted Processes

In the previous post I introduced the concept of stochastic processes and their modifications. A further concept is needed to represent the information available at each time. A filtration ${\{\mathcal{F}_t\}_{t\ge 0}}$ on a probability space ${(\Omega,\mathcal{F},{\mathbb P})}$ is a collection of sub-sigma-algebras of ${\mathcal{F}}$ satisfying ${\mathcal{F}_s\subseteq\mathcal{F}_t}$ whenever ${s\le t}$. The idea is that ${\mathcal{F}_t}$ represents the set of events observable by time ${t}$. The probability space taken together with the filtration, ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$, is called a filtered probability space.
The right and left limits of the filtration are defined as
 $\displaystyle \mathcal{F}_{t+}=\bigcap_{s>t}\mathcal{F}_s,\qquad \mathcal{F}_{t-}=\sigma\Big(\bigcup_{s<t}\mathcal{F}_s\Big).$
Here, ${\sigma(\cdot)}$ denotes the sigma-algebra generated by a collection of sets. The left limit as defined here only really makes sense at positive times. Throughout these notes, I define the left limit at time zero as ${\mathcal{F}_{0-}\equiv\mathcal{F}_0}$. The filtration is said to be right-continuous if ${\mathcal{F}_t=\mathcal{F}_{t+}}$.