Extending Filtered Probability Spaces

In stochastic calculus it is common to work with processes adapted to a filtered probability space { (\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}. As with probability space extensions, it can sometimes be necessary to enlarge the underlying space to introduce additional events and processes. For example, many diffusions and local martingales can be expressed as integrals with respect to Brownian motion but, sometimes, the space must first be enlarged to ensure that it contains a Brownian motion to integrate against. Also, in the theory of stochastic differential equations, finding solutions can sometimes require enlarging the space.

Extending a probability space is a relatively straightforward concept, which I covered in an earlier post. Extending a filtered probability space is the same, except that it also involves enlarging the filtration {\{\mathcal F_t\}_{t\ge0}}. It is important to do this in a way which does not destroy properties of existing processes, such as their distributions conditional on the filtration at each time.

Let’s consider a filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}. An enlargement

\displaystyle \pi\colon (\Omega',\mathcal F',\{\mathcal F'_t\}_{t\ge0},{\mathbb P}')\rightarrow(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})

is, firstly, an extension of the probability spaces. It is a map from Ω′ to Ω measurable with respect to {\mathcal F'} and {\mathcal F}, and preserving probabilities. So {{\mathbb P}'(\pi^{-1}(E))={\mathbb P}(E)} for all { E\in\mathcal F}. In addition, it is required to be {\mathcal F'_t/\mathcal F_t} measurable for each time t ≥ 0, meaning that {\pi^{-1}(E)\in\mathcal F'_t} for all { E\in\mathcal F_t}. Consequently, any adapted process X lifts to an adapted process { X^*_t=\pi^*X_t} on the larger space, defined by { X^*_t(\omega')=X_t(\pi(\omega'))}.
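
As a concrete toy illustration of lifting (my own sketch, not from the original text; the finite sample spaces and the process X below are hypothetical), the extension map can be represented as a function between sample points, with the lifted process defined by composition with π:

    # Toy illustration of lifting a process through an extension map.
    # Omega is a finite base sample space, Omega_prime = Omega x {0, 1} a toy
    # enlargement, and pi the projection back down to Omega.

    Omega = ["a", "b", "c"]
    Omega_prime = [(w, i) for w in Omega for i in (0, 1)]

    def pi(omega_prime):
        """The extension map pi: Omega' -> Omega (here, projection onto the first coordinate)."""
        return omega_prime[0]

    def X(t, omega):
        """A toy adapted process on the base space (values chosen arbitrarily)."""
        return t * len(omega)

    def X_star(t, omega_prime):
        """The lift X*_t = pi*(X_t), defined by X*_t(omega') = X_t(pi(omega'))."""
        return X(t, pi(omega_prime))

    # The lift takes the same value at omega' as X does at pi(omega').
    assert all(X_star(1.0, wp) == X(1.0, pi(wp)) for wp in Omega_prime)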

As with extensions of probability spaces, this can be considered in two steps. First, we extend to the filtered probability space on Ω′ with induced sigma-algebra {\pi^*\mathcal F} consisting of sets {\pi^{-1}(E)} for { E\in\mathcal F}, and to the filtration {\pi^*\mathcal F_t}. This is essentially a no-op, since events and random variables on the original filtered probability space are in one-to-one correspondence with those on the enlarged space, up to zero probability events. Next, the sigma-algebras are enlarged to {\mathcal F'\supseteq\pi^*\mathcal F} and {\mathcal F'_t\supseteq\pi^*\mathcal F_t}. This is where new random events are added to the event space and filtration.

Such arbitrary extensions are too general for many uses in stochastic calculus, where we merely want to add in some additional source of randomness. Consider, for example, a standard Brownian motion B defined on the original space so that, for any times s < t, { B_t-B_s} is normal and independent of {\mathcal F_s}. Does it necessarily lift to a Brownian motion on the enlarged space? The answer to this is no! It need not be the case that { B^*_t-B^*_s} is independent of {\mathcal F'_s}. For an extreme case, consider the situation where {(\Omega',\mathcal F',{\mathbb P}')=(\Omega,\mathcal F,{\mathbb P})} and π is the identity, so there is no enlargement of the sample space. If the filtration is extended to the maximum, {\mathcal F'_t=\mathcal F}, consider what happens to our Brownian motion. The increment { B_t-B_s} is {\mathcal F'_s}-measurable, so is not independent of it. In fact, conditioned on {\mathcal F'_0}, the entire path of B is deterministic. It is definitely not a Brownian motion with respect to this new filtration. Similarly, martingales, submartingales and supermartingales will not generally remain as such if we pass to this enlarged filtration.

The idea is that, if { Y={\mathbb E}[X\vert\mathcal F_t]} for random variables X, Y defined on our original probability space, then this relation should continue to hold in the extension. It is required that { Y^*={\mathbb E}[X^*\vert\mathcal F'_t]}. This is exactly relative independence of {\mathcal F'_t} and {\pi^*\mathcal F} over {\pi^*\mathcal F_t}.

Recall that two sigma-algebras {\mathcal G} and {\mathcal H} are relatively independent over a third {\mathcal K\subseteq\mathcal G\cap\mathcal H} if

\displaystyle {\mathbb P}(A\cap B) = {\mathbb E}\left[{\mathbb P}(A\vert\mathcal K){\mathbb P}(B\vert\mathcal K)\right]

for all { A\in\mathcal G} and { B\in\mathcal H}. The following properties are each equivalent to this definition (a small numerical sanity check is sketched after the list):

  • {{\mathbb E}[XY\vert\mathcal K]={\mathbb E}[X\vert\mathcal K]{\mathbb E}[Y\vert\mathcal K]} for all bounded {\mathcal G}-measurable random variables X and {\mathcal H}-measurable Y.
  • {{\mathbb E}[X\vert\mathcal G]={\mathbb E}[X\vert\mathcal K]} for all bounded {\mathcal H}-measurable X.
  • {{\mathbb E}[X\vert\mathcal H]={\mathbb E}[X\vert\mathcal K]} for all bounded {\mathcal G}-measurable X.
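
Here is the numerical sanity check promised above, a toy Monte Carlo sketch of my own (the variables Z, X and Y are hypothetical): X and Y are built from a common discrete variable Z plus independent noise, so that σ(X) and σ(Y) are relatively independent over K = σ(Z), and the first property in the list can be verified empirically by conditioning on Z.

    import numpy as np

    # Toy check of relative independence: sigma(X) and sigma(Y) are relatively
    # independent over K = sigma(Z) when X and Y are built from Z plus
    # independent noise. We verify E[XY | Z] ~ E[X | Z] E[Y | Z] by Monte Carlo.

    rng = np.random.default_rng(0)
    n = 200_000

    Z = rng.integers(0, 3, size=n)       # K is generated by Z
    X = Z + rng.normal(size=n)           # depends on Z plus independent noise
    Y = Z**2 + rng.normal(size=n)        # depends on Z plus independent noise

    for z in range(3):
        mask = Z == z
        lhs = (X[mask] * Y[mask]).mean()        # estimate of E[XY | Z = z]
        rhs = X[mask].mean() * Y[mask].mean()   # E[X | Z = z] E[Y | Z = z]
        print(f"Z={z}: E[XY|Z]={lhs:.3f}, E[X|Z]E[Y|Z]={rhs:.3f}")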

This leads us to the idea of a standard extension of filtered probability spaces.

Definition 1 An extension of filtered probability spaces

\displaystyle \pi\colon(\Omega',\mathcal F', \{\mathcal F'_t\}_{t\ge0},{\mathbb P}')\rightarrow(\Omega,\mathcal F, \{\mathcal F_t\}_{t\ge0},{\mathbb P})

is standard if, for each time t ≥ 0, the sigma-algebras {\mathcal F'_t} and {\pi^*\mathcal F} are relatively independent over {\pi^*\mathcal F_t}.

The definition above can be stated as saying that the extension is standard if, given { Y={\mathbb E}[X\vert\mathcal F_t]} for bounded {\mathcal F}-measurable random variable X, then { Y^*={\mathbb E}[X^*\vert\mathcal F'_t]}.

An obvious way to try to extend a filtered probability space { (\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})} is to take its product with another such space { (\Lambda,\mathcal G,\{\mathcal G_t\}_{t\ge0},{\mathbb Q})},

\displaystyle (\Omega',\mathcal F',\{\mathcal F'_t\}_{t\ge0},{\mathbb P}') =(\Omega\times\Lambda,\mathcal F\otimes\mathcal G,\{\mathcal F_t\otimes\mathcal G_t\}_{t\ge0},{\mathbb P}\times{\mathbb Q}) (1)

with {\pi\colon\Omega'\rightarrow\Omega} being the projection {\pi(\omega,\lambda)=\omega}. This preserves probabilities, by definition of the product measure ℙ′, so defines an extension of filtered probability spaces. This is something we might try if, for example, a construction requires the existence of a Brownian motion. We can define a separate space containing a Brownian motion, and then join it to our space by taking the product.

Lemma 2 The extension π to the product filtered probability space {(\Omega',\mathcal F',\{\mathcal F'_t\}_{t\ge0},{\mathbb P}')} defined above is standard.

Proof: Let ρ: Ω′ → Λ be the projection ρ(ω, λ) = λ. By definition of the product measure, {\pi^*\mathcal F} and {\rho^*\mathcal G} are independent.

Consider { Y={\mathbb E}[X\vert\mathcal F_t]} for time t ≥ 0 and bounded {\mathcal F}-measurable random variable X. Lifting to the extension, { Y^*={\mathbb E}[X^*\vert\pi^*\mathcal F_t]}. For bounded random variables U, V which are, respectively, {\pi^*\mathcal F_t} and {\rho^*\mathcal G_t} measurable,

\displaystyle {\mathbb E}[Y^*UV]={\mathbb E}[Y^*U]{\mathbb E}[V]={\mathbb E}[X^*U]{\mathbb E}[V]={\mathbb E}[X^*UV].

The first and last equalities use independence of {\pi^*\mathcal F} and {\rho^*\mathcal G}, and the middle one uses the definition of the conditional expectation of X. By the monotone class theorem, this extends to the case where UV is replaced by any bounded {\mathcal F_t\otimes\mathcal G_t} measurable random variable. So, { Y^*={\mathbb E}[X^*\vert\mathcal F_t\otimes\mathcal G_t]} as required. ⬜
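
As a quick Monte Carlo sanity check of lemma 2 (a toy discretization of my own, with hypothetical names), note that on the base space { {\mathbb E}[B_1\vert\mathcal F_{1/2}]=B_{1/2}} for a Brownian motion B and, on the product with an independent auxiliary Brownian motion W, conditioning additionally on W at time 1/2 should leave this unchanged.

    import numpy as np

    # Monte Carlo check of lemma 2: on the base space, E[B_1 | F_{1/2}] = B_{1/2}.
    # On the product extension, conditioning additionally on the independent
    # auxiliary Brownian motion W at time 1/2 should give the same answer. We
    # bin on the pair (B_{1/2}, W_{1/2}) and compare conditional means.

    rng = np.random.default_rng(1)
    n = 1_000_000

    B_half = rng.normal(scale=np.sqrt(0.5), size=n)           # B_{1/2}
    B_one = B_half + rng.normal(scale=np.sqrt(0.5), size=n)   # B_1 = B_{1/2} + independent increment
    W_half = rng.normal(scale=np.sqrt(0.5), size=n)           # auxiliary BM at 1/2, independent of B

    b_bins = np.digitize(B_half, [-0.5, 0.0, 0.5])
    w_bins = np.digitize(W_half, [0.0])

    for b in range(4):
        for w in range(2):
            mask = (b_bins == b) & (w_bins == w)
            print(f"bin ({b},{w}): E[B_1|bins]={B_one[mask].mean():+.3f}, "
                  f"E[B_half|bins]={B_half[mask].mean():+.3f}")

Within each bin the two conditional means agree up to Monte Carlo error, reflecting the fact that the auxiliary information carries no information about the increment { B_1-B_{1/2}}.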


Preserved Properties

Consider, again, our Brownian motion B lifted to { B^*} under a standard extension. For times s < t, { B_t-B_s} is independent of {\mathcal F_s} and, by conditional independence, { B^*_t-B^*_s} is independent of {\mathcal F'_s}, so { B^*} remains a Brownian motion with respect to the extended filtration. More generally, Markov processes remain Markov under standard extensions. In particular, this includes all Lévy processes.

Lemma 3 Suppose that X is a Markov process with transition function {\{P_t\}_{t\ge0}}, and π is a standard extension of the underlying filtered probability space. Then, { X^*} is a Markov process with the same transition function.

Proof: Let X be a Markov process with transition function P· on a measurable space E. For times s < t this means that {{\mathbb E}[f(X_t)\vert \mathcal F_s]=P_{t-s}f(X_s)} for bounded measurable f: E → ℝ. As f(X_t) is {\mathcal F}-measurable, the definition of standard extensions gives {{\mathbb E}[f(X^*_t)\vert \mathcal F'_s]=P_{t-s}f(X^*_s)}, so { X^*} is a Markov process with the same transition function. ⬜
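
As a numerical sanity check of lemma 3 in the product-extension setting (again a toy sketch of my own, with hypothetical names), the following simulates a two-state Markov chain together with independent auxiliary coin flips and estimates the transition probabilities conditional on both the current state and the auxiliary information; they agree with the original transition matrix, as the lemma predicts.

    import numpy as np

    # Monte Carlo check of lemma 3 for the product extension: a two-state Markov
    # chain X with transition matrix P, extended by independent coin flips C.
    # Conditioning on (X_t, C_t) should not change the transition probabilities.

    rng = np.random.default_rng(2)
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    n_paths, n_steps = 100_000, 2

    X = np.zeros((n_paths, n_steps + 1), dtype=int)
    for s in range(n_steps):
        u = rng.random(n_paths)
        X[:, s + 1] = np.where(u < P[X[:, s], 1], 1, 0)     # step with matrix P

    C = rng.integers(0, 2, size=(n_paths, n_steps + 1))     # independent auxiliary coins

    t = 1
    for i in (0, 1):
        for c in (0, 1):
            mask = (X[:, t] == i) & (C[:, t] == c)
            est = X[mask, t + 1].mean()      # estimate of P(X_{t+1} = 1 | X_t = i, C_t = c)
            print(f"X_t={i}, C_t={c}: estimated {est:.3f}, true {P[i, 1]:.3f}")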

Look again at the situation where we take the product with an auxiliary space {(\Lambda,\mathcal G,\{\mathcal G_t\}_{t\ge0},{\mathbb Q})} to create an extension π: Ω′ → Ω to the product Ω′= Ω × Λ given by projecting on the first component, π(ω, λ) = ω. Lemma 2 told us that this is standard. However, it also says that the projection ρ(ω, λ) = λ onto the second component is a standard extension of the auxiliary space. So, if this auxiliary space was constructed to contain a Brownian motion, it remains a Brownian motion when lifted to the extension. Hence, we have constructed a standard extension of the original space which contains this Brownian motion independent of the original filtration.

In situations where the filtration is generated by the Markov process, lemma 3 generalizes to an ‘if and only if’ condition.

Lemma 4 Suppose that X is a Markov process on the underlying filtered probability space, and that {\mathcal F} is generated by X and {\mathcal F_0}. Then, an extension π of the filtered probability space is standard if and only if { X^*} is Markov.

Proof: Saying that {\mathcal F} is generated by X and {\mathcal F_0} means that it is the smallest sigma-algebra containing {\mathcal F_0} and with respect to which { X_t} is measurable for all times t.

If the extension is standard, then { X^*} is Markov by lemma 3, so only the reverse implication needs to be shown. Suppose that { X^*} is Markov, and fix a time t ≥ 0. For a bounded random variable Y measurable with respect to {\{X_s\}_{s\ge t}}, the Markov property says that {{\mathbb E}[Y\vert\mathcal F_t]={\mathbb E}[Y\vert X_t]} and {{\mathbb E}[Y^*\vert\mathcal F'_t]={\mathbb E}[Y^*\vert X^*_t]}. Hence, for any bounded {\mathcal F_t}-measurable random variable Z, setting U = ZY,

\displaystyle \begin{aligned} {\mathbb E}[U^*\;\vert\pi^*\mathcal F_t] &=\pi^*{\mathbb E}[U\;\vert\mathcal F_t]\\ &=\pi^*(Z{\mathbb E}[Y\;\vert X_t])\\ &=Z^*{\mathbb E}[Y^*\vert X^*_t]\\ &=Z^*{\mathbb E}[Y^*\vert\mathcal F'_t]\\ &={\mathbb E}[U^*\vert\mathcal F'_t]. \end{aligned}

As the condition of the lemma implies that {\mathcal F} is generated by {\mathcal F_t} and {\{X_s\}_{s\ge t}}, the monotone class theorem shows that this holds if U is replaced by any bounded {\mathcal F}-measurable random variable. Hence, {\pi^*\mathcal F} and {\mathcal F'_t} are relatively independent over {\pi^*\mathcal F_t}, and the extension is standard. ⬜

Lemma 4 implies that any space containing a Markov process can be considered as a standard extension of the process defined on its own canonical space. That is, we start with a Markov process { X^*} taking values in a measurable space E and defined on {(\Omega',\mathcal F', \{\mathcal F'_t\}_{t\ge0},{\mathbb P}')}. Then define a new space Ω consisting of all functions {{\mathbb R}_+\rightarrow E} and let X be its canonical process { X_t(\omega)=\omega(t)}. We define a map π: Ω′ → Ω taking ω′ to the path ω ∈ Ω defined by { \omega(t)=X^*_t(\omega')}. Then, the filtration generated by X and the measure {{\mathbb P}=\pi_*{\mathbb P}'} (the pushforward of ℙ′ under π) give a filtered probability space on Ω. We can think of ℙ as the distribution of { X^*}, and π as the map taking each sample point to the corresponding sample path of { X^*}. As X will be a Markov process under its natural filtration, lemma 4 guarantees that π is a standard extension. Additionally, if the paths of { X^*} are continuous or right-continuous, then the same can be imposed on X so, for example, any filtered probability space containing a Brownian motion can be realized as a standard extension of Wiener space with the Wiener measure.
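
A discretized sketch of this canonical realization (my own toy construction, in which random seeds stand in for sample points of the original space) is given below: π maps a sample point to the path of the process, the canonical process reads off the path, and its lift through π recovers the original process.

    import numpy as np

    # Discretized sketch of the canonical realization: a sample point omega' of
    # the original space is mapped by pi to the path of X*, viewed as a point of
    # the path space Omega. The canonical process X on Omega reads off the path,
    # and its lift through pi recovers X*.

    dt, n_steps = 0.01, 500

    def X_star_path(omega_prime):
        """The original process as a function of the sample point omega'
        (here, a random seed stands in for omega')."""
        g = np.random.default_rng(omega_prime)
        return np.cumsum(np.r_[0.0, g.normal(scale=np.sqrt(dt), size=n_steps)])

    def pi(omega_prime):
        """pi maps omega' to the path of X*, i.e. to a point of the path space."""
        return X_star_path(omega_prime)

    def canonical_X(t_index, omega):
        """The canonical process on path space: X_t(omega) = omega(t)."""
        return omega[t_index]

    # The lift of the canonical process through pi is the original process X*.
    omega_prime = 42
    assert canonical_X(100, pi(omega_prime)) == X_star_path(omega_prime)[100]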

Standard extensions also preserve other important properties of processes.

Lemma 5 A standard extension of a filtered probability space preserves the (local) martingale, submartingale and supermartingale properties.

Proof: It is sufficient to consider the case where X is a (local) submartingale, since applying this to −X covers the supermartingale case and applying it to both X and −X gives the result for martingales. So, suppose that X is a submartingale and s < t are fixed times. By conditional independence,

\displaystyle {\mathbb E}\left[X^*_t\vert\mathcal F'_s\right]={\mathbb E}\left[X_t\vert\mathcal F_s\right]^*\ge X_s^*,

so that { X^*} is a submartingale. Now, suppose that X is a local submartingale, so there exist stopping times { \tau_n} increasing to infinity such that { 1_{\{\tau_n>0\}}X^{\tau_n}} are submartingales. From what we have already proven,

\displaystyle \left(1_{\{\tau_n > 0\}}X^{\tau_n}\right)^*=1_{\{\tau^*_n > 0\}}(X^*)^{\tau^*_n}

are submartingales and, as { \tau^*_n} are {\mathcal F'_\cdot}-stopping times increasing to infinity, { X^*} is a local submartingale. ⬜

An important class of processes is that of semimartingales. These are good integrators in the sense of stochastic integration, so if we are using stochastic integrals then it will be important to know that these remain semimartingales in an extension of the underlying space, with integrals which are preserved by the extension. That is,

\displaystyle \int_0^t\xi^*\,dX^*=\pi^*\int_0^t\xi\,dX. (2)

If we already know that { X^*} is a semimartingale, then this is indeed true.

Before proceeding, I make a brief comment regarding stochastic integrals in general. In these notes, the underlying filtration was assumed to be complete in order to choose cadlag versions. I do not do that here, since extending filtrations by taking a product (1) could break this assumption. You can add the step of completing each filtration if you like, although I will just assume that stochastic integration and semimartingale decompositions are performed with respect to the completion.
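
The proofs below start from elementary integrands, for which the stochastic integral is just a finite sum, and then extend by bounded or dominated convergence in probability. As a reminder of that base case, here is a toy discretized sketch of my own (all names hypothetical) computing an elementary integral against a simulated path.

    import numpy as np

    # Elementary stochastic integral: for an integrand xi = sum_k Z_k 1_{(s_k, t_k]}
    # with each Z_k known at time s_k, the integral up to time t is the finite sum
    #   sum_k Z_k (X_{t_k} - X_{s_k}).

    rng = np.random.default_rng(3)
    dt, n_steps = 0.001, 2000
    X = np.cumsum(np.r_[0.0, rng.normal(scale=np.sqrt(dt), size=n_steps)])  # simulated path

    # An elementary integrand: hold the F_{s_k}-measurable weight Z_k = sign(X_{s_k})
    # on each interval (s_k, t_k].
    breakpoints = np.linspace(0, n_steps, 11, dtype=int)    # s_0 < t_0 = s_1 < ...
    integral = 0.0
    for s_idx, t_idx in zip(breakpoints[:-1], breakpoints[1:]):
        Z_k = np.sign(X[s_idx])
        integral += Z_k * (X[t_idx] - X[s_idx])             # Z_k (X_{t_k} - X_{s_k})

    print("elementary integral of xi against X:", integral)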

Lemma 6 Let π be an extension of the underlying filtered probability space and X be an adapted process such that { X^*} is a semimartingale. Then, X is a semimartingale and, if ξ is a predictable process such that { \xi^*} is { X^*}-integrable, then ξ is X-integrable and (2) holds.

Proof: For any elementary predictable process ξ, it is immediate from the definition that { \xi^*} is elementary and the elementary integrals (2) agree. As extensions preserve probability, they also preserve convergence in probability and, by bounded convergence in probability, it follows that { \int_0^t\xi^*\,dX^*} is {\pi^*\mathcal F_t}-measurable for all bounded predictable processes ξ, so we can define

\displaystyle \pi^*\int_0^t\xi\,dX=\int_0^t\xi^*\,dX^*

guaranteeing that (2) holds. As this satisfies bounded convergence in probability and agrees with the definition for elementary integrands, it follows that X is a semimartingale satisfying (2) for bounded integrands.

Now, suppose that ξ is a predictable process such that { \xi^*} is { X^*}-integrable; we need to show that ξ is X-integrable. So, let { \lvert\zeta^n\rvert\le\lvert\xi\rvert} be a sequence of bounded predictable processes tending to zero. Applying (2) for bounded integrands,

\displaystyle \pi^*\int_0^t\zeta^n\,dX=\int_0^t(\zeta^n)^*\,dX^*\rightarrow0

by dominated convergence in probability as n goes to infinity. Hence, { \int_0^t\zeta^n\,dX} tends to zero in probability, showing that ξ is X-integrable. Finally, in a similar way, let { \lvert\zeta^n\rvert\le\lvert\xi\rvert} be a sequence of bounded predictable processes tending to ξ. By dominated convergence in probability,

\displaystyle \begin{aligned} \pi^*\int_0^t\xi\,dX &=\lim_n\pi^*\int_0^t\zeta^n\,dX\\ &=\lim_n\int_0^t(\zeta^n)^*\,dX^*\\ &=\int\xi^*\,dX^* \end{aligned}

as required. ⬜

In general, however, knowing that X is a semimartingale is not sufficient to guarantee that { X^*} is, so that lemma 6 cannot be applied. For example, X could be a standard Brownian motion and, if we enlarge the filtration so that {\mathcal F'_0} is generated by {\{X_t\}_{t\ge0}}, then, conditioned on {\mathcal F'_0}, { X^*} is deterministic with unbounded variation over finite time intervals, so is not a semimartingale. Even if we know that { X^*} is a semimartingale, knowing that ξ is X-integrable is not sufficient to conclude that { \xi^*} is { X^*}-integrable. As an example, we can consider a case where X has finite variation, so remains a semimartingale in any enlarged filtration, but Y = ∫ξ dX is not of finite variation over a finite time interval. As above, enlarging the filtration so that { Y_t} is {\mathcal F'_0}-measurable for all t means that { Y^*} will not be a semimartingale, so { \xi^*} is not { X^*}-integrable.

These deficiencies in preserving stochastic integration and the semimartingale property are all fixed in standard extensions.

Lemma 7 Let π be a standard extension of the underlying filtered probability space. Then, the lift { X^*} of any semimartingale X is a semimartingale in the extension.

Furthermore, a predictable process ξ on the original space is X-integrable if and only if { \xi^*} is { X^*}-integrable, in which case the stochastic integrals agree in the sense of (2).

Proof: Proving that { X^*} is a semimartingale is actually rather tricky if we try to directly show that it satisfies the original definition used in these notes. That is, that the stochastic integral exists for bounded integrands and satisfies bounded convergence in probability. The problem is that, when we pass to the extension, there can be many more predictable processes, and constructing their integrals in terms of integration in the original space can be difficult. However, the alternative definition of semimartingales as the sum of a local martingale and an FV process, as stated by the Bichteler–Dellacherie theorem, tells us that { X^*} is a semimartingale as an immediate consequence of lemma 5.

I will make use of the alternative condition that ξ is X-integrable if and only if there is a semimartingale Y satisfying

\displaystyle \int(1+\lvert\xi\rvert)^{-1}\,dY=\int(1+\lvert\xi\rvert)^{-1}\xi\,dX.

We know that { Y^*} is a semimartingale from what we have already shown. As the integrands are bounded, lemma 6 says that (2) can be applied, giving

\displaystyle \int(1+\lvert\xi^*\rvert)^{-1}\,dY^*=\int(1+\lvert\xi^*\rvert)^{-1}\xi^*\,dX^*.

So, { \xi^*} is { X^*}-integrable as required. ⬜

Let’s now look back at the example of extending a filtered probability space by taking its product with another such space. We saw in lemma 2 that this is a standard extension so, by lemma 7, it preserves the values of stochastic integrals. However, it is possible to go much further and describe the value of stochastic integrals in the extended space in terms of integrals in the original space. To recap, we take the product (1) of the base space Ω with an auxiliary space Λ to obtain the product Ω′= Ω × Λ with product filtration also described by (1). The extension map π is just projection π(ω, λ) = ω onto the first component of Ω × Λ.

For any λ ∈ Λ we can project a stochastic process ξ defined on the extended space Ω′ to a process ξλ on the base space,

\displaystyle \xi^\lambda_t(\omega)=\xi_t(\omega,\lambda).

This is a left-inverse of lifting, so that lifting a process ξ from the base space and projecting it back down gives the original process, { (\xi^*)^\lambda=\xi}. By lemma 7, any semimartingale X on the base space lifts to a semimartingale { X^*}.
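
To fix this notation, here is a tiny sketch of my own, representing processes on the product space as Python functions of (t, ω, λ): lifting ignores the λ-coordinate, projecting fixes it, and the left-inverse identity { (\xi^*)^\lambda=\xi} is immediate.

    # Toy representation of the notation used in lemma 8: a process on the
    # product space is a function of (t, omega, lam); lifting a base-space
    # process ignores lam, and projecting fixes lam.

    def lift(xi_base):
        """xi*(t, omega, lam) = xi(t, omega)."""
        return lambda t, omega, lam: xi_base(t, omega)

    def project(xi_prod, lam):
        """xi^lam(t, omega) = xi(t, omega, lam)."""
        return lambda t, omega: xi_prod(t, omega, lam)

    xi = lambda t, omega: t * omega      # a toy process on the base space
    xi_star = lift(xi)

    # Left-inverse property: projecting the lift recovers the original process.
    assert project(xi_star, lam=3.7)(2.0, 0.5) == xi(2.0, 0.5)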

Lemma 8 Let X be a semimartingale on Ω and ξ be a real-valued predictable process on Ω′= Ω × Λ. Then, { \xi^\lambda} is predictable and ξ is { X^*}-integrable if and only if { \xi^\lambda} is X-integrable for ℚ-almost all λ ∈ Λ. In that case,

\displaystyle \int_0^t\xi\,dX^*\,(\omega,\lambda)=\int_0^t\xi^\lambda\,dX\,(\omega) (3)

for ℙ′-almost all (ω, λ).

Just a word on the notation in (3). As stochastic integrals are just random variables, they are really functions on the underlying probability space and can be expressed by their values corresponding to each ω ∈ Ω or (ω, λ) ∈ Ω′. While we usually do not explicitly include these sample points in expressions involving random variables, it does simplify things in this case, so they are included.

Proof: For any elementary predictable process ξ, it is immediate from the definition that ξλ is elementary and the elementary integrals (3) agree. By the monotone class theorem, ξλ is predictable whenever ξ is predictable and, by bounded convergence in probability, (3) extends to all bounded predictable ξ.

Now, suppose that { \xi^\lambda} is X-integrable for almost all λ. We need to show that ξ is { X^*}-integrable. So, consider any sequence of bounded predictable processes { \lvert\alpha^n\rvert\le\lvert\xi\rvert} tending to zero. Then, by (3),

\displaystyle \int\alpha^n\,dX^*\,(\omega,\lambda) =\int(\alpha^n)^\lambda\,dX\,(\omega).

For any λ such that { \xi^\lambda} is X-integrable, the right hand side of this identity tends to zero in probability, by dominated convergence. Since this holds for almost all λ, it tends to zero in probability jointly in (ω, λ), showing that ξ is { X^*}-integrable. If instead we choose { \alpha^n\rightarrow\xi} then, again by dominated convergence in probability, the equality tends to (3).

It remains to show that, if ξ is { X^*}-integrable, then { \xi^\lambda} is X-integrable for almost all λ, which is the trickiest part of this proof. Continuous semimartingales are easiest, since they decompose uniquely as

\displaystyle X=M+A (4)

for a local martingale M and continuous FV process A starting from 0. As noted following equation (2) above, such decompositions are understood to be adapted to the completion of the filtration. By lemma 5, this decomposition is preserved by standard extensions, { X^*=M^*+A^*} for the local martingale { M^*} and continuous FV process { A^*}. That ξ is { X^*}-integrable is equivalent to

\displaystyle \int_0^t\xi^2\,d[M^*]+\int_0^t\lvert\xi\rvert\,\lvert dA^*\rvert < \infty (5)

almost surely for each time t. As quadratic variations are defined by taking limits in probability of finite sums, and extensions preserve probability, they also preserve quadratic variations, {[M^*]=[M]^*}. So, evaluating (5) at each fixed λ gives

\displaystyle \int_0^t(\xi^\lambda)^2\,d[M]+\int_0^t\lvert\xi^\lambda\rvert\,\lvert dA\rvert < \infty

almost surely, for ℚ-almost all λ and, hence, { \xi^\lambda} is X-integrable.

Only the noncontinuous case remains, which we handle in a similar way after extracting out any problematic jumps. First, consider subtracting out the large jumps by looking at X − Y where { Y_t=\sum_{s\le t}1_{\{\lvert\Delta X_s\rvert>1\}}\Delta X_s}. As Y just consists of finitely many jumps over finite horizons, { \xi^\lambda} is automatically Y-integrable and ξ is { Y^*}-integrable. So, it is enough to prove the result with X replaced by X − Y, which has jumps bounded by 1.

Without loss of generality, we assume that X has jumps bounded by 1. In particular, this means that it is locally integrable, so decomposition (4) holds where, now, A is a predictable FV process rather than being necessarily continuous. Furthermore, if { \xi\Delta X^*} were locally integrable, then ξ would be { X^*}-integrable if and only if (5) holds almost surely, and we could argue as in the continuous case. As we must also allow situations where this is not locally integrable, we subtract out the large values of { \xi\Delta X^*}. To do this, set

\displaystyle Y_t=\sum_{s\le t}1_{\{\lvert\xi_s\Delta X^*_s\rvert > 1\}}\Delta X^*_s.

This is a process defined on Ω′ with finitely many jumps over finite horizons, all of which are bounded by 1. So, it has locally integrable variation and decomposition (4) can be applied,

\displaystyle Y=N+B

for local martingale N and predictable FV process B. The idea is that restricting to the base space preserves the decomposition. That is, Yλ = Nλ + Bλ, where Nλ is a local martingale and Bλ is predictable FV for almost all λ. The fact that Nλ is a local martingale is not immediately obvious, so I prove this in a moment, but first show why it gives the result.

As ξ is Y-integrable, it is also { (X^*-Y)}-integrable, and this integral has jumps bounded by 1 and hence is locally integrable. So, (5) applies,

\displaystyle \int_0^t\xi^2\,d[M^*-N]+\int_0^t\lvert\xi\rvert\,\lvert d(A^*-B)\rvert < \infty

(almost surely). Equivalently, evaluating along sample paths,

\displaystyle \int_0^t(\xi^\lambda)^2\,d[M-N^\lambda]+\int_0^t\lvert\xi^\lambda\rvert\,\lvert d(A-B^\lambda)\rvert < \infty

almost surely for almost all λ. From the fact that { \xi^\lambda\Delta(X-Y^\lambda)} is bounded by 1 and so locally integrable, combined with the decomposition

\displaystyle X-Y^\lambda=(M-N^\lambda)+(A-B^\lambda)

into local martingale and predictable FV terms, this means that ξλ is (X - Yλ)-integrable and, hence, is X-integrable for almost all λ.

It just remains to provide the promised proof of the statement above that { N^\lambda} is a local martingale for almost all λ. Start by choosing measurable { U_k,V_k\subseteq{\mathbb R}} (k = 1, 2, …) such that { U_k\times V_k} partition the set {\{(x,y)\in{\mathbb R}^2\colon \lvert xy\rvert>1\}} and define the processes

\displaystyle Z^k_t=\sum_{s\le t}1_{\{\Delta X_s\in V_k\}}\Delta X_s.

These have finitely many jumps bounded by 1 over each finite horizon, so decompose as { Z^k=P^k+C^k} for local martingales { P^k} and predictable FV processes { C^k}. Lifting up to the product space and integrating gives,

\displaystyle \begin{aligned} \sum_{s\le t}1_{\{(\xi_s,\Delta X^*_s)\in U_k\times V_k\}}\Delta X^*_s &=\int_0^t1_{\{\xi\in U_k\}}d(Z^k)^*\\ &=\int_0^t1_{\{\xi\in U_k\}}d(P^k)^*+\int_0^t 1_{\{\xi\in U_k\}}d(C^k)^*\\ &\equiv\tilde P^k_t+\tilde C^k_t. \end{aligned}

Here, { \tilde P^k} and { \tilde C^k} denote the integrals on the right hand side and, being an integral with respect to a local martingale, { \tilde P^k} is a local martingale. For any stopping time τ such that Y has integrable variation on the interval [0, τ], { \tilde C^k} has expected variation bounded by that of the left hand side in the expression above. Summing over k, this is absolutely convergent in L1 over each such interval,

\displaystyle Y=\sum_k\tilde P^k+\sum_k\tilde C^k.

So, by uniqueness of the decomposition into local martingale and predictable FV terms, { N=\sum_k\tilde P^k}. Hence, we obtain

\displaystyle N^\lambda=\sum_k\int1_{\{\xi^\lambda\in U_k\}}\,dP^k

almost surely. Each summand on the right hand side is an integral with respect to a local martingale, so is a local martingale and, as the sum is locally convergent in L1, this implies that { N^\lambda} is a local martingale for almost all λ. ⬜


Combining Extensions

Since we sometimes want to extend a filtered probability space more than a single time, I look at how these combine. Consider an extension π of the original filtered probability space, and then a further extension ρ of this.

\displaystyle (\Omega'',\mathcal F'',\{\mathcal F''_t\}_{t\ge0},{\mathbb P}'')\xrightarrow{\rho} (\Omega',\mathcal F',\{\mathcal F'_t\}_{t\ge0},{\mathbb P}')\xrightarrow{\pi} (\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P}).

These can be combined into a single extension ϕ = π○ρ of the original space,

\displaystyle \phi\colon(\Omega'',\mathcal F'',\{\mathcal F''_t\}_{t\ge0},{\mathbb P}'')\rightarrow(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P}).

Lemma 9 If π and ρ are standard extensions, then so is ϕ = π○ρ.

Proof: For time t ≥ 0 and bounded {\mathcal F}-measurable random variable X, set { Y={\mathbb E}[X\vert\mathcal F_t]}. As π is standard, we have {\pi^*Y={\mathbb E}[\pi^*X\vert\mathcal F'_t]}. Then, as ρ is standard, we have

\displaystyle \phi^*Y=\rho^*\pi^*Y=\rho^*{\mathbb E}[\pi^*X\vert\mathcal F'_t]={\mathbb E}[\rho^*\pi^*X\vert\mathcal F''_t]={\mathbb E}[\phi^*X\vert\mathcal F''_t]

so ϕ is a standard extension. ⬜

Next, consider two separate extensions of the same underlying space. Both of these will add in some additional source of randomness, and we would like to combine them into a single extension.

\displaystyle (\Omega_1,\mathcal F^1,\{\mathcal F_t^1\}_{t\ge0},{\mathbb P}_1)\xrightarrow{\pi_1} (\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})\xleftarrow{\pi_2} (\Omega_2,\mathcal F^2,\{\mathcal F_t^2\}_{t\ge0},{\mathbb P}_2)

As extensions of probability spaces, these can be combined into a single extension known as the relative product ϕ = π1ρ1 = π2ρ2,

\displaystyle \arraycolsep=1.4pt\begin{array}{ccc} (\Omega',\mathcal F',{\mathbb P}')\ &\xrightarrow{\displaystyle\ \rho_1\ }&(\Omega_1,\mathcal F^1,{\mathbb P}_1)\medskip\\ {\rho_2}\Big\downarrow\,\ &\searrow^{\hspace{-0.3em}\displaystyle\phi}&\,\Big\downarrow{\pi_1}\medskip\\ (\Omega_2,\mathcal F^2,{\mathbb P}_2)\,&\xrightarrow{\displaystyle\ \pi_2\ }&\,(\Omega,\mathcal F,{\mathbb P}) \end{array}

The sample space { \Omega'=\Omega_1\otimes_\Omega\Omega_2} is the fiber product consisting of pairs (ω1, ω2) in Ω1 × Ω2 with π1(ω1) = π2(ω2), and ρ1, ρ2 are the projection maps taking (ω1, ω2) to, respectively, ω1 and ω2. The sigma-algebra {\mathcal F'} is generated by {\rho_1^*\mathcal F^1} and {\rho_2^*\mathcal F^2}, and these two sigma-algebras are relatively independent over {\phi^*\mathcal F}, which is sufficient to uniquely define ℙ′. This relative product of probability spaces is not guaranteed to exist although, as discussed in the post on probability space extensions, it does exist for ‘regular’ extensions, which is almost always sufficient. If it exists, the relative product is uniquely defined.
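
For finite sample spaces the relative product can be written down explicitly. The following sketch (my own toy example; all names are hypothetical) builds the fiber product of two finite extensions of a common two-point base together with the measure determined by relative independence over the base, then checks that it is a probability measure and that the projections preserve probabilities.

    from itertools import product

    # Toy relative product for finite sample spaces: Omega_1 and Omega_2 both
    # extend a common base Omega, the fiber product consists of pairs agreeing
    # on the base, and P' is determined by relative independence over the base:
    #   P'(w1, w2) = P1(w1) * P2(w2) / P(pi1(w1))   whenever pi1(w1) == pi2(w2).

    Omega = ["H", "T"]
    P = {"H": 0.5, "T": 0.5}

    # Two extensions, each adding an independent symbol to the base outcome.
    Omega1 = [(w, b) for w in Omega for b in (0, 1)]
    Omega2 = [(w, c) for w in Omega for c in ("x", "y")]
    P1 = {w1: 0.25 for w1 in Omega1}
    P2 = {w2: 0.25 for w2 in Omega2}
    pi1 = lambda w1: w1[0]
    pi2 = lambda w2: w2[0]

    # Fiber product: pairs with matching image in the base space.
    Omega_prime = [(w1, w2) for w1, w2 in product(Omega1, Omega2) if pi1(w1) == pi2(w2)]
    P_prime = {(w1, w2): P1[w1] * P2[w2] / P[pi1(w1)] for (w1, w2) in Omega_prime}

    assert abs(sum(P_prime.values()) - 1.0) < 1e-12          # P' is a probability measure
    for w1 in Omega1:                                        # rho_1 preserves probabilities
        mass = sum(p for (a, _), p in P_prime.items() if a == w1)
        assert abs(mass - P1[w1]) < 1e-12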

The relative product generalizes to filtered probability spaces. All that needs to be done is to define the filtration {\{\mathcal F'_t\}_{t\ge0}} on {(\Omega',\mathcal F',{\mathbb P}')}, which can be done in the minimal sense, so that {\mathcal F'_t} is the sigma-algebra generated by {\rho^*_1\mathcal F_t^1\cup\rho^*_2\mathcal F_t^2}. This gives a commutative square of filtered probability space extensions

\displaystyle \arraycolsep=1.4pt\begin{array}{ccc} (\Omega',\mathcal F',\{\mathcal F'_t\}_{t\ge0},{\mathbb P}')\ &\xrightarrow{\displaystyle\ \rho_1\ }&(\Omega_1,\mathcal F^1,\{\mathcal F_t^1\}_{t\ge0},{\mathbb P}_1)\medskip\\ {\rho_2}\Big\downarrow\qquad&\searrow^{\hspace{-0.3em}\displaystyle\phi}&\,\Big\downarrow{\pi_1}\medskip\\ (\Omega_2,\mathcal F^2,\{\mathcal F_t^2\}_{t\ge0},{\mathbb P}_2)\,&\xrightarrow{\displaystyle\ \pi_2\ }&(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P}) \end{array}

with ϕ = π1ρ1 = π2ρ2 containing π1 and π2 as sub-extensions.

Definition 10 The filtered probability space extension

\displaystyle \phi\colon(\Omega',\mathcal F', \{\mathcal F'_t\}_{t\ge0},{\mathbb P}')\rightarrow(\Omega,\mathcal F, \{\mathcal F_t\}_{t\ge0},{\mathbb P})

constructed above is the relative product of π1 and π2.

If we have two standard extensions of the same underlying filtered probability space then their relative product combines them into a single standard extension containing both of the original ones as sub-extensions.

Lemma 11 Let π1 and π2 be extensions of a filtered probability space with relative product ϕ = π1ρ1 = π2ρ2.

If π1 is standard then so is ρ2 and, similarly, if π2 is standard then so is ρ1.

In particular, if π1 and π2 are both standard, then so are ρ1, ρ2 and ϕ.

Proof: For brevity, I will use {\mathcal F^*}, {\mathcal F^{1*}} and {\mathcal F^{2*}} to denote, respectively, the induced sigma-algebras {\phi^*\mathcal F}, {\rho_1^*\mathcal F^1} and {\rho_2^*\mathcal F^2}. First suppose that π1 is standard. If U is bounded {\mathcal F_t^{1*}} measurable then

\displaystyle {\mathbb E}[U\vert\mathcal F^{2*}]={\mathbb E}[U\vert\mathcal F^*]={\mathbb E}[U\vert\mathcal F_t^*].

The first equality uses the fact that {\mathcal F^{1*}} and {\mathcal F^{2*}} are relatively independent over {\mathcal F^*}. The second equality uses the fact that π1 is standard so that {\mathcal F_t^{1*}} and {\mathcal F^*} are conditionally independent over {\mathcal F_t^*}. In particular, the right hand side is measurable with respect to {\mathcal F_t^{2*}\subseteq\mathcal F^{2*}} giving,

\displaystyle {\mathbb E}[U\vert\mathcal F^{2*}]={\mathbb E}[U\vert\mathcal F_t^{2*}].

So, if X = UV for V bounded and {\mathcal F_t^{2*}}-measurable,

\displaystyle {\mathbb E}[X\vert\mathcal F^{2*}] ={\mathbb E}[U\vert\mathcal F^{2*}]V ={\mathbb E}[U\vert\mathcal F_t^{2*}]V ={\mathbb E}[X\vert\mathcal F_t^{2*}].

The monotone class theorem extends this to all bounded {\mathcal F'_t} measurable X, so ρ2 is standard.

If π2 is standard then, in exactly the same way, this implies that ρ1 is standard. If both π1 and π2 are standard, lemma 9 shows that ϕ = π1ρ1 is standard. ⬜


Notes

Extending filtered probability spaces is a basic underlying concept although, in much of the theory, we do not need to invoke it explicitly. Some constructions or results may require the existence of a Brownian motion, and simply state that it exists as a condition for the result to hold. See, for example, the stochastic differential equation for Bessel processes of order 0, or the representation of local martingales as Brownian integrals. However, to apply the results, we may need to extend the space to introduce a Brownian motion if one does not already exist. One area where extensions are required in the theory is that of weak solutions to stochastic differential equations which, by definition, exist in an extension of the original space.
