The Stochastic Fubini Theorem

Fubini’s theorem states that, subject to precise conditions, it is possible to switch the order of integration when computing double integrals. In the theory of stochastic calculus, we also encounter double integrals and would like to be able to commute their order. However, since these can involve stochastic integration rather than the usual deterministic case, the classical results are not always applicable. To help with such cases, we could do with a new stochastic version of Fubini’s theorem. Here, I will consider the situation where one integral is of the standard kind with respect to a finite measure, and the other is stochastic. To start, recall the classical Fubini theorem.

Theorem 1 (Fubini) Let {(E,\mathcal E,\mu)} and {(F,\mathcal F,\nu)} be finite measure spaces, and {f\colon E\times F\rightarrow{\mathbb R}} be a bounded {\mathcal E\otimes\mathcal F}-measurable function. Then,

\displaystyle  y\mapsto\int f(x,y)d\mu(x)

is {\mathcal F}-measurable,

\displaystyle  x\mapsto\int f(x,y)d\nu(y)

is {\mathcal E}-measurable, and,

\displaystyle  \int\int f(x,y)d\mu(x)d\nu(y)=\int\int f(x,y)d\nu(y)d\mu(x). (1)

I previously gave a proof of this as a simple corollary of the functional monotone class theorem. Note that the first two statements regarding measurability of the single integrals are necessary to ensure that the double integral (1) is well-defined. There are various straightforward ways in which this base statement can be generalized. By simple linearity, it extends to finite signed measure spaces. Alternatively, by monotone convergence, we can extend to sigma-finite measure spaces and nonnegative measurable functions {f}, which need not be bounded.
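As a quick aside, the following Python snippet (purely a toy illustration, with the measures, the function f and the grids all chosen arbitrarily) approximates the two iterated integrals in (1) for a pair of finite measures given by densities on bounded intervals. Of course, once everything is discretized the exchange of sums is exact, which is just the finite analogue of Fubini’s theorem; the point is only to show the pattern of integrating out one variable at a time.

```python
import numpy as np

# Toy illustration of Fubini's theorem (1).
# E = [0, 1] with d(mu) = 2x dx and F = [0, 2] with d(nu) = e^{-y} dy are
# finite measure spaces, and f is a bounded measurable function.
f = lambda x, y: np.cos(x * y)

xs = np.linspace(0.0, 1.0, 1001)
ys = np.linspace(0.0, 2.0, 2001)
dmu = 2.0 * xs * (xs[1] - xs[0])        # weights approximating mu
dnu = np.exp(-ys) * (ys[1] - ys[0])     # weights approximating nu

fxy = f(xs[:, None], ys[None, :])

# integrate over x first (for each y), then over y
lhs = ((fxy * dmu[:, None]).sum(axis=0) * dnu).sum()
# integrate over y first (for each x), then over x
rhs = ((fxy * dnu[None, :]).sum(axis=1) * dmu).sum()

print(lhs, rhs)  # agree up to floating point rounding
```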

A slight reformulation of Fubini’s theorem is useful for applications to stochastic calculus. Here, we work with respect to a probability space {(\Omega,\mathcal F,{\mathbb P})}, and a process is said to be FV if it is cadlag with finite variation over each finite time interval, and locally bounded if it is almost surely bounded over each finite time interval. I start with the simple case of FV processes, which can be proved as a corollary of Fubini’s theorem.

Theorem 2 Let X be an FV process, {(E,\mathcal E,\mu)} be a finite measure space, and {\{\xi^x\}_{x\in E}} be a uniformly bounded collection of processes such that

\displaystyle  \begin{aligned} &{\mathbb R}^+\times\Omega\times E\rightarrow{\mathbb R},\\ &(t,\omega,x)\mapsto\xi^x_t(\omega) \end{aligned}

is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable. Then,

\displaystyle  (t,\omega)\mapsto\int\xi^x_t(\omega)d\mu(x) (2)

is {\mathcal B({\mathbb R}^+)\otimes\mathcal F}-measurable,

\displaystyle  (t,\omega,x)\mapsto \int_0^t\xi^x(\omega)dX(\omega) (3)

is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable, and,

\displaystyle  \int\int_0^t\xi^x\,dX\,d\mu(x) = \int_0^t\int\xi^x d\mu(x)\,dX (4)

for each {t \ge 0}.

Proof: For each individual value of {\omega\in\Omega}, integration with respect to {X_s(\omega)}, with s varying over the interval {[0,t]}, is just integration with respect to a finite signed measure. Hence, (4) is simply a restatement of Fubini’s theorem (1) with {f(s,x)=\xi^x_s(\omega)}. It only remains to prove measurability of the maps in (2) and (3), which are slightly stronger statements than that given by our application of Fubini’s theorem here.

First, define {f((t,\omega),x)=\xi^x_t(\omega)} so that,

\displaystyle  \int\xi^x_t(\omega)d\mu(x)=\int f((t,\omega),x)d\mu(x).

This is {\mathcal B({\mathbb R}^+)\otimes\mathcal F}-measurable by the first part of Fubini’s theorem, as required.

Measurability of (3) is a bit more tricky, and the dependence of X on {\omega} stops us from applying Fubini’s theorem as stated above. Instead, we go back to basics and apply the functional monotone class theorem. So, let {\mathcal H} denote the collection of all jointly measurable and uniformly bounded functions {\xi^x_t(\omega)} such that (3) has the stated measurability property. By linearity, this is clearly closed under taking linear combinations and, by monotone convergence, is closed under taking limits of uniformly bounded and nonnegative increasing sequences in {\mathcal H}. Consider {\xi^x_s(\omega)=1_{(\omega,x)\in S}1_{s\le T}} for some {T\ge0} and {S\in\mathcal F\otimes\mathcal E}. Then,

\displaystyle  \int_0^t\xi^x(\omega)dX(\omega)=1_{(\omega,x)\in S}(X_{t\wedge T}(\omega)-X_0(\omega)).

This is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable, so {\xi^x_s(\omega)} is in {\mathcal H}. The monotone class theorem says that all uniformly bounded {\xi^x} satisfying the requirements of the theorem are in {\mathcal H}, so (3) is measurable as stated. ⬜
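To see theorem 2 in action, here is a short Python sketch (an illustration only, with all of the ingredients chosen for convenience rather than taken from the theorem). The FV process X is piecewise constant, so that integration against dX reduces to summing jumps, E is a finite set carrying a finite measure {\mu}, and the two sides of (4) are computed directly as finite sums.

```python
import numpy as np

rng = np.random.default_rng(1)

# X: a piecewise constant FV path with jumps dX_k at times t_k, so that
# int_0^t H dX = sum_{t_k <= t} H_{t_k} * dX_k for any integrand H.
n_jumps = 50
t_jump = np.sort(rng.uniform(0.0, 1.0, n_jumps))
dX = rng.normal(0.0, 1.0, n_jumps)

# E = {0, ..., m-1} with a finite measure mu, and bounded integrands
# xi^x_t = cos(x * t), jointly measurable in (t, x).
m = 10
xs = np.arange(m)
mu = rng.uniform(0.0, 1.0, m)
xi = lambda x, t: np.cos(x * t)

t = 0.8
active = t_jump <= t

# left side of (4): integrate against dX first, then in mu
U = np.array([np.sum(xi(x, t_jump[active]) * dX[active]) for x in xs])
lhs = np.sum(U * mu)

# right side of (4): integrate in mu first, then against dX
zeta = np.array([np.sum(xi(xs, s) * mu) for s in t_jump[active]])
rhs = np.sum(zeta * dX[active])

print(lhs, rhs)  # equal up to floating point rounding
```

For a general FV process the sums over jumps would be replaced by Lebesgue–Stieltjes integrals, but the exchange of the two integrals works in exactly the same way.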

The result stated in theorem 2 only applies to FV processes, whereas stochastic integration is defined more generally for semimartingales. Generalizing to semimartingales does introduce some technical problems though. First, it is necessary that the integrand is predictable. That is, it should be measurable with respect to the predictable sigma-algebra {\mathcal P}. So, we require a slightly stronger measurability condition than in theorem 2, but this is not too difficult. As usual, we work with respect to a filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}.

Lemma 3 Let {(E,\mathcal E,\mu)} be a finite measure space and {\{\xi^x\}_{x\in E}} be a uniformly bounded collection of processes such that

\displaystyle  \begin{aligned} &{\mathbb R}^+\times\Omega\times E\rightarrow{\mathbb R},\\ &(t,\omega,x)\mapsto\xi^x_t(\omega) \end{aligned}

is {\mathcal P\otimes\mathcal E}-measurable. Then, the process

\displaystyle  \zeta_t=\int\xi^x_td\mu(x)

is bounded and predictable.

Proof: It is clear that {\zeta} is bounded, so it only needs to be shown to be predictable. As {f((t,\omega),x)\equiv\xi^x_t(\omega)} is {\mathcal P\otimes\mathcal E}-measurable, the first part of Fubini’s theorem as stated above says that

\displaystyle  (t,\omega)\mapsto\zeta_t(\omega)=\int f((t,\omega),x)d\mu(x)

is {\mathcal P}-measurable. ⬜

The next technical difficulty in giving a stochastic version of Fubini’s theorem is that if {\xi^x} is a bounded predictable process and X is a semimartingale, then the integral

\displaystyle  \int_0^t\xi^x\,dX

is only defined up to probability one. Therefore, asking if it is measurable with respect to x does not even make sense. Furthermore, the arbitrary choice of the value of the integral on an uncountable collection of zero probability events, one for each x, could affect the value of the integral over x. This is the old problem of choosing good versions of stochastic processes except, now, we are concerned with the path as the variable x varies, rather than the time index t.

Lemma 4 Let X be a semimartingale, {(E,\mathcal E)} be a measurable space, and {\{\xi^x\}_{x\in E}} be uniformly bounded processes satisfying the measurability requirement of lemma 3. Then, there exist processes {\{U^x\}_{x\in E}} such that

\displaystyle  U^x_t=\int_0^t\xi^x\,dX (5)

almost surely, for each t and x, and such that

\displaystyle  \begin{aligned} &{\mathbb R}^+\times\Omega\times E\rightarrow{\mathbb R},\\ &(t,\omega,x)\mapsto U^x_t(\omega) \end{aligned}

is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable and is cadlag in t.

This result depends on choosing a good version of the stochastic integral, simultaneously for all values of x, which is a bit tricky, so is left until later. We can now give a precise statement of the generalization of Fubini’s theorem for stochastic integration with respect to a semimartingale.

Theorem 5 (Stochastic Fubini Theorem) Let X be a semimartingale, {(E,\mathcal E,\mu)} be a finite measure space and

\displaystyle  (t,\omega, x)\mapsto \xi^x_t(\omega)

be a real-valued, bounded, and {\mathcal P\otimes\mathcal E}-measurable map. Let

\displaystyle  U^x_t=\int_0^t\xi^x_s\,dX_s

be as given by lemma 4. Then, {\int\lvert U^x_t\rvert\,d\mu(x)} is almost surely finite and,

\displaystyle  \int U^x_td\mu(x)=\int_0^t\int\xi_s^xd\mu(x)dX_s (6)

almost surely.

In the statement, the fact that {\int\lvert U^x_t\rvert\,d\mu(x)} is almost surely finite is required to ensure that the integral on the left of (6) is well-defined. Unlike in the FV case above, we do not know that {U^x_t} is almost surely bounded as x varies.

Before moving on to the proof of this theorem, there is a small ambiguity to be cleared up. We know that certain pathwise properties of stochastic processes, such as continuity, are sufficient to prove that the version is unique up to evanescence. Joint measurability is not sufficient by itself, so there will generally be many non-equivalent versions of the stochastic integral satisfying the conclusion of lemma 4. In fact, it does not matter which version is chosen in (6), as they will all give the same result when we perform the integral.

Lemma 6 Let {\{U^x\}_{x\in E}} and {\{V^x\}_{x\in E}} be collections of random variables which are jointly measurable, in the sense that

\displaystyle  \begin{aligned} &(\omega,x)\mapsto U^x(\omega),\\ &(\omega,x)\mapsto V^x(\omega) \end{aligned}

are {\mathcal F\otimes\mathcal E}-measurable. If {U^x=V^x} almost surely, for each {x\in E} then, with probability one, {U^x=V^x} for {\mu} almost all x.

To be precise, there exists {A\in\mathcal F} of probability one and, for each {\omega\in A}, there exists a set {E_\omega\in\mathcal E} of full {\mu} measure, such that {U^x(\omega)=V^x(\omega)} for all {x\in E_\omega}.

In particular, if the random variables are nonnegative then,

\displaystyle  \int U^x d\mu(x) = \int V^x d\mu(x)

almost surely.

Proof: As the integral does not depend on the values of the integrand on a null set, the ‘in particular’ part of the lemma follows immediately from the first statement. We just need to show that, with probability one, {U^x=V^x} for {\mu} almost all x. Applying the classical Fubini theorem,

\displaystyle  {\mathbb E}\left[\int\lvert U^x-V^x\rvert d\mu(x)\right]= \int{\mathbb E}\left[\lvert U^x-V^x\rvert\right]d\mu(x) =0.

As any nonnegative random variable with zero expected value is almost surely zero, we see that

\displaystyle  \int\lvert U^x-V^x\rvert d\mu(x)=0

almost surely. In this case, {\lvert U^x-V^x\rvert=0} for {\mu} almost all x. ⬜


Existence of Measurable Integrals

I now give a proof of lemma 4, showing that we can always choose a jointly measurable version of the stochastic integral. The proof is along similar lines to Protter, Stochastic Integration and Differential Equations. We start with the following result showing that certain limits of jointly measurable processes themselves have jointly measurable versions.

Lemma 7 Let {(E,\mathcal E)} be a measurable space and {U^{x,n}_t} be random variables for {x\in E} and {t\ge0} such that

\displaystyle  (t,\omega,x)\mapsto U^{x,n}_t(\omega)

is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable and cadlag in t, for each positive integer n. Suppose that {U^x_t} is a collection of random variables such that {U^{x,n}\xrightarrow{\rm ucp}U^x} (uniform convergence on compacts in probability) as n goes to infinity, for each x.

Then, {U^x_t(\omega)} has a version which is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable and cadlag in t.

Proof: In order to measure the rate of convergence, consider the pseudometric defining the ucp topology,

\displaystyle  D(X-Y)=\sum_{k=1}^\infty{\mathbb E}\left[2^{-k}\wedge\sup_{s\le k}\lvert X_s-Y_s\rvert\right].

Then, define

\displaystyle  F_n(x)=\sup_{m\ge n}D(U^{x,m}-U^x)

which, by ucp convergence, decreases to zero as n goes to infinity. Using ucp convergence again,

\displaystyle  F_n(x)=\sup_{m\ge n}\lim_{k\rightarrow\infty}D(U^{x,m}-U^{x,k})

which, by joint measurability of {U^{x,n}_t(\omega)}, is {\mathcal E}-measurable.

Now, for any fixed {\epsilon > 0}, set {S_n=\{x\in E\colon F_n(x) < \epsilon\}}. Taking {S_0=\emptyset}, define

\displaystyle  V^x_t=\sum_{n=1}^\infty 1_{\{x\in S_n\setminus S_{n-1}\}}U^{x,n}_t.

From the definition, this is jointly measurable and cadlag in t. For any {x\in E}, choosing n such that {x\in S_n\setminus S_{n-1}} gives

\displaystyle  D(V^x-U^x)=D(U^{x,n}-U^x) < \epsilon.

Hence, for each positive integer m, replacing {\epsilon} by {2^{-m}} in the argument above shows that there exists a {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable process {V^{x,m}_t(\omega)} which is cadlag in t and,

\displaystyle  D(V^{x,m}-U^x) < 2^{-m}

for all {x\in E}. Hence,

\displaystyle  \sum_{n=1}^\infty D(V^{x,n+1}-V^{x,n})\le\sum_{n=1}^\infty(2^{-(n+1)}+2^{-n}) < \infty.

As this is the expectation of

\displaystyle  \sum_{n=1}^\infty\sum_{k=1}^\infty2^{-k}\wedge\sup_{t\le k}\lvert V^{x,n+1}_t-V^{x,n}_t\rvert,

this sum has finite expectation and, so, is almost surely finite for each x. Let {A\in\mathcal F\otimes\mathcal E} be the set on which the sum is finite. As {V^{x,m}_t} converges uniformly on compacts on this set, we can define

\displaystyle  \tilde U^x_t(\omega)=\begin{cases} \lim_{m\rightarrow\infty}V^{x,m}_t(\omega),&{\rm for\ }(\omega,x)\in A,\\ 0,&{\rm otherwise}. \end{cases}

By construction, this is cadlag in t and is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable. Furthermore, for each {x\in E}, we showed that {{\mathbb P}(\omega\colon(\omega,x)\in A)=1}, so {V^{x,m}\rightarrow\tilde U^x} uniformly on compacts, almost surely. As {V^{x,m}} also converges ucp to {U^x}, this means that {\tilde U^x_t=U^x_t} almost surely for each t, as required. ⬜
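The construction in this proof is essentially algorithmic: for each x, choose the first approximation whose ucp distance to the limit is below a threshold, and splice these choices together. The Python fragment below is a rough sketch of that selection step on a finite grid of x values and discretized paths, with the pseudometric D replaced by a simple sup-distance on the grid; the grid, the error estimates and the synthetic processes are all assumptions made purely for illustration.

```python
import numpy as np

def splice_version(U_approx, D_est, eps):
    """Sketch of the splicing step in lemma 7.

    U_approx[n, x] holds (a discretization of) the path of U^{x,n} and
    D_est[n, x] an estimate of the distance D(U^{x,n} - U^x).  For each x
    we pick the first n with sup_{m >= n} D_est[m, x] < eps (the set S_n
    in the proof) and use that approximation as the value of V^x.
    """
    n_x = D_est.shape[1]
    # F_n(x) = sup_{m >= n} D(U^{x,m} - U^x), via a reversed running maximum
    F = np.maximum.accumulate(D_est[::-1], axis=0)[::-1]
    V = np.empty_like(U_approx[0])
    for x in range(n_x):
        n_first = np.argmax(F[:, x] < eps)      # first index with F_n(x) < eps
        if not F[n_first, x] < eps:             # no approximation is good enough yet
            raise ValueError("eps too small for the given approximations")
        V[x] = U_approx[n_first, x]
    return V

# tiny synthetic example: U^{x,n}_t = (1 - 2^-n) sin(x t) approximates
# U^x_t = sin(x t), with D replaced by the sup-distance on a time grid
ts = np.linspace(0.0, 1.0, 101)
xs = np.linspace(0.0, 3.0, 7)
U_true = np.sin(xs[:, None] * ts[None, :])
U_approx = np.array([(1 - 2.0**-n) * U_true for n in range(1, 11)])
D_est = np.abs(U_approx - U_true).max(axis=2)
print(np.abs(splice_version(U_approx, D_est, eps=1e-2) - U_true).max())  # below eps
```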

Lemma 7 can be applied to complete the proof of lemma 4.

Proof of Lemma 4: We will use the functional monotone class theorem, so define {\mathcal H} to be the collection of bounded {\mathcal P\otimes\mathcal E}-measurable processes {\xi^x_t(\omega)} satisfying the conclusion of the lemma. That is, {U^x_t(\omega)} defined by (5) has a version which is jointly measurable and is cadlag in t. By linearity of the integral, {\mathcal H} is closed under taking linear combinations. Next, for {\xi^x_t} of the form {1_{x\in A}\zeta_t} for {A\in\mathcal E} and bounded predictable {\zeta}, {U^x_t} can be expressed as

\displaystyle  U^x_t=1_{x\in A}\int_0^t\zeta\,dX.

Choosing a cadlag version of the stochastic integral, this is {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable as required.

By the monotone class theorem, it just remains to show that, if {\xi^{x,n}_t} is a nonnegative and uniformly bounded sequence in {\mathcal H}, increasing in n to a limit {\xi^x_t}, then the limit is also in {\mathcal H}.

By the assumption that {\xi^{x,n}_t} is in {\mathcal H}, we can choose

\displaystyle  (t,\omega,x)\mapsto U^{x,n}_t(\omega)=\int_0^t\xi^{x,n}(\omega)dX(\omega)

to be {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable and cadlag in t, for each n. We also choose a version of {U^x_t} which is cadlag in t, which is possible as stochastic integrals always have a cadlag version. By dominated convergence, {U^{x,n}\xrightarrow{\rm ucp} U^x} as n goes to infinity, for each x. Lemma 7 guarantees that {U^x_t} has a version which is both cadlag in t and {\mathcal B({\mathbb R}^+)\otimes\mathcal F\otimes\mathcal E}-measurable. ⬜


Proof of the Stochastic Fubini Theorem

I now give a proof of theorem 5. One method used, for example by Protter in Stochastic Integration and Differential Equations, is to decompose the semimartingale into FV and local martingale terms. It can then be proved separately for these two cases, and combined to give the full result. However, in keeping with much of my stochastic calculus notes, I take a different approach. This will avoid relying on any semimartingale decompositions, and keep closer to our original definition of stochastic integration. However, whichever way we go about it, handling the various limits does get a bit tricky. The main tool used here will be Ito’s formula. For a semimartingale X and twice continuously differentiable {f\colon{\mathbb R}\rightarrow{\mathbb R}}, this says that

\displaystyle  \begin{aligned} f(X_t)&=f(X_0)+\int_0^t f^\prime(X_-)dX+\frac12\int_0^t f^{\prime\prime}(X_-)d[X]^{c}\\ &+\sum_{s\le t}(\Delta f(X_s)-f^\prime(X_{s-})\Delta X_s). \end{aligned}

In addition, we will suppose that {f} is bounded, along with its first and second order derivatives. If {\tfrac12\lvert f^{\prime\prime}\rvert} is bounded by a constant {L\ge0} then, by Taylor expansion, the jump terms inside the summation in Ito’s formula can be seen to be bounded by {L(\Delta X)^2} and, hence, we obtain the almost-sure bound,

\displaystyle  \left\lvert f(X_t)-f(X_0)-\int_0^tf^\prime(X_-)dX\right\rvert\le L[X]_t.

Consequently, if we furthermore suppose that {f(0)=0} and write {U=\int\xi\,dX} for a predictable process {\xi} bounded by 1 then,

\displaystyle  \left\lvert f(U_t)-\int_0^t\xi f^\prime(U_-)dX\right\rvert\le L\int_0^t\xi^2 d[X]\le L[X]_t (7)

almost surely. The main part of the proof of theorem 5 consists of extending this inequality to incorporate an integral over the auxiliary parameter x.
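Inequality (7) is exactly the estimate which survives discretization, and its discrete analogue makes for a useful sanity check. In the sketch below (an illustration only, with X a simulated Brownian motion path on a time grid, the stochastic integral replaced by its left endpoint Riemann sum, and [X] replaced by the sum of squared increments), the bound holds pathwise by applying Taylor’s theorem to each increment, which is the same mechanism that drives the proof.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete analogue of inequality (7).  X is Brownian motion sampled on a
# grid, xi is bounded by 1 and only looks at the path up to the left
# endpoint of each step, and f(x) = sin(x), so f(0) = 0 and L = sup|f''|/2 = 1/2.
n_steps = 10_000
dt = 1.0 / n_steps
dX = rng.normal(0.0, np.sqrt(dt), n_steps)
X = np.concatenate([[0.0], np.cumsum(dX)])

xi = np.cos(X[:-1])                 # integrand, |xi| <= 1
U = np.concatenate([[0.0], np.cumsum(xi * dX)])

f, fp, L = np.sin, np.cos, 0.5

lhs = abs(f(U[-1]) - np.sum(xi * fp(U[:-1]) * dX))
rhs = L * np.sum(dX ** 2)           # L times the discrete quadratic variation of X
print(lhs, rhs, lhs <= rhs)         # the bound holds for every sample path
```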

Lemma 8 Let X be a semimartingale, {f\colon{\mathbb R}\rightarrow{\mathbb R}} be as above, {(E,\mathcal E,\mu)} be a probability space, and {\xi^x_t(\omega)} be a {\mathcal P\otimes\mathcal E}-measurable process bounded by 1.

If {U^x_t} is the jointly measurable version of the integral, as given by lemma 4 then,

\displaystyle  \left\lvert\int f(U^x_t)d\mu(x)-\int_0^t\int\xi^xf^\prime(U^x_-)d\mu(x)dX\right\rvert\le L[X]_t (8)

almost surely, for each {t\ge0}.

Proof: Note that all of the integrands in (8) are bounded and, hence, integrable. To prove the bound, we use the functional monotone class theorem. So, let {\mathcal H} denote the set of all {\mathcal P\otimes\mathcal E}-measurable processes {\xi^x_t(\omega)} bounded by 1 and for which (8) holds. We first consider {\xi^x} of the form

\displaystyle  \xi^x_t=\sum_{k=1}^n1_{x\in A_k}\zeta^k_t

for a finite sequence of pairwise disjoint sets {A_k\in\mathcal E} and predictable processes {\zeta^k} bounded by 1. Setting {V^k=\int\zeta^k\,dX}, which are cadlag adapted processes,

\displaystyle  U^{x}_t=\sum_{k=1}^n1_{x\in A_k}V^k_t,

which is jointly measurable as required. Then,

\displaystyle  \int \xi^x_tf^\prime(U^x_{t-})d\mu(x)=\sum_{k=1}^n\mu(A_k)\zeta^k_tf^\prime(V^k_{t-}).

We obtain,

\displaystyle  \begin{aligned} &\left\lvert\int f(U^x_t)d\mu(x)-\int_0^t\int\xi^xf^\prime(U^x_-)d\mu(x)dX\right\rvert\\ &= \left\lvert\sum_{k=1}^n\mu(A_k)\left(f(V^k_t)-\int_0^t\zeta^kf^\prime(V^k_-)dX\right)\right\rvert\\ &\le \sum_{k=1}^n\mu(A_k)L[X]_t\le L[X]_t. \end{aligned}

The first inequality here used (7), and the second used the fact that {A_k} are disjoint, so {\mu(A_k)} sum up to no more than 1. Hence, {\xi^x_t} is in {\mathcal H}.

Next, suppose that {\xi^{x,n}_t} is a sequence in {\mathcal H} converging to a limit {\xi^x_t} as n goes to infinity. By dominated convergence, the integrals {U^{x,n}=\int\xi^{x,n}dX} converge ucp to {U^x=\int\xi^xdX}, for each {x\in E}. By the standard Fubini theorem, this gives

\displaystyle  \begin{aligned} &{\mathbb E}\left[\int1\wedge\sup_{s\le t}\lvert U^{x,n}_s-U^x_s\rvert d\mu(x)\right]\\ &=\int{\mathbb E}\left[1\wedge\sup_{s\le t}\lvert U^{x,n}_s-U^x_s\rvert\right] d\mu(x) \rightarrow0 \end{aligned}

as n goes to infinity. So, by passing to a subsequence if necessary, we can assume that

\displaystyle  {\mathbb E}\left[\int1\wedge\sup_{s\le t}\lvert U^{x,n}_s-U^x_s\rvert d\mu(x)\right]\le2^{-n}.

In particular, this means that

\displaystyle  \int\sum_{n=1}^\infty1\wedge\sup_{s\le t}\lvert U^{x,n}_s-U^x_s\rvert d\mu(x)

has finite expectation, so is almost surely finite. Furthermore, when this is finite then {U^{x,n}_s\rightarrow U^x_s} as n tends to infinity, uniformly over {s\le t} and for {\mu} almost all x. So, by dominated convergence, with probability one the limits

\displaystyle  \begin{aligned} & \int f(U^{x,n}_t)d\mu(x)\rightarrow \int f(U^{x}_t)d\mu(x),\\ & \int\xi^{x,n}_sf^\prime(U^{x,n}_{s-})d\mu(x)\rightarrow\int\xi^{x}_sf^\prime(U^{x}_{s-})d\mu(x) \end{aligned}

hold for all {s\le t}. Using dominated convergence for the stochastic integral, taking limits in probability gives

\displaystyle  \begin{aligned} &\left\lvert\int f(U^x_t)d\mu(x)-\int_0^t\int\xi^xf^\prime(U^x_-)d\mu(x)dX\right\rvert\\ &=\lim_{n\rightarrow\infty}\left\lvert\int f(U^{x,n}_t)d\mu(x)-\int_0^t\int\xi^{x,n}f^\prime(U^{x,n}_-)d\mu(x)dX\right\rvert\\ &\le L[X]_t \end{aligned}

almost surely. So, {\xi^x_t} is in {\mathcal H}.

This shows that, if we let {\mathcal{\tilde H}} consist of the bounded {\mathcal P\otimes\mathcal E}-measurable processes {\xi^x_t} such that {(\xi^x_t\wedge1)\vee-1} is in {\mathcal H}, then {\mathcal{\tilde H}} satisfies the hypotheses for the monotone class theorem. Hence, every {\mathcal P\otimes\mathcal E}-measurable process {\xi^x_t} which is bounded by 1 is in {\mathcal{\tilde H}}, so satisfies (8). ⬜

I finally apply lemma 8 to complete the proof of theorem 5.

Proof of Theorem 5: By scaling, without loss of generality, we assume that {\mu} is a probability measure. Suppose that {f\colon{\mathbb R}\rightarrow{\mathbb R}} satisfies {f(0)=0} and is twice continuously differentiable, with bounded derivative, and that {\frac12\lvert f^{\prime\prime}\rvert} is bounded by L. Then, consider {f_n(x)=e^{-x^2/n}f(x)}. It can be seen that {f_n} is bounded, {f_n^\prime} is uniformly bounded over n and {\limsup_n\sup_x\lvert f^{\prime\prime}_n(x)\rvert\le 2L}. So, by lemma 8,

\displaystyle  \limsup_{n\rightarrow\infty}\left\lvert\int f_n(U^{x}_t)d\mu(x)-\int_0^t\int\xi^xf^\prime_n(U^{x}_-)d\mu(x)dX\right\rvert\le L[X]_t, (9)

almost surely. Furthermore, by bounded convergence, the second integral on the left hand side of (9) converges in probability to {\int_0^t\int\xi^xf^\prime(U^x_-)d\mu(x)dX}.

In particular, if {f} is nonnegative, then applying monotone convergence for the first integral on the left hand side of (9) shows that (8) holds for {f}, so that {\int f(U^x_t)d\mu(x)} is almost surely finite. Using {f(x)=\lvert x\rvert+e^{-\lvert x\rvert}-1}, for example, so that {f(x)-\lvert x\rvert} is bounded, we see that {\int\lvert U^x_t\rvert d\mu(x)} is almost surely finite.
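Returning to the approximating functions {f_n}, the bounds on their derivatives claimed above can be checked directly from the explicit expressions (a routine computation, recorded here for completeness),

\displaystyle  \begin{aligned} f_n^\prime(x)&=e^{-x^2/n}\left(f^\prime(x)-\tfrac{2x}{n}f(x)\right),\\ f_n^{\prime\prime}(x)&=e^{-x^2/n}\left(f^{\prime\prime}(x)-\tfrac{4x}{n}f^\prime(x)+\left(\tfrac{4x^2}{n^2}-\tfrac{2}{n}\right)f(x)\right). \end{aligned}

As {f(0)=0} and {f^\prime} is bounded, {\lvert f(x)\rvert\le\Vert f^\prime\Vert_\infty\lvert x\rvert}, so each {f_n} is bounded, the extra term in {f_n^\prime} is bounded uniformly in n, and the extra terms in {f_n^{\prime\prime}} vanish uniformly in x as n goes to infinity, giving {\limsup_n\sup_x\lvert f^{\prime\prime}_n(x)\rvert\le\sup_x\lvert f^{\prime\prime}(x)\rvert\le2L}.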

Finally, consider {f(x)=x}, in which case we can take {L=0}. Then, applying dominated convergence for the first integral on the left hand side of (9) gives

\displaystyle  \left\lvert\int U^x_t d\mu(x)-\int_0^t\int\xi^xd\mu(x)dX\right\rvert\le 0

almost surely, as required. ⬜
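Finally, as with the FV case, it can be instructive to see the identity (6) in a discretized setting. The Python sketch below is purely a numerical illustration: the semimartingale X is a discretized Brownian motion, the stochastic integrals are replaced by left endpoint Riemann sums, and {\mu} is a finite measure on a small grid of x values, all chosen arbitrarily. Once discretized, the two sides of (6) reduce to the same finite double sum; the content of theorem 5 is that this elementary exchange survives passage to the continuous-time limit, where the inner integral is a genuine stochastic integral only defined up to null sets.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretized illustration of the stochastic Fubini identity (6).
# X: Brownian motion on a time grid; xi^x_t = sin(x + t) * 1_{t <= x} is
# bounded and, in continuous time, predictable; mu is a finite measure on
# a grid of x values in [0, 1].
n_steps = 5_000
dt = 1.0 / n_steps
t_grid = np.arange(n_steps) * dt             # left endpoints of the time grid
dX = rng.normal(0.0, np.sqrt(dt), n_steps)

xs = np.linspace(0.0, 1.0, 21)
mu = rng.uniform(0.0, 1.0, len(xs))

xi = np.sin(xs[:, None] + t_grid[None, :]) * (t_grid[None, :] <= xs[:, None])

# left of (6): U^x_t = int_0^t xi^x dX for each x, then integrate in mu
U_t = xi @ dX
lhs = np.sum(U_t * mu)

# right of (6): integrate xi over x against mu first, then against dX
zeta = mu @ xi
rhs = np.sum(zeta * dX)

print(lhs, rhs)  # identical up to floating point rounding
```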
