The Doob-Meyer Decomposition for Quasimartingales

As previously discussed, for discrete-time processes the Doob decomposition is a simple, but very useful, technique which allows us to decompose any integrable process into the sum of a martingale and a predictable process. If {\{X_n\}_{n=0,1,2,\ldots}} is an integrable discrete-time process adapted to a filtration {\{\mathcal{F}_n\}_{n=0,1,2,\ldots}}, then the Doob decomposition expresses X as

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle X_n&\displaystyle=M_n+A_n,\smallskip\\ \displaystyle A_n&\displaystyle=\sum_{k=1}^n{\mathbb E}\left[X_k-X_{k-1}\;\vert\mathcal{F}_{k-1}\right]. \end{array} (1)

Then, M is a martingale and A is an integrable process which is also predictable, in the sense that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n > 0}. The expected value of the variation of A can be computed in terms of X,

\displaystyle  {\mathbb E}\left[\sum_{k=1}^n\lvert A_k-A_{k-1}\rvert\right] ={\mathbb E}\left[\sum_{k=1}^n\left\lvert {\mathbb E}[X_k-X_{k-1}\vert\;\mathcal{F}_{k-1}]\right\rvert\right].

This is the mean variation of X.
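
As a quick illustration of (1), the following Python sketch uses the illustrative choice {X_n=S_n^2}, where S is a simple symmetric random walk; neither the example nor the code comes from the post itself. In this case {{\mathbb E}[X_k-X_{k-1}\;\vert\mathcal{F}_{k-1}]=1}, so the Doob decomposition gives {A_n=n} and {M_n=S_n^2-n}, and the mean variation over n steps is just n. The code checks the martingale property of M by Monte Carlo and computes the mean variation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, n_paths = 20, 100_000

    # Simple symmetric random walk S, and the submartingale X_n = S_n^2.
    steps = rng.choice([-1, 1], size=(n_paths, n_steps))
    S = np.concatenate([np.zeros((n_paths, 1)), steps.cumsum(axis=1)], axis=1)
    X = S ** 2

    # Doob decomposition (1): A_n = sum_{k<=n} E[X_k - X_{k-1} | F_{k-1}].
    # Here E[X_k - X_{k-1} | F_{k-1}] = E[2 S_{k-1} eps_k + 1 | F_{k-1}] = 1,
    # so A_n = n (predictable, increasing) and M = X - A.
    A = np.arange(n_steps + 1)
    M = X - A

    # Sanity check of the martingale property: E[M_n] should be close to zero for all n.
    print("max_n |sample mean of M_n|:", np.abs(M.mean(axis=0)).max())

    # Mean variation of X over n_steps: E[sum_k |A_k - A_{k-1}|], which equals n_steps here.
    print("mean variation:", np.abs(np.diff(A)).sum())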

In continuous time, the situation is rather more complex, and will require constraints on the process X other than just integrability. We have already discussed the case for submartingales — the Doob-Meyer decomposition. This decomposes a submartingale into a local martingale and a predictable increasing process.

A natural setting for further generalising the Doob-Meyer decomposition is that of quasimartingales. In continuous time, the appropriate class of processes to use for the component A of the decomposition is the predictable FV processes. Decomposition (2) below is the same as that in the previous post on special semimartingales. This is not surprising, as we have already seen that the class of special semimartingales is identical to the class of local quasimartingales. The difference with the current setting is that we can express the expected variation of A in terms of the mean variation of X, and obtain a necessary and sufficient condition for the local martingale component to be a proper martingale.

As was noted in an earlier post, historically, decomposition (2) for quasimartingales played an important part in the development of stochastic calculus and, in particular, in the proof of the Bichteler-Dellacherie theorem. That is not the case in these notes, however, as we have already proven the main results without requiring quasimartingales. As always, any two processes are identified whenever they are equivalent up to evanescence.

Theorem 1 Every cadlag quasimartingale X uniquely decomposes as

\displaystyle  X=M+A (2)

where M is a local martingale and A is a predictable FV process with {A_0=0}. Moreover, A has integrable variation over each finite time interval {[0,t]}, satisfying

\displaystyle  {\rm Var}_t(X)={\rm Var}_t(M)+{\mathbb E}\left[\int_0^t\,\lvert dA\rvert\right] (3)

so that, in particular,

\displaystyle  {\mathbb E}\left[\int_0^t\,\lvert dA\rvert\right]\le{\rm Var}_t(X). (4)

Furthermore, the following are equivalent,

  1. X is of class (DL).
  2. M is a proper martingale.
  3. Inequality (4) is an equality for all times t.

Proof: We will proceed by extending the Doob-Meyer decomposition for submartingales to the quasimartingale case. So, apply Rao’s decomposition,

\displaystyle  X = Y-Z

where Y,Z are submartingales. For now, let us assume that the filtration is right-continuous, so that cadlag versions of Y,Z can be chosen.

Now, apply the Doob-Meyer decomposition to each of Y and Z,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle Y&\displaystyle=U+B,\smallskip\\ \displaystyle Z&\displaystyle=V+C. \end{array}

Here, U,V are local martingales and B,C are integrable increasing predictable processes starting from 0. Decomposition (2) is then given by setting {M=U-V} and {A=B-C}. The variation of A is bounded by the sum of B and C, so is integrable over each finite time interval.

Now, we can prove (3). Choose a sequence of stopping times {\tau_n} increasing to infinity such that {M^{\tau_n}} are proper martingales. We have

\displaystyle  {\rm Var}_t(X)={\rm Var}_t(X^{\tau_n})+{\rm Var}_t(X-X^{\tau_n}). (5)

By the martingale property for {M^{\tau_n}}, {{\mathbb E}[X^{\tau_n}_t-X^{\tau_n}_s\;\vert\mathcal{F}_s]} is equal to {{\mathbb E}[A^{\tau_n}_t-A^{\tau_n}_s\;\vert\mathcal{F}_s]} for any times {s < t}. So, evaluating {{\rm Var}_t(X)} along partitions of {[0,t]} gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle {\rm Var}_t(X^{\tau_n})&\displaystyle={\rm Var}_t(A^{\tau_n})={\mathbb E}\left[\int_0^t\,\lvert dA^{\tau_n}\rvert\right]\smallskip\\ &\displaystyle\rightarrow{\mathbb E}\left[\int_0^t\,\lvert dA\rvert\right]. \end{array}

The second equality is from Lemma 10 of the previous post, and the limit follows by monotone convergence. Next, look at {{\rm Var}_t(X-X^{\tau_n})}. As {X=M+A}, we can use the triangle inequality,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle \left\lvert{\rm Var}_t(X-X^{\tau_n}) - {\rm Var}_t(M-M^{\tau_n})\right\rvert\smallskip\\ &\displaystyle\qquad\le {\rm Var}_t(A-A^{\tau_n})={\mathbb E}\left[\int_0^t\,\lvert d(A-A^{\tau_n})\rvert\right]\smallskip\\ &\displaystyle\qquad={\mathbb E}\left[\int_0^t1_{(\tau_n,t]}\,\lvert dA\rvert\right]\rightarrow0 \end{array}

The final limit follows by dominated convergence. Also, as {M^{\tau_n}} is a proper martingale, it has zero mean variation, so {{\rm Var}_t(M-M^{\tau_n})} is equal to {{\rm Var}_t(M)}. Hence,

\displaystyle  {\rm Var}_t(X-X^{\tau_n})\rightarrow{\rm Var}_t(M).

So, taking the limit {n\rightarrow\infty} in (5) gives (3). Inequality (4) is an immediate consequence of this.

It just remains to prove the equivalence of the three statements in the theorem. First, on each finite interval {[0,t]}, the process A is bounded by its variation, which is integrable. So, A is always of class (DL). Then, X is of class (DL) if and only if M is. As a local martingale is a proper martingale if and only if it is of class (DL), we see that statements 1 and 2 are equivalent.

Next, using (3), inequality (4) is an equality if and only if {{\rm Var}_t(M)=0}, which is equivalent to M being a proper martingale. So, statements 2 and 3 are equivalent.

The above proof of the existence of decomposition (2) assumed that the underlying filtration is right-continuous. We now show that this condition can be removed. Let {\lvert\xi\rvert\le 1} be an elementary process with respect to the right-continuous filtration {\{\mathcal{F}_{t+}\}_{t\ge0}}. Then, the process {\xi^n_t=1_{\{t>1/n\}}\xi_{t-1/n}} is elementary with respect to the original filtration. By Lemma 5 of the previous post, {X_{t+1/n}\rightarrow X_t} in {L^1} as {n\rightarrow\infty}, so

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\rm Var}_t(X)\ge{\mathbb E}\left[\int_0^t\xi^n\,dX\right]\smallskip\\ &\displaystyle\qquad={\mathbb E}\left[\int_0^t\xi_s\,dX_{(s+1/n)\wedge t}\right]\rightarrow{\mathbb E}\left[\int_0^t\xi\,dX\right]. \end{array}

Taking the supremum over all such {\xi} shows that the mean variation of X over {[0,t]} taken with respect to the right-continuous filtration {\mathcal{F}_{\cdot+}} is bounded by {{\rm Var}_t(X)}, so is finite. We can therefore apply the above proof to obtain decomposition (2) with respect to this filtration. That is, M is a local martingale and A is predictable with respect to {\mathcal{F}_{\cdot+}}.

Next, as {\mathcal{F}_{\cdot+}} and {\mathcal{F}_\cdot} generate the same predictable sigma-algebra, A is a predictable FV process w.r.t. {\mathcal{F}_\cdot}. Now, choose stopping times

\displaystyle  \tau_n=\inf\left\{t\ge0\colon\lvert X_t\rvert\ge n\right\}.

Then, the stopped process {X^{\tau_n}} is bounded by the integrable random variable {n+\lvert X^{\tau_n}_t\rvert} over the interval {[0,t]}. Hence, it is of class (DL), so {M^{\tau_n}} is a proper martingale w.r.t. {\mathcal{F}_{\cdot+}} and, therefore, is also a martingale w.r.t. the original filtration. So M is a local martingale w.r.t. the original filtration, as required. ⬜
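
As a simple example of Theorem 1, let N be a Poisson process of rate {\lambda} with respect to the underlying filtration. Over each finite time interval it is an integrable submartingale, hence a quasimartingale, and decomposition (2) is

\displaystyle  N_t=(N_t-\lambda t)+\lambda t,

with {M_t=N_t-\lambda t} a martingale and {A_t=\lambda t} a deterministic, hence predictable, increasing FV process. As the conditional increments {{\mathbb E}[N_t-N_s\;\vert\mathcal{F}_s]=\lambda(t-s)} are nonnegative, the sum in the definition of the mean variation equals {\lambda t} for every partition of {[0,t]}, so {{\rm Var}_t(N)=\lambda t={\mathbb E}[\int_0^t\lvert dA\rvert]}. Inequality (4) therefore holds with equality, in agreement with the facts that N is of class (DL) and {N_t-\lambda t} is a proper martingale.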

Approximating the Compensator

Finally, we can show that the Doob-Meyer decomposition (2) is indeed the continuous-time limit of the discrete-time Doob decomposition (1). The remainder of this post closely follows the argument given in the previous posts on compensators and the Doob-Meyer submartingale decomposition.

We discretize time using a stochastic partition P of {{\mathbb R}_+}, which is a sequence of stopping times

\displaystyle  0=\tau_0\le\tau_1\le\tau_2\le\cdots\uparrow\infty.

The mesh of the partition is {\vert P\vert=\sup_n(\tau_n-\tau_{n-1})}. The compensator, {A^P}, of X computed along the partition P is

\displaystyle  A^P_t=\sum_{n=1}^\infty1_{\{\tau_{n-1} < t\}}{\mathbb E}\left[X_{\tau_n}-X_{\tau_{n-1}}\;\vert\mathcal{F}_{\tau_{n-1}}\right]. (6)

We will consider class (D) processes X for which {{\rm Var}(X)=\lim_{t\rightarrow\infty}{\rm Var}_t(X)} is finite. The class (D) property ensures that X is {L^1}-bounded, so

\displaystyle  {\rm Var}^*(X)={\rm Var}(X)+\lim_{t\rightarrow\infty}{\mathbb E}[\lvert X_t\rvert]

is finite. By quasimartingale convergence, the limit {X_\infty=\lim_{t\rightarrow\infty}X_t} exists, and {X_\tau} is integrable for all stopping times {\tau}. So, the expectations in (6) are well defined. Theorem 1 says that {X=M+A} for a martingale M and predictable FV process A with integrable variation,

\displaystyle  {\mathbb E}\left[\int_0^\infty\,\lvert dA\rvert\right]={\rm Var}(X) < \infty.

This ensures that {A_\infty=\lim_{t\rightarrow\infty}A_t} exists and is integrable, and A is of class (D). Therefore, M is a class (D) martingale. Optional sampling implies that {{\mathbb E}[M_{\tau_n}-M_{\tau_{n-1}}\vert\mathcal{F}_{\tau_{n-1}}]=0}, so (6) can be rewritten to express {A^P} in terms of A,

\displaystyle  A^P_t=\sum_{n=1}^\infty1_{\{\tau_{n-1} < t\}}{\mathbb E}\left[A_{\tau_n}-A_{\tau_{n-1}}\;\vert\mathcal{F}_{\tau_{n-1}}\right]. (7)

In the case where X is quasi-left-continuous, so that A is continuous, the approximation to the compensator calculated along partitions P converges uniformly in {L^1} to A as the mesh goes to zero. The notation {\vert P\vert\xrightarrow{\rm P}0} denotes the limit as the mesh {\vert P\vert} goes to zero in probability.

Theorem 2 Let X be a cadlag and quasi-left-continuous process of class (D), with finite mean variation {{\rm Var}(X) < \infty}. Then, with A as in decomposition (2), {A^P\rightarrow A} uniformly in {L^1} as {\vert P\vert\rightarrow0} in probability. That is,

\displaystyle  \lim_{\vert P\vert\xrightarrow{\rm P}0}{\mathbb E}\left[\sup_{t\ge0}\vert A^P_t-A_t\vert\right]=0.

Proof: As X is quasi-left-continuous, its compensator A is continuous. So, equation (7) above and Theorem 10 of the post on compensators, applied to A, give the result. ⬜
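
For a numerical illustration of Theorem 2, the following Python sketch applies formula (6) to the illustrative choice {X_t=W_t+\int_0^tW_s^2\,ds} on the time interval {[0,1]} (restricting to a finite horizon for simplicity), where W is a standard Brownian motion. Then X is continuous, hence quasi-left-continuous, with compensator {A_t=\int_0^tW_s^2\,ds}, and the conditional increments in (6) along a deterministic partition are available in closed form,

\displaystyle  {\mathbb E}\left[X_{t_n}-X_{t_{n-1}}\;\vert\mathcal{F}_{t_{n-1}}\right]=W_{t_{n-1}}^2(t_n-t_{n-1})+\tfrac12(t_n-t_{n-1})^2,

so {A^P} can be evaluated exactly along each simulated path. None of the code is from the original post; it is a minimal numpy sketch, with the supremum in Theorem 2 restricted to the partition times as a discrete proxy. The reported estimate of {{\mathbb E}[\sup_t\lvert A^P_t-A_t\rvert]} should decrease as the mesh shrinks.

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_fine = 5_000, 512   # Monte Carlo paths and fine time grid on [0, 1]
    dt = 1.0 / n_fine

    # Brownian paths W on the fine grid, and the compensator A_t = int_0^t W_s^2 ds
    # approximated by a left-point Riemann sum on the same grid.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
    W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)
    A = np.concatenate(
        [np.zeros((n_paths, 1)), (W[:, :-1] ** 2 * dt).cumsum(axis=1)], axis=1
    )

    # Discrete compensator (6) along deterministic partitions of mesh h, using the
    # closed-form conditional increment W_{t_{n-1}}^2 * h + h^2 / 2.
    for n_coarse in (8, 32, 128):
        h = 1.0 / n_coarse
        idx = np.arange(0, n_fine + 1, n_fine // n_coarse)    # coarse partition indices
        W_left = W[:, idx[:-1]]                               # W at the left endpoints
        AP = np.concatenate(
            [np.zeros((n_paths, 1)), (W_left ** 2 * h + h ** 2 / 2).cumsum(axis=1)],
            axis=1,
        )
        # Sup over the partition times as a proxy for sup_t |A^P_t - A_t|.
        err = np.abs(AP - A[:, idx]).max(axis=1).mean()
        print(f"mesh {h:.4f}: estimated E[sup |A^P - A|] = {err:.4f}")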

If the quasimartingale X is not quasi-left-continuous, then the discrete-time compensator is not guaranteed to converge in {L^1}. An explicit example where this is the case was given in the post on compensators of stopping times. Instead, we have to work with weak convergence in {L^1}.

Theorem 3 Let X be a cadlag class (D) process with finite mean variation, {{\rm Var}(X) < \infty}. Then, with A as in decomposition (2), {A^P_\tau\rightarrow A_\tau} weakly in {L^1} as {\vert P\vert\rightarrow0} in probability, for any random time {\tau\colon\Omega\rightarrow{\mathbb R}_+\cup\{\infty\}}. That is,

\displaystyle  \lim_{\vert P\vert\xrightarrow{\rm P}0}{\mathbb E}\left[YA^P_\tau\right]={\mathbb E}\left[YA_\tau\right]

for all uniformly bounded random variables Y.

Proof: This follows immediately from equation (7) above and Theorem 11 of the post on compensators, applied to the process A. ⬜
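
The need for quasi-left-continuity in Theorem 2 can be seen in a simple deterministic example. Take {X_t=1_{\{t\ge1\}}}, which is itself a predictable FV process, so that {A=X} and {M=0} in decomposition (2). Along a deterministic partition, the conditional increments in (6) are {1_{\{t_{n-1}<1\le t_n\}}}, giving {A^P_t=1_{\{t>t_m\}}} where {t_m} is the last partition time before 1. At each fixed time, {A^P_t\rightarrow A_t} as the mesh goes to zero, in agreement with Theorem 3, but {\sup_t\lvert A^P_t-A_t\rvert=1} for every partition, so the uniform convergence of Theorem 2 fails.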
