Constructing Martingales with Prescribed Jumps

In this post we will describe precisely which processes can be realized as the jumps of a local martingale. This leads to very useful decomposition results for processes — see Theorem 10 below, where we give a decomposition of a process X into martingale and predictable components. As I will explore further in future posts, this enables us to construct particularly useful decompositions for local martingales and semimartingales.

Before going any further, we start by defining the class of local martingales which will be used to match prescribed jump processes. The purely discontinuous local martingales are, in a sense, the orthogonal complement to the class of continuous local martingales.

Definition 1 A local martingale X is said to be purely discontinuous iff XM is a local martingale for all continuous local martingales M.

The class of purely discontinuous local martingales is often denoted as {\mathcal{M}_{\rm loc}^{\rm d}}. Clearly, any linear combination of purely discontinuous local martingales is purely discontinuous. I will investigate {\mathcal{M}_{\rm loc}^{\rm d}} in more detail later but, in order that we do have plenty of examples of such processes, we show that all FV local martingales are purely discontinuous.

Lemma 2 Every FV local martingale is purely discontinuous.

Proof: If X is an FV local martingale and M is a continuous local martingale then we can compute the quadratic covariation,

\displaystyle  [X,M]_t=\sum_{s\le t}\Delta X_s\Delta M_s=0.

The first equality follows because X is an FV process, and the second because M is continuous. So, {XM=XM-[X,M]} is a local martingale and X is purely discontinuous. ⬜
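
To have a concrete example in mind: if N is a Poisson process of rate {\lambda} then the compensated process {X_t=N_t-\lambda t} is an FV martingale and hence, by Lemma 2, purely discontinuous. The following Monte Carlo sketch (purely illustrative, with an arbitrary rate and seed) checks that {{\mathbb E}[X_t]=0} and that {{\mathbb E}[[X]_t]={\mathbb E}[N_t]=\lambda t}, using the fact that the quadratic variation of this FV process is the sum of its squared jumps.

```python
import numpy as np

# Monte Carlo sanity check (an illustration, not a proof):
# X_t = N_t - lam*t is an FV martingale, so E[X_t] = 0, and its
# quadratic variation is the sum of squared jumps, [X]_t = N_t.
rng = np.random.default_rng(42)
lam, t, n_paths = 2.0, 1.0, 200_000

N_t = rng.poisson(lam * t, size=n_paths)  # Poisson counts at time t
X_t = N_t - lam * t                        # compensated process

mean_X = X_t.mean()    # should be close to 0
mean_QV = N_t.mean()   # E[[X]_t] = E[N_t] = lam * t = 2.0

print(mean_X, mean_QV)
```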

Next, an important property of purely discontinuous local martingales is that they are determined uniquely by their jumps. Throughout these notes, I am considering two processes to be equal whenever they are equal up to evanescence.

Lemma 3 Purely discontinuous local martingales are uniquely determined by their initial value and jumps. That is, if X and Y are purely discontinuous local martingales with {X_0=Y_0} and {\Delta X = \Delta Y}, then {X=Y}.

Proof: Setting {M=X-Y} we have {M_0=0} and {\Delta M = 0}. So, M is a continuous local martingale and {M^2= MX-MY} is a local martingale starting from zero. Hence, it is a supermartingale and we have

\displaystyle  {\mathbb E}[M_t^2]\le{\mathbb E}[M_0^2]=0.

So {M_t=0} almost surely and, by right-continuity, {M=0} up to evanescence. ⬜

Note that if X is a continuous local martingale, then the constant process {Y_t=X_0} has the same initial value and jumps as X. So Lemma 3 has the following immediate corollary.

Corollary 4 Any local martingale which is both continuous and purely discontinuous is almost surely constant.

Recalling that the jump process, {\Delta X}, of a cadlag adapted process X is thin, we now state the main theorem of this post and describe precisely those processes which occur as the jumps of a local martingale.

Theorem 5 Let H be a thin process. Then, {H=\Delta X} for a local martingale X if and only if

  1. {\sqrt{\sum_{s\le t}H_s^2}} is locally integrable.
  2. {{\mathbb E}[1_{\{\tau < \infty\}}H_\tau\;\vert\mathcal{F}_{\tau-}]=0} (a.s.) for all predictable stopping times {\tau}.

Furthermore, X can be chosen to be purely discontinuous with {X_0=0}, in which case it is unique.

The proof of this theorem will be given further below. For now, I will explain the two conditions of the theorem in a bit more detail. The first condition, that {\sqrt{\sum_{s\le t}H^2_s}} is locally integrable (as a process with time variable t), is perhaps more easily understood if it is broken into two separate statements, as in Lemma 6 below. As the jump process of a locally integrable process is itself locally integrable, the first condition of the lemma must be satisfied if H is to be realized as the jumps of a local martingale. Similarly, as local martingales are semimartingales, their jumps must also satisfy the second condition of the lemma. Recall that, in these notes, a process H being locally integrable is equivalent to the nonnegative increasing process {\sup_{s\le t}\lvert H_s\rvert} being locally integrable.

Lemma 6 Let H be a thin process. Then, {\sqrt{\sum_{s\le t}H_s^2}} is locally integrable if and only if the following are satisfied,

  1. H is locally integrable.
  2. {\sum_{s\le t}H_s^2} is almost surely finite for each {t\in{\mathbb R}_+}.

Proof: Writing {Y_t=\sum_{s\le t}H_s^2}, we need to show that the two statements are equivalent to Y being locally {L^{1/2}}-integrable. First, if Y is locally {L^{1/2}}-integrable then it is almost surely finite at each time and {\Delta Y = H^2}. So, {H^2} is locally {L^{1/2}}-integrable and H is locally integrable.

Conversely, if Y is almost surely finite then, again, we have {\Delta Y=H^2} and local integrability of H implies that {H^2} and Y are locally {L^{1/2}}-integrable. ⬜

Now, I move on to explaining the second condition of Theorem 5. For convenience, I will set {H_\infty=0} for any process H in this post, so the condition can be written as {{\mathbb E}[H_\tau\;\vert\mathcal{F}_{\tau-}]=0} for predictable times {\tau}. Note that this conditional expectation is only well-defined in the case that H satisfies the integrability condition that {{\mathbb E}[\lvert H_\tau\rvert\;\vert\mathcal{F}_{\tau-}]} is almost surely finite. Fortunately, this is guaranteed by local integrability, so the first statement of the theorem ensures this.

The second condition of Theorem 5 can be understood in terms of predictable projections. The process K defined by the following lemma is called the predictable projection of H, and is denoted by {{}^p\!H}. This is a thin predictable process satisfying

\displaystyle  {}^p\!H_\tau={\mathbb E}[H_\tau\;\vert\mathcal{F}_{\tau-}]\ \ {\rm(a.s.)}

for all predictable stopping times {\tau}. In fact, the predictable projection theorem says that it is the unique predictable process satisfying this, although this statement is outside of the scope of this post. The second condition of Theorem 5 can be written succinctly as {{}^p\!H=0}.
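
For intuition, in discrete time the predictable projection at time n is just the conditional expectation given {\mathcal{F}_{n-1}}. The sketch below is an illustration with made-up variables (not anything from the theory above): it takes jumps {H=\xi\epsilon+c}, where {\xi} is known one step earlier and {\epsilon} is an independent standard normal, so the projection equals the constant c whatever value {\xi} takes, and estimates the conditional means empirically.

```python
import numpy as np

# Discrete-time analogue of the predictable projection (illustrative only):
# H = xi * eps + c with xi known "one step earlier" and eps ~ N(0,1)
# independent of it, so E[H | F_{n-1}] = c whatever value xi takes.
rng = np.random.default_rng(1)
n, c = 500_000, 0.3

xi = rng.choice([-1.0, 1.0], size=n)   # F_{n-1}-measurable sign
eps = rng.standard_normal(n)           # independent innovation
H = xi * eps + c

# empirical conditional means of H given each value of xi
proj_plus = H[xi == 1.0].mean()
proj_minus = H[xi == -1.0].mean()
print(proj_plus, proj_minus)  # both should be close to c = 0.3
```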

Lemma 7 Let H be a locally integrable thin process. Then, there exists a unique thin process K satisfying

  1. {K_\tau=0} (a.s.) for all totally inaccessible stopping times {\tau}.
  2. {K_\tau={\mathbb E}[H_\tau\;\vert\mathcal{F}_{\tau-}]} (a.s.) for all predictable stopping times {\tau}.

Furthermore, K is a predictable process.

Proof: Letting {B\subseteq{\mathbb R}_+\times\Omega} be the accessible component of the graph {\{H\not=0\}}, Theorem 15 of the post on predictable stopping times says that there is a sequence {\{\tau_n\}_{n=1,2,\ldots}} of predictable stopping times such that {\tau_m\not=\tau_n} whenever {m\not=n} and {B\subseteq\bigcup_n[\tau_n]}. Define the thin process

\displaystyle  K=\sum_{n=1}^\infty{\mathbb E}[H_{\tau_n}\;\vert\mathcal{F}_{\tau_n-}]1_{[\tau_n]}.

For any {\mathcal{F}_{\tau_n-}}-measurable random variables {U_n}, the process {U_n1_{[\tau_n]}} is predictable, so the same is true of the sum {\sum_nU_n1_{[\tau_n]}}, and we see that K is predictable.

Next, if {\tau} is a totally inaccessible time, then {K_\tau=0} follows immediately from the definition. If {\tau} is predictable then predictability of K implies that {K_\tau} is {\mathcal{F}_{\tau-}}-measurable. If U is an {\mathcal{F}_{\tau-}}-measurable random variable such that {UH_\tau} is integrable,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[UH_\tau] &\displaystyle=\sum_n{\mathbb E}[1_{\{\tau=\tau_n\}}UH_\tau] +{\mathbb E}[1_{\{\tau\not\in\{\tau_1,\tau_2,\ldots\}\}}UH_\tau]\smallskip\\ &\displaystyle=\sum_n{\mathbb E}[1_{\{\tau=\tau_n\}}UH_{\tau_n}]=\sum_n{\mathbb E}[1_{\{\tau=\tau_n\}}UK_{\tau_n}]\smallskip\\ &\displaystyle={\mathbb E}[UK_\tau]. \end{array}

As {K_\tau} is {\mathcal{F}_{\tau-}}-measurable and U was an arbitrary such random variable, this shows that the second statement holds.

Finally, it remains to show that any other thin process {\tilde K} satisfying the two statements is equal to K. By the first property, {\tilde K_\tau=K_\tau=0} at all totally inaccessible times. So, we just need to show that they are equal at a predictable stopping time {\tau}. For {\tau=\tau_n}, the definition of K gives

\displaystyle  \tilde K_\tau={\mathbb E}[H_\tau\;\vert\mathcal{F}_{\tau-}]= K_\tau

as required. So, only the case where {\tau\not=\tau_n} (all n) whenever {\tau < \infty} remains. In this case, however, the definition of {\{\tau_n\}} and K shows that {H_\tau=K_\tau=0}, so the equality still holds. ⬜

The predictable projection of the jumps of a local martingale is always zero, explaining the necessity of the second statement of Theorem 5.

Lemma 8 The jumps of a local martingale X satisfy {{}^p\!\Delta X = 0}.

Proof: The conclusion is equivalent to {{\mathbb E}[\Delta X_\tau\;\vert\mathcal{F}_{\tau-}]=0} for each predictable stopping time {\tau}. This was previously shown in the proof of Lemma 2 of the post on compensators. ⬜

Now, consider a thin process H which is locally integrable. If its predictable projection is not zero then we can instead look at the difference {H-{}^p\!H}. It should be clear from the definition that this has zero predictable projection, so Theorem 5 can be used to construct a local martingale with jumps {\Delta X=H-{}^p\!H}. It still needs to be shown that the first condition of the theorem is satisfied, so the result is not immediate. However, it is not difficult to show that the following result follows from Theorem 5.

Theorem 9 Let H be a thin process such that {\sqrt{\sum_{s\le t}H_s^2}} is locally integrable. Then, there exists a unique purely discontinuous local martingale X with {X_0=0} such that either (and then, both) of the following equivalent statements hold.

  1. {\Delta X = H - {}^p\!H}
  2. {H - \Delta X} is predictable.

I will not give the proof of this now, as it drops out of the proof of Theorem 5 below. Instead, I will prove the following very general decomposition result as a straightforward consequence of the previous theorem. Note that, unlike previous decompositions, such as those for special semimartingales and quasimartingales, the following result does not assume any special properties of the process X other than a restriction on the size of its jumps and that it be adapted.

Theorem 10 Let X be an adapted, locally integrable and cadlag process such that {\sum_{s\le t}\Delta X_s^2} is almost surely finite for each finite time {t}. Then, there is a unique decomposition

\displaystyle  X = M + A (1)

where M is a purely discontinuous local martingale and A is a predictable process with {A_0=0}.

Proof: As shown in a previous post, an adapted cadlag process A is predictable if and only if {\Delta A} is predictable. So, a purely discontinuous local martingale M with {M_0=X_0} satisfies the conclusion of the theorem if and only if {\Delta A = \Delta X - \Delta M} is predictable. As X is locally integrable, so is {\Delta X} and, by Lemma 6, {\sqrt{\sum_{s\le t}\Delta X_s^2}} is locally integrable. Existence and uniqueness are then given by Theorem 9 applied with {H=\Delta X}. ⬜
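
Theorem 10 is a continuous-time analogue of the discrete-time Doob decomposition {X_n=M_n+A_n}, where {A_n=\sum_{k\le n}{\mathbb E}[X_k-X_{k-1}\,\vert\,\mathcal{F}_{k-1}]} is predictable. As a hedged illustration (a simulation with arbitrary parameters, not part of the theory above), the sketch below decomposes a Gaussian random walk with drift, for which the predictable part is simply {A_n=\mu n}, and checks that the remaining part is centred.

```python
import numpy as np

# Discrete-time analogue of Theorem 10: Doob decomposition X = M + A.
# With iid increments of mean mu, A_n = mu * n is the predictable part
# and M = X - A should be a martingale (in particular, mean zero).
rng = np.random.default_rng(7)
mu, n_steps, n_paths = 0.5, 50, 100_000

steps = mu + rng.standard_normal((n_paths, n_steps))
X = steps.cumsum(axis=1)              # random walk with drift
A = mu * np.arange(1, n_steps + 1)    # predictable part (deterministic here)
M = X - A                             # candidate martingale part

final_mean = M[:, -1].mean()          # should be close to 0
print(final_mean)
```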

The jumps of the process A in decomposition (1) can be identified as the predictable projection of the jumps of X. Note that the following result also applies to the compensator of X, whenever it exists.

Lemma 11 Suppose that a cadlag process X decomposes as {X=M+A} for a local martingale M and predictable process A. Then, {\Delta A = {}^p\!\Delta X}.

Proof: As previously shown, predictability of A implies that {{}^p\!\Delta A = \Delta A}. So,

\displaystyle  \Delta A = {}^p\!\Delta A = {}^p\!\Delta X - {}^p\!\Delta M = {}^p\!\Delta X.

Proof of Theorems 5 and 9

We aim to prove the following.

Lemma 12 Let H be a thin process such that {\sqrt{\sum_{s\le t}H_s^2}} is locally integrable. Then, there exists a purely discontinuous local martingale X such that {X_0=0} and

\displaystyle  \Delta X = H - {}^p\!H.

Once this is done, the proofs of Theorems 5 and 9 will follow quickly. We start with a simple special case. Throughout the remainder of this post, H will always denote a thin process.

Lemma 13 If {\sum_{s\le t}\lvert H_s\rvert} is locally integrable, then the process X, as in Lemma 12, exists and is an FV process.

Proof: Set {Y_t=\sum_{s\le t} H_s}. This is a locally integrable FV process with {\Delta Y_t = H_t} for {t > 0}. So, the compensator A of Y exists. Its jump process is equal to {\Delta A={}^p\!\Delta Y = {}^p\!H}. Then, {X=Y-A} is an FV local martingale with

\displaystyle  \Delta X = \Delta Y - \Delta A = H - {}^p\!H.

as required. ⬜

The general result will follow by using Lemma 13 and taking limits. I will take limits in {L^2}, for which the following inequality will be used.

Lemma 14 If X exists as in Lemma 12 and is an FV process, then

\displaystyle  {\mathbb E}\left[[X]_\infty\right]\le{\mathbb E}\left[\sum_{t\ge0}H_t^2\right].

Proof: As H is thin, its support {\{H\not=0\}} is contained in the union of graphs of a sequence of stopping times {\tau_n} such that {\tau_m\not=\tau_n} whenever {m\not=n} and {\tau_n < \infty}, and each {\tau_n} is either predictable or totally inaccessible. For each n where {\tau_n} is predictable, we have {{}^p\!H_{\tau_n}={\mathbb E}[H_{\tau_n}\;\vert\mathcal{F}_{\tau_n-}]} so,

\displaystyle  {\mathbb E}\left[\left(H_{\tau_n}-{}^p\!H_{\tau_n}\right)^2\right]+{\mathbb E}\left[{}^p\!H_{\tau_n}^2\right] ={\mathbb E}\left[H_{\tau_n}^2\right].

On the other hand, {H_{\tau_n}-{}^p\!H_{\tau_n}=H_{\tau_n}} whenever {\tau_n} is totally inaccessible. In either case,

\displaystyle  {\mathbb E}\left[\left(H_{\tau_n}-{}^p\!H_{\tau_n}\right)^2\right]\le{\mathbb E}\left[H_{\tau_n}^2\right].

Then, as X is an FV process, the quadratic variation satisfies

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[[X]_\infty\right]&\displaystyle={\mathbb E}\left[\sum_{t\ge0}(\Delta X_t)^2\right] ={\mathbb E}\left[\sum_{t\ge0}\left(H_t-{}^p\!H_t\right)^2\right]\smallskip\\ &\displaystyle=\sum_n{\mathbb E}\left[\left(H_{\tau_n}-{}^p\!H_{\tau_n}\right)^2\right] \le\sum_n{\mathbb E}\left[H_{\tau_n}^2\right]\smallskip\\ &\displaystyle={\mathbb E}\left[\sum_{t\ge0}H_t^2\right]. \end{array}

By using Lemma 13 and taking {L^2}-limits, we can now prove the following.

Lemma 15 If {\sum_{t\ge0}H_t^2} is integrable then X as in Lemma 12 exists.

Proof: Define the sequence of thin processes {H^n=1_{\{\lvert H\rvert > 1/n\}}H} for {n=1,2,\ldots}. Then, {\lvert H^n\rvert\le n(H^n)^2}, so

\displaystyle  \sum_{t\ge0}\lvert H^n_t\rvert\le n\sum_{t\ge0}(H^n_t)^2.

This is integrable so, by Lemma 13, there is a purely discontinuous local martingale {X^n} such that {X^n_0=0} and {\Delta X^n=H^n-{}^p\!H^n}. For any {m\ge n},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\sup_{t\ge0}\left(X^m_t-X^n_t\right)^2\right] &\displaystyle\le 4{\mathbb E}\left[[X^m-X^n]_\infty\right]\smallskip\\ &\displaystyle\le 4{\mathbb E}\left[\sum_{t\ge0}(H^m_t-H^n_t)^2\right]\smallskip\\ &\displaystyle\le 4{\mathbb E}\left[\sum_{t\ge0}1_{\{\lvert H_t\rvert\le 1/n\}}H_t^2\right] \end{array}

The first inequality here is a consequence of Doob's maximal inequality together with the Ito isometry for local martingales, and the second inequality is from Lemma 14 above. By dominated convergence, this tends to 0 as n goes to infinity. Hence, {X^n} tends uniformly to a limit X in {L^2},

\displaystyle  {\mathbb E}\left[\sup_{t\ge0}\left(X^n_t-X_t\right)^2\right]\rightarrow0

as {n\rightarrow\infty}. So, X is a cadlag {L^2}-integrable martingale. If M is a continuous and uniformly bounded local martingale then, as {X^n} is purely discontinuous, {X^nM} is a local martingale and, as it is dominated in {L^2}, it is a martingale. Taking the limit in {L^2} as n goes to infinity shows that {XM} is a martingale, so X is purely discontinuous. Taking limits in {L^2} as n goes to infinity, we see that

\displaystyle  {}^p\!H^n_\tau={\mathbb E}[H^n_\tau\;\vert\mathcal{F}_{\tau-}]\rightarrow{\mathbb E}[H_\tau\;\vert\mathcal{F}_{\tau-}]={}^p\!H_\tau

for predictable {\tau}. Also, if {\tau} is totally inaccessible, then {{}^p\!H^n_\tau={}^p\!H_\tau=0}. So, in either case,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\Delta X_{\tau} &\displaystyle=\lim_{n\rightarrow\infty}\Delta X^n_\tau = \lim_{n\rightarrow\infty}(H^n_\tau-{}^p\!H^n_\tau)\smallskip\\ &\displaystyle=H_\tau-{}^p\!H_\tau. \end{array}

Now, any thin process H satisfying the condition of Lemma 12 can be decomposed as follows.

Lemma 16 If {\sqrt{\sum_{s\le t}H_s^2}} is locally integrable then we can decompose

\displaystyle  H = J + K

where J,K are thin processes with {\sum_{s\le t}\lvert J_s\rvert} locally integrable and {\sum_{t\ge0}K_t^2} integrable.

Proof: Start with a sequence {\epsilon_1,\epsilon_2,\ldots} of positive reals (whose precise values will be chosen later), and define the processes

\displaystyle  Y^n_t=\sum_{s\le t}1_{\{\lvert H_s\rvert\le\epsilon_n\}}H_s^2

and the stopping times

\displaystyle  \tau_n=\inf\left\{t\ge0\colon Y^n_t\ge2^{-n}\right\}.

Dominated convergence implies that {Y^n_t\rightarrow0} as {\epsilon_n\rightarrow0}, for each positive t. In particular, by taking {\epsilon_n} small enough, we can ensure that

\displaystyle  {\mathbb P}(\tau_n\le n)={\mathbb P}(Y^n_n\ge2^{-n})\le2^{-n}\rightarrow0

as {n\rightarrow\infty}. Then, the stopped process {(Y^n)^{\tau_n}} is bounded by {2^{-n}+\epsilon_n^2}, as its jumps are no more than {\epsilon_n^2}. Again choosing {\epsilon_n} small enough, we can ensure that {(Y^n)^{\tau_n}} is bounded by {2^{1-n}}. Define the thin process

\displaystyle  K_t=1_{\bigcup_n\{t\le \tau_n,\lvert H_t\rvert\le\epsilon_n\}}H_t.

Then,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle \sum_tK_t^2&\displaystyle\le\sum_n\sum_{t\le\tau_n}1_{\{\lvert H_t\rvert\le\epsilon_n\}}H_t^2\smallskip\\ &\displaystyle\le\sum_nY^n_{\tau_n}\le\sum_n2^{1-n}=2, \end{array}

which trivially has finite expectation. Now set {J=H-K}. We have,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle \sum_{s\le t\wedge\tau_n}\lvert J_s\rvert &\displaystyle\le\sum_{s\le t\wedge\tau_n}1_{\{\lvert H_s\rvert > \epsilon_n\}}\lvert H_s\rvert\smallskip\\ &\displaystyle\le\epsilon_n^{-1} \sum_{s\le t}H_s^2 < \infty. \end{array}

Letting n go to infinity, we see that the process {Z_t\equiv\sum_{s\le t}\lvert J_s\rvert} is almost surely finite. Its jumps satisfy {\lvert J_t\rvert\le\lvert H_t\rvert} which, by Lemma 6, is locally integrable, showing that Z is locally integrable. ⬜
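
The decomposition of Lemma 16 can be seen in a simple deterministic example (illustrative only): take H to have jumps {H_k=1/k} at times {k=1,2,\ldots}. Then {\sum_kH_k^2=\pi^2/6} is finite while {\sum_k\lvert H_k\rvert} diverges, and thresholding at a level {\epsilon} splits H into finitely many large jumps J, with {\sum\lvert J\rvert} finite, and small jumps K with {\sum K^2} finite.

```python
import math

# Deterministic illustration of the split H = J + K in Lemma 16:
# jumps H_k = 1/k, threshold eps. J keeps the large jumps (finitely
# many of them), K keeps the small ones, which are square-summable.
K_MAX, eps = 1_000_000, 0.01
H = [1.0 / k for k in range(1, K_MAX + 1)]

J = [h for h in H if h > eps]    # large jumps: k < 1/eps, finitely many
K = [h for h in H if h <= eps]   # small jumps

sum_abs_J = sum(J)                    # finite, roughly log(1/eps)
sum_sq_K = sum(h * h for h in K)      # small tail of pi^2/6
total_sq = sum(h * h for h in H)      # close to pi^2/6
print(sum_abs_J, sum_sq_K, total_sq)
```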

We immediately obtain the proof of Lemma 12.

Proof of Lemma 12: Let {H=J+K} be the decomposition as in Lemma 16. By Lemmas 13 and 15, there exist purely discontinuous local martingales Y and Z with {Y_0=Z_0=0}, {\Delta Y = J-{}^p\!J} and {\Delta Z = K-{}^p\!K}. Setting {X=Y+Z} gives the result. ⬜

Now, Theorem 5 follows easily.

Proof of Theorem 5: The second condition of the theorem says that {{}^p\!H=0}. So, by Lemma 12, there exists a purely discontinuous local martingale X with {X_0=0} and {\Delta X = H-{}^p\!H=H}. Uniqueness of X follows from Lemma 3 — purely discontinuous local martingales are uniquely determined by their jumps and initial value.

Conversely, suppose that {H=\Delta X} for a local martingale X. As local martingales are semimartingales, the quadratic variation {[X]_t} exists, so {\sum_{s\le t}H_s^2\le[X]_t} is almost surely finite. Also, as the jump process of the locally integrable process X, H is locally integrable. So, by Lemma 6 above, {\sqrt{\sum_{s\le t}H_s^2}} is locally integrable. Furthermore, Lemma 8 above shows that {{}^p\!H={}^p\!\Delta X=0}, so the second statement of the theorem is satisfied. ⬜

Finally, we can prove Theorem 9.

Proof of Theorem 9: That there exists a purely discontinuous local martingale X with {X_0=0} and {\Delta X=H-{}^p\!H} is given by Lemma 12. Then, {H-\Delta X={}^p\!H} is predictable.

It just remains to show that if Y is any purely discontinuous local martingale with {Y_0=0} and {H-\Delta Y} predictable, then {Y=X}. However, this condition shows that {\Delta(Y-X)} is predictable and, therefore, {Y-X} is a predictable local martingale. This implies that {Y-X} is continuous. Then, Lemma 3 above implies that {Y=X}. ⬜

