# Predictable Projection For Left-Continuous Processes

In the previous post, I looked at optional projection. Given a non-adapted process X we construct a new, adapted, process Y by taking the expected value of ${X_t}$ conditional on the information available up until time t. I will now concentrate on predictable projection. This is a very similar concept, except that we now condition on the information available strictly before time t.

It will be assumed, throughout this post, that the underlying filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}$ satisfies the usual conditions, meaning that it is complete and right-continuous. This is just for convenience, as most of the results stated here extend easily to non-right-continuous filtrations. The sigma-algebra

$\displaystyle \mathcal{F}_{t-} = \sigma\left(\mathcal{F}_s\colon s < t\right)$

represents the collection of events which are observable strictly before time t and, by convention, we take ${\mathcal{F}_{0-}=\mathcal{F}_0}$. Then, the conditional expectation of X is written as,

 $\displaystyle Y_t={\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]{\rm\ \ (a.s.)}$ (1)

By definition, Y is adapted. However, at each time, (1) only defines Y up to a zero probability set. It does not determine the paths of Y, which requires specifying its values simultaneously at the uncountable set of times in ${{\mathbb R}_+}$. So, (1) does not tell us the distribution of Y at random times, and it is necessary to specify an appropriate version for Y. Predictable projection gives a uniquely defined modification satisfying (1). The full theory of predictable projection for jointly measurable processes requires the predictable section theorem. However, as I demonstrate here, in the case where X is left-continuous, predictable projection can be done by more elementary methods. The statements and most of the proofs in this post will follow very closely those given previously for optional projection. The main difference is that left and right limits are exchanged, predictable stopping times are used in place of general stopping times, and the sigma algebra ${\mathcal{F}_{t-}}$ is used in place of ${\mathcal{F}_t}$.

Stochastic processes will be defined up to evanescence, so two processes are considered to be the same if they are equal up to evanescence. In order to apply (1), some integrability requirements need to be imposed. I will use local integrability. Recall that, in these notes, a process X is locally integrable if there exists a sequence of stopping times ${\tau_n}$ increasing to infinity and such that

 $\displaystyle 1_{\{\tau_n > 0\}}\sup_{t \le \tau_n}\lvert X_t\rvert$ (2)

is integrable. This is a strong enough condition for the conditional expectation (1) to exist, not just at each fixed time, but also whenever t is a stopping time. The main result of this post can now be stated.

Theorem 1 (Predictable Projection) Let X be a left-continuous and locally integrable process. Then, there exists a unique left-continuous process Y satisfying (1).

As it is left-continuous, the fact that Y is specified, almost surely, at any time t by (1) means that it is uniquely determined up to evanescence. The main content of Theorem 1 is the existence of Y, and the proof of this is left until later in this post.

The process defined by Theorem 1 is called the predictable projection of X, and is denoted by ${{}^{\rm p}\!X}$. So, ${{}^{\rm p}\!X}$ is the unique left-continuous process satisfying

 $\displaystyle {}^{\rm p}\!X_t={\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]{\rm\ \ (a.s.)}$ (3)

for all times t. In practice, X will usually not just be left-continuous, but will also have right limits everywhere. That is, it is caglad (“continu à gauche, limites à droite”).

Theorem 2 Let X be a caglad and locally integrable process. Then, its predictable projection is caglad.

The simplest non-trivial example of predictable projection is where ${X_t}$ is constant in t and equal to an integrable random variable U. Then, ${{}^{\rm p}\!X_t=M_{t-}}$ is the process of left limits of the cadlag martingale ${M_t={\mathbb E}[U\;\vert\mathcal{F}_t]}$, so ${{}^{\rm p}\!X}$ is easily seen to be a caglad process.
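To make this example concrete, here is a small Monte Carlo sketch (an illustration, not from the post). It assumes the natural filtration of a rate-1 Poisson process N on ${[0,1]}$ and takes ${U=N_1}$, so that, by independent increments, ${M_t={\mathbb E}[N_1\vert\mathcal{F}_t]=N_t+(1-t)}$ for ${t\le1}$ and the predictable projection of the constant process is ${M_{t-}=N_{t-}+(1-t)}$. Taking expectations in (3) gives ${{\mathbb E}[{}^{\rm p}\!X_t]={\mathbb E}[U]=1}$, which the simulation checks.

```python
import numpy as np

# Sketch: U = N_1 for a rate-1 Poisson process N on [0, 1].  By independent
# increments, M_t = E[N_1 | F_t] = N_t + (1 - t) for t <= 1, so the
# predictable projection of the constant process X = U at time t is
# M_{t-} = N_{t-} + (1 - t).
rng = np.random.default_rng(0)
n_paths, t = 100_000, 0.7

vals = np.empty(n_paths)
for i in range(n_paths):
    # jump times of the Poisson path: N_1 uniform points on [0, 1]
    jumps = np.sort(rng.uniform(0.0, 1.0, size=rng.poisson(1.0)))
    n_t_minus = np.searchsorted(jumps, t, side="left")  # N_{t-}: jumps < t
    vals[i] = n_t_minus + (1.0 - t)                     # M_{t-} at time t

# taking expectations in (3): E[ pX_t ] = E[U] = E[N_1] = 1
print(vals.mean())
```

The sample mean should be close to 1 for any choice of t in ${[0,1]}$, reflecting the tower property rather than anything special about the Poisson example.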

Existence of the predictable projection for caglad processes is much simpler than the general left-continuous case, and follows from the existence of the optional projection of cadlag processes. See Lemma 7 below. First, I prove the basic properties of the predictable projection under the assumption that it exists. Linearity of the projection is almost immediate.

Lemma 3 Let X and Y be left-continuous and locally integrable processes. Then, for ${\mathcal{F}_0}$-measurable random variables ${\lambda}$, ${\mu}$,

$\displaystyle {}^{\rm p}(\lambda X+\mu Y)=\lambda\,^{\rm p}\!X+\mu\,^{\rm p}Y.$

Proof: It is clear that ${\lambda X+\mu Y}$ is left-continuous and locally integrable. Setting ${Z=\lambda\,^{\rm p}\!X+\mu\,^{\rm p}Y}$ we have,

$\displaystyle Z_t=\lambda{\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]+\mu{\mathbb E}[Y_t\;\vert\mathcal{F}_{t-}] ={\mathbb E}[\lambda X_t+\mu Y_t\;\vert\mathcal{F}_{t-}]$

almost surely. So, by definition, Z is the predictable projection of ${\lambda X+\mu Y}$. ⬜

Predictable projection commutes with multiplication by an adapted process.

Lemma 4 Let U and X be left-continuous processes such that U is adapted and X and UX are locally integrable. Then,

 $\displaystyle {}^{\rm p}(UX)=U\,^{\rm p}\!X.$ (4)

Proof: The process ${Z=U\,^{\rm p}\!X}$ is left-continuous. As U is adapted and left-continuous, ${U_t}$ is ${\mathcal{F}_{t-}}$-measurable, and

$\displaystyle Z_t=U_t{\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]={\mathbb E}[U_tX_t\;\vert\mathcal{F}_{t-}]$

almost surely. So, by definition, Z is the predictable projection of UX. ⬜

An important property of predictable projection is the fact that equation (3) still holds when t is generalized to be any predictable stopping time. Compare with Theorem 5 from the post on optional projection. The main difference for predictable projection, other than the use of the sigma algebra ${\mathcal{F}_{\tau-}}$ of events observable strictly before ${\tau}$, is that we restrict to predictable stopping times.

Theorem 5 If X is a left-continuous and locally integrable process then, for all predictable stopping times ${\tau}$,

 $\displaystyle 1_{\{\tau < \infty\}}{}^{\rm p}\!X_\tau={\mathbb E}\left[1_{\{\tau < \infty\}}X_\tau\;\vert\mathcal{F}_{\tau-}\right]{\rm\ \ (a.s.)}.$ (5)

Proof: In order to avoid having to keep multiplying by the term ${1_{\{\tau < \infty\}}}$ in expressions such as (5), I will take all processes to be zero at infinity. Suppose, first, that ${\sup_t\lvert X_t\rvert}$ is integrable. Letting U be an adapted, bounded and left-continuous process, we need to prove

 $\displaystyle {\mathbb E}\left[U_\tau\,^{\rm p}\!X_\tau\right]={\mathbb E}\left[U_\tau X_\tau\right].$ (6)

As ${{}^{\rm p}\!X_\tau}$ is ${\mathcal{F}_{\tau-}}$-measurable, and the sigma algebra ${\mathcal{F}_{\tau-}}$ is generated by ${U_\tau}$ as U ranges over the left-continuous and adapted processes, (6) will imply (5).

If ${\tau}$ is a simple predictable stopping time, taking values in a finite set S, then ${{\{\tau=t\}}}$ is in ${\mathcal{F}_{t-}}$ for each ${t\in S}$. Using the fact that ${1_{\{\tau=t\}}U_t}$ is ${\mathcal{F}_{t-}}$-measurable,

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle {\mathbb E}[U_\tau\,^{\rm p}\!X_\tau]&\displaystyle=\sum_{t\in S}{\mathbb E}[1_{\{\tau = t\}}U_t\,^{\rm p}\!X_t]\smallskip\\ &\displaystyle=\sum_{t\in S}{\mathbb E}[1_{\{\tau = t\}}U_tX_t] ={\mathbb E}[U_\tau X_\tau]. \end{array}$

This proves (5) for simple predictable stopping times. In particular, the set of random variables

$\displaystyle \lvert{}^{\rm p}\!X_\tau\rvert=\lvert{\mathbb E}[X_\tau\;\vert\mathcal{F}_{\tau-}]\rvert\le{\mathbb E}[\sup_t\lvert X_t\rvert\;\vert\mathcal{F}_{\tau-}],$

over simple predictable stopping times ${\tau}$ is uniformly integrable.

Next, let ${\tau}$ be an arbitrary predictable stopping time. In the corresponding result for optional projection, we approximated ${\tau}$ from the right by simple stopping times. For the current proof, we can proceed similarly if ${\tau}$ is approximated from the left in the same way. However, this is not possible in general. Instead, we will construct simple stopping times which approximate ${\tau}$ and, almost surely, are eventually less than ${\tau}$.

Let ${\{\tau_n\}_{n=1,2,\ldots}}$ be a sequence of stopping times announcing ${\tau}$. That is, ${\tau_n}$ are stopping times increasing to ${\tau}$, and strictly less than ${\tau}$ whenever ${\tau > 0}$. Then, for each n, there exists a sequence of simple stopping times, ${\tau_{nm}}$, decreasing to ${\tau_n}$ as m goes to infinity. Replacing ${\tau_{nm}}$ by ${\tau_{nm}+1/m}$ if necessary, we can suppose that it is predictable. So, setting ${\sigma_n=\tau_{nm}}$ for large enough m gives a simple predictable stopping time greater than ${\tau_n}$ and satisfying

$\displaystyle {\mathbb P}({\sigma_n}\ge\tau > 0) \le 2^{-n}.$

By the Borel-Cantelli lemma, this implies that, almost surely, ${\sigma_n < \tau}$ for large n whenever ${\tau > 0}$. We also set ${\sigma_n=0}$ when ${\tau=0}$. Then, by left-continuity, ${U_{\sigma_n}}$, ${X_{\sigma_n}}$ and ${{}^{\rm p}\!X_{\sigma_n}}$ converge, almost surely, to ${U_\tau}$, ${X_\tau}$ and ${{}^{\rm p}\!X_\tau}$ respectively, as n goes to infinity.

So, using the above proof of (6) for simple predictable stopping times,

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle {\mathbb E}[U_\tau\,^{\rm p}\!X_\tau]&\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[U_{\sigma_n}\,^{\rm p}\!X_{\sigma_n}]\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[U_{\sigma_n}X_{\sigma_n}]={\mathbb E}[U_\tau X_\tau]. \end{array}$

This proves (5) in the case where ${\sup_t\lvert X_t\rvert}$ is integrable.

If X is locally integrable, then let ${\tau_n}$ be stopping times increasing to infinity such that (2) is integrable. From Lemma 4,

$\displaystyle {}^{\rm p}(1_{[0,\tau_n]}X)=1_{[0,\tau_n]}\,^{\rm p}\!X$

and the above shows that

$\displaystyle 1_{\{\tau \le \tau_n\}}{}^{\rm p}\!X_\tau = {\mathbb E}[1_{\{\tau \le \tau_n\}}X_\tau\;\vert\mathcal{F}_{\tau-}] =1_{\{\tau \le \tau_n\}}{\mathbb E}[X_\tau\;\vert\mathcal{F}_{\tau-}]$

(almost surely). Letting n go to infinity gives the result. ⬜

The restriction to predictable stopping times in Theorem 5 is necessary. For example, consider a compensated Poisson process M. This is a martingale with jumps of size 1 and, if we let ${\tau}$ be the time of its first jump, then the predictable projection of the constant process ${X_t=M_{\tau}}$ (all t) is seen to be

$\displaystyle {}^{\rm p}\!X_t=1_{\{t \le \tau\}}M_{t-}+1_{\{t > \tau\}}M_{\tau}.$
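For concreteness, here is a direct check of this formula (this verification is not part of the original argument). Take the Poisson process to have rate ${\lambda}$, so that ${M_t=N_t-\lambda t}$. On ${\{t\le\tau\}}$ there are no jumps strictly before t, so ${N_{t-}=0}$ and ${M_{t-}=-\lambda t}$. This event lies in ${\mathcal{F}_{t-}}$ and, by the memoryless property, conditionally on ${\mathcal{F}_{t-}}$ the remaining time ${\tau-t}$ is exponential with rate ${\lambda}$ on this event. As ${M_\tau=N_\tau-\lambda\tau=1-\lambda\tau}$,

$\displaystyle 1_{\{t\le\tau\}}{\mathbb E}[M_\tau\;\vert\mathcal{F}_{t-}] =1_{\{t\le\tau\}}\left(1-\lambda t-\lambda{\mathbb E}[\tau-t\;\vert\mathcal{F}_{t-}]\right) =1_{\{t\le\tau\}}(-\lambda t) =1_{\{t\le\tau\}}M_{t-},$

using ${\lambda{\mathbb E}[\tau-t\;\vert\mathcal{F}_{t-}]=1}$ on ${\{t\le\tau\}}$. On ${\{t>\tau\}}$, ${M_\tau}$ is ${\mathcal{F}_{t-}}$-measurable, giving the second term of the displayed formula.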

As ${X_\tau=M_{\tau-}+1}$ is ${\mathcal{F}_{\tau-}}$-measurable,

$\displaystyle {}^{\rm p}\!X_\tau=M_{\tau-}\not=M_{\tau-}+1={\mathbb E}[X_\tau\;\vert\mathcal{F}_{\tau-}].$

This contradicts (5). The reason is that ${\tau}$ is not a predictable stopping time and, in fact, is totally inaccessible.
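The failure of (5) at this totally inaccessible time can also be seen numerically. The sketch below is an illustration, not from the post; it takes the rate to be 1, so that ${\tau}$ is a standard exponential, ${X_\tau=M_\tau=1-\tau}$ and ${{}^{\rm p}\!X_\tau=M_{\tau-}=-\tau}$. Taking expectations in (5) would force the two sample means to agree, but they differ by 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# tau = first jump time of a rate-1 Poisson process, so tau ~ Exp(1);
# before the first jump, M_t = N_t - t = -t
tau = rng.exponential(1.0, size=200_000)

x_tau = 1.0 - tau    # X_tau  = M_tau    = M_{tau-} + 1
px_tau = -tau        # pX_tau = M_{tau-}

# E[X_tau] = 0 while E[pX_tau] = -1, so (5) cannot hold at tau
print(x_tau.mean(), px_tau.mean())
```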

I now show that predictable projection is a true projection operator. So, applying it twice gives the same result as applying it once.

Lemma 6 If X is a left-continuous and locally integrable process, then so is its predictable projection ${{}^{\rm p}\!X}$ and,

$\displaystyle {}^{\rm p}({}^{\rm p}\!X)={}^{\rm p}\!X.$

Proof: The fact that ${{}^{\rm p}\!X}$ is adapted and left-continuous means that it is equal to its own predictable projection, just so long as we can show that it satisfies the requirement of local integrability. Supposing first that ${\sup_t\lvert X_t\rvert}$ is integrable, we have the bound,

$\displaystyle \lvert{}^{\rm p}\!X_t\rvert=\lvert{\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]\rvert\le M_{t-}$

where ${M_t={\mathbb E}[\sup_s\lvert X_s\rvert\;\vert\mathcal{F}_t]}$. As M is a martingale, its cadlag version is locally integrable. So, ${M_-}$ is left-continuous and locally integrable, showing that ${{}^{\rm p}\!X}$ is locally integrable.

Now, for arbitrary locally integrable X, let ${\tau_n}$ be a sequence of stopping times increasing to infinity such that (2) is integrable. Then, using Lemma 4,

$\displaystyle 1_{[0,\tau_n]}{}^{\rm p}\!X={}^{\rm p}(1_{[0,\tau_n]}X)$

is locally integrable for each n. So, ${{}^{\rm p}\!X}$ is locally integrable as required. ⬜

#### Predictable Projection of Caglad Processes

I will now show that the predictable projection of a caglad process, X, exists and is itself caglad, proving Theorem 2 above. Taking the right limits ${X_{t+}=\lim_{s\downarrow\downarrow t}X_s}$ gives a cadlag process and, using the results of the previous post, its optional projection exists. The predictable projection of X can be constructed from this simply by taking left limits.
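The bookkeeping of left and right limits here is easy to see at the level of a single hypothetical piecewise-constant path (the jump times and values below are purely illustrative). A caglad path still takes its old value at a jump time, the right-limit process takes the new value there, and taking left limits of the latter recovers the former.

```python
from bisect import bisect_left, bisect_right

jumps = [1.0, 2.5, 4.0]        # illustrative jump times s_1 < s_2 < s_3
vals = [0.0, 1.0, -0.5, 2.0]   # value held on each inter-jump interval

def caglad(t):
    # X_t: left-continuous, constant on (s_k, s_{k+1}], so the old value
    # is still taken at a jump time (count jumps strictly before t)
    return vals[bisect_left(jumps, t)]

def cadlag(t):
    # Y_t = X_{t+}: right-continuous, the new value applies at a jump time
    # (count jumps at or before t)
    return vals[bisect_right(jumps, t)]

def cadlag_left_limit(t, eps=1e-9):
    # Y_{t-}, approximated by evaluating Y just before t
    return cadlag(t - eps)

# for t > 0, taking left limits of Y = X_+ recovers X
for t in [0.5, 1.0, 1.7, 2.5, 3.3, 4.0, 5.0]:
    assert caglad(t) == cadlag_left_limit(t)
```

This is only the pathwise analogue of the identity in Lemma 7 below; the content of the lemma is that the same relation holds between the two projections.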

Lemma 7 Suppose that X is a caglad locally integrable process. Then, ${Y=X_+}$ is a cadlag and prelocally integrable process. Furthermore, the predictable projection of X exists, is caglad, and satisfies

$\displaystyle {}^{\rm p}\!X_t={}^{\rm o}Y_{t-}$

over ${t > 0}$.

Proof: If ${\tau_n}$ is a sequence of stopping times increasing to infinity such that (2) is integrable then

$\displaystyle 1_{\{\tau_n > 0\}}\sup_{t <\tau_n}\lvert X_{t+}\rvert =1_{\{\tau_n > 0\}}\sup_{0 < t <\tau_n}\lvert X_t\rvert$

is integrable. So, ${Y=X_+}$ is prelocally integrable. Then, the optional projection of Y exists and is cadlag. So, the process

$\displaystyle Z_t=\begin{cases} {}^{\rm o}Y_{t-},&t > 0,\\ {\mathbb E}[X_0\;\vert\mathcal{F}_0],&t=0, \end{cases}$

is caglad and adapted. It just needs to be shown that Z is the predictable projection of X. That is, the identity

 $\displaystyle Z_t={\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]{\rm\ \ (a.s.)}$ (7)

needs to be verified. This holds for ${t=0}$ by the definition of Z, so we suppose that t is strictly positive.

Let us first suppose that ${\sup_t\lvert X_t\rvert}$ is integrable. Choosing any time ${s < t}$ and bounded ${\mathcal{F}_s}$-measurable random variable U, let ${t_n\ge s}$ be a sequence of times increasing strictly to t.

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[UZ_t]&\displaystyle={\mathbb E}[U\,^{\rm o}Y_{t-}]=\lim_{n\rightarrow\infty}{\mathbb E}[U\,^{\rm o}Y_{t_n}]\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[U Y_{t_n}]={\mathbb E}[UX_t]. \end{array}$ (8)

The second equality here uses the fact that

$\displaystyle \lvert{}^{\rm o}Y_{t_n}\rvert=\lvert{\mathbb E}[Y_{t_n}\;\vert\mathcal{F}_{t_n}]\rvert\le{\mathbb E}[\sup_s\lvert X_s\rvert\;\vert\mathcal{F}_{t_n}]$

is a uniformly integrable sequence of random variables tending to ${{}^{\rm o}Y_{t-}}$. The third equality uses the fact that U is ${\mathcal{F}_{t_n}}$-measurable, and the final equality uses dominated convergence. As the sigma algebras ${\mathcal{F}_s}$ over ${s < t}$ generate ${\mathcal{F}_{t-}}$, (8) holds whenever U is bounded and ${\mathcal{F}_{t-}}$-measurable, proving (7). Finally, the extension from the case of integrable ${\sup_t\lvert X_t\rvert}$ to general locally integrable X follows by localization, exactly as in the proof of Theorem 5. ⬜

#### Predictable Projection of Left-Continuous Processes

I now give a proof that the predictable projection of left-continuous processes exists, proving Theorem 1. This is considerably more difficult to prove than the caglad case, and it is not possible to construct ${{}^{\rm p}\!X}$ directly from the optional projection as we did above in Lemma 7. However, the proof does follow in a very similar way as for the optional projection of right-continuous processes. First, we show that it is possible to approximate X from above by caglad processes. As in the previous post, for processes X and Y, the inequality ${X\ge Y}$ is taken to mean that X is greater than Y up to evanescence. So, outside of a zero probability set, ${X_t\ge Y_t}$ for all t. Convergence of a sequence of processes ${X^n}$ to a limit X is taken as pointwise convergence up to evanescence — outside of a zero probability set, ${X^n_t\rightarrow X_t}$ for all times t.

Lemma 8 Let X be a locally integrable left-continuous process. Then, there exists a sequence, ${\{X^n\}_{n=1,2,\ldots}}$, of locally integrable caglad processes ${X^n}$ which are decreasing in n and tending to X as n goes to infinity.

Proof: Choose a sequence of finite sets ${S_n\subseteq(0,\infty)}$ such that ${S_n\subseteq S_{n+1}}$ for each n, and ${\bigcup_nS_n}$ is dense in ${{\mathbb R}_+}$. For any given n we write ${S_n}$ as ${\{t_1 < t_2 < \cdots < t_r\}}$. Setting ${t_0=0}$ and ${t_{r+1}=\infty}$, define the process ${X^n}$ by

$\displaystyle X^n_t=\sup\left\{X_s\colon s\in(t_{k-1},t]\right\}$

for all t in ${(t_{k-1},t_k]}$. We also set ${X^n_0}$ equal to ${X_0}$. This is almost surely finite at all times, since X is locally integrable, and satisfies the inequality

$\displaystyle \sup_{s\le t}\lvert X^n_s\rvert\le\sup_{s\le t}\lvert X_s\rvert.$

In particular, this implies that ${X^n}$ is locally integrable.

As ${S_m}$ is a refinement of ${S_n}$ for ${m\ge n}$, the supremum in the definition of ${X^m_t}$ is taken over a subinterval of ${(t_{k-1},t]}$ and, hence, ${X^m\le X^n}$. Also, as ${\bigcup_nS_n}$ is dense in ${{\mathbb R}_+}$, left-continuity ensures that ${X^n_t}$ tends to ${X_t}$ as n goes to infinity.

It is straightforward to see that ${X^n_t}$ is left-continuous and increasing across each interval ${(t_{k-1},t_k]}$, so it is caglad. ⬜
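The construction in this proof can be sketched numerically for a continuous (hence left-continuous) sample path, with the suprema approximated over a dense sampling grid; the dyadic grids ${S_n}$ and the sine path below are illustrative choices, not from the post. Refining the grid shrinks each interval ${(t_{k-1},t]}$, so ${X^n}$ decreases in n and converges down to X.

```python
import numpy as np

T = 2.0 * np.pi
fine = np.linspace(0.0, T, 4001)   # dense grid used to approximate suprema
fx = np.sin(3.0 * fine)            # a continuous sample path X

def x_n(n, i):
    # X^n at t = fine[i]: sup of X over (t_{k-1}, t], where the grid
    # S_n consists of the dyadic times k * T / 2^n
    t = fine[i]
    h = T / 2**n
    lo = np.floor((t - 1e-12) / h) * h   # t_{k-1}: largest grid point < t
    mask = (fine > lo) & (fine <= t)
    return fx[mask].max()

idxs = range(0, 4001, 37)
# refining the grid gives X^3 <= X^2, and each X^n dominates X
assert all(x_n(3, i) <= x_n(2, i) for i in idxs)
assert all(x_n(2, i) >= fx[i] for i in idxs)
# convergence: intervals of length T/2^8 give an error below 3 * T/2^8,
# since |sin(3t) - sin(3s)| <= 3|t - s|
assert max(x_n(8, i) - fx[i] for i in idxs) < 3.0 * T / 2**8
```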

The idea, then, is to construct the predictable projection of X by taking the limit of predictable projections of a sequence of caglad processes tending monotonically to X. This requires first proving that predictable projection commutes with taking limits of monotonic sequences of processes. To prove that, the following result will be used. This involves a rather tricky argument, and is very similar to Lemma 15 from the post on optional projection. The proof here is actually a bit more difficult, and makes use of the result for right-continuous processes. I will take all processes to be equal to 0 at time ${t=\infty}$, in order to avoid having to keep multiplying by expressions such as ${1_{\{\tau < \infty\}}}$.

Lemma 9 Let ${\{X^n\}_{n=1,2,\ldots}}$ be a sequence of non-negative left-continuous adapted processes, and suppose that it is decreasing in n. If, for all predictable stopping times ${\tau}$,

$\displaystyle X^n_\tau\rightarrow0{\rm\ \ (a.s.)}$

as ${n\rightarrow\infty}$, then ${X^n\rightarrow0}$.

Proof: Given a random time ${\sigma\colon\Omega\rightarrow{\mathbb R}_+}$, I will show that ${X^n_\sigma}$ tends to zero almost surely. Note that it is not required that ${\sigma}$ is a stopping time, much less a predictable stopping time. Fixing ${\epsilon > 0}$, it is enough to show that, almost surely, ${\lim_n X^n_\sigma\le\epsilon}$.

Letting ${\delta_n}$ be a sequence of strictly positive reals, define the times ${\tau_1,\tau_2,\ldots}$,

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\tau_1&\displaystyle=0,\smallskip\\ \displaystyle\tau_{n+1}&\displaystyle=\inf\left\{t\ge\tau_n+\delta_n\colon X^n_t > \epsilon\right\}. \end{array}$

It can be seen that these are stopping times. In fact, from left-continuity, if ${\tau_n}$ is a stopping time then

$\displaystyle \{\tau_{n+1} < t\}=\left\{\sup_{s\in{\mathbb Q}_+\cap[0,t)}1_{\{s\ge\tau_n\}}X^n_s > \epsilon\right\},$

is ${\mathcal{F}_t}$-measurable, so ${\tau_{n+1}}$ is also a stopping time. Furthermore, as ${\tau_{n+1}}$ is strictly greater than ${\tau_n}$ whenever it is finite, the limit ${\tau=\lim_n\tau_n}$ is a predictable stopping time and, by the hypothesis, ${X^n_{\tau}\rightarrow0}$ almost surely as n goes to infinity. Fixing an m then, whenever ${n > m}$ and ${\tau_n}$ is finite, the construction of ${\tau_n}$ implies that there exist times in the interval ${[\tau_n,\tau_n+\delta_n)}$ at which ${X^{n-1}}$ and, hence, ${X^m}$ exceed ${\epsilon}$. So, whenever ${\tau}$ is finite, left-continuity of ${X^m}$ gives ${X^m_\tau\ge\epsilon}$. As this contradicts the limit ${X^m_\tau\rightarrow0}$, we have shown that ${\tau=\infty}$ and ${\tau_n}$ increase to infinity with probability one.

So far, I have not stated how ${\delta_n}$ are chosen. Fixing ${\alpha > 0}$, the real numbers ${\delta_n}$ are taken small enough so that

$\displaystyle {\mathbb P}\left(\sigma\in(\tau_n,\tau_n+\delta_n]\right) < 2^{-n}\alpha.$

Summing over n, the probability that ${\sigma}$ lies in the union of the intervals ${(\tau_n,\tau_n+\delta_n]}$ is less than ${\alpha}$.

Next, note that ${\lim_n X^n}$ is bounded above by ${\epsilon}$ on the intervals ${(\tau_n+\delta_n, \tau_{n+1}]}$. On the event ${\lim_nX^n_\sigma > \epsilon}$, ${\sigma}$ cannot lie in any of these intervals, so must be equal to 0 or lie in an interval ${(\tau_n,\tau_n+\delta_n]}$. Furthermore, ${\sigma > 0}$ on this event since, by the hypothesis, ${X^n_0}$ tends to 0 (a.s.). Hence,

$\displaystyle {\mathbb P}\left(\lim_nX^n_\sigma > \epsilon\right)\le{\mathbb P}\left(\sigma\in\bigcup_n(\tau_n,\tau_n+\delta_n]\right) < \alpha.$

So, as ${\alpha}$ and ${\epsilon}$ are arbitrary positive reals, ${X^n_\sigma}$ almost surely tends to 0.

Finally, for any positive real T, define the right-continuous processes

$\displaystyle Y^n_t=X^n_{(T-t)_+}.$

These are not adapted but, as they are measurable, they will be adapted with respect to the constant filtration ${\mathcal{G}_t=\mathcal{F}}$. Then, for any ${\{\mathcal{G}_t\}_{t\in{\mathbb R}_+}}$ stopping time ${\tau}$, ${Y^n_\tau=X^n_{(T-\tau)_+}}$ and, as we have shown above, this tends almost surely to zero. By Lemma 15 from the previous post, ${Y^n\rightarrow0}$ and, by taking T arbitrarily large, ${X^n\rightarrow0}$. ⬜

Monotone convergence of the predictable projection follows easily from Lemma 9.

Lemma 10 (Monotone Convergence) Let ${\{X^n\}_{n=1,2,\ldots}}$ be a sequence of locally integrable left-continuous processes decreasing to 0 as n goes to infinity. Then, supposing that their predictable projections ${{}^{\rm p}\!X^n}$ exist, they also decrease to 0.

Proof: For ${m\ge n}$ we have

$\displaystyle {}^{\rm p}\!X^m_t={\mathbb E}[X^m_t\;\vert\mathcal{F}_{t-}]\le{\mathbb E}[X^n_t\;\vert\mathcal{F}_{t-}]={}^{\rm p}\!X^n_t$

almost surely. By left-continuity this implies that, outside of a zero probability set, ${{}^{\rm p}\!X^n_t}$ is decreasing in n for all t. For any predictable stopping time ${\tau}$, Theorem 5 together with monotone convergence for the conditional expectation gives

$\displaystyle {}^{\rm p}\!X^n_\tau={\mathbb E}[X^n_\tau\;\vert\mathcal{F}_{\tau-}] \rightarrow0$

almost surely. Lemma 9 then gives ${{}^{\rm p}\!X^n\rightarrow0}$ as required. ⬜

Finally, I give the proof of Theorem 1 and show that the predictable projection of all locally integrable and left-continuous processes exists. The predictable projection is constructed by taking limits of the predictable projections of caglad processes and applying monotone convergence. We follow along the same lines as the proof given for Lemma 17 in the post on optional projection.

Proof of Theorem 1: Applying Lemma 8 to both ${X}$ and ${-X}$ gives sequences of locally integrable caglad processes ${Y^n}$ decreasing in n, and ${Z^n}$ increasing in n, both tending to X as n goes to infinity. By Lemma 7, the predictable projections

$\displaystyle {}^{\rm p}(Y^n-Z^n)={}^{\rm p}Y^n-{}^{\rm p}\!Z^n$

exist. As ${Y^n}$ and ${Z^n}$ are respectively decreasing and increasing in n, the same holds for their predictable projections. Lemma 10 says that ${{}^{\rm p}Y^n-{}^{\rm p}\!Z^n}$ is decreasing to zero. So, we can define a process W up to evanescence by

$\displaystyle W_t=\lim_{n\rightarrow\infty}{}^{\rm p}Y^n_t=\lim_{n\rightarrow\infty}{}^{\rm p}\!Z^n_t.$

By dominated convergence,

$\displaystyle W_t=\lim_{n\rightarrow\infty}{\mathbb E}[Y^n_t\;\vert\mathcal{F}_{t-}]={\mathbb E}[X_t\;\vert\mathcal{F}_{t-}]$

almost surely. To show that W is the predictable projection of X, only left-continuity remains. If ${t_m}$ is a sequence increasing to t then,

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle W_{t_m}&\displaystyle\le{}^{\rm p}Y^n_{t_m}\rightarrow {}^{\rm p}Y^n_t,\smallskip\\ \displaystyle W_{t_m}&\displaystyle\ge{}^{\rm p}Z^n_{t_m}\rightarrow {}^{\rm p}Z^n_t, \end{array}$

as m goes to infinity, for each fixed n. Then letting n go to infinity gives

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\limsup_{m\rightarrow\infty}W_{t_m}\le W_t,\smallskip\\ \displaystyle\liminf_{m\rightarrow\infty}W_{t_m}\ge W_t. \end{array}$

Therefore, ${W_{t_m}\rightarrow W_t}$ as required. ⬜