Dual Projections

The optional and predictable projections of stochastic processes have corresponding dual projections, which are the subject of this post. I will be concerned with their initial construction here, and show that they are well-defined. The study of their properties will be left until later. In the discrete time setting, the dual projections are relatively straightforward, and can be constructed by applying the optional and predictable projection to the increments of the process. In continuous time, we no longer have discrete time increments along which we can define the dual projections. In some sense, they can still be thought of as projections of the infinitesimal increments so that, for a process A, the increments of the dual projections {A^{\rm o}} and {A^{\rm p}} are determined from the increments {dA} of A as

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dA^{\rm o}={}^{\rm o}(dA),\smallskip\\ &\displaystyle dA^{\rm p}={}^{\rm p}(dA). \end{array} (1)

Unfortunately, these expressions are difficult to make sense of in general. In specific cases, (1) can be interpreted in a simple way. For example, when A is differentiable with derivative {\xi}, so that {dA=\xi dt}, then the dual projections are given by {dA^{\rm o}={}^{\rm o}\xi dt} and {dA^{\rm p}={}^{\rm p}\xi dt}. More generally, if A is right-continuous with finite variation, then the infinitesimal increments {dA} can be interpreted in terms of Lebesgue-Stieltjes integrals. However, as the optional and predictable projections are defined for real-valued processes, and {dA} is viewed as a stochastic measure, the right-hand side of (1) is still problematic. This can be rectified by multiplying by an arbitrary process {\xi}, and making use of the transitivity property {{\mathbb E}[\xi\,{}^{\rm o}(dA)]={\mathbb E}[({}^{\rm o}\xi)dA]}. Integrating over time gives the more meaningful expressions

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle {\mathbb E}\left[\int_0^\infty \xi\,dA^{\rm o}\right]={\mathbb E}\left[\int_0^\infty{}^{\rm o}\xi\,dA\right],\smallskip\\ &\displaystyle{\mathbb E}\left[\int_0^\infty \xi\,dA^{\rm p}\right]={\mathbb E}\left[\int_0^\infty{}^{\rm p}\xi\,dA\right]. \end{array}

In contrast to (1), these equalities can be used to give mathematically rigorous definitions of the dual projections. As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}, and processes are identified whenever they are equal up to evanescence. The terminology `raw IV process’ will be used to refer to any right-continuous integrable process whose variation on the whole of {{\mathbb R}^+} has finite expectation. The use of the word `raw’ here is just to signify that we are not requiring the process to be adapted. Next, to simplify the expressions, I will use the notation {\xi\cdot A} for the integral of a process {\xi} with respect to another process A,

\displaystyle  \xi\cdot A_t\equiv\xi_0A_0+\int_0^t\xi\,dA.

Note that, whereas the integral {\int_0^t\xi\,dA} is implicitly taken over the range {(0,t]} and does not involve the time-zero value of {\xi}, I have included the time-zero values of the processes in the definition of {\xi\cdot A}. This is not essential, and could be excluded, so long as we were to restrict to processes starting from zero. The existence and uniqueness (up to evanescence) of the dual projections is given by the following result.
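For a pure-jump FV path, the integral just defined reduces to a sum over the jump times in {(0,t]}, together with the time-zero term {\xi_0A_0}. As a purely illustrative sketch (the helper {\tt stieltjes} and the sample path are invented for this post, not standard notation):

```python
# Lebesgue-Stieltjes integral xi . A_t for a pure-jump finite-variation path:
#   xi . A_t = xi_0 * A_0 + sum over jump times 0 < s <= t of xi_s * dA_s.
# Hypothetical helper, purely for illustration.
def stieltjes(xi, A0, jumps, t):
    """xi: function of time; jumps: list of (s, dA_s) pairs."""
    total = xi(0.0) * A0          # time-zero term, as in the definition above
    for s, dA in jumps:
        if 0.0 < s <= t:          # integral runs over (0, t] only
            total += xi(s) * dA
    return total

# A has A_0 = 1 and jumps of sizes +2 at s = 0.5 and -1 at s = 1.5.
val = stieltjes(lambda s: s + 1.0, A0=1.0, jumps=[(0.5, 2.0), (1.5, -1.0)], t=1.0)
# xi_0 * A_0 + xi_{0.5} * 2 = 1 * 1 + 1.5 * 2 = 4.0
assert val == 4.0
```

Note that the jump at {s=1.5} is excluded since the integral is only taken up to {t=1}, while the time-zero value contributes, matching the convention chosen above.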

Theorem 1 (Dual Projections) Let A be a raw IV process. Then,

  • There exists a unique raw IV process {A^{\rm o}} satisfying
    \displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm o}_\infty\right]={\mathbb E}\left[{}^{\rm o}\xi\cdot A_\infty\right] (2)

    for all bounded measurable processes {\xi}. We refer to {A^{\rm o}} as the dual optional projection of A.

  • There exists a unique raw IV process {A^{\rm p}} satisfying
    \displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm p}_\infty\right]={\mathbb E}\left[{}^{\rm p}\xi\cdot A_\infty\right] (3)

    for all bounded measurable processes {\xi}. We refer to {A^{\rm p}} as the dual predictable projection of A.

Furthermore, if A is nonnegative and increasing then so are {A^{\rm o}} and {A^{\rm p}}.

The proof will be given further down in this post. For now, note that nowhere in the statement of theorem 1 did we require the dual optional projection to be an optional process or the dual predictable projection to be predictable. In fact, these properties are automatically satisfied.

Theorem 2 Let A be a raw IV process. Then,

  • {A^{\rm o}} is the unique optional IV process satisfying
    \displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm o}_\infty\right]={\mathbb E}\left[\xi\cdot A_\infty\right] (4)

    for all bounded optional processes {\xi}.

  • {A^{\rm p}} is the unique predictable IV process satisfying
    \displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm p}_\infty\right]={\mathbb E}\left[\xi\cdot A_\infty\right] (5)

    for all bounded predictable processes {\xi}.

Theorem 2 can be used as an alternative definition of the dual projections, in place of theorem 1, and may be preferred since it makes explicit that the dual projections are, respectively, optional and predictable. The proof of theorem 2 is a bit tricky, so will again be left until further down in this post. For now, we show how localization can be used to relax the rather stringent condition that A has integrable variation.
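Although the continuous-time construction requires the machinery developed below, the discrete-time analogue of identities (2) and (3) can be verified by direct computation, projecting each increment onto the preceding sigma-algebra and summing. The following sketch is purely illustrative and not part of the development in this post; the two-step binary filtration and all names are invented, and the time-zero values are included as in the definition of {\xi\cdot A} above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite filtered space: 4 equally likely outcomes, indexed by two coin flips.
# F_0 is trivial, F_1 is generated by the first flip, F_2 is everything.
# blocks[k] lists the atoms of F_k as index arrays.
blocks = [
    [np.arange(4)],                        # F_0: one atom
    [np.array([0, 1]), np.array([2, 3])],  # F_1: atoms given by the first flip
    [np.array([i]) for i in range(4)],     # F_2: singletons
]

def cond_exp(x, k):
    """E[x | F_k] on the uniform 4-point space: average over each atom."""
    out = np.empty_like(x, dtype=float)
    for atom in blocks[k]:
        out[atom] = x[atom].mean()
    return out

# A raw (not necessarily adapted) process A_0, A_1, A_2: rows are times.
A = rng.normal(size=(3, 4))

# Discrete dual predictable projection: project each increment dA_k onto
# F_{k-1} and sum (with A^p_0 = E[A_0 | F_0], a time-zero convention).
Ap = np.empty_like(A)
Ap[0] = cond_exp(A[0], 0)
for k in (1, 2):
    Ap[k] = Ap[k - 1] + cond_exp(A[k] - A[k - 1], k - 1)

# Check the duality E[sum_k xi_k dA^p_k] = E[sum_k (^p xi)_k dA_k] for a
# measurable process xi, where (^p xi)_k = E[xi_k | F_{k-1}].
xi = rng.normal(size=(3, 4))
pxi = np.array([cond_exp(xi[0], 0)] + [cond_exp(xi[k], k - 1) for k in (1, 2)])

dA = np.diff(A, axis=0, prepend=np.zeros((1, 4)))
dAp = np.diff(Ap, axis=0, prepend=np.zeros((1, 4)))

lhs = (xi * dAp).sum(axis=0).mean()   # E[xi . A^p]
rhs = (pxi * dA).sum(axis=0).mean()   # E[^p xi . A]
assert abs(lhs - rhs) < 1e-12
```

The same computation with {\tt cond\_exp(..., k)} in place of {\tt cond\_exp(..., k - 1)} gives the discrete dual optional projection, which just applies the optional projection to each increment.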

Recall that for a cadlag process A and stopping time {\tau}, the stopped process {A^\tau} is defined by {A^\tau_t=A_{t\wedge\tau}}, and the pre-stopped process {A^{\tau-}} is defined by {A^{\tau-}_t=A_t} for {t < \tau} and {A^{\tau-}_t=A_{\tau-}} for {t\ge\tau}. We say that A is locally (raw) IV if there exists a sequence of stopping times {\tau_n} increasing to infinity such that {1_{\{\tau_n > 0\}}A^{\tau_n}} are raw IV processes, and is prelocally (raw) IV if there exists a sequence {\tau_n} of stopping times increasing to infinity such that {1_{\{\tau_n > 0\}}A^{\tau_n-}} are raw IV processes. It should be clear that every raw IV process is locally IV, and that every locally IV process is also prelocally IV.
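The distinction between stopping and pre-stopping is visible on a single path: pre-stopping removes the jump at {\tau}, which is what allows prelocalization to tame a large jump that stopping alone would retain. A toy sketch, with an invented path:

```python
# Stopped vs pre-stopped path, for a single cadlag step path with one jump of
# size 2 at time 1 (A = 0 on [0, 1) and A = 2 on [1, inf)), and tau = 1.
def A(t):
    return 2.0 if t >= 1.0 else 0.0

tau = 1.0
A_stopped = lambda t: A(min(t, tau))               # A^tau: keeps the jump at tau
A_prestopped = lambda t: A(t) if t < tau else 0.0  # A^{tau-}: frozen at A_{tau-} = 0

assert A_stopped(2.0) == 2.0      # the jump at tau is included
assert A_prestopped(2.0) == 0.0   # the jump at tau is excluded
```

So, choosing {\tau} to be the first time the variation gets large, the pre-stopped path has bounded variation even if the path jumps by an arbitrarily large amount at {\tau} itself.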

The dual optional projection can be extended to prelocally IV processes using a similar definition to that given for IV processes by Theorems 1 and 2. The difference is that we do not know, a priori, that integrals of the form {\xi\cdot A_\infty} are well-defined and integrable. Instead, we must impose this as a condition. In the following, when we state that {\xi\cdot A} is IV then we are saying both that {\xi} is A-integrable and that {\xi\cdot A} is a raw IV process. That is,

\displaystyle  {\mathbb E}\left[\lvert\xi_0A_0\rvert+\int_0^\infty\lvert\xi\rvert\,\lvert dA\rvert\right] < \infty.

Also, as used elsewhere in these notes, a process will be said to be FV if it is cadlag, adapted, and of finite variation on all finite time intervals so that, in particular, adapted locally IV and prelocally IV processes are FV. When we say that a process {\xi} has optional projection {{}^{\rm o}\xi} or predictable projection {{}^{\rm p}\xi}, we are implicitly including the condition that {\xi} satisfies the necessary integrability properties for these projections to exist.

Theorem 3 (Dual Optional Projection) Let A be prelocally a raw IV process. Then there exists a unique prelocally IV process {A^{\rm o}} satisfying

\displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm o}_\infty\right]={\mathbb E}\left[{}^{\rm o}\xi\cdot A_\infty\right] (6)

for all processes {\xi} with optional projection {{}^{\rm o}\xi} such that {{}^{\rm o}\xi\cdot A} and {\xi\cdot A^{\rm o}} are both IV. Equivalently, {A^{\rm o}} is the unique optional FV process such that, if {\xi} is an optional process such that {\xi\cdot A} is IV, then {\xi\cdot A^{\rm o}} is also IV and,

\displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm o}_\infty\right]={\mathbb E}\left[\xi\cdot A_\infty\right]. (7)

Proof: Letting V be the variation process of A,

\displaystyle  V_t=\lvert A_0\rvert + \int_0^t\,\lvert dA\rvert (8)

then, as A is prelocally a raw IV process, there exists a sequence {\tau_n} of stopping times increasing to infinity such that {1_{\{\tau_n > 0\}}V_{\tau_n-}} are integrable. Choose a sequence {\epsilon_n > 0} of positive reals such that {\sum_n\epsilon_n{\mathbb E}[1_{\{\tau_n > 0\}}V_{\tau_n-}]} is finite. Then, we can construct a positive optional process

\displaystyle  \alpha\equiv\sum_{n=1}^\infty \epsilon_n1_{[0,\tau_n)}

such that

\displaystyle  \alpha\cdot V_\infty=\sum_{n=1}^\infty\epsilon_n1_{[0,\tau_n)}\cdot V_\infty=\sum_{n=1}^\infty\epsilon_n1_{\{\tau_n > 0\}}V_{\tau_n-}

is integrable. This implies that {\alpha\cdot A} is an IV process and, by theorem 1, has a dual optional projection {(\alpha\cdot A)^{\rm o}}. Define

\displaystyle  A^{\rm o}\equiv\alpha^{-1}\cdot(\alpha\cdot A)^{\rm o}.

As {\alpha^{-1}} is bounded by {\epsilon_n^{-1}} over the interval {[0,\tau_n)}, this integral is well-defined and {(A^{\rm o})^{\tau_n-}=(1_{[0,\tau_n)}\alpha^{-1})\cdot(\alpha\cdot A)^{\rm o}} is an IV process, so {A^{\rm o}} is prelocally IV. Also, as {\alpha} is optional and {(\alpha\cdot A)^{\rm o}} is optional, hence adapted, it follows that {A^{\rm o}} is adapted or, equivalently, is optional. We show that {A^{\rm o}} satisfies (6) and (7).

Suppose that {\xi} is a process with optional projection {{}^{\rm o}\xi} such that {\xi\cdot A^{\rm o}} and {{}^{\rm o}\xi\cdot A} are IV. We start by assuming that {{}^{\rm o}\lvert\xi\rvert\alpha^{-1}} is bounded. Then, there exists a sequence of processes {\xi^n\rightarrow\xi} with {\lvert \xi^n\rvert\le\lvert\xi\rvert} and such that {\xi^n\alpha^{-1}} is bounded. For example, take {\xi^n=(\xi\wedge(n\alpha))\vee(-n\alpha)}. So {\lvert{}^{\rm o}\xi^n\rvert\le{}^{\rm o}\lvert\xi\rvert} and by dominated convergence for optional projections, {{}^{\rm o}\xi^n\rightarrow{}^{\rm o}\xi}. Therefore,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[\xi\cdot A^{\rm o}_\infty] &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[\xi^n\cdot A^{\rm o}_\infty]\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[(\xi^n\alpha^{-1})\cdot (\alpha\cdot A)^{\rm o}_\infty]\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[({}^{\rm o}\xi^n\alpha^{-1})\cdot (\alpha\cdot A)_\infty]\smallskip\\ &\displaystyle={\mathbb E}[({}^{\rm o}\xi\alpha^{-1})\cdot (\alpha\cdot A)_\infty] ={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]. \end{array} (9)

Dominated convergence was applied in the first and fourth equalities, and the definition of the dual optional projection {(\alpha\cdot A)^{\rm o}} was used in the third. The condition that {{}^{\rm o}\lvert\xi\rvert\alpha^{-1}} is bounded can be removed by, instead, choosing a sequence of optional processes {\beta^n\rightarrow1} such that {\lvert\beta^n\rvert\le1} and {\beta^n\,{}^{\rm o}\lvert\xi\rvert\alpha^{-1}} are bounded. For example, we can take {\beta^n=1\wedge(n\alpha/{}^{\rm o}\lvert\xi\rvert)}. Then,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[\xi\cdot A^{\rm o}_\infty] &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[(\beta^n\xi)\cdot A^{\rm o}_\infty]\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}[(\beta^n\,{}^{\rm o}\xi)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]. \end{array}

The first and third equalities here are applications of dominated convergence, and the second is using (9). This shows that (6) holds.

Next, suppose that {\xi} is an optional process such that {\xi\cdot A} is IV. Using the fact that {{}^{\rm o}\xi=\xi}, (7) would follow immediately from (6) if it was known that {\xi\cdot A^{\rm o}} is also IV. To show this, choose an optional {0 < \beta\le1} such that {(\beta\xi)\cdot A^{\rm o}} is IV. For example, we can choose {\beta=1/(1+\lvert\xi\alpha^{-1}\rvert)} so that {\beta\xi\alpha^{-1}} is bounded. Then, for any bounded process {\zeta}, applying (6) gives,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}[(\zeta\beta\xi)\cdot A^{\rm o}_\infty] ={\mathbb E}[({}^{\rm o}\zeta\beta\xi)\cdot A_\infty]\smallskip\\ &\displaystyle\quad ={\mathbb E}[({}^{\rm o}\zeta\beta)\cdot(\xi\cdot A)_\infty] ={\mathbb E}[(\zeta\beta)\cdot(\xi\cdot A)^{\rm o}_\infty]. \end{array}

As this holds for all bounded processes {\zeta},

\displaystyle  (\beta\xi)\cdot A^{\rm o}=\beta\cdot(\xi\cdot A)^{\rm o}.

Then, as {\beta > 0} we can cancel it from both sides of the equality (i.e., integrate {\beta^{-1}} wrt both sides) to see that {\xi} is {A^{\rm o}}-integrable and

\displaystyle  \xi\cdot A^{\rm o}=(\xi\cdot A)^{\rm o}

is IV, as required.

Only the two uniqueness statements of the theorem remain to be established. Suppose that B is prelocally an IV process satisfying {{\mathbb E}[\xi\cdot B_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]} for all processes {\xi} such that {\xi\cdot B} and {{}^{\rm o}\xi\cdot A} are IV. As in the construction of {\alpha} above, we can find an optional process {\beta > 0} such that {\beta\cdot B} is IV. Replacing {\beta} by {\alpha\wedge\beta} if necessary, we can assume that {\beta\cdot A} is also IV. Then, for bounded processes {\zeta},

\displaystyle  {\mathbb E}[\zeta\cdot\beta\cdot B_\infty] ={\mathbb E}[{}^{\rm o}\zeta\cdot\beta\cdot A_\infty]

so, by Theorem 1, {\beta\cdot B=(\beta\cdot A)^{\rm o}}, and {B=\beta^{-1}\cdot(\beta\cdot A)^{\rm o}} is uniquely determined.

Finally, suppose that B is an optional FV process such that {\xi\cdot B} is IV and {{\mathbb E}[\xi\cdot B_\infty]={\mathbb E}[\xi\cdot A_\infty]} for all optional processes {\xi} such that {\xi\cdot A} is IV. In particular, with {\alpha} as above, {\alpha\cdot A} and {\alpha\cdot B} are IV. Furthermore, as B is adapted and {\alpha} is optional, it follows that {\alpha\cdot B} is adapted and, hence, optional. For bounded optional {\xi},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}[\xi\cdot(\alpha\cdot B)_\infty]={\mathbb E}[(\xi\alpha)\cdot B_\infty]\smallskip\\ &\displaystyle\quad={\mathbb E}[(\xi\alpha)\cdot A_\infty] ={\mathbb E}[\xi\cdot(\alpha\cdot A)_\infty]. \end{array}

Theorem 2 then says that {\alpha\cdot B=(\alpha\cdot A)^{\rm o}} and, hence, {B=\alpha^{-1}\cdot(\alpha\cdot A)^{\rm o}} is uniquely determined. ⬜

In a similar way, the dual predictable projection can be defined for all locally IV processes.

Theorem 4 (Dual Predictable Projection) Let A be locally a raw IV process. Then there exists a unique locally IV process {A^{\rm p}} satisfying

\displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm p}_\infty\right]={\mathbb E}\left[{}^{\rm p}\xi\cdot A_\infty\right] (10)

for all processes {\xi} with predictable projection {{}^{\rm p}\xi} such that {{}^{\rm p}\xi\cdot A} and {\xi\cdot A^{\rm p}} are both IV. Equivalently, {A^{\rm p}} is the unique predictable FV process such that, if {\xi} is a predictable process such that {\xi\cdot A} is IV, then {\xi\cdot A^{\rm p}} is also IV and,

\displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm p}_\infty\right]={\mathbb E}\left[\xi\cdot A_\infty\right]. (11)

Proof: Letting V be the variation of A, given by (8), the condition that A is locally IV means that there is a sequence of stopping times {\tau_n} increasing to infinity such that {1_{\{\tau_n > 0\}}V_{\tau_n}} are integrable. As in the proof of theorem 3, we choose a sequence of positive reals {\epsilon_n > 0} such that {\sum_n\epsilon_n{\mathbb E}[1_{\{\tau_n > 0\}}V_{\tau_n}]} is finite, and define the predictable process

\displaystyle  \alpha\equiv\sum_{n=1}^\infty\epsilon_n1_{\{\tau_n > 0\}}1_{[0,\tau_n]}.

The remainder of the proof follows exactly as for theorem 3, replacing `optional’ by `predictable’, `prelocally IV’ by `locally IV’, and the indicator processes {1_{[0,\tau_n)}} by {1_{\{\tau_n > 0\}}1_{[0,\tau_n]}}. One point to mention is that the proof of theorem 3 made use of the simple fact that an integral {\alpha\cdot A} is optional (i.e., adapted) whenever {\alpha} and A are optional. The current proof instead requires the fact that {\alpha\cdot A} is predictable whenever {\alpha} and A are predictable, which is only slightly less obvious. In fact, this statement reduces to showing that {\alpha\cdot A} is adapted and that {\Delta(\alpha\cdot A)=\alpha\Delta A} is predictable, which follows from the fact that it is the product of predictable processes. ⬜

{{\mathbb P}}-measures

It is possible to construct the dual projections satisfying the conclusions of theorem 1 in a relatively direct fashion. For a bounded random variable U, let M be the martingale given by {M_t={\mathbb E}[U\,\vert\mathcal F_t]} and with right-continuous paths (at least, outside of a countable subset of {{\mathbb R}^+}). The process with constant paths equal to U has optional projection M and predictable projection {M_-}. From this, we obtain

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}[UA^{\rm o}_t]={\mathbb E}[(1_{[0,t]}U)\cdot A^{\rm o}_\infty]={\mathbb E}[M\cdot A_t],\smallskip\\ &\displaystyle{\mathbb E}[UA^{\rm p}_t]={\mathbb E}[(1_{[0,t]}U)\cdot A^{\rm p}_\infty]={\mathbb E}[M_-\cdot A_t]. \end{array}

This is sufficient to uniquely determine both {A^{\rm o}_t} and {A^{\rm p}_t} up to almost-sure equivalence and, by right-continuity, determines {A^{\rm o}} and {A^{\rm p}} up to evanescence. Furthermore, {A^{\rm o}_t} and {A^{\rm p}_t} can be constructed from the expressions above for {{\mathbb E}[UA^{\rm o}_t]} and {{\mathbb E}[UA^{\rm p}_t]} by taking Radon-Nikodym derivatives. However, I will use a more structured approach based on {{\mathbb P}}-measures, although the underlying ideas are the same as just described. This has some benefits, as {{\mathbb P}}-measures are also useful in related areas such as with Doléans measures and the Doob-Meyer decomposition.

Use {\mathcal M} to denote the product sigma-algebra {\mathcal B({\mathbb R}^+)\otimes\mathcal F} on {{\mathbb R}^+\times\Omega}. In relation to the optional and predictable sigma-algebras, we have the inclusions

\displaystyle  \mathcal P\subseteq\mathcal O\subseteq\mathcal M.

Definition 5 A {{\mathbb P}}-measure is a finite signed measure on {\mathcal M} (respectively, {\mathcal O}, {\mathcal P}) which vanishes on evanescent sets. Furthermore, a {{\mathbb P}}-measure {\mu} on {\mathcal M} is called

  • optional if {\mu({}^{\rm o}\xi)=\mu(\xi)} for all bounded processes {\xi}.
  • predictable if {\mu({}^{\rm p}\xi)=\mu(\xi)} for all bounded processes {\xi}.

Note that the requirement for {\mu} to vanish on evanescent sets is necessary for expressions such as {\mu({}^{\rm o}\xi)} and {\mu({}^{\rm p}\xi)} to make sense, as the projections are only defined up to evanescence. The following is the main result on {{\mathbb P}}-measures, and relates them to IV processes. Unless explicitly stated otherwise, any {{\mathbb P}}-measures will be assumed to be defined on {\mathcal M}.

Theorem 6 A raw IV process, A, uniquely defines a {{\mathbb P}}-measure, {\mu_A}, given by

\displaystyle  \mu_A(\xi)={\mathbb E}\left[\xi\cdot A_\infty\right] (12)

for all bounded measurable processes {\xi}. Furthermore, {\mu_A} is a positive measure if and only if A is nonnegative and increasing.

Conversely, for any {{\mathbb P}}-measure {\mu}, there exists a raw IV process A, uniquely defined up to evanescence, such that {\mu=\mu_A}.

Proof: If A is a raw IV process then, to show that {\mu_A} defined by (12) is a measure, we just need to verify that it is linear and satisfies monotone convergence. However, these follow immediately from linearity and monotone convergence for the integral with respect to A and for the expectation. Next, if {\xi} is evanescent, then it is zero outside of a set {S\in\mathcal F} of zero probability. So, {\xi\cdot A} is zero outside of S, and its expectation is zero. So, {\mu_A} vanishes on evanescent processes as required. Furthermore, for any time {t\ge 0} and bounded random variable U,

\displaystyle  {\mathbb E}[UA_t]=\mu_A(U1_{[0,t]}),

which defines {A_t} uniquely up to almost-sure equivalence. As A is right-continuous, this uniquely specifies A up to evanescence. It is clear that {\mu_A} is a positive measure whenever A is nonnegative and increasing.

Conversely, suppose that {\mu} is a nonnegative {{\mathbb P}}-measure and, for each {t\ge0}, define a measure {\mu_t} on {(\Omega,\mathcal F,{\mathbb P})} by

\displaystyle  \mu_t(U)=\mu(1_{[0,t]}U)

for bounded random variables U. This is clearly nonnegative for {U\ge0}, is linear, and satisfies monotone convergence. So {\mu_t} is indeed a positive measure. Also, as {\mu} is a {{\mathbb P}}-measure, we have {\mu_t(U)=0} whenever {U=0} almost surely. So, {\mu_t} is absolutely continuous with respect to {{\mathbb P}}. We can define a process A by the Radon-Nikodym derivative

\displaystyle  A_t=\frac{d\mu_t}{d{\mathbb P}},

so that {A_t} is a nonnegative integrable random variable. For any nonnegative U and times {s\le t}, we have

\displaystyle  {\mathbb E}[U(A_t-A_s)]=\mu_t(U)-\mu_s(U)=\mu(1_{(s,t]}U)\ge0.

So, {A_s\le A_t} almost surely. By countable additivity of probability measures, this means that {A_t} is almost-surely increasing as t runs over the nonnegative rationals. Define the nonnegative, right-continuous and increasing process

\displaystyle  \tilde A_t=\inf\left\{A_s\colon s\in{\mathbb Q}, s > t\right\}.

As {\tilde A_t} is the infimum of a countable set of random variables, it is measurable. Also, as A is almost surely increasing on the rationals, for any fixed {t\ge0} we can choose a sequence of rationals {t_n > t} tending to t, and, {\tilde A_t=\lim_nA_{t_n}} almost surely. Then, for any bounded random variable U,

\displaystyle  {\mathbb E}[U(\tilde A_t-A_t)]=\lim_{n\rightarrow\infty}{\mathbb E}[U(A_{t_n}-A_t)]=\lim_{n\rightarrow\infty}\mu(U1_{(t,t_n]})=0.

This shows that {\tilde A_t=A_t} almost surely, showing that A has a right-continuous and increasing modification. We suppose that we have chosen such a modification — that is, {A=\tilde A}. Next, for a bounded random variable U and {t\ge0}, let {\xi} be the measurable process {U1_{[0,t]}},

\displaystyle  {\mathbb E}[\xi\cdot A_\infty]={\mathbb E}[U A_t]=\mu(U1_{[0,t]}).

As this is bounded for all {\lvert U\rvert\le1} and for all {t\ge0}, this shows that A is a nonnegative increasing raw IV process and

\displaystyle  \mu_A(\xi)=\mu(\xi).

The functional monotone class theorem extends this to all bounded measurable {\xi}, so that {\mu=\mu_A}.

Next, suppose that {\mu} is a {{\mathbb P}}-measure, not assumed to be positive. By the Jordan decomposition, we can write {\mu=\mu^+-\mu^-} for positive {{\mathbb P}}-measures {\mu^+} and {\mu^-}. By the argument above, there are increasing raw IV processes {A^+} and {A^-} with {\mu^+=\mu_{A^+}} and {\mu^-=\mu_{A^-}}. So, {A=A^+-A^-} is a raw IV process satisfying {\mu=\mu_A}. ⬜
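On a finite space, the correspondence in theorem 6 is elementary: the Radon-Nikodym step is just a ratio of weights, and {\mu=\mu_A} can be checked directly. A toy sketch (the probabilities and weights below are made up for illustration):

```python
import random

# Toy version of theorem 6: Omega = {0, 1, 2} with probabilities P, time runs
# over {0, 1, 2}, and a positive P-measure mu is a grid of nonnegative weights
# mu[t][w] = mu({t} x {w}).  The Radon-Nikodym step A_t = d(mu_t)/dP is then
# the cumulative weight of [0, t] x {w}, divided by P(w).
P = [0.5, 0.3, 0.2]
mu = [[0.1, 0.0, 0.2],
      [0.3, 0.1, 0.0],
      [0.0, 0.2, 0.1]]

A = [[sum(mu[s][w] for s in range(t + 1)) / P[w] for w in range(3)]
     for t in range(3)]

# Check mu = mu_A, i.e. E[xi . A_inf] = mu(xi) for a bounded process xi,
# where xi . A_inf = xi_0 A_0 + sum_{t >= 1} xi_t (A_t - A_{t-1}).
random.seed(1)
xi = [[random.random() for _ in range(3)] for _ in range(3)]
lhs = sum(P[w] * (xi[0][w] * A[0][w]
                  + sum(xi[t][w] * (A[t][w] - A[t - 1][w]) for t in (1, 2)))
          for w in range(3))
rhs = sum(mu[t][w] * xi[t][w] for t in range(3) for w in range(3))
assert abs(lhs - rhs) < 1e-12
```

Since the weights are nonnegative, the resulting A is nonnegative and increasing in t, in line with the positivity statement of the theorem.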

In the following, {\mu^{\rm o}} and {\mu^{\rm p}} are referred to, respectively, as the optional and predictable projections of {\mu}.

Theorem 7 Any {{\mathbb P}}-measure {\mu} on {\mathcal O} extends uniquely to an optional {{\mathbb P}}-measure, {\mu^{\rm o}}, on {\mathcal M}. Furthermore, {\mu^{\rm o}} is the unique {{\mathbb P}}-measure satisfying

\displaystyle  \mu^{\rm o}(\xi)=\mu({}^{\rm o}\xi) (13)

for bounded measurable {\xi}. Similarly, any {{\mathbb P}}-measure {\mu} on {\mathcal P} extends uniquely to a predictable {{\mathbb P}}-measure, {\mu^{\rm p}}, on {\mathcal M}. This is the unique {{\mathbb P}}-measure satisfying

\displaystyle  \mu^{\rm p}(\xi)=\mu({}^{\rm p}\xi) (14)

for bounded measurable {\xi}.

Proof: We start with the first statement, where {\mu} is a {{\mathbb P}}-measure on {\mathcal O}. Define {\mu^{\rm o}} by (13). Recall that the optional projection is uniquely defined up to evanescence. As {{\mathbb P}}-measures vanish on evanescent sets, this implies that {\mu({}^{\rm o}\xi)} does not depend on the particular modification of the projection used and, so, {\mu^{\rm o}} is uniquely defined by (13). It just needs to be shown that it is a {{\mathbb P}}-measure. If {\xi} is evanescent then the projection is also evanescent and, so, {\mu({}^{\rm o}\xi)} vanishes. It only remains to demonstrate that {\mu^{\rm o}} is a (signed) measure. For this, we need to show that linearity and monotone convergence hold.

Linearity follows immediately from linearity of the optional projection of {\xi}. For monotone convergence, suppose that {\xi^n} is a sequence of bounded measurable processes increasing to a bounded limit {\xi}. By dominated convergence for the projection, {{}^{\rm o}\xi^n} increases to {{}^{\rm o}\xi}. By dominated convergence for the measure {\mu},

\displaystyle  \mu^{\rm o}(\xi^n)=\mu({}^{\rm o}\xi^n)\rightarrow\mu({}^{\rm o}\xi)=\mu^{\rm o}(\xi)

as required.

Finally, it needs to be shown that {\mu^{\rm o}} is the unique extension of {\mu} to an optional {{\mathbb P}}-measure. Clearly, as {{}^{\rm o}\xi=\xi} for optional {\xi}, we have {\mu^{\rm o}=\mu} on the optional sigma-algebra. So, {\mu^{\rm o}} is an extension of {\mu} to a {{\mathbb P}}-measure on {\mathcal M}. From (13),

\displaystyle  \mu^{\rm o}({}^{\rm o}\xi)=\mu({}^{\rm o}({}^{\rm o}\xi))=\mu({}^{\rm o}\xi)=\mu^{\rm o}(\xi),

showing that {\mu^{\rm o}} is optional. Conversely, suppose that {\mu^{\rm o}} is an extension of {\mu} to an optional {{\mathbb P}}-measure on {\mathcal M}. From the definitions,

\displaystyle  \mu^{\rm o}(\xi)=\mu^{\rm o}({}^{\rm o}\xi)=\mu({}^{\rm o}\xi),

verifying that (13) holds.

For the second statement, where {\mu} is a {{\mathbb P}}-measure on {\mathcal P}, the same argument holds replacing `optional’ by `predictable’. ⬜

The existence of the optional and predictable projections of a {{\mathbb P}}-measure {\mu} on {\mathcal M} is an immediate consequence of Theorem 7. Equations (15) below are just restatements of (13) and (14).

Definition 8 Let {\mu} be a {{\mathbb P}}-measure (on {\mathcal M}). We define its optional projection {\mu^{\rm o}} to be the unique optional {{\mathbb P}}-measure agreeing with {\mu} on {\mathcal O}, and its predictable projection, {\mu^{\rm p}}, to be the unique predictable {{\mathbb P}}-measure agreeing with {\mu} on {\mathcal P}.

Equivalently, {\mu^{\rm o}} and {\mu^{\rm p}} are the unique {{\mathbb P}}-measures satisfying

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\mu^{\rm o}(\xi)=\mu({}^{\rm o}\xi),\smallskip\\ &\displaystyle\mu^{\rm p}(\xi)=\mu({}^{\rm p}\xi) \end{array} (15)

for all bounded measurable {\xi}.

Using Theorem 6 to translate the definition of the optional and predictable projections of {{\mathbb P}}-measures into corresponding projections of raw IV processes immediately gives Theorem 1.

Proof of Theorem 1: Let {\mu_A} be the measure defined by theorem 6, and {\mu_A^{\rm o}}, {\mu_A^{\rm p}} be its optional and predictable projections, which are well-defined by theorem 7. Again, applying theorem 6, there exist unique IV processes {A^{\rm o}} and {A^{\rm p}} such that

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\mu_{A^{\rm o}}=\mu_A^{\rm o},\smallskip\\ &\displaystyle\mu_{A^{\rm p}}=\mu_A^{\rm p}. \end{array}

Identities (2) and (3) are equivalent to (15) applied to {\mu_A},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}[\xi\cdot A^{\rm o}_\infty]=\mu_{A^{\rm o}}(\xi)=\mu_A({}^{\rm o}\xi)={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty],\smallskip\\ &\displaystyle{\mathbb E}[\xi\cdot A^{\rm p}_\infty]=\mu_{A^{\rm p}}(\xi)=\mu_A({}^{\rm p}\xi)={\mathbb E}[{}^{\rm p}\xi\cdot A_\infty], \end{array}

for all bounded measurable processes {\xi}. Furthermore, if A is nonnegative and increasing then {\mu_{A^{\rm o}}} and {\mu_{A^{\rm p}}} are positive measures, so {A^{\rm o}} and {A^{\rm p}} are nonnegative and increasing. ⬜

We still need to prove theorem 2 and, in particular, show that dual optional projections are optional and that dual predictable projections are predictable. Starting with the dual optional projection, we prove the following result which relates optionality of a process with optionality of its associated {{\mathbb P}}-measure. This is simplified a bit by the observation that, for right-continuous processes, being optional is equivalent to being adapted.

Theorem 9 Let A be a raw IV process. Then, the following are equivalent,

  1. {A} is optional.
  2. {A} is adapted.
  3. For any bounded measurable random variable U and {t\in{\mathbb R}^+},

    \displaystyle  {\mathbb E}\left[UA_t\right]={\mathbb E}\left[M\cdot A_t\right]

    where M is the martingale {M_t={\mathbb E}[U\,\vert\mathcal F_t]} with right-continuous paths (outside of a countable subset of {{\mathbb R}^+}).

  4. {{\mathbb E}[\xi\cdot A_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]} for all bounded measurable {\xi}.
  5. {\mu_A} is an optional {{\mathbb P}}-measure.

Proof:
1 ⇔ 2: We already know that optional processes are adapted and, conversely, that cadlag adapted processes are optional.

2 ⇒ 4: We use a little change of variables trick to prove this implication. Suppose, first, that A is nonnegative and increasing. For each {u\ge 0}, let {C_u} be the first time t at which {A_t\ge u},

\displaystyle  C_u=\inf\left\{t\in{\mathbb R}^+\colon A_t\ge u\right\}, (16)

This is increasing in u and is a stopping time since, for each {t\ge0}, the event {\{C_u\le t\}} is equal to {\{A_t\ge u\}\in\mathcal F_t}, by right-continuity of A. By a change of variables,

\displaystyle  \xi\cdot A_\infty=\int_0^\infty\xi_{C_t}\,dt

for all bounded measurable {\xi}. Here, we are adopting the convention that {\xi} is equal to zero at time {t=\infty}. From the definition of the optional projection, {{\mathbb E}[{}^{\rm o}\xi_{C_u}]} is equal to {{\mathbb E}[\xi_{C_u}]} and,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[{}^{\rm o}\xi\cdot A_\infty\right]&\displaystyle=\int_0^\infty{\mathbb E}\left[{}^{\rm o}\xi_{C_t}\right]\,dt\smallskip\\ &\displaystyle=\int_0^\infty{\mathbb E}\left[\xi_{C_t}\right]\,dt\smallskip\\ &\displaystyle={\mathbb E}\left[\xi\cdot A_\infty\right] \end{array}

as required. Now suppose that A is any adapted IV process. The idea is that the variation V of A can be expressed as the limit as n goes to infinity of the approximations,

\displaystyle  V^n_t=\lvert A_0\rvert + \sum_{k=1}^\infty\lvert A_{t^n_k\wedge t}-A_{t^n_{k-1}\wedge t}\rvert. (17)

Here, {0=t^n_0\le t^n_1\le\cdots} is a sequence of times increasing to infinity, with mesh {\sup_k(t^n_k-t^n_{k-1})} going to zero as n goes to infinity. As the {V^n} are clearly adapted, V is adapted. Decomposing {A=A^+-A^-} for nonnegative adapted and increasing IV processes {A^\pm=(V\pm A)/2}, and applying the argument above to {A^\pm}, proves the implication for A.
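As a quick numerical sanity check of the change of variables identity used above (purely illustrative, with an invented path and test process):

```python
# Check the identity  xi . A_inf = integral_0^inf xi_{C_t} dt  for a single
# increasing pure-jump path: A_0 = 1 and one jump of size 2 at time 1, so
# A = 1 on [0, 1) and A = 3 on [1, inf).  Here C_u = inf{t : A_t >= u}, and
# we use the convention xi_inf = 0 (A never reaches levels above 3).
def C(u):
    if u <= 1.0:
        return 0.0
    if u <= 3.0:
        return 1.0
    return float("inf")

def xi(t):
    return 0.0 if t == float("inf") else t + 2.0  # arbitrary test process

# Left side: xi_0 * A_0 + xi_1 * (jump of A at time 1).
lhs = xi(0.0) * 1.0 + xi(1.0) * 2.0

# Right side: midpoint Riemann sum of t -> xi(C(t)) over [0, 3]; the
# integrand vanishes beyond level 3 and is piecewise constant, so the sum
# is essentially exact.
n = 30_000
h = 3.0 / n
rhs = sum(xi(C((k + 0.5) * h)) * h for k in range(n))

assert abs(lhs - rhs) < 1e-6
```

Both sides evaluate to {\xi_0+2\xi_1}, with the interval of levels {(1,3]} on the right corresponding to the jump of A at time 1.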

4 ⇒ 3: Using the fact that M is the optional projection of the constant process equal to U at all times,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[UA_t]&\displaystyle={\mathbb E}[(1_{[0,t]}U)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[(1_{[0,t]}M)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[M\cdot A_t] \end{array}

as required.

3 ⇒ 2: Fix {t\ge0} and a bounded random variable U, and let {W={\mathbb E}[U\,\vert\mathcal F_t]}. The martingale M defined by {M_s={\mathbb E}[U\,\vert\mathcal F_s]} also satisfies {M_s={\mathbb E}[W\,\vert\mathcal F_s]} at times {s\le t}. So,

\displaystyle  {\mathbb E}[UA_t]={\mathbb E}[M\cdot A_t]={\mathbb E}[WA_t]={\mathbb E}\left[U\,{\mathbb E}[A_t\,\vert\mathcal F_t]\right].

As this holds for all bounded U, we have {A_t={\mathbb E}[A_t\,\vert\mathcal F_t]} almost surely, from which it follows that A is adapted.

4 ⇔ 5: This is immediate from the definition (12) of {\mu_A}. ⬜

We can similarly relate predictability of a process to predictability of its associated {{\mathbb P}}-measure. This is a bit more difficult than for optionality, as predictability of a process is not simply equivalent to being adapted. In place of adaptedness, we have the second statement of the theorem below. Recall that this same condition for a cadlag process to be predictable has appeared before in these notes although, there, some use was made of stochastic integration with respect to martingale integrators. In keeping with the current topic, I give an independent proof here which does not make use of advanced methods of stochastic calculus other than the predictable projection.

Theorem 10 Let A be a raw IV process. Then, the following are equivalent,

  1. {A} is predictable.
  2. {A_\tau} is {\mathcal F_{\tau-}}-measurable for every predictable stopping time {\tau}, and {A_\tau=A_{\tau-}} (a.s.) for every totally inaccessible stopping time {\tau}.
  3. For any bounded measurable random variable U and {t\in{\mathbb R}^+},

    \displaystyle  {\mathbb E}\left[UA_t\right]={\mathbb E}\left[M_-\cdot A_t\right]

    where M is the martingale {M_t={\mathbb E}[U\,\vert\mathcal F_t]} with right-continuous paths (outside of a countable subset of {{\mathbb R}^+}).

  4. {{\mathbb E}[\xi\cdot A_\infty]={\mathbb E}[{}^{\rm p}\xi\cdot A_\infty]} for all bounded measurable {\xi}.
  5. {\mu_A} is a predictable {{\mathbb P}}-measure.

Proof:
1 ⇒ 4: We use the same change of variables trick as in the proof of theorem 9. Supposing that A is increasing then, for each {u\ge 0}, we let {C_u} be the first time t at which {A_t\ge u}, as defined by (16). As A is increasing, the stochastic interval {[C_u,\infty)} is equal to {\{A\ge u\}} and, so, is predictable. Hence, {C_u} is a predictable stopping time. From the definition of predictable projection, {{\mathbb E}[{}^{\rm p}\xi_{C_u}]} is equal to {{\mathbb E}[\xi_{C_u}]} and, using a change of variables,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[{}^{\rm p}\xi\cdot A_\infty\right]&\displaystyle=\int_0^\infty{\mathbb E}\left[{}^{\rm p}\xi_{C_t}\right]\,dt\smallskip\\ &\displaystyle=\int_0^\infty{\mathbb E}\left[\xi_{C_t}\right]\,dt\smallskip\\ &\displaystyle={\mathbb E}\left[\xi\cdot A_\infty\right] \end{array}

as required. Now suppose that A is any predictable IV process. As A stopped at a fixed time is clearly predictable, expression (17) for the approximations {V^n} to the variation V of A defines predictable processes and, hence, V is predictable. Decomposing {A=A^+-A^-} for nonnegative predictable and increasing IV processes {A^\pm=(V\pm A)/2}, and applying the argument above to {A^\pm}, proves the implication for A.
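To see the change-of-variables trick in action, here is a deterministic numerical check (my own sketch; the path {A_s=(s\wedge 1)^2} and integrand {\xi_s=\cos s} are hypothetical choices). With {C_u=\inf\{s\colon A_s\ge u\}}, the Stieltjes integral of {\xi} against {dA} equals the Lebesgue integral of {\xi_{C_u}} in u:

```python
# Deterministic check of the change of variables: for increasing A with
# generalized inverse C_u = inf{s : A_s >= u},
#     integral of xi dA over [0, oo)  ==  integral of xi(C_u) du over [0, A_oo).
# Illustrative choices: A_s = (min(s,1))^2, so A_oo = 1 and C_u = sqrt(u).
import math

A = lambda s: min(s, 1.0) ** 2
C = lambda u: math.sqrt(u)
xi = math.cos

N = 100000
# Stieltjes integral of xi against dA on [0, 1] (left Riemann-Stieltjes sum).
lhs = sum(xi(k / N) * (A((k + 1) / N) - A(k / N)) for k in range(N))
# Lebesgue integral of xi(C_u) over [0, A_oo] = [0, 1].
rhs = sum(xi(C(k / N)) / N for k in range(N))
print(lhs, rhs)  # both approximate 2*(sin 1 + cos 1 - 1)
```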

4 ⇒ 3: Using the fact that {M_-} is the predictable projection of the constant process equal to U at all times,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[UA_t]&\displaystyle={\mathbb E}[(1_{[0,t]}U)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[(1_{[0,t]}M_-)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[M_-\cdot A_t] \end{array}

as required.

3 ⇒ 4: If {\xi} is of the form {1_{[0,t]}U} for a time {t\in{\mathbb R}^+} and bounded random variable U, then, {M_-} defined as above is the predictable projection of the constant process equal to U at all times. So,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[\xi\cdot A_\infty]&\displaystyle={\mathbb E}[UA_t]={\mathbb E}[M_-\cdot A_t]\smallskip\\ &\displaystyle={\mathbb E}[(1_{[0,t]}M_-)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[{}^{\rm p}\xi\cdot A_\infty]. \end{array}

This extends to all bounded measurable {\xi} by the functional monotone class theorem.

4 ⇒ 2: First, the argument used to show that A is adapted in the proof of theorem 9 also applies here, so A is adapted. In particular, for a predictable stopping time {\tau}, this implies that {A_{\tau-}} is {\mathcal F_{\tau-}}-measurable. Then, for a bounded random variable U, the predictable projection of {U1_{[\tau]}} is equal to {{\mathbb E}[U\,\vert\mathcal F_{\tau-}]1_{[\tau]}}. So,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[U\Delta A_\tau]&\displaystyle={\mathbb E}[U1_{[\tau]}\cdot A_\infty]={\mathbb E}[{}^{\rm p}(U1_{[\tau]})\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[{\mathbb E}[U\,\vert\mathcal F_{\tau-}]\Delta A_\tau], \end{array}

showing that {\Delta A_\tau} is {\mathcal F_{\tau-}}-measurable and, hence, {A_\tau=A_{\tau-}+\Delta A_\tau} is {\mathcal F_{\tau-}}-measurable.
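To spell out the final step of this argument (a supplementary derivation of my own, using only the tower rule for conditional expectations): since {{\mathbb E}[U\,\vert\mathcal F_{\tau-}]} is {\mathcal F_{\tau-}}-measurable,

\displaystyle  {\mathbb E}\left[{\mathbb E}[U\,\vert\mathcal F_{\tau-}]\Delta A_\tau\right]={\mathbb E}\left[U\,{\mathbb E}[\Delta A_\tau\,\vert\mathcal F_{\tau-}]\right],

so the displayed identity states that {{\mathbb E}\left[U\left(\Delta A_\tau-{\mathbb E}[\Delta A_\tau\,\vert\mathcal F_{\tau-}]\right)\right]=0} for every bounded measurable U. Hence {\Delta A_\tau={\mathbb E}[\Delta A_\tau\,\vert\mathcal F_{\tau-}]} almost surely, and the right hand side is {\mathcal F_{\tau-}}-measurable.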

Now let {\tau} be a totally inaccessible stopping time so, by definition, {{\mathbb P}(\sigma=\tau < \infty)=0} for all predictable stopping times {\sigma}. Therefore, {U1_{[\tau]}} is almost surely zero at each predictable stopping time and, so, has predictable projection equal to zero. That is,

\displaystyle  {\mathbb E}[U\Delta A_\tau]={\mathbb E}[U1_{[\tau]}\cdot A_\infty]={\mathbb E}[{}^{\rm p}(U1_{[\tau]})\cdot A_\infty]=0

and, hence, {\Delta A_\tau=0} almost surely.

2 ⇒ 1: As {A_t} is {\mathcal F_{t-}}-measurable at each fixed time t, A is adapted. Then, {\Delta A} is a thin process and, hence, {\{\Delta A\not=0\}\subseteq\bigcup_n[\tau_n]} for a sequence {\tau_n} of stopping times satisfying the properties that each {\tau_n} is either predictable or totally inaccessible and that {{\mathbb P}(\tau_m=\tau_n < \infty)=0} whenever {m\not=n}. Then, for any n where {\tau_n} is totally inaccessible, by assumption we have {\Delta A_{\tau_n}=0} almost surely. So, without loss of generality, we can suppose that {\tau_n} is predictable. As A is adapted, {A_{\tau_n-}} is {\mathcal F_{\tau_n-}}-measurable. By assumption, {A_{\tau_n}} is also {\mathcal F_{\tau_n-}}-measurable. This implies that the process {\Delta A_{\tau_n}1_{[\tau_n]}} is predictable. Furthermore, being adapted and left-continuous, {A_-} is predictable. Hence,

\displaystyle  A=A_- + \sum_{n=1}^\infty\Delta A_{\tau_n}1_{[\tau_n]}

decomposes A as the sum of predictable processes, showing that A is predictable.

4 ⇔ 5: This is immediate from the definition (12) of {\mu_A}. ⬜

Theorems 9 and 10 can be used to prove Theorem 2 showing, in particular, that dual optional projections are optional and dual predictable projections are predictable.

Proof of Theorem 2: For a raw IV process A, by the definition given in theorem 1, the dual optional projection satisfies,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[{}^{\rm o}\xi\cdot A^{\rm o}_\infty] &\displaystyle={\mathbb E}[{}^{\rm o}({}^{\rm o}\xi)\cdot A_\infty]\smallskip\\ &\displaystyle={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty] ={\mathbb E}[\xi\cdot A^{\rm o}_\infty] \end{array}

for all bounded measurable processes {\xi}. The equivalence of the first and fourth statements of Theorem 9 implies that {A^{\rm o}} is optional and, for any bounded optional {\xi},

\displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]={\mathbb E}[\xi\cdot A_\infty],

showing that (4) holds. Conversely, suppose that B is an optional IV process satisfying

\displaystyle  {\mathbb E}[\xi\cdot B_\infty]={\mathbb E}[\xi\cdot A_\infty]

for all bounded optional {\xi}. As {{}^{\rm o}\xi} is optional and bounded, the equivalence of the first and fourth statements of Theorem 9 applied to B, together with the definition of the dual optional projection, gives

\displaystyle  {\mathbb E}[\xi\cdot B_\infty]= {\mathbb E}[{}^{\rm o}\xi\cdot B_\infty]= {\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]= {\mathbb E}[\xi\cdot A^{\rm o}_\infty]

for all bounded measurable {\xi}, showing that {B=A^{\rm o}}.

Finally, the statements of the theorem regarding the dual predictable projection follow by the same argument as above, replacing ‘optional’ by ‘predictable’, {A^{\rm o}} by {A^{\rm p}}, {{}^{\rm o}\xi} by {{}^{\rm p}\xi}, and invoking Theorem 10 instead of Theorem 9. ⬜
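As the proofs above are rather abstract, it may help to see the defining property in the simplest possible setting. The following toy sketch (entirely my own construction, with hypothetical numbers) works on a finite probability space in discrete time, where the dual optional projection is obtained by projecting each increment onto the current σ-algebra, {\Delta A^{\rm o}_n={\mathbb E}[\Delta A_n\,\vert\mathcal F_n]}, and the identity {{\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]} can be verified directly:

```python
# Toy discrete-time dual optional projection on a finite probability space.
# All numbers are hypothetical; this only illustrates the defining identity.

# Four equally likely outcomes, two time steps.
omega = [0, 1, 2, 3]
p = {w: 0.25 for w in omega}

# Filtration given by partitions: F_1 cannot distinguish within {0,1} or {2,3}.
partitions = {1: [{0, 1}, {2, 3}], 2: [{0}, {1}, {2}, {3}]}

def cond_exp(f, partition):
    """Conditional expectation of f given the sigma-algebra of a partition."""
    g = {}
    for block in partition:
        mass = sum(p[w] for w in block)
        avg = sum(f[w] * p[w] for w in block) / mass
        for w in block:
            g[w] = avg
    return g

def expectation(f):
    return sum(f[w] * p[w] for w in omega)

# Increments dA_n of an increasing process; dA_1 is NOT F_1-measurable.
dA = {1: {0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5},
      2: {0: 0.0, 1: 1.0, 2: 2.0, 3: 0.5}}

# Dual optional projection: project each increment onto F_n.
dAo = {n: cond_exp(dA[n], partitions[n]) for n in (1, 2)}

# A bounded measurable test process xi and its optional projection.
xi = {1: {0: 2.0, 1: -1.0, 2: 0.0, 3: 3.0},
      2: {0: 1.0, 1: 1.0, 2: -2.0, 3: 0.5}}
oxi = {n: cond_exp(xi[n], partitions[n]) for n in (1, 2)}

lhs = sum(expectation({w: xi[n][w] * dAo[n][w] for w in omega}) for n in (1, 2))
rhs = sum(expectation({w: oxi[n][w] * dA[n][w] for w in omega}) for n in (1, 2))
print(lhs, rhs)  # equal, by the tower property applied at each time step
```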

