Properties of the Dual Projections

In the previous post I introduced the definitions of the dual optional and predictable projections, first for processes of integrable variation and then, more generally, for processes which are only required to be locally (or prelocally) of integrable variation. We did not look at the properties of these dual projections beyond the facts that they exist and are uniquely defined, although those are significant and important statements in their own right.

To recap, recall that an IV process, A, is right-continuous and such that its variation

\displaystyle  V_t\equiv \lvert A_0\rvert+\int_0^t\,\lvert dA\rvert (1)

is integrable at time {t=\infty}, so that {{\mathbb E}[V_\infty] < \infty}. The dual optional projection is defined for processes which are prelocally IV. That is, A has a dual optional projection {A^{\rm o}} if it is right-continuous and its variation process is prelocally integrable, so that there exists a sequence {\tau_n} of stopping times increasing to infinity with {1_{\{\tau_n > 0\}}V_{\tau_n-}} integrable. More generally, A is a raw FV process if it is right-continuous with almost-surely finite variation over finite time intervals, so {V_t < \infty} (a.s.) for all {t\in{\mathbb R}^+}. Then, if a jointly measurable process {\xi} is A-integrable on finite time intervals, we use

\displaystyle  \xi\cdot A_t\equiv\xi_0A_0+\int_0^t\xi\,dA

to denote the integral of {\xi} with respect to A over the interval {[0,t]}, which takes into account the value of {\xi} at time 0 (unlike the integral {\int_0^t\xi\,dA} which, implicitly, is defined on the interval {(0,t]}). In what follows, whenever we state that {\xi\cdot A} has any properties, such as being IV or prelocally IV, we are also including the statement that {\xi} is A-integrable so that {\xi\cdot A} is a well-defined process. Also, whenever we state that a process has a dual optional projection, then we are also implicitly stating that it is prelocally IV.
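For example, if A consists of a single jump of size U at a stopping time {\tau}, so that {A=U1_{[\tau,\infty)}}, then

\displaystyle  \xi\cdot A_t=1_{\{\tau\le t\}}\xi_\tau U,

which picks up a jump occurring at time zero on the event {\{\tau=0\}}, whereas the integral {\int_0^t\xi\,dA} would not.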

From theorem 3 of the previous post, the dual optional projection {A^{\rm o}} is the unique prelocally IV process satisfying

\displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]

for all measurable processes {\xi} with optional projection {{}^{\rm o}\xi} such that {\xi\cdot A^{\rm o}} and {{}^{\rm o}\xi\cdot A} are IV. Equivalently, {A^{\rm o}} is the unique optional FV process such that

\displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[\xi\cdot A_\infty]

for all optional {\xi} such that {\xi\cdot A} is IV, in which case {\xi\cdot A^{\rm o}} is also IV so that the expectations in this identity are well-defined.
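As a quick illustration of this characterisation, consider a process consisting of a single jump, {A=U1_{[\tau,\infty)}}, for an integrable random variable U and a stopping time {\tau}. Property 5 of Theorem 1 below states that {A^{\rm o}={\mathbb E}[1_{\{\tau < \infty\}}U\,\vert\mathcal F_\tau]1_{[\tau,\infty)}}, and it is easy to see that this satisfies the defining identity. For optional {\xi} such that {\xi\cdot A} is IV, we have {\xi\cdot A_\infty=1_{\{\tau < \infty\}}\xi_\tau U} and {\xi\cdot A^{\rm o}_\infty=1_{\{\tau < \infty\}}\xi_\tau{\mathbb E}[1_{\{\tau < \infty\}}U\,\vert\mathcal F_\tau]}, so that

\displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}\left[{\mathbb E}[1_{\{\tau < \infty\}}\xi_\tau U\,\vert\mathcal F_\tau]\right]={\mathbb E}[\xi\cdot A_\infty],

since {1_{\{\tau < \infty\}}\xi_\tau} is {\mathcal F_\tau}-measurable whenever {\xi} is optional.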

I now look at the elementary properties of dual optional projections, as well as the corresponding properties of dual predictable projections. The most important property is that, according to the definition just stated, the dual projection exists and is uniquely defined. By comparison, the properties considered in this post are elementary and relatively easy to prove. So, I will simply state a theorem consisting of a list of all the properties under consideration, and will then run through their proofs. Starting with the dual optional projection, the main properties are listed below as Theorem 1.

Note that the first three statements are saying that the dual projection is indeed a linear projection from the prelocally IV processes onto the linear subspace of optional FV processes. As explained in the previous post, by comparison with the discrete-time setting, the dual optional projection can be expressed, in a non-rigorous sense, as taking the optional projection of the infinitesimal increments,

\displaystyle  dA^{\rm o}={}^{\rm o}dA. (2)

As {dA} is interpreted via the Lebesgue-Stieltjes integral {\int\cdot\,dA}, it is a random measure rather than a real-valued process. So, the optional projection of {dA} appearing in (2) does not really make sense. However, Theorem 1 does allow us to make sense of (2) in certain restricted cases. For example, if A is differentiable so that {dA=\xi\,dt} for a process {\xi}, then (9) below gives {dA^{\rm o}={}^{\rm o}\xi\,dt}. This agrees with (2) so long as {{}^{\rm o}(\xi\,dt)} is interpreted to mean {{}^{\rm o}\xi\,dt}. Also, restricting to the jump component of the increments, {\Delta A=A-A_-}, (2) reduces to (11) below.

We defined the dual projection via expectations of integrals {\xi\cdot A} with the restriction that this is IV. An alternative approach is to first define the dual projections for IV processes, as was done in theorems 1 and 2 of the previous post, and then extend to (pre)locally IV processes by localisation of the projection. That this is consistent with our definitions follows from the fact that (pre)localisation commutes with the dual projection, as stated in (10) below.

Theorem 1

  1. A raw FV process A is optional if and only if {A^{\rm o}} exists and is equal to A.
  2. If the dual optional projection of A exists then,
    \displaystyle  (A^{\rm o})^{\rm o}=A^{\rm o}. (3)
  3. If the dual optional projections of A and B exist, and {\lambda}, {\mu} are {\mathcal F_0}-measurable random variables then,
    \displaystyle  (\lambda A+\mu B)^{\rm o}=\lambda A^{\rm o}+\mu B^{\rm o}. (4)
  4. If the dual optional projection {A^{\rm o}} exists then {{\mathbb E}[\lvert A_0\rvert\,\vert\mathcal F_0]} is almost-surely finite and
    \displaystyle  A^{\rm o}_0={\mathbb E}[A_0\,\vert\mathcal F_0]. (5)
  5. If U is a random variable and {\tau} is a stopping time, then {U1_{[\tau,\infty)}} is prelocally IV if and only if {{\mathbb E}[1_{\{\tau < \infty\}}\lvert U\rvert\,\vert\mathcal F_\tau]} is almost surely finite, in which case
    \displaystyle  \left(U1_{[\tau,\infty)}\right)^{\rm o}={\mathbb E}[1_{\{\tau < \infty\}}U\,\vert\mathcal F_\tau]1_{[\tau,\infty)}. (6)
  6. If the prelocally IV process A is nonnegative and increasing then so is {A^{\rm o}} and,
    \displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty] (7)

    for all nonnegative measurable {\xi} with optional projection {{}^{\rm o}\xi}. If A is merely increasing then so is {A^{\rm o}} and (7) holds for nonnegative measurable {\xi} with {\xi_0=0}.

  7. If A has dual optional projection {A^{\rm o}} and {\xi} is an optional process such that {\xi\cdot A} is prelocally IV then, {\xi} is {A^{\rm o}}-integrable and,
    \displaystyle  (\xi\cdot A)^{\rm o}=\xi\cdot A^{\rm o}. (8)
  8. If A is an optional FV process and {\xi} is a measurable process with optional projection {{}^{\rm o}\xi} such that {\xi\cdot A} is prelocally IV then, {{}^{\rm o}\xi} is A-integrable and,
    \displaystyle  (\xi\cdot A)^{\rm o}={}^{\rm o}\xi\cdot A. (9)
  9. If A has dual optional projection {A^{\rm o}} and {\tau} is a stopping time then,
    \displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle(A^{\tau})^{\rm o}=(A^{\rm o})^{\tau},\smallskip\\ &\displaystyle(A^{\tau-})^{\rm o}=(A^{\rm o})^{\tau-}. \end{array} (10)
  10. If the dual optional projection {A^{\rm o}} exists, then its jump process is the optional projection of the jump process of A,
    \displaystyle  \Delta A^{\rm o}={}^{\rm o}\!\Delta A. (11)
  11. If A has dual optional projection {A^{\rm o}} then
    \displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}\left[\xi_0\lvert A^{\rm o}_0\rvert + \int_0^\infty\xi\,\lvert dA^{\rm o}\rvert\right]\le{\mathbb E}\left[{}^{\rm o}\xi_0\lvert A_0\rvert + \int_0^\infty{}^{\rm o}\xi\,\lvert dA\rvert\right],\smallskip\\ &\displaystyle{\mathbb E}\left[\xi_0(A^{\rm o}_0)_+ + \int_0^\infty\xi\,(dA^{\rm o})_+\right]\le{\mathbb E}\left[{}^{\rm o}\xi_0(A_0)_+ + \int_0^\infty{}^{\rm o}\xi\,(dA)_+\right],\smallskip\\ &\displaystyle{\mathbb E}\left[\xi_0(A^{\rm o}_0)_- + \int_0^\infty\xi\,(dA^{\rm o})_-\right]\le{\mathbb E}\left[{}^{\rm o}\xi_0(A_0)_- + \int_0^\infty{}^{\rm o}\xi\,(dA)_-\right], \end{array} (12)

    for all nonnegative measurable {\xi} with optional projection {{}^{\rm o}\xi}.

  12. Let {\{A^n\}_{n=1,2,\ldots}} be a sequence of right-continuous processes with variation

    \displaystyle  V^n_t=\lvert A^n_0\rvert + \int_0^t\lvert dA^n\rvert.

    If {\sum_n V^n} is prelocally IV then,

    \displaystyle  \left(\sum\nolimits_n A^n\right)^{\rm o}=\sum\nolimits_n\left(A^n\right)^{\rm o}. (13)

Continue reading “Properties of the Dual Projections”

Compensators of Counting Processes

A counting process, X, is defined to be an adapted stochastic process starting from zero which is piecewise constant and right-continuous with jumps of size 1. That is, if {\tau_n} denotes the first time at which {X_t=n}, then

\displaystyle  X_t=\sum_{n=1}^\infty 1_{\{\tau_n\le t\}}.

By the debut theorem, {\tau_n} are stopping times. So, X is an increasing integer valued process counting the arrivals of the stopping times {\tau_n}. A basic example of a counting process is the Poisson process, for which {X_t-X_s} has a Poisson distribution independently of {\mathcal{F}_s}, for all times {t > s}, and for which the gaps {\tau_n-\tau_{n-1}} between the stopping times are independent exponentially distributed random variables. As we will see, although Poisson processes are just one specific example, every quasi-left-continuous counting process can actually be reduced to the case of a Poisson process by a time change. As always, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}.

Note that, as a counting process X has jumps bounded by 1, it is locally integrable and, hence, the compensator A of X exists. This is the unique right-continuous predictable and increasing process with {A_0=0} such that {X-A} is a local martingale. For example, if X is a Poisson process of rate {\lambda}, then the compensated Poisson process {X_t-\lambda t} is a martingale. So, the compensator of X is the continuous process {A_t=\lambda t}. More generally, X is said to be quasi-left-continuous if {{\mathbb P}(\Delta X_\tau=0)=1} for all predictable stopping times {\tau}, which is equivalent to the compensator of X being almost surely continuous. Another simple example of a counting process is {X=1_{[\tau,\infty)}} for a stopping time {\tau > 0}, in which case the compensator of X is just the same thing as the compensator of {\tau}.
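As a purely numerical sanity check of this example (not part of the post's arguments), the short sketch below simulates a rate-{\lambda} Poisson process and verifies that the compensated process {M_t=X_t-\lambda t} has increments uncorrelated with functions of the path up to an earlier time, as a martingale should. It assumes numpy is available; the rate, the times and the test functions are arbitrary choices.

```python
import numpy as np

# Sanity check that M_t = X_t - lam*t behaves like a martingale when X is a
# rate-lam Poisson process: E[(M_t - M_s) g(X_s)] should vanish for bounded g.
rng = np.random.default_rng(1)
lam, s, t, n_paths = 2.0, 3.0, 7.0, 200_000

# Simulate X_s and X_t using independent Poisson increments.
X_s = rng.poisson(lam * s, size=n_paths)
X_t = X_s + rng.poisson(lam * (t - s), size=n_paths)
M_s, M_t = X_s - lam * s, X_t - lam * t

for g in (np.ones_like, np.sin, lambda x: (x > lam * s).astype(float)):
    print("E[(M_t - M_s) g(X_s)] ~", np.mean((M_t - M_s) * g(X_s.astype(float))))
```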

As I will show in this post, compensators of quasi-left-continuous counting processes have many parallels with the quadratic variation of continuous local martingales. For example, Lévy’s characterization states that a local martingale X starting from zero is standard Brownian motion if and only if its quadratic variation is {[X]_t=t}. Similarly, as we show below, a counting process is a homogeneous Poisson process of rate {\lambda} if and only if its compensator is {A_t=\lambda t}. It was also shown previously in these notes that a continuous local martingale X has a finite limit {X_\infty=\lim_{t\rightarrow\infty}X_t} if and only if {[X]_\infty} is finite. Similarly, a counting process X has finite value {X_\infty} at infinity if and only if the same is true of its compensator. Another property of a continuous local martingale X is that it is constant over all intervals on which its quadratic variation is constant. Similarly, a counting process X is constant over any interval on which its compensator is constant. Finally, it is known that every continuous local martingale is simply a continuous time change of standard Brownian motion. In the main result of this post (Theorem 5), we show that a similar statement holds for counting processes. That is, every quasi-left-continuous counting process is a continuous time change of a Poisson process of rate 1. Continue reading “Compensators of Counting Processes”
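As a minimal numerical illustration of the time-change result just mentioned (a sketch under simplifying assumptions, not the proof of Theorem 5): for an inhomogeneous Poisson process with continuous rate function {\lambda(t)}, the compensator is {A_t=\int_0^t\lambda(s)\,ds}, and time-changing by the right-continuous inverse of A should yield a Poisson process of rate 1. Equivalently, the gaps between the transformed arrival times {A_{\tau_n}} should be independent standard exponentials. In the sketch below, numpy is assumed, the rate function is an arbitrary choice, and the process is simulated by thinning.

```python
import numpy as np

rng = np.random.default_rng(2)
T, lam_max = 200.0, 3.0
rate = lambda u: 1.5 + np.sin(u)                # arbitrary rate, 0 < rate(u) <= lam_max
cum_rate = lambda u: 1.5 * u + 1.0 - np.cos(u)  # A_u = integral of rate over [0, u]

# Simulate the inhomogeneous Poisson process on [0, T] by thinning a
# homogeneous process of rate lam_max.
n = rng.poisson(lam_max * T)
candidates = np.sort(rng.uniform(0.0, T, size=n))
arrivals = candidates[rng.uniform(size=n) < rate(candidates) / lam_max]

# Gaps between the time-changed arrival times A_{tau_n} should look like
# independent Exp(1) variables.
gaps = np.diff(np.concatenate(([0.0], cum_rate(arrivals))))
print("arrivals:", len(arrivals))
print("mean gap (expect ~1):", gaps.mean())
print("variance of gaps (expect ~1):", gaps.var())
```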

Compensators of Stopping Times

The previous post introduced the concept of the compensator of a process, which is known to exist for all locally integrable semimartingales. In this post, I’ll just look at the very special case of compensators of processes consisting of a single jump of unit size.

Definition 1 Let {\tau} be a stopping time. The compensator of {\tau} is defined to be the compensator of {1_{[\tau,\infty)}}.

So, the compensator A of {\tau} is the unique predictable FV process such that {A_0=0} and {1_{[\tau,\infty)}-A} is a local martingale. Compensators of stopping times are sufficiently special that we can give an accurate description of how they behave. For example, if {\tau} is predictable, then its compensator is just {1_{\{\tau > 0\}}1_{[\tau,\infty)}}. If, on the other hand, {\tau} is totally inaccessible and almost surely finite then, as we will see below, its compensator, A, continuously increases to a value {A_\infty} which has the standard exponential distribution.
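As a numerical illustration of this fact (a sketch under simplifying assumptions, not the argument given in the post): if {\tau} has a continuous distribution function F and we work in its natural filtration, then a standard computation gives the compensator {A_t=-\log(1-F(t\wedge\tau))}, so that {A_\infty=-\log(1-F(\tau))}, which should be standard exponential whatever the choice of F. The Weibull distribution used below is arbitrary and numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
n, shape = 500_000, 1.7

# Sample tau from an arbitrary continuous distribution (Weibull, scale 1),
# whose distribution function is F(x) = 1 - exp(-x**shape).
tau = rng.weibull(shape, size=n)
F_tau = 1.0 - np.exp(-tau**shape)

# A_infinity = -log(1 - F(tau)) should be exponentially distributed with rate 1.
A_inf = -np.log1p(-F_tau)
print("mean (expect ~1):", A_inf.mean())
print("variance (expect ~1):", A_inf.var())
print("P(A_inf > 2) (expect exp(-2) ~ 0.135):", (A_inf > 2).mean())
```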

However, compensators of stopping times are sufficiently general to be able to describe the compensator of any cadlag adapted process X with locally integrable variation. We can break X down into a continuous part plus a sum over its jumps,

\displaystyle  X_t=X_0+X^c_t+\sum_{n=1}^\infty\Delta X_{\tau_n}1_{[\tau_n,\infty)}. (1)

Here, {\tau_n > 0} are disjoint stopping times such that the union {\bigcup_n[\tau_n]} of their graphs contains all the jump times of X. That they are disjoint just means that {\tau_m\not=\tau_n} whenever {\tau_n < \infty}, for any {m\not=n}. As was shown in an earlier post, not only is such a sequence {\tau_n} of stopping times guaranteed to exist, but each of the times can be chosen to be either predictable or totally inaccessible. As the first term, {X^c_t}, on the right hand side of (1) is a continuous FV process, it is by definition equal to its own compensator. So, the compensator of X is equal to {X^c} plus the sum of the compensators of {\Delta X_{\tau_n}1_{[\tau_n,\infty)}}. This reduces the computation of compensators of locally integrable FV processes to those of processes consisting of a single jump at either a predictable or a totally inaccessible time. Continue reading “Compensators of Stopping Times”

Compensators

A very common technique when looking at general stochastic processes is to break them down into separate martingale and drift terms. This is easiest to describe in the discrete time situation. So, suppose that {\{X_n\}_{n=0,1,\ldots}} is a stochastic process adapted to the discrete-time filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_n\}_{n=0,1,\ldots},{\mathbb P})}. If X is integrable, then it is possible to decompose it into the sum of a martingale M and a process A starting from zero such that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n\ge1}. That is, A is a predictable process. The martingale condition on M enforces the identity

\displaystyle  A_n-A_{n-1}={\mathbb E}[A_n-A_{n-1}\vert\mathcal{F}_{n-1}]={\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}].

So, A is uniquely defined by

\displaystyle  A_n=\sum_{k=1}^n{\mathbb E}\left[X_k-X_{k-1}\vert\mathcal{F}_{k-1}\right], (1)

and is referred to as the compensator of X. This is just the predictable term in the Doob decomposition described at the start of the previous post.
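As a concrete numerical illustration of formula (1) (with arbitrary parameters, assuming numpy): let {S_n} be a simple symmetric random walk and {X_n=\lvert S_n\rvert}. Then {{\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}]=1_{\{S_{n-1}=0\}}}, so the compensator {A_n} counts the visits of S to zero before time n. The sketch below computes A pathwise from this formula and checks that {M=X-A} has mean zero, as a martingale started from zero must.

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_paths = 200, 100_000

# Simple symmetric random walk S (with S_0 = 0) and the process X_n = |S_n|.
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
S = np.concatenate([np.zeros((n_paths, 1), dtype=int), steps.cumsum(axis=1)], axis=1)
X = np.abs(S)

# Compensator from formula (1): E[X_n - X_{n-1} | F_{n-1}] = 1 if S_{n-1} = 0,
# and 0 otherwise, so A_n is the number of visits to zero at times 0,...,n-1.
A = np.concatenate(
    [np.zeros((n_paths, 1), dtype=int), (S[:, :-1] == 0).cumsum(axis=1)], axis=1
)
M = X - A

# M should be a martingale with M_0 = 0, so E[M_n] should vanish for all n.
for n in (10, 50, 200):
    print(f"E[M_{n}] ~ {M[:, n].mean():+.4f}")
```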

In continuous time, where we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}, the situation is much more complicated. There is no simple explicit formula such as (1) for the compensator of a process. Instead, it is defined as follows.

Definition 1 The compensator of a cadlag adapted process X is a predictable FV process A, with {A_0=0}, such that {X-A} is a local martingale.

For an arbitrary process, there is no guarantee that a compensator exists. From the previous post, however, we know exactly when it does. The processes for which a compensator exists are precisely the special semimartingales or, equivalently, the locally integrable semimartingales. Furthermore, if it exists, then the compensator is uniquely defined up to evanescence. Definition 1 is considerably different from equation (1) describing the discrete-time case. However, we will show that, at least for processes with integrable variation, the continuous-time definition does follow from the limit of discrete time compensators calculated along ever finer partitions (see below).

Although we know that compensators exist for all locally integrable semimartingales, the notion is often defined and used specifically for the case of adapted processes with locally integrable variation or, even, just integrable increasing processes. As with all FV processes, these are semimartingales, with stochastic integration for locally bounded integrands coinciding with Lebesgue-Stieltjes integration along the sample paths. As an example, consider a homogeneous Poisson process X with rate {\lambda}. The compensated Poisson process {M_t=X_t-\lambda t} is a martingale. So, X has compensator {\lambda t}.

We start by describing the jumps of the compensator, which can be done simply in terms of the jumps of the original process. Recall that the set of jump times {\{t\colon\Delta X_t\not=0\}} of a cadlag process is contained in the union of the graphs of a sequence of stopping times, each of which is either predictable or totally inaccessible. We, therefore, only need to calculate {\Delta A_\tau} separately for the cases where {\tau} is a predictable stopping time and where it is totally inaccessible.

For the remainder of this post, it is assumed that the underlying filtered probability space is complete. Whenever we refer to the compensator of a process X, it will be understood that X is a special semimartingale. Also, the jump {\Delta X_t} of a process is defined to be zero at time {t=\infty}.

Lemma 2 Let A be the compensator of a process X. Then, for a stopping time {\tau},

  1. {\Delta A_\tau=0} if {\tau} is totally inaccessible.
  2. {\Delta A_\tau={\mathbb E}\left[\Delta X_\tau\vert\mathcal{F}_{\tau-}\right]} if {\tau} is predictable.
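For example, taking X to be the Poisson process above with compensator {A_t=\lambda t}: the jump times of X are totally inaccessible, consistent with the first statement and with A being continuous. On the other hand, any fixed time {t_0 > 0} is a predictable stopping time and, as {{\mathbb P}(\Delta X_{t_0}\not=0)=0}, the second statement gives {\Delta A_{t_0}={\mathbb E}[\Delta X_{t_0}\vert\mathcal{F}_{t_0-}]=0}, again consistent with the continuity of A.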

Continue reading “Compensators”

Special Semimartingales

For stochastic processes in discrete time, the Doob decomposition uniquely decomposes any integrable process into the sum of a martingale and a predictable process. If {\{X_n\}_{n=0,1,\ldots}} is an integrable process adapted to a filtration {\{\mathcal{F}_n\}_{n=0,1,\ldots}} then we write {X_n=M_n+A_n}. Here, M is a martingale, so that {M_{n-1}={\mathbb E}[M_n\vert\mathcal{F}_{n-1}]}, and A is predictable with {A_0=0}. By saying that A is predictable, we mean that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n\ge1}. It can be seen that this implies that

\displaystyle  A_n-A_{n-1}={\mathbb E}[A_n-A_{n-1}\vert\mathcal{F}_{n-1}]={\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}].

Then it is possible to write A and M as

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle A_n&\displaystyle=\sum_{k=1}^n{\mathbb E}[X_k-X_{k-1}\vert\mathcal{F}_{k-1}],\smallskip\\ \displaystyle M_n&\displaystyle=X_n-A_n. \end{array} (1)

So, the Doob decomposition is unique and, conversely, the processes A and M constructed according to equation (1) can be seen to be, respectively, a predictable process starting from zero and a martingale. For many purposes, this allows us to reduce problems concerning processes in discrete time to simpler statements about martingales and separately about predictable processes. In the case where X is a submartingale, things simplify further as A will then be an increasing process.
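For example, if {X_n=S_n^2} for a simple symmetric random walk {S_n=\sum_{k=1}^n\epsilon_k} (with {\epsilon_k=\pm1} independent and equally likely), then {{\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}]={\mathbb E}[2S_{n-1}\epsilon_n+\epsilon_n^2\vert\mathcal{F}_{n-1}]=1}. So, X is a submartingale with Doob decomposition

\displaystyle  X_n=M_n+A_n,\qquad A_n=n,\quad M_n=S_n^2-n,

and the predictable term {A_n=n} is indeed increasing.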

The situation is considerably more complicated when looking at processes in continuous time. The extension of the Doob decomposition to continuous time processes, known as the Doob-Meyer decomposition, was an important result historically in the development of stochastic calculus. First, we would usually restrict attention to sufficiently nice modifications of the processes and, in particular, suppose that X is cadlag. When attempting an analogous decomposition to the one above, it is not immediately clear what should be meant by the predictable component. The continuous time predictable processes are defined to be the set of all processes which are measurable with respect to the predictable sigma algebra, which is the sigma algebra generated by the space of processes which are adapted and continuous (or, equivalently, left-continuous). In particular, all continuous and adapted processes are predictable but, due to the existence of continuous martingales such as Brownian motion, this means that decompositions as sums of martingales and predictable processes are not unique. It is therefore necessary to impose further conditions on the term A in the decomposition. It turns out that we obtain unique decompositions if, in addition to being predictable, A is required to be cadlag with locally finite variation (an FV process). The processes which can be decomposed into a local martingale and a predictable FV process are known as special semimartingales. This is precisely the space of locally integrable semimartingales. As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})} and two stochastic processes are considered to be the same if they are equivalent up to evanescence.

Theorem 1 For a process X, the following are equivalent.

  • X is a locally integrable semimartingale.
  • X decomposes as
    \displaystyle  X=M+A (2)

    for a local martingale M and predictable FV process A.

Furthermore, choosing {A_0=0}, decomposition (2) is unique.

Theorem 1 is a general version of the Doob-Meyer decomposition. However, the name ‘Doob-Meyer decomposition’ is often used to specifically refer to the important special case where X is a submartingale. Historically, the theorem was first stated and proved for that case, and I will look at the decomposition for submartingales in more detail in a later post. Continue reading “Special Semimartingales”